One of the lessons not learned by many people dealing with operating systems is that native code is important.
It is so important that it can make or break a device's acceptance among software developers, and sometimes the eventual success of the product. If anything, the old approach of piling on features through layers and layers of code via interpretation or virtualization is out. Thin layers that give developers full access to the hardware are the way to go. As mobile devices become more popular, it is important that direct access to the hardware is not hampered, because battery life is reduced by CPU cycles wasted on interpretation or virtualization. Wasted CPU cycles mean reduced speed, and reduced speed means certain applications will not work on the device (notably power-hungry games). It is no secret that games drive sales of many devices; in fact, that is the reason game consoles became so successful. Interpreted code also prevents cycle counting for very high-performance applications (not just games), and this leads to unpredictable behavior unsuitable for real-time operating systems and framerate-dependent games. For example, certain interpreted languages have garbage collectors and "hotspot" compilers that can kick in at unpredictable times, which throws off timing and CPU cycle counting. It can also affect the smoothness of user interfaces and the quickness of feedback.
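As a rough illustration of the timing problem, here is a minimal Java sketch (the class name, numbers, and allocation sizes are mine, purely illustrative, not anything from the original post): it runs a fixed-step "frame" loop, allocates temporary objects each frame, and reports frames that blow their ~16.6 ms budget. On a managed runtime, the overshoots tend to coincide with collector or JIT activity, which is exactly the jitter a framerate-dependent game cannot tolerate.

    // FrameJitter.java -- hypothetical demo, illustrative only.
    // Simulates a 60 FPS game loop that allocates temporary objects every
    // frame, then reports frames that overshoot their ~16.6 ms budget.
    // On a managed runtime, overshoots tend to line up with GC/JIT pauses.
    public class FrameJitter {
        static final long FRAME_BUDGET_NS = 16_666_667L; // ~60 FPS
        static byte[] keepAlive; // holds the per-frame allocation so it isn't optimized away

        public static void main(String[] args) {
            int slowFrames = 0;
            for (int frame = 0; frame < 600; frame++) { // ~10 seconds of "gameplay"
                long start = System.nanoTime();
                keepAlive = new byte[256 * 1024]; // per-frame garbage a game loop should avoid
                long busy = 0;                    // stand-in for actual game work
                for (int i = 0; i < 1_000_000; i++) busy += i;
                long elapsed = System.nanoTime() - start;
                if (elapsed > FRAME_BUDGET_NS) {
                    slowFrames++;
                    System.out.printf("frame %d blew its budget: %.2f ms%n",
                            frame, elapsed / 1_000_000.0);
                }
                if (busy == Long.MIN_VALUE) return; // keeps the work loop observable
            }
            System.out.println("frames over budget: " + slowFrames);
        }
    }

How many frames miss the budget, and by how much, depends on the runtime and its collector rather than on the application, which is the point here: the timing is simply not under the developer's control.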
If you look at mobile technology (especially cell phones), the games were nothing to talk about. There were two factors: speed (slow Java) and the screen (small size, low resolution). Apple got lucky with the iPhone (it took some hackers to make them realize the profit potential of native apps on the device), and they ended up creating a new segment of low-cost games and applications. Piggybacking on this concept will solve the other missing piece of the equation for a popular device... a big, high-resolution screen in a touch tablet.
Google's entry into the mobile phone segment will probably be hampered by too many different CPUs tied together by an interpreted language (Java). Java is an interpreted language and has the same disadvantages as all interpreted languages (like C# and its .NET libraries). Interpreted languages are good for short programs that run in short bursts (like Perl scripts, which led to PHP and the other web languages). They may be fine for quick web work, but try writing games with them and the disadvantages start to show. Even allowing "hacks" for direct access to the CPU from Java is a bad idea: the hack won't work across different CPUs, which segments the market. I think Google should stick with one CPU architecture and allow native access to it (like what Apple is doing with the iPhone and iPod touch).
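For concreteness, the "hack" being described is the kind of native bridge that JNI provides; the sketch below shows only the Java side (the class and library names are hypothetical, my own invention). What it illustrates is the segmentation problem: the real work lives in a C library that has to be compiled separately for every CPU architecture a device might ship with, so a build that includes only one of them simply breaks on the rest.

    // NativeBlit.java -- hypothetical JNI bridge, illustrative only.
    // The Java side only declares the entry point; the actual work would live
    // in a C library compiled once per CPU architecture, which is exactly
    // where the market segmentation comes from.
    public class NativeBlit {
        private static boolean nativeLoaded = false;

        static {
            try {
                System.loadLibrary("blit"); // looks for libblit.so / blit.dll built for *this* CPU and OS
                nativeLoaded = true;
            } catch (UnsatisfiedLinkError e) {
                System.err.println("no native 'blit' library for this CPU: " + e);
            }
        }

        // Implemented in C; copies pixels without the interpreter in the middle.
        public static native void blit(int[] dst, int[] src, int count);

        public static void main(String[] args) {
            int[] src = new int[1024];
            int[] dst = new int[1024];
            if (nativeLoaded) {
                blit(dst, src, src.length);                   // fast path, architecture-specific
            } else {
                System.arraycopy(src, 0, dst, 0, src.length); // slow, portable fallback
                System.out.println("fell back to the portable Java path");
            }
        }
    }

The fallback branch is the trade-off in miniature: the portable path runs everywhere but stays inside the managed runtime, while the fast path exists only on the architectures someone bothered to build the library for.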
On consoles, native code is a must. Even adding .NET XNA for developers showed that there is little market for slow, unpredictable programs (because of interpretation), which is why on the Xbox 360 the best-selling games are the ones that skip it and access the hardware directly in C or assembly. If the PlayStation line of consoles started adding layers of interpretation, the consoles wouldn't last as long (10+ years). Each layer of interpretation or virtualization slows the hardware down, and a device must extract as much power as possible throughout its lifetime, which is why newer games go lower and lower to access the hardware and squeeze out more performance (down to assembly-level programming). For this reason, software libraries that add too much indirection or virtualization end up hurting the device in the long run. Interpreted languages (.NET and its incarnations) and virtualized drivers on Vista and later operating systems ended up killing most of the productive programs and games on the platform, because with each revision of the OS the hardware has to be upgraded just to do the same thing. Upgrading hardware to make up for a slowed-down operating system (thick with layers of interpretation and virtualization) may be good for hardware sales in the short run, but it is bad for everyone (developers, hardware manufacturers, and consumers) in the long run, when apps and games perform poorly for the majority of users except those who upgraded to the maximum specs. When that happens there is no market for apps and games, and that is why consoles and iPhones (with their thin OS layers) started taking over the market previously held by desktop operating systems.
Google will probably figure this out eventually, when they finally realize Chrome is popular because it is fast and accesses the hardware directly, while their mobile phones don't sell that well because of the lack of good games and a UI that is not as smooth as native code. Games may have a lot to do with it, because games just don't work right in interpreted languages, and the phones Google provides require apps to be written in Java (an interpreted language), with garbage collection, no way to count CPU cycles, and no direct access to the hardware. Like .NET (which also lacks games), the platform will probably fizzle out when exciting games and powerful programs become scarce because developers can't access the hardware directly without going through "hacks" that don't work on all devices. The Google phone may save itself if it just emulates the iPhone by sticking to one type of CPU, getting rid of virtualization and interpretation, and keeping the layers as thin as possible so programs can shine.
What many don't realize is that powerful software that can do amazing things is great, but if you stick a middle layer (the operating system and its libraries) in front of the hardware and enforce a type of language (interpreted), you limit what can be achieved on that hardware. The application should decide whether it wants to use interpretation. The application should decide how many CPU cycles to waste. The OS needs to get out of the way and be as thin as possible. Enforcing too many constraints on applications (games) will kill the platform. Some vendors even limit who can provide software on a platform, and while this has some benefits, it also reduces the number of channels through which the platform can succeed. Windows didn't become a monopoly because people could use it to move windows around; it was because of the cheap hardware and the games and apps made by developers that all ran on the same platform. Consoles took over because the hardware was cheap for consumers and you didn't need to keep upgrading it to keep up with slow operating system revisions released every 3-5 years. It got to the point on the desktop where slow operating systems destroyed enough good developers that the smaller pool of developers on consoles outweighed the downside of expensive hardware upgrades, and the consoles took over the gaming market. If this keeps up, Apple will also take over the application market, because Microsoft's profit strategy seems to be adding features by absorbing more layers (and competitors). Soon Microsoft will have to deal with developers finding alternative platforms that give them direct access to the hardware (good for games and powerful apps), where they don't have to worry about being swallowed up because the OS got more bloated by absorbing a similar feature the developer was offering. Destroy a developer on the platform, and you destroy the future markets that developer would have brought to the platform.
So what does this mean? The future device needs to be open to a lot of developers, the OS needs to get out of the way and not enforce a language or a type of hardware access, and the device needs to be powerful and cheap. The PlayStation happened to fit this description except for the "open to a lot of developers" part, and that is why the iPhone came to market and absorbed what was left.
Source: http://www.edepot.com/forums/viewtopic.php?f=12&t=3146