Slow OS is part of the problem
Well, it seems that after an initial back-and-forth, most of the posts now agree, to a certain extent, on the main reason why consoles won. But there are holdouts, especially those who do not have much knowledge about virtualization and interpretation, so I'll insert some more sources and info to back up the statements made....
One of the lessons not learned by many people dealing with operating systems is that native code is important.
It is so important that it can make or break a device's acceptance among software developers, and sometimes the eventual success of the product. If anything, the old way of adding features over layers and layers of code via interpretation or virtualization is out. Thin layers that give developers full access to the hardware are the way to go. As mobile devices become more popular, it is important that direct access to the hardware is not hampered, because battery life gets eaten by CPU cycles wasted on interpretation or virtualization. Wasted CPU cycles mean reduced speed, and reduced speed means certain applications will not work on the device (notably power-hungry games). It is no secret that games drive sales of many devices; in fact, that is a big part of why game consoles became so successful. Interpreted code also prevents cycle counting for very high-performance applications (games included), and this leads to unpredictable behavior unsuitable for real-time OSes and framerate-dependent games. For example, certain interpreted languages have garbage collectors and "hotspot" compilers that can kick in at unpredictable moments, which throws off timing and CPU cycle counting. It can also hurt the smoothness of user interfaces and the quickness of feedback. (A small demonstration of the garbage collector problem follows below.)
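For anyone who doesn't want to take my word on garbage collection, here is a minimal sketch; it is my own toy benchmark, and the class name, frame count, and allocation sizes are made up rather than taken from any shipping game. It runs a fake frame loop that churns the heap the way a careless game might and reports the worst frame time. On a JVM the worst frame spikes whenever the collector decides to run; a native loop doing the same work stays flat.

```java
// GcJitterDemo.java -- hypothetical toy benchmark (all names are mine).
// A fake "frame loop" that allocates short-lived objects every frame,
// then reports the worst frame time. GC pauses show up as spikes.
public class GcJitterDemo {
    public static void main(String[] args) {
        final int frames = 2000;
        byte[][] garbage = new byte[64][]; // per-frame scratch allocations
        long worstNanos = 0;

        for (int f = 0; f < frames; f++) {
            long start = System.nanoTime();
            // Simulated per-frame work that churns the heap (~1 MB/frame):
            for (int i = 0; i < garbage.length; i++) {
                garbage[i] = new byte[16 * 1024]; // becomes garbage next frame
            }
            long elapsed = System.nanoTime() - start;
            if (elapsed > worstNanos) worstNanos = elapsed;
        }
        // A steady native loop keeps this number flat; with a collector it
        // jumps by an order of magnitude whenever the GC kicks in mid-frame.
        System.out.printf("worst frame: %.2f ms%n", worstNanos / 1_000_000.0);
    }
}
```

At 60 frames per second the frame budget is about 16 ms, so a collector pause of even a few milliseconds at the wrong moment is a visible hitch.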
OK, some don't seem to agree that interpreted code (whether JIT-compiled or not) performs terribly, so I'll provide a good place to get background info for those who don't understand why:
http://www.edepot.com/forums/viewtopic.php?f=12&t=3146#p5820
If you look at mobile technology (especially cell phones), the games were nothing to write home about. Two factors were at play: speed (the slow Java) and the screen (small size and low resolution). Apple got lucky with the iPhone (it took some hackers to make them realize the profit potential of native apps on the device), and they ended up creating a new segment of low-cost games and applications. Piggybacking on this concept will solve the other missing piece of the equation for a popular device: a big, high-resolution screen in a touch tablet.
There is mention of hackers there who gave the iPhone an outlet for profit. So yes, console security (or the lack of it) is sometimes a good thing for propagating a device (especially a potentially games-centric one). For those who are not up to date, the PS3 is the last holdout:
http://www.edepot.com/forums/viewtopic.php?f=9&t=3234
Google's entry into the mobile phone segment will probably be hampered by too many different CPUs tied together by an interpreted language (Java). Java is an interpreted language and has the same disadvantages as all interpreted languages (like C# and its .NET libraries). Interpreted languages are good for short programs that run in short bursts (like Perl scripts, which led to PHP for the web). They may be fine for quick web stuff, but try doing games with them and the disadvantages start to show. Even allowing "hacks" that give direct access to the CPU from Java is a bad idea: a hack compiled for one CPU won't work on a different one, thus segmenting the market. I think Google should just stick with one type of CPU architecture and allow native access to it (like what Apple is doing with the iPhone and iPod touch). A sketch of what such a "hack" looks like, and how it breaks, follows below.
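To make the "hack" concrete: the usual escape hatch from Java is JNI, where the Java code declares a native method and a separately compiled C library supplies it. This is a minimal sketch; the class, method, and library names are hypothetical, not from any real phone SDK. The point is the failure mode in the static block: the native library is a CPU-specific binary, so an app that ships only an ARM build dies on any other architecture.

```java
// NativeHack.java -- hypothetical JNI sketch; all names here are mine.
public class NativeHack {
    // Implemented in a separately compiled C library; one binary is
    // needed per CPU architecture (ARM, x86, MIPS, ...).
    private static native long mixSamples(long bufferAddr, int count);

    static {
        // Throws UnsatisfiedLinkError on any device whose CPU the
        // native library was not compiled for -- this single line is
        // where the market segmentation happens.
        System.loadLibrary("fastpath");
    }
}
```

This is exactly why the hack "won't work on different CPUs": the Java side is portable, but the part that actually delivers the speed is not.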
I am hesitant to post this, but since it follows up the previous post I can't hide it, so I might as well reveal this one too. Yes, it is damaging, but someone has to do it for the common good:
http://www.edepot.com/forums/viewtopic.php?f=12&t=3146#p5822
On consoles, native code is a must. Even adding .NET XNA for developers showed that there is little market for slow, unpredictable programs (because of interpretation), which is why on the Xbox 360 the best-selling games are the ones that skip it and access the hardware directly in C or assembly. If the PlayStation brand of consoles started adding layers of interpretation, the console wouldn't last long (10+ years). Each layer of interpretation or virtualization slows the hardware down, and a hardware device must extract as much power as possible throughout its lifetime, which is why newer and newer games go lower and lower to access the hardware for more power (which means going down to assembly-level programming). For this reason, software libraries that add too much indirection or virtualization end up hurting the device in the long run. Interpreted languages (.NET and its incarnations) and virtualized drivers on Vista and later operating systems ended up killing most of the productive programs and games on the platform, because with each revision of the OS the hardware needs to be upgraded just to do the same thing. Upgrading hardware to make up for a slowed-down operating system (because of the thick layers of interpretation and virtualization) may be good for hardware sales in the short run, but it is bad for everyone (developers, hardware manufacturers, and consumers) in the long run, when apps and games perform poorly for everyone except those who upgraded to max specs. When that happens there is no market for apps and games, and that is why the consoles and iPhones (with their thin OS layers) started taking over the market previously held by desktop operating systems. The sketch below shows where the per-layer cost comes from.
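To show what a layer of interpretation charges you, here is a minimal sketch; the toy "machine" and its opcodes are my own invention, not any real VM, and a real interpreter does far more work per instruction than this one. The same sum is computed twice: once as a plain loop, and once by a tiny interpreter that pays for opcode dispatch on every single step.

```java
// TinyInterp.java -- toy comparison: the same loop, direct vs. interpreted.
public class TinyInterp {
    // Opcodes for a made-up one-register machine (hypothetical, mine):
    static final int LOAD = 0, ADD = 1, LOOP = 2, HALT = 3;

    public static void main(String[] args) {
        final int n = 50_000_000;

        // Direct version: the JIT turns this into a tight native loop.
        long t0 = System.nanoTime();
        long direct = 0;
        for (int i = 0; i < n; i++) direct += i;
        long directMs = (System.nanoTime() - t0) / 1_000_000;

        // Interpreted version: every iteration pays for dispatch.
        int[] program = {LOAD, ADD, LOOP, HALT};
        t0 = System.nanoTime();
        long acc = 0;
        int i = 0, pc = 0;
        while (pc >= 0) {
            switch (program[pc]) {        // dispatch overhead on every step
                case LOAD: pc++; break;
                case ADD:  acc += i; pc++; break;
                case LOOP: if (++i < n) pc = 1; else pc++; break;
                case HALT: pc = -1; break;
            }
        }
        long interpMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.printf("direct: %d ms, interpreted: %d ms, same sum: %b%n",
                directMs, interpMs, direct == acc);
    }
}
```

And this toy interpreter is only one thin layer; stack a managed runtime, virtualized drivers, and an abstraction library on top of each other and the costs multiply.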
Here is the part where people blame it on piracy. But the truth is that the return on investment from hardware upgrading (mostly forced by a slow OS) is extremely low. Nobody wants to shell out $1000 in upgrades every three years just for a game, or because an OS upgrade forced it on them:
http://www.edepot.com/forums/viewtopic.php?f=12&t=1552
Google will probably figure this out eventually, when they finally realize Chrome is popular because it is fast and accesses the hardware directly, while their mobile phones don't sell that much because of the lack of good games and a UI that is not as smooth as native code. Games have a lot to do with it, because games just don't work right in interpreted languages, and the mobile phone provided by Google requires that they be written in Java (an interpreted language), with garbage collection, no ability to count CPU cycles, and no direct access to the hardware. Like .NET (yes, it lacks games too), the platform will probably fizzle out when exciting games and powerful programs become scarce, because developers can't access the hardware directly without going through "hacks" that don't work on all devices. The Google phone may save itself if it just copies the iPhone's approach of keeping to one type of CPU, getting rid of virtualization and interpretation, and keeping the layers as thin as possible so programs can shine. (The standard workaround developers resort to in the meantime is sketched below.)
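Since garbage collection keeps coming up as the thing that ruins game smoothness, here is the standard workaround developers use when stuck on a collected runtime: preallocate objects and recycle them, so the collector finds no garbage to chase mid-frame. A minimal sketch; the Bullet class and pool are hypothetical examples of mine, not from any real engine.

```java
// BulletPool.java -- hypothetical object pool; the names are mine.
// Instead of "new Bullet()" on every shot (which feeds the garbage
// collector and invites mid-frame pauses), allocate once and recycle.
import java.util.ArrayDeque;

public class BulletPool {
    public static class Bullet {
        float x, y, vx, vy;
        boolean alive;
    }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    public BulletPool(int capacity) {
        for (int i = 0; i < capacity; i++) free.push(new Bullet()); // one-time cost
    }

    // Called from the game loop: no allocation, hence no new garbage.
    public Bullet acquire(float x, float y, float vx, float vy) {
        Bullet b = free.isEmpty() ? new Bullet() : free.pop(); // grow only if exhausted
        b.x = x; b.y = y; b.vx = vx; b.vy = vy; b.alive = true;
        return b;
    }

    public void release(Bullet b) {
        b.alive = false;
        free.push(b); // back to the pool instead of onto the garbage heap
    }
}
```

It works, but notice what it really is: developers contorting their code to route around the runtime, which is this whole complaint in miniature.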
Some people will find the above hard to swallow at first, but looking through the history of successful apps on mobile devices (like the PSP and iPhone), people will eventually come around to this thinking. It just takes time for some companies to realize what the real problems are. I think many of the Nexus One's "main apps" use no Dalvik Java because they need to be fast (like the browser portion). I think even internally Google realizes this. Perhaps they need to offer Java as an option, not a mandate, and let people code in C (before the market leaves them behind).
What many don't realize is that powerful software that can do amazing things is great, but if you stick a middle layer (the operating system and its libraries) in the way that hampers access to the hardware, and enforce a type of language (interpreted), you limit what can be achieved on the hardware. The application should decide whether it wants to use interpretation. The application should decide how many CPU cycles to waste. The OS needs to get out of the way and be as thin as possible. Enforcing too many constraints on applications (games) will kill the platform. Some vendors even go so far as to limit who can provide software on a platform, and while this has some benefits, it also reduces the number of channels through which the platform can succeed.

Windows didn't become a monopoly because people could use it to move windows around; it was because of the cheap hardware and the games and apps made by developers that all ran on the same platform. Consoles took over because the hardware was cheap for consumers and you didn't need to keep upgrading it to keep up with the slow operating system revisions released every 3-5 years. It got to the point on the desktop where slow operating systems destroyed enough good developers that the limited number of developers on consoles outweighed the negative of expensive hardware upgrading, and the consoles took over the gaming market. If this keeps up, Apple will also take over the application market, because Microsoft's profit strategy seems to be adding features by absorbing more layers (and competitors). Soon Microsoft will have to deal with developers finding alternative platforms that give them direct access to the hardware (which means good games and powerful apps), without the worry of being swallowed up because the OS got more bloated by absorbing a similar feature from a developer. Destroy a developer on the platform, and you destroy the future markets that developer would have brought to the platform.
Now, after posting the above, many people have come out and agreed about the expensive hardware forced on them by a purposely slowed-down OS. Yes, virtualization of the hardware, layers and layers of code, and interpretation all have something to do with it. But there are others (like marketers on payroll) who may still be held back by the need to maintain profit and monopoly. I searched around this forum for some examples from other posters so that I don't take too much heat for this...
Marketing will sometimes skew things the wrong way:
http://forum.beyond3d.com/showpost.php?p=1242645&postcount=40
Look at the 2.5x part:
http://forum.beyond3d.com/showpost.php?p=1271685&postcount=104
I post the above because I experienced something similar when discussing why the PSP has better graphics than the iPhone, even when I provided benchmarks to show the truth. In actuality it is even worse (5x slower than the reported numbers). But can you imagine if nVidia (who has rights to TBDR) started overstating figures by 2.5x? They could say: well, the hardware only does X triangles per second, but if you code using deferred rendering you save some triangles during Z processing, so we will inflate the figure 2.5x, because even though the hardware can't draw that many, we assume your coding "saves" that many triangles in the long run. By that logic, a part that actually draws 10 million triangles per second would be marketed as a 25-million-triangle part. So even with no TBDR in the hardware, nVidia could tell their marketing department to inflate the figures 2.5x on the assumption that developers use deferred rendering by default in software. Something is not right about that. And I hope there is more truth in the people who respond to posts like this one.
So what does this mean? The future device needs to be open to a lot of developers, and the OS needs to get out of the way: it shouldn't enforce a language or a particular type of access to the hardware, and the device needs to be very powerful and cheap. The PlayStation happened to fit this description except for the "open to a lot of developers" part, and that is why the iPhone came to market and absorbed what was left.
Lastly, and for the record: I have nothing against .NET or Java, unless they are forced on people. Nor against TBDR, unless it is forced on people for the wrong reasons. I find it odd that people seem to accept more and more constraints when so many founding principles are based on freedom.