Apologies if all of the following has been posted/discussed before.
Regarding all of the PS3 and Xbox2 hubbub, I have observed a number of people running around with predictions/assumptions without ever stating how they came to these conclusions.
1) Xbox2's tri-core CPU will run at 3.5GHz+:
Currently, dual-core Power5s operate at about 2GHz on a 130nm process, and I believe it is a given that the first Xbox2s will ship on a 90nm process. Is it realistic to add a third core, shrink to 90nm, and still run at 3.5GHz? Admittedly the Power5 has 1.9MB of L2 cache, but if even AMD, whose single-core Athlon64 is already on 90nm, has yet to release chips past 2.6GHz without a new core stepping and most likely will not release a 3GHz+ part until the end of the year or next year, can we really expect a tri-core Power5 derivative to run at 3.5GHz+?
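For what it's worth, a naive best-case shrink estimate (classic scaling says clock rises roughly with the inverse of feature size, and real chips of that era were already falling short of even that, so treat it as an upper bound; the figures are just the ones quoted above):

    # Best-case clock from a 130nm -> 90nm shrink of a ~2GHz Power5.
    # Classic scaling (frequency ~ 1/feature size) is an optimistic ceiling.
    current_clock_ghz = 2.0
    best_case = current_clock_ghz * (130 / 90)
    print(f"best-case shrunk clock: ~{best_case:.1f}GHz")  # ~2.9GHz

Even the optimistic ceiling lands well under 3.5GHz, before you add a third core.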
2) Cell-architecture chips may/will eventually end up in personal computers:
Why on earth would Sony even bother to undertake such a project? I really do not see PC users jumping all over such a product: zero compatibility with market-dominating Microsoft products, a completely new OS, and very little in the way of speed gains for web browsing or document editing anyway. The only way to push it onto the desktop would be to use Linux, and even then there is no incentive to stop buying a conventional PC.
3) PS3's final implementation details:
Now, I am no expert on chips, but a cursory glance at www.sandpile.org reveals that Athlon64s and P4s are just under a third the size of one PE (?), so I expect Cell to be a big chip. Is it still within the realms of reality that Sony will launch such a chip with a fully fledged BR drive and an NV50? People are also bandying about 24-pipe configurations for the GPUs in both the Microsoft and Sony consoles; is a 24-pipe card really necessary at 720p? I did a few calculations, and peak pixel output is quite high for a 24-pipe card at 500-600MHz (rough numbers below).
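Roughly, the arithmetic looks like this, taking 550MHz as a midpoint of the rumoured clocks, 60fps as an assumed target, and one pixel per pipe per clock as the ideal case:

    # Back-of-envelope fill rate for a hypothetical 24-pipe GPU.
    pipes = 24
    clock_hz = 550e6                      # assumed midpoint of 500-600MHz
    peak_fill = pipes * clock_hz          # ideal: one pixel per pipe per clock
    print(f"peak fill: {peak_fill / 1e9:.1f} Gpixels/s")    # ~13.2

    pixels_per_frame = 1280 * 720         # 720p = 921,600 pixels
    fps = 60                              # assumed target frame rate
    needed = pixels_per_frame * fps
    print(f"720p@60 needs: {needed / 1e6:.1f} Mpixels/s")   # ~55.3
    print(f"headroom: ~{peak_fill / needed:.0f}x overdraw") # ~239x

In other words, a 24-pipe part at those clocks has a budget of a couple of hundred pixels of overdraw/shading work per displayed pixel at 720p, which is why the peak output strikes me as quite high.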
Is it possible that Sony will simply remove the vertex shaders from Nvidia's "next-generation GPU" and have the PE do the vertex shading, or at least remove some of them? Is it worthwhile, how much would be saved, or is it just plain stupid from a developer's point of view? (A rough sanity check on the numbers follows below.) Also, could any of the resident developers give a ballpark figure on the percentage performance drop from the lack of out-of-order execution on the PU?
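To gauge whether a PE could plausibly absorb the vertex work, here is a very rough sketch; the per-vertex flop cost, the vertex rate, and the PE GFLOPS figure are all my own assumptions/rumoured numbers, not confirmed specs:

    # Could a PE absorb vertex shading? Every figure here is an assumption.
    flops_per_vertex = 100      # assumed transform+lighting-class shader cost
    vertices_per_sec = 100e6    # assumed: roughly 1.7M polys/frame at 60fps
    pe_gflops = 250             # rumoured peak for one PE; an upper bound

    needed_gflops = flops_per_vertex * vertices_per_sec / 1e9  # ~10 GFLOPS
    print(f"vertex work: ~{needed_gflops:.0f} GFLOPS, "
          f"{needed_gflops / pe_gflops:.0%} of the rumoured PE peak")

On paper the load looks small next to the rumoured peak, but peak FLOPS and sustained shader throughput are very different things, which is exactly why I would like a developer's take.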
Thanks.