FatherJohn
Newcomer
For the PS3 Sony seems to have chosen the risky approach of a revolutionary new architecture. (From what I can tell it's 4 CPUs-with-embedded-RAM on one chip, plus 4 GPUs-with-embedded-RAM on a second chip, connected by a high-speed bus.)
The PS3 Cell architecture seems to be using "underpants gnome" logic:
1) Put a lot of parallel hardware into a box.
2) ???
3) Profit!
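To make step 2 a bit more concrete, here's a minimal sketch (plain C++, nothing Cell-specific, and the 128 KB local-store size is pure guesswork based on the rumored embedded RAM) of the kind of restructuring that cores with only small local memories tend to force on you: code written for one big flat memory has to be rewritten to stage chunks in and out explicitly before any of that parallel hardware pays off.

```cpp
// Hypothetical sketch: the "easy" flat-memory loop vs. the same work staged
// through a small local buffer, the way a core with only embedded RAM would
// have to do it. All sizes are invented for illustration.
#include <algorithm>
#include <cstddef>
#include <vector>

constexpr std::size_t kLocalStoreBytes = 128 * 1024;   // guess at per-core embedded RAM
constexpr std::size_t kChunk = kLocalStoreBytes / sizeof(float) / 2;

// What you'd write for one big, uniformly addressable memory.
void scale_flat(std::vector<float>& data, float s) {
    for (float& x : data) x *= s;
}

// The same work, explicitly staged: copy a chunk in, compute, copy it back.
void scale_staged(std::vector<float>& data, float s) {
    std::vector<float> local(kChunk);                         // stands in for on-chip RAM
    for (std::size_t base = 0; base < data.size(); base += kChunk) {
        const std::size_t n = std::min(kChunk, data.size() - base);
        std::copy_n(data.begin() + base, n, local.begin());   // "DMA in"
        for (std::size_t i = 0; i < n; ++i) local[i] *= s;    // compute on the local copy
        std::copy_n(local.begin(), n, data.begin() + base);   // "DMA out"
    }
}

int main() {
    std::vector<float> data(1 << 20, 1.0f);
    scale_flat(data, 2.0f);
    scale_staged(data, 0.5f);
    return 0;
}
```

The arithmetic is identical in both versions; the question mark in step 2 is whether most developers will bother with that restructuring (and the scheduling and synchronization that come with it) for every system in their game.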
If we look at other companies that have attempted revolution in the past, we see that revolutionary architectures often fail to live up to their initial expectations. For example:
- Intel has not had much success with its revolutionary IA-64 architecture. AMD's far more conservative and incremental x64 architecture seems to be much more successful.
- Microsoft's revolutionary Talisman approach to sprite-compositing 2.5D graphics was trounced by other companies' traditional SGI-style z-buffers.
Bob Colwell, one of Intel's CPU architects, gave a talk at Stanford last year:
http://stanford-online.stanford.edu/courses/ee380/040218-ee380-100.asx
In part of his talk he discussed how the Itanium architects convinced Intel's managers to go ahead with the Itanium project: they had an example where one hand-optimized loop of 32 instructions ran much faster on the Itanium architecture than on more traditional architectures. They claimed that this speedup was typical of how the Itanium architecture would perform. (And they also claimed that compilers would be able to generate code as efficient as the hand-optimized code.)
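I don't have the actual 32-instruction loop from the talk, but a hedged sketch of the general idea looks something like this: give an in-order, wide-issue machine several independent dependency chains and it flies, while a compiler has to prove there's no aliasing and guess memory latencies before it can safely do the same transformation on ordinary code.

```cpp
// Illustrative only -- not the loop from the talk. A hand-tuned kernel can
// break a single dependency chain into several independent accumulators,
// which a wide-issue machine can overlap; getting a compiler to do this
// reliably on real-world code is much harder.
#include <cstddef>
#include <cstdio>

// Straightforward version: one accumulator, one long dependency chain.
float dot_plain(const float* a, const float* b, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += a[i] * b[i];
    return sum;
}

// Hand-unrolled version: four independent chains for the hardware to overlap.
float dot_unrolled(const float* a, const float* b, std::size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i + 0] * b[i + 0];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    for (; i < n; ++i) s0 += a[i] * b[i];   // leftover elements
    return (s0 + s1) + (s2 + s3);
}

int main() {
    float a[7] = {1, 2, 3, 4, 5, 6, 7};
    float b[7] = {7, 6, 5, 4, 3, 2, 1};
    std::printf("%f %f\n", dot_plain(a, b, 7), dot_unrolled(a, b, 7));
    return 0;
}
```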
In reality, billions of dollars later, it seems Intel has not been able to capitalize on the theoretical performance potential of the Itanium architecture. They seem to be gradually phasing out Itanium in favor of a clone of the AMD x64 architecture (which is a much more traditional, RISC-like instruction set).
What if something similar happened at Sony? After all, they had to decide on Cell several years ago. They basically had to guess, based on small benchmarks and gut feeling, what the best way to proceed would be.
To give another example, the Microsoft Talisman project was an attempt to get around the slow 3D graphics and limited bus speeds of the day by exploiting frame-to-frame coherence of 3D scenes. They would render each 3D object (tree, character, wall, etc.) into a 2D sprite, and then composite the sprites together to form the full scene. The idea was that you wouldn't have to render in 3D all that often (only when the character turned around). They even had features for warping 2D sprites to get pseudo-3D transformations, so you didn't have to do the full 3D rendering as often. (Actually, I'm not sure if they had any 3D rendering hardware at all -- I guess you were supposed to pre-render your 3D objects from different angles.)
They developed this idea, produced some demo films, and even got some company to make a chip and a board for them. But in the meantime, it turned out that you could put a full 3D renderer into a chip, and put a full frame buffer on the same board, and voila, 3D graphics cards were born.
(It also helped that 3D was much simpler to use, in practice, than this elaborate sliding-and-scaling 2.5D sprite architecture.)
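For anyone curious what "warping a 2D sprite" amounts to, here's a toy version of the compositing step as I understand it; the struct layout, nearest-neighbor sampling, and straight-alpha blending are my own simplifications, not the real Talisman hardware.

```cpp
// Toy sketch of Talisman-style compositing: keep an object's last rendering
// as an RGBA sprite and paste it into the frame under a cheap affine warp
// (translate/scale/shear) instead of re-rendering it in 3D every frame.
// Details here are invented for illustration.
#include <cmath>
#include <cstdint>
#include <vector>

struct Sprite {
    int w, h;
    std::vector<std::uint32_t> rgba;   // 0xAARRGGBB, straight alpha
};

struct Affine { float a, b, c, d, tx, ty; };   // screen = M * sprite + (tx, ty)

// Composite 'spr' into 'dst' (dw x dh) under 'm', using the inverse mapping
// so each destination pixel is visited exactly once.
void composite_warped(const Sprite& spr, const Affine& m,
                      std::vector<std::uint32_t>& dst, int dw, int dh) {
    const float det = m.a * m.d - m.b * m.c;
    if (std::fabs(det) < 1e-6f) return;            // degenerate warp, nothing to draw
    const float ia =  m.d / det, ib = -m.b / det;
    const float ic = -m.c / det, id =  m.a / det;
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            const float sx = ia * (x - m.tx) + ib * (y - m.ty);
            const float sy = ic * (x - m.tx) + id * (y - m.ty);
            const int u = static_cast<int>(std::floor(sx));
            const int v = static_cast<int>(std::floor(sy));
            if (u < 0 || v < 0 || u >= spr.w || v >= spr.h) continue;
            const std::uint32_t src = spr.rgba[v * spr.w + u];
            const std::uint32_t alpha = src >> 24;
            if (alpha == 0) continue;
            std::uint32_t& out = dst[y * dw + x];
            std::uint32_t blended = 0xFF000000u;   // simple "over" blend, per channel
            for (int shift = 0; shift < 24; shift += 8) {
                const std::uint32_t s = (src >> shift) & 0xFF;
                const std::uint32_t d = (out >> shift) & 0xFF;
                blended |= (((s * alpha + d * (255 - alpha)) / 255) & 0xFF) << shift;
            }
            out = blended;
        }
    }
}

int main() {
    Sprite tree{64, 64, std::vector<std::uint32_t>(64 * 64, 0xFF00FF00u)};  // opaque green square
    std::vector<std::uint32_t> frame(320 * 240, 0xFF000000u);               // black frame
    Affine warp{1.2f, 0.1f, 0.0f, 1.2f, 100.0f, 60.0f};                     // scale + shear + move
    composite_warped(tree, warp, frame, 320, 240);
    return 0;
}
```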
Anyway, I am worried that the Cell architecture may suffer the same fate as these other revolutionary approaches.
For what it's worth, I can think of some revolutionary approaches that have worked out well in practice: the personal computer and the RISC processor. But notice that in both these cases the revolution was simplifying things, or starting out small, and growing from there. It's the big-bang revolutions (like Cell) that are the riskiest.
I can also think of many evolutionary approaches that have been wildly successful: MSDOS-Windows-WindowsXP, Unix-Linux, x86, NVIDIA and ATI GPUs, and so on.
If I had been Sony, I think I would have chosen a more evolutionary approach than they seem to have chosen. Perhaps the simplest way to do that would have been to wait a few years to start the PS3 design, rather than starting it right after the PS2 was complete. By starting so early, Sony ran into two problems: 1) they had no idea how technology was going to evolve over the next 5 years, and 2) the only conceivable way of achieving their performance goals was to adopt a revolutionary strategy.
If they had waited, they would have had a much better idea of which technologies were actually going to work, and they would have been able to consider incremental improvements to existing technologies in addition to revolutionary approaches.
Now, from a business standpoint there are several reasons for Sony to choose revolution, even at the risk of producing an inferior product:
1) Since the PS2 is the market leader, developers can't ignore the PS3, no matter how hard it is to program and no matter how poor the performance is. They will have to give it their full support, simply because it's guaranteed to have at least 40% market share next generation.
2) If it's really hard to make a game run well on both the PS3 and the other consoles, developers may scale back their non-PS3 efforts. ("Put the A team on the hard-to-develop-for PS3, put the B team on the easy-to-develop-for consoles.")
One way this strategy can backfire for Sony is if developers figure out an easy way of using (say) 33% of the PS3's potential performance, and don't bother to invest in using the rest. (For example, if they just use one of the Cell cores to run their game, ignoring the other three.) Then we'll end up with a situation similar to today's Xbox, where some console-specific titles look really good, but the bulk of the titles are cross-platform, and since they use the same art, they look only slightly better than their PS2 equivalents.
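For what that "33%" scenario looks like in code, here's a hedged sketch; standard C++ threads stand in for whatever Sony's real toolchain ends up providing, and the four-core split and particle workload are invented for illustration.

```cpp
// The "use one core" path vs. the path that needs real investment: the same
// per-frame work, either run on the main core or fanned out across workers.
// Core count and workload are made up; real Cell code would also have to deal
// with local memories, DMA, and so on.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

void update_range(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
        p[i].z += p[i].vz * dt;
    }
}

// Easy route: everything stays on one core.
void update_single(std::vector<Particle>& p, float dt) {
    update_range(p, 0, p.size(), dt);
}

// Harder route: split the range across several cores and join.
void update_parallel(std::vector<Particle>& p, float dt, unsigned cores = 4) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (p.size() + cores - 1) / cores;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(p.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(update_range, std::ref(p), begin, end, dt);
    }
    for (auto& t : workers) t.join();
}

int main() {
    std::vector<Particle> particles(100000);   // zero-initialized
    update_single(particles, 0.016f);
    update_parallel(particles, 0.016f);
    return 0;
}
```

The parallel version isn't much more code here, but real game systems (AI, physics, animation) don't split this cleanly, which is exactly why a studio on a deadline might stop at the single-core version.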
Ah well, interesting times.
I'm really looking forward to seeing the PS3 technical demos. I expect them to be jewels of hand-coded graphical goodness. The PS2 technical demos were one of my favorite things about last generation.