Y'know, I just got to thinking that all this concern over how to make multithreaded game software may be clouding the real matter here. It makes sense that a transparent parallel software solution would be the obvious counterpart to a parallel hardware solution. That's the holy grail of the computing future, and all concerned are watching Sony/IBM/Toshiba to see if they can actually pull this off amid all the hubbub.
However, was it ever explicitly claimed that this is exactly what they would do? Is it actually known that this is what will be required to extract any kind of performance out of a BE computer? All of us have assumed that it is, but it is still an assumption.
Let's step back a bit. How much software parallelism is really required to make a game work on a BE computer? Consider that this is a game. What do games do? A game is essentially a serial logic thread that prescribes challenging/entertaining stimuli and responses to user input. It could be all text interaction telling the player what is going on, but that would get boring and tedious. So we have graphics and sound, and in a different sense physics and AI to make those two behave convincingly. Those are the truly processing-hungry operations in modern game code, not the serial logic of the game itself. Freed from that burden, I wouldn't be surprised if the serial logic thread could run fast enough to update millions of times a second on even a single, most mundane modern CPU. If it needs to run on a single processing unit and is easiest to implement on a single processing unit, so be it. It's well taken care of. It's the calls to the graphics, sound, physics, and AI threads that will be done in parallel (to each other, not necessarily in parallel with the game logic; obviously, the game logic has to be the sequential driving component for the other stuff).
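To make that structure concrete, here's a minimal sketch of the idea: one serial game-logic loop that fans the heavy subsystems out in parallel to each other, then joins before the next logic step. All of the names here (the subsystem functions, the frame loop) are purely illustrative, not any real engine's API.

```python
# Hypothetical sketch: a serial game-logic loop driving parallel subsystems.
from concurrent.futures import ThreadPoolExecutor

# Each subsystem is independent of the others within a frame.
def update_graphics(state):
    return ("graphics", state["frame"])

def update_sound(state):
    return ("sound", state["frame"])

def update_physics(state):
    return ("physics", state["frame"])

def update_ai(state):
    return ("ai", state["frame"])

def run_frames(n_frames):
    log = []
    state = {"frame": 0}
    with ThreadPoolExecutor(max_workers=4) as pool:
        for frame in range(n_frames):
            # The serial game logic: the single sequential thread that
            # reads input, applies the rules, and updates the world state.
            state["frame"] = frame
            # Fan the heavy subsystems out in parallel *to each other*...
            futures = [pool.submit(f, dict(state))
                       for f in (update_graphics, update_sound,
                                 update_physics, update_ai)]
            # ...and join before the next logic step, since the logic
            # remains the sequential driver of everything else.
            log.append(sorted(fut.result() for fut in futures))
    return log
```

The point of the sketch is just the shape: the logic loop stays single-threaded and dead simple, while the four expensive subsystems run concurrently within each frame.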
That said, isn't it fairly easy to imagine that the graphics, sound, physics, and AI code can be expressed to exploit parallelism rather intuitively? Each one is a conceptually simple operation: it just requires the quick processing of a huge amount of data, resulting in a raster output and an audio signal (I guess you can add motion/rumble feedback as well). We're talking about manipulating brute amounts of vertices, procedural texture engine data, and per-pixel processing. All of it is born to be parallel, just as you'd expect of the stuff that normally happens on a GPU.
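Here's why that kind of work splits up so naturally: the same transform is applied independently to every vertex (or pixel), so the data can be chopped into chunks and handed to any number of processing units. This is a toy sketch, not how a real GPU or the BE would actually do it; the function names and the chunking scheme are assumptions for illustration.

```python
# Illustrative data-parallel sketch: one independent operation per vertex.
from concurrent.futures import ThreadPoolExecutor

def transform(vertex, scale):
    # The same simple operation on each vertex, with no dependence on
    # any other vertex -- which is exactly what makes it parallelizable.
    x, y, z = vertex
    return (x * scale, y * scale, z * scale)

def transform_all(vertices, scale, n_units=4):
    # Split the vertex array into roughly equal chunks, one per unit.
    chunk = max(1, len(vertices) // n_units)
    chunks = [vertices[i:i + chunk] for i in range(0, len(vertices), chunk)]
    with ThreadPoolExecutor(max_workers=n_units) as pool:
        results = pool.map(lambda c: [transform(v, scale) for v in c], chunks)
    # Reassemble in order; each chunk was processed independently.
    return [v for part in results for v in part]
```

Double the number of processing units and (in principle) you halve the time, because no chunk ever needs to know what another chunk is doing.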
So the work of a game developer can essentially remain unchanged (though tinkering closer to the metal is still a possibility, if so desired). You write your serial game thread just as always, and then when it comes to ordering around legions of polys and pixels, you're only talking to a software driver abstraction. That abstraction just happens to orchestrate processing units on the "main CPU" (if that term even means anything anymore) instead of a graphics card.
Please do note that in saying all of this, I do not wish to undermine the continued pursuit of the holy grail of pervasive parallel software code. It's still a holy grail that could indeed open new possibilities on something like the BE. I'm just saying that in the grand scheme of things, it is not actually a pivotal component in enabling things (namely, games) to be done on a BE.
The only question that need be asked is whether Sony/IBM/Toshiba can come up with what is essentially a software graphics driver. To that, I can only say (from my layman's observation of CG technology), "Why is that even a question?"
MS has had one for years now. It's called DirectX.
...2005/2006 can't come soon enough! BRING IT, PS3! BRING IT!