Well, one guy at IGN made a pretty good argument about multi-processor CPUs... I won't mention his name, but here's his argument:
Until the cost of accessing data is zero, or very close to it, parallel processing will ALWAYS suffer with certain types of data. It's not a problem with coding, and it isn't a matter of "thinking outside the box"; it is simply a matter of dependencies.
If one piece of data depends on another, it cannot be computed until that other piece has been computed. Period. And since moving information is not free, one processor CANNOT "know" what another processor is "thinking" until that processor is done.
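To put his dependency point in concrete terms, here's a minimal C++ sketch (my own illustration, not his): the first loop is a dependency chain that has to run in order, while the second has no dependencies at all and could be split across as many processors as you like.

```cpp
#include <cstddef>
#include <vector>

// Dependency chain: a[i] needs the finished a[i-1], so the iterations
// cannot simply be handed to different processors.
void prefix_sum(std::vector<float>& a) {
    for (std::size_t i = 1; i < a.size(); ++i)
        a[i] += a[i - 1];
}

// No dependencies: every iteration stands alone, so each one could run
// on a different processor with zero communication.
void scale(std::vector<float>& a, float k) {
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] *= k;
}
```

The second loop is the kind of work a parallel machine eats up; the first is the kind that leaves most of its processors sitting idle.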
FLOPS are a measure of floating-point math throughput. That kind of math is common in all applications, but even more so in rendering graphics, synthesizing music, physics, and encryption.
Since graphics are rendered by computing the color of individual pixels, and there are thousands to millions of pixels, rendering is a highly parallel operation: the outcome of one pixel does not depend on the outcome of another, so each individual pixel can be its own little process.
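That per-pixel independence is easy to show. Here's a hedged C++ sketch (mine, nothing to do with CELL or any real renderer) where each thread shades its own band of rows; the hypothetical shade() just stands in for whatever per-pixel work an engine would actually do.

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder per-pixel work; a real renderer would do far more here.
std::uint32_t shade(int x, int y) { return std::uint32_t(x ^ y); }

// Split the image into bands of rows, one band per thread.
// No pixel ever reads another pixel's result, so the threads never wait on each other.
void render(std::vector<std::uint32_t>& image, int width, int height, int num_threads) {
    std::vector<std::thread> workers;
    int rows_per_thread = (height + num_threads - 1) / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        int y0 = t * rows_per_thread;
        int y1 = std::min(height, y0 + rows_per_thread);
        workers.emplace_back([&image, width, y0, y1] {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = shade(x, y);
        });
    }
    for (auto& w : workers) w.join();
}
```

Call it with an image buffer already sized to width * height and it scales about as well as the hardware allows, because there is nothing to synchronize until the final join.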
Physics, too, is a highly parallel operation at its most basic level. For instance (not possible today on a large scale), individual atoms could be individual processes: their actions are influenced by outside sources, so the overarching process could farm out little chunks, get the data back, see how each responded, and so on. At a larger scale, physics becomes very dependent, and it would be the programmer's responsibility to code it efficiently.
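The farm-out-and-gather shape he describes looks roughly like this in C++ (again my own toy, with a gravity-only "force" standing in for real physics): the per-particle updates inside one step are independent, but the join at the end is exactly the dependency he's talking about, because step N+1 can't begin until every worker has finished step N.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float x, v; };

// Advance the whole world by one time step, farming particles out in chunks.
void step_world(std::vector<Particle>& world, float dt, int num_threads) {
    std::vector<std::thread> workers;
    std::size_t chunk = (world.size() + num_threads - 1) / num_threads;
    for (int t = 0; t < num_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = std::min(world.size(), begin + chunk);
        workers.emplace_back([&world, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i) {
                world[i].v += -9.8f * dt;      // toy force: gravity only
                world[i].x += world[i].v * dt;
            }
        });
    }
    for (auto& w : workers) w.join();          // gather everything before the next step
}
```

Inside a step the machine is busy; between steps it is forced back into line, which is the time-is-linear point he comes back to further down.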
However, a game engine as a whole is not very parallel at all. While many individual parts can be "farmed" out, the main thread still has to run linearly.
Ultimately the question becomes: is an individual entity in the CELL system powerful enough to run a game engine's main thread and match or EXCEED the competitors at that task?
Imagine a system where thousands of individual entities are thinking simultaneously, but only one of them is in control of the whole thing. Now imagine a hundred of them saying "OK, I'm done with the graphics part," another hundred saying "OK, I'm done with the physics part," another hundred "music's done," etc. If the one entity keeping the whole thing in check isn't powerful enough, he's gonna say "whoa, whoa, slow down there, stop talking all at once," and BAM, there's your bottleneck.
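A quick back-of-the-envelope sketch of that scenario (the numbers are made up, purely to show the shape of the problem): the parallel chunks all finish together, but the one controlling entity still has to accept every result one at a time, so its merging cost, not the workers' speed, ends up setting the frame time.

```cpp
#include <cstdio>

// Toy model of the "everyone reports to one boss" bottleneck described above.
int main() {
    const int    workers  = 300;    // graphics done, physics done, music done, ...
    const double chunk_ms = 2.0;    // each worker's parallel chunk of the frame
    const double merge_ms = 0.05;   // coordinator's cost to accept ONE report

    double parallel_part = chunk_ms;            // all chunks run at the same time
    double serial_part   = workers * merge_ms;  // reports are handled one by one
    double frame_ms      = parallel_part + serial_part;

    std::printf("parallel: %.1f ms, coordinator: %.1f ms, frame: %.1f ms\n",
                parallel_part, serial_part, frame_ms);
    // 2.0 ms of parallel work ends up waiting behind 15.0 ms of serial merging.
    return 0;
}
```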
Now, mind you, I'm not saying the PS3 sucks. What I am saying is that it is IMPOSSIBLE to make a parallel processing system that doesn't suffer from these kinds of bottlenecks when the data set is highly dependent. It doesn't matter how much money you throw at it, and it doesn't matter how smart your engineers are; what DOES matter is the data set, and how well that data set matches your processing abilities and layout.
Deep Blue is an example of what Cell is. Deep Blue has plug-and-play processor upgrading just like Cell, and Cell is likely the bastard child of Deep Blue in many ways. However, Deep Blue is not the uber-computer for every application. Near-peak performance is ONLY hit with a VERY limited set of applications.
Now, the peak performance of Cell is VERY high. Extremely impressive, and very exciting, but it will NOT hit peak performance MOST of the time. On average, a multiprocessor system operates at something like 40% efficiency, because only specific kinds of data benefit from parallel processing.
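For anyone who wants the math behind why efficiency falls off, the usual way to put it is Amdahl's law (my addition, not part of his post): if a fraction p of the work parallelizes perfectly across n processors, the overall speedup is 1 / ((1 - p) + p / n). Even a workload that is 90% parallel gets nowhere near peak on 8 processors.

```cpp
#include <cstdio>

// Amdahl's-law estimate: speedup = 1 / ((1 - p) + p / n),
// where p is the parallel fraction and n is the processor count.
double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    // 90% parallel work on 8 processors gives about 4.7x instead of 8x,
    // i.e. roughly 59% efficiency -- the same ballpark as the figure above.
    std::printf("%.2f\n", speedup(0.90, 8));
    return 0;
}
```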
See, this is why I'm so confused about CELL: a parallel system is natively VERY good at handling a task like graphics, but now they are using nVidia to handle this? Like others have speculated, I could see nVidia being on board to help with the APIs, but IF nVidia is providing the GPU, then it stands to reason that CELL is NOT as powerful as we are led to believe. If the single part of a game that benefits the most from parallelism is being offloaded, then CELL cannot be the end-all, be-all chip that the claims make it out to be.
Yes, current programming paradigms have revolved around linearity, and yes, CELL will require many processes to be rethought. But the fact remains that TIME is linear: anything that happens in the NEXT time frame depends on what happened in THIS time frame. Fundamentally, the basis of everything in this universe is linear. There will be tasks that CANNOT be split up because of dependencies, and anything with a lot of dependencies will be the bottleneck in a parallel processing system. This is as fundamental as the four natural laws.
However, IBM is trying to convince the world that they have somehow "beaten" this problem, and that CELL will do EVERYTHING optimally. But hey, they are the ones that have to convince millions of programmers to write software for the thing. It's a sales pitch.. just like any other.
I can see his point... but I would like to see what plans IBM has for this.
-Josh378