Kutaragi talks CELL technology @ TGS2002

PC-Engine said:
1. There are many solutions to decrease the delay in a TBR, one of them being a higher clock.
2. The delay in a TBR isn't that high to begin with.

Ok, still doesn't change the fact that all present TBRs that are deemed "acceptable" have a frame of latency, AFAIK. Simon, if you wouldn't mind answering, what's the latency on PowerVR-based cards? IIRC it's always a frame behind.

How does this differ from the Transputer? Good question!! The idea behind GRID processing was certainly used in that concept - forgot about it actually until you mentioned it. Didn't the Transputer use a pretty cool run-time scheduler to keep the distributed processing synced? I remember that part in particular.
 
Simon, very little obviously :) I could lift most of what Mr. Kutaragi says about these "revolutionary" distributed architectures out of my decade-old parallel computing textbook, mainly aimed at teaching occam, including the foresight of the importance of fiber to the concept.

I wonder if he sees the irony in saying that there hasn't been anything new in 50 years, i.e. since the birth of the Von Neumann architecture, and then calling it Cell (many people think the Von Neumann architecture is wrongly attributed; he did do research into computational models based on cellular automata, though :).
 
I think we have to separate the idea of Grid computing from CELL.

CELL is essentially putting multiple processors (and associated memory) on one die.

Using a CELL architecture for the CPU in a gaming console is an interesting idea, and I have nothing against that. Whether such an idea will outperform a traditional architecture is something up for debate, and that's fine.

Similarly, using a CELL architecture for a cluster of server machines hosting a massively multiplayer game is not an uninteresting idea, it has some merit too.

However, Grid (as defined by Globus and the OGSA) is essentially about distributing a computing task using an Internet-scale network to share computing resources among many peer machines.

This, I do have a problem with (in the context of a gaming machine) -- using a Grid to distribute almost-hard realtime game engine tasks which are traditionally performed locally is a very difficult problem, IMHO, and extremely unlikely to have any serious impact on Sony's plans for PS3.

Note that I'm not ruling it out, I'm just saying that I think it's unlikely.

I look at the slides with "HUMAN = 10 PFLOPS" and "WORLD = 100 PFLOPS" on them (doesn't this imply there are only 10 humans in the world? ;)), and I see a little too much science fiction. :D

If you want other examples of Japanese-funded, overly-ambitious, pseudo-computer-science flights-of-fancy, just look at the Japanese Fifth Generation (ICOT) Computing project, whose lofty goal, announced in 1981, was to create viable Turing-level AI by 1991 through the application of inferential logic systems and parallel supercomputing.

After a decade of work by their brightest minds, and billions invested in research and development, it was delayed over and over, and is today widely considered a staggering failure, if anyone even recalls its existence when asked.

I think the reality is that Grid will basically be irrelevant to PS3 (in terms of gaming), and CELL will be a novel and interesting processor architecture that happens to be used in a PS3.
 
Just as a random example, suppose I tell some Grid node to start calculating how this mesh simulating a piece of cloth will behave given a certain amount of wind, I will need results back from it every 30 ms, otherwise, my mesh will not update every frame.

You don't need the result every 30 ms; if your simulation is time-based, you can calculate ahead of your frame, store the results in memory, and just interpolate.
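V3's idea (run the simulation ahead on its own timeline, store results, and have the renderer interpolate between them) can be sketched roughly like this. This is purely my own illustration; the `SimBuffer` class and its methods are hypothetical, not anything from an actual CELL or Grid API:

```python
import bisect

def lerp(a, b, t):
    """Linearly interpolate between two vertex lists."""
    return [x + (y - x) * t for x, y in zip(a, b)]

class SimBuffer:
    """Stores cloth states from the (possibly remote) simulator,
    keyed by simulation time, and samples them per rendered frame."""
    def __init__(self):
        self.times = []    # sim timestamps, ascending
        self.states = []   # cloth vertex positions at each timestamp

    def store(self, t, state):
        # Called whenever a simulation result arrives (can be early).
        self.times.append(t)
        self.states.append(state)

    def sample(self, t):
        # Called once per rendered frame at the frame's time t.
        i = bisect.bisect_right(self.times, t)
        if i == 0:
            return self.states[0]
        if i == len(self.times):
            return self.states[-1]   # buffer ran dry: hold last state
        t0, t1 = self.times[i - 1], self.times[i]
        f = (t - t0) / (t1 - t0)
        return lerp(self.states[i - 1], self.states[i], f)

buf = SimBuffer()
buf.store(0.00, [0.0, 0.0])   # sim result for t=0
buf.store(0.10, [1.0, 2.0])   # sim has already run ahead to t=0.1
print(buf.sample(0.05))       # frame at t=0.05: halfway -> [0.5, 1.0]
```

As long as the simulator stays ahead of the renderer, a late packet just means the renderer keeps interpolating (or holds the last state) instead of stalling.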

If the player's character collides with the cloth, I need to send that node the collision information, have it compute the results of that collision and send it back to me, again, in less than 30 ms.

You lost me here, I don't know why you want to do that.

If the network ever fails to deliver a packet on time, strange results will probably occur. The cloth might interpenetrate the player character. The cloth might stop animating. The cloth might animate with an odd delay. Etc.

The cloth cutting through the player is a problem with collision code in general; you don't need distributed computing for that to happen. Games like VF3 have that happen, and they're not distributed.

The cloth stopping animating, or animating with an odd delay, is the result of your simulation being frame-based, or other things. As for loss of packets, you can put guards in for these things. It's undesirable, but it happens, so you just have to live with it.
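One possible "guard" of the kind V3 mentions, sketched as a hypothetical helper of my own (not anything from a real engine): if the remote result misses the frame deadline, dead-reckon locally from the last two known states instead of stalling the cloth.

```python
def sample_or_extrapolate(history, now, arrived=None):
    """history: list of (time, value) pairs received from the remote node.
    arrived: a fresh (time, value) result, or None if the packet is late."""
    if arrived is not None:
        # Result made the deadline: record it and use it directly.
        history.append(arrived)
        return arrived[1]
    # Packet late or lost: extrapolate from the last known rate of change.
    (t0, v0), (t1, v1) = history[-2], history[-1]
    rate = (v1 - v0) / (t1 - t0)
    return v1 + rate * (now - t1)

h = [(0.0, 0.0), (1.0, 2.0)]
print(sample_or_extrapolate(h, 1.5))              # late packet: guesses 3.0
print(sample_or_extrapolate(h, 1.5, (1.5, 2.8)))  # on time: uses 2.8
```

The guess will be wrong whenever the motion changes (a collision, say), which is exactly the "odd delay / interpenetration" artifact being debated; the guard only keeps things moving, it doesn't make them correct.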
 
V3 said:
Just as a random example, suppose I tell some Grid node to start calculating how this mesh simulating a piece of cloth will behave given a certain amount of wind, I will need results back from it every 30 ms, otherwise, my mesh will not update every frame.

You don't need the result every 30 ms; if your simulation is time-based, you can calculate ahead of your frame, store the results in memory, and just interpolate.

If the player's character collides with the cloth, I need to send that node the collision information, have it compute the results of that collision and send it back to me, again, in less than 30 ms.

You lost me here, I don't know why you want to do that.

The problem with predicting things is that you cannot predict the movements of the player. Assuming a flat surface, the player can move 360° on the surface, plus 180° by jumping; that is a solid angle of 2π steradians. His movements can be at various speeds, and he can take different actions (e.g. attack his enemy or just bump into him). That's a lot of freedom. Unless you want to calculate all these different events beforehand, there's no way you can predict anything.
You just cannot predict something you don't have any information about...
 
PPS. Anyone interested in hosting a few slides from GDC 2002 and posting them? Some people (I think it was DemoCoder) don't think I'm stating correct facts.

Send them my way Vince. I'll upload them to the Basement and post them here.
 
MfA said:
Simon, very little obviously :) I could lift most of what Mr. Kutaragi says about these "revolutionary" distributed architectures out of my decade-old parallel computing textbook, mainly aimed at teaching occam, including the foresight of the importance of fiber to the concept.

Exactly. This is what has me curious: what will the interconnecting fabric be like? Will it be Transputer-like, with an occam-like programming environment? Will it have beefed-up links (à la HyperTransport) with an SMP (threaded) programming model (not strictly SMP, but not strictly NUMA either; Almost Uniform Memory Architecture, AUMA :) )? Or plain old MPP like the Cray T3D/E, with an MPI programming paradigm? Either way, there will have to be some super-duper tools/documentation to school game developers into getting the most out of such an architecture.
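The Transputer/occam-style option Gubbi describes (each processor owns its state and talks to neighbours only over explicit channels) can be caricatured in a few lines. This is a toy sketch of the programming model, obviously not the real CELL fabric; Python queues stand in for occam channels:

```python
import threading
import queue

def cell(inbox, outbox, work):
    """A 'cell' with no shared memory: blocking receive, compute,
    explicit send downstream, occam-style."""
    x = inbox.get()          # like an occam channel read: blocks until data
    outbox.put(work(x))      # like an occam channel write

chan_in, chan_out = queue.Queue(), queue.Queue()
worker = threading.Thread(target=cell,
                          args=(chan_in, chan_out, lambda x: x * 2))
worker.start()
chan_in.put(21)              # send work into the cell's channel
print(chan_out.get())        # receive the result: prints 42
worker.join()
```

The contrast with the SMP/AUMA option is that here nothing is ever shared; synchronization falls out of the channel reads themselves, which is what made the Transputer's run-time scheduling story so clean.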

MfA said:
I wonder if he sees the irony in saying that there hasn't been anything new in 50 years, i.e. since the birth of the Von Neumann architecture, and then calling it Cell (many people think the Von Neumann architecture is wrongly attributed; he did do research into computational models based on cellular automata, though :).

For variable granularities of new :)
I had a professor who stated that there were only two inventions in computer science: one being Turing machines (abstract), the other being the Von Neumann architecture (programs stored as data). Everything after that is just refinement.

Cheers
Gubbi
 
chap said:
Johnny maybe you would like to go here

And tell us what would be a better design for a console releasing in March 2000.
Why would he want to? It appears to contain a lot of misinformation.
esa said:
Ok, these have all been probably posted before, but maybe someone hasn't seen them yet :)

http://www.research.ibm.com/thinkresearch/pages/2001/20010611_cellular.shtml
Deja vu! Apart from the semi-shared memory, it really sounds like an array of transputers. (/me wonders if he still has his OCCAM raytracer code lying around somewhere....).
 
Then feel free to correct all the misinformation, as I stated in the very first post. :oops:
 
MfA,
That's a fair comment, but it seems to me there are too many people on these boards assuming that their interpretation is reality. And to extend the logic one more step: if one's interpretation of the tech implies impossibility, what does that show about the interpretation? Is it more likely that the interpretation (assumption) is wrong, or more likely that Sony, IBM and Toshiba are going to spend millions doing what many here interpret as impossible? As I said, we really know nothing about CELL.
 
Most intriguing however, Kutaragi-san spoke at length regarding Sony's CELL processor technology which is being developed in partnership with IBM and Toshiba. Specifically, he lamented over misleading reports suggesting the technology was being developed for the PlayStation 3. He spoke primarily about the prospects of CELL and its chief application in future networking devices.
Am I the only one who read this bit?

CELL is NOT for PS3. PS3 will NOT be a network-only contraption. Wake up, please...

Sony will obviously use CELL technology in PS3. CELL is for future networking devices. PS3 will be network-ready. I mean, I can see where the misunderstandings start, but in a thread like this they are just accumulating, and getting no-one anywhere. A network-only game console, using shared computing over highspeed connections is a great thing with huge possibilities for great gaming. But start looking for it around 2107, not 2005...

Wake up, get real: forget this nonsense about a CELL-PS3 networking thing.
 
JF_Aidan_Pryde said:
Mfa,
Is it more likely that the interpretation (assumption) is wrong, or more likely that Sony, IBM and Toshiba are going to spend millions doing what many here interpret as impossible? As I said, we really know nothing about CELL.

Fair enough. You give IBM, Sony and Toshiba the benefit of the doubt. There are a lot of us who don't (at least I don't).

Cheers
Gubbi
 
My understanding was that a CELL would be used in each node of a GRID. This GRID was to be made up of millions of PS3s; therefore CELL would have to be inside a PS3.

Since they've already publicly admitted that each CELL wouldn't be able to reach the 1 TFLOPS performance claim, they're redirecting its application toward something else :-?
 
Since they've already publicly admitted that each CELL wouldn't be able to reach the 1 TFLOPS performance claim, they're redirecting its application toward something else

I don't think they've ever said CELL will have 1 TFLOPS performance? They said something vague like PS3 being this-and-that many times more powerful than PS2, but that was not about CELL?
 
Well, Vince was the one who insisted that PS3 would have CELL, and even got into a fight with Qroach about it. Vince even pointed to the Sony GDC 2002 speech to prove PS3 would have CELL. And Vince is the one who kept insisting that PS3 will achieve 1 TFLOPS of performance. If you believe Sony's claim, a thousand times the performance of the Emotion Engine is about 5 TFLOPS. So when Sony failed to achieve it, Vince and friends denied that Sony ever meant to achieve 1 TFLOPS with one chip.
 
Images courtesy of Vince-

Top two links won't work with the image tags but they both load up for me as links.

http://www.gamebasement.com/articles/Cell/'Net 2001 - 2005.jpg

http://www.gamebasement.com/articles/Cell/'Net 2005+.jpg

Autonomic%20Computing.jpg
Higher%20Preformance.jpg
Linux%20and%20PS2.jpg
Preformance%20Scaling.jpg
The%20GRID.jpg
 
Thanks Ben.


You see, Sony does claim its next generation will be 1000 times as powerful. And since the PS2's EE performance is about 5 GFLOPS, that would mean PS3 would need to be 5 TFLOPS. That would require two and a half boards of 64 32-celled chips to achieve. Can you imagine opening a PS3 and seeing 160 chips inside???
 
You see, Sony does claim its next generation will be 1000 times as powerful.

Hmm, I think they claimed they *would like* to make it that much more powerful, as that is something they asked developers about and heard they would like. Not that they will do it for sure.
 