MiniFAQ : How CELL works.

Q1 : What is CELL?
A1 : CELL is the first consumer-electronics implementation of IBM's CELLULAR COMPUTING architecture, best known from the Blue Gene supercomputer series. The goal of CELLULAR COMPUTING is to popularize message-passing-based massively parallel computing by offering a standardized software platform on which developers can build their applications. SCEI's goal with CELL is to provide a consumer-electronics platform on which various types of devices can interact with each other.

Q2 : How does CELL work?
A2 : To understand how CELL works, you must first look at Blue Gene/L, the second version of the Blue Gene computer. Unlike the much-hyped petaflop Blue Gene/P, Blue Gene/L is built around roughly 65,000 standard PowerPC nodes and has a claimed peak performance of 130 Teraflops.

Each Blue Gene/L node is a dual-core PPC ASIC (two-way SMP), with one core designated as the I/O engine and the other as the compute engine. The I/O engine runs a Linux derivative and serves the compute engine by providing all the I/O and message-passing services expected of Linux. The compute engine runs a very simple microkernel (not Linux) designed for the sole purpose of executing a single application process.

Like the Blue Gene/L node that inspired it, each CELL core is built around a single PPC core serving as the I/O engine, while eight VUs handle the computational tasks dispatched from the Linux kernel. It is this separation of kernel and application process that sums up the CELLULAR COMPUTING design philosophy.

Q3 : What OS does CELL run?
A3 : CELL runs Linux. The development environment is a mix of old and new: all the OS services and interfaces expected from Linux are present, but developers are expected to master message-passing programming as well as VU assembly coding. The primary differences from standard Linux are:

1. Separation of the kernel from user processes.
2. An MPI-like message-passing API (sketched below).
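
To give a feel for point 2, here is a minimal sketch of MPI-style send/receive between a coordinating core and a compute element. It is written in Go purely for illustration; the real CELL API has not been detailed, and every name here (Message, the rank-indexed inboxes) is a hypothetical stand-in, not anything from an actual SDK.

[code]
package main

import "fmt"

// Message is a hypothetical envelope: a sender rank plus a payload,
// loosely mirroring MPI's (source, buffer) pair.
type Message struct {
	From    int
	Payload []float32
}

func main() {
	// One channel per "rank" stands in for the hardware message queues.
	inbox := []chan Message{make(chan Message, 1), make(chan Message, 1)}

	// Rank 1 plays the compute element: receive, transform, send back.
	go func() {
		msg := <-inbox[1] // blocking receive, like MPI_Recv
		for i := range msg.Payload {
			msg.Payload[i] *= 2 // stand-in for real VU work
		}
		inbox[0] <- Message{From: 1, Payload: msg.Payload} // like MPI_Send
	}()

	// Rank 0 plays the PPC coordinator.
	inbox[1] <- Message{From: 0, Payload: []float32{1, 2, 3}}
	reply := <-inbox[0]
	fmt.Println("rank 0 got", reply.Payload, "from rank", reply.From)
}
[/code]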

Q4 : What kind of parallelization support does the CELL environment provide?
A4 : None. Developers are expected to manually parallelize their code using message passing. It is pretty much a "Just do it or go to Microsoft if you don't like us" kind of deal.
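
To make "manually parallelize" concrete: the developer decides how to split the data, hands each compute element its share, and gathers the partial results by hand. A sketch of that bookkeeping, assuming eight VU-like workers (again in Go, again purely illustrative):

[code]
package main

import "fmt"

// parallelSum splits xs across nWorkers by hand and gathers partial
// sums over a channel -- exactly the bookkeeping CELL leaves to the dev.
func parallelSum(xs []float64, nWorkers int) float64 {
	partial := make(chan float64, nWorkers)
	chunk := (len(xs) + nWorkers - 1) / nWorkers
	for w := 0; w < nWorkers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(xs) {
			lo = len(xs)
		}
		if hi > len(xs) {
			hi = len(xs)
		}
		go func(slice []float64) {
			s := 0.0
			for _, x := range slice {
				s += x
			}
			partial <- s // send the partial result back
		}(xs[lo:hi])
	}
	total := 0.0
	for w := 0; w < nWorkers; w++ {
		total += <-partial
	}
	return total
}

func main() {
	xs := make([]float64, 1000)
	for i := range xs {
		xs[i] = 1.0
	}
	fmt.Println(parallelSum(xs, 8)) // 8 workers, like 8 VUs
}
[/code]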

Q5 : Will CELL really be a quantum leap in graphics quality?
A5 : Hard to say. One of the most accurate indicators of performance is memory capacity. The PSX2 saw a 16-fold jump over the machine it replaced, but PSX3 will see a memory-capacity jump of only 8 times over PSX2. You be the judge.
 
DeadmeatGA said:
Q5 : Will CELL really be a quantum leap in graphics quality?
A5 : Hard to say. One of the most accurate indicators of performance is memory capacity. The PSX2 saw a 16-fold jump over the machine it replaced, but PSX3 will see a memory-capacity jump of only 8 times over PSX2. You be the judge.

Yes, because I've noticed my 64MB GeForce 2 MX 400 runs about even with a 64MB GeForce4 Ti4200...
 
cthellis42 said:
DeadmeatGA said:
Q5 : Will CELL really be a quantum leap in graphics quality?
A5 : Hard to say. One of the most accurate indicators of performance is memory capacity. The PSX2 saw a 16-fold jump over the machine it replaced, but PSX3 will see a memory-capacity jump of only 8 times over PSX2. You be the judge.

Yes, because I've noticed my 64MB GeForce 2 MX 400 runs about even with a 64MB GeForce4 Ti4200...

My 128MB Ti 4200 runs on average slower than my 64MB Ti 4200 :) but move to a game that uses more than 64MB and see what happens. So I see some truth in his post, though not a lot. Of course, if it had 16x the RAM capacity it would be better than if it only had 8x.
 
DeadmeatGA said:
Q3 : What OS does CELL run?
A3 : CELL runs Linux. The development environment is a mix of old and new: all the OS services and interfaces expected from Linux are present, but developers are expected to master message-passing programming as well as VU assembly coding. The primary differences from standard Linux are:

1. Separation of the kernel from user processes.
2. An MPI-like message-passing API.

There is a fair bit of research being done in this area, both in the open-source and closed-source communities. IBM itself is doing a lot of work here. I believe a lot of the design patterns are being modeled after various biological models found in nature.

DeadmeatGA said:
Q4 : What kind of parallelization support does the CELL environment provide?
A4 : None. Developers are expected to manually parallelize their code using message passing. It is pretty much a "Just do it or go to Microsoft if you don't like us" kind of deal.

This is what I'm having issues with. Why are people to this day stuck in languages/paradigms that are 30 or so years old? Could we at least advance a decade? I've had some exposure to Erlang. Quite nice, really. I personally don't see why some modifications to Erlang couldn't yield a very viable language that supports concurrency at the kernel-language level, rather than wasting time with these retarded hacks that are done in C, C++, Java... and their ilk. There is a lot of computing power in Cell, and some of it could be traded off for the significant gains such a language would bring.
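
To illustrate the point: in a language where concurrency is a kernel-language construct, spawning a thousand lightweight processes with mailboxes is a few lines, not a threading-library exercise. Here is a rough rendering of the Erlang-style spawn/send/receive model (written in Go as a stand-in; Erlang's actual syntax and semantics differ, and nothing here is CELL-specific):

[code]
package main

import "fmt"

// spawn starts a lightweight "process" with its own mailbox,
// roughly like Erlang's spawn returning a pid you can send to.
func spawn(handler func(msg int) int, replies chan<- int) chan<- int {
	mailbox := make(chan int, 16)
	go func() {
		for msg := range mailbox { // receive loop, like Erlang's receive
			replies <- handler(msg)
		}
	}()
	return mailbox
}

func main() {
	replies := make(chan int)
	// Spawning many concurrent workers is cheap when the language
	// itself owns scheduling and communication.
	workers := make([]chan<- int, 1000)
	for i := range workers {
		workers[i] = spawn(func(msg int) int { return msg * msg }, replies)
	}
	for i, w := range workers {
		w <- i // send a message to each process's mailbox
	}
	sum := 0
	for range workers {
		sum += <-replies
	}
	fmt.Println(sum) // sum of squares 0..999
}
[/code]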

DeadmeatGA said:
Q5 : Will CELL really be a quantum leap in graphics quality?
A5 : Hard to say. One of the most accurate indicators of performance is memory capacity. The PSX2 saw a 16-fold jump over the machine it replaced, but PSX3 will see a memory-capacity jump of only 8 times over PSX2. You be the judge.

I can agree with you with respect to man-made content, to a degree. Except that procedural content will make up a significant portion of the content, and that requires far less persistent storage, so it's hard to say how things will work out. Perhaps the savings will be enough to "effectively" double that 8-times figure, giving us a significant boost.
 
Message passing doesn't need to be part of a language to work well when bolted on ... and imperative programming isn't going to be replaced by functional programming :)
 
I like Occam, it is one of the few human-programmable languages which ensures no aliasing occurs ... although that of course contributed to its downfall in the past (sometimes you need aliasing; you could use unsafe code with pointers ... but until recently there was no clean way to construct things like linked lists in Occam). I don't think it would be popular with developers though :)

Constructing the message passing with CSP in mind would be nice though, although again ... dunno if you could convince developers. It would make it easy for Sony to license existing tools for program analysis (to detect deadlocks/race conditions/etc.).
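
For readers unfamiliar with CSP: the discipline is unbuffered, rendezvous-style channels, where a send blocks until the matching receive, so every interaction is an explicit, analyzable event. A minimal sketch of that style (in Go, whose channels happen to be CSP-inspired; this reflects no actual CELL or Sony tooling):

[code]
package main

import "fmt"

func main() {
	// Unbuffered channels give CSP's rendezvous semantics: each send
	// blocks until the peer is ready to receive, and vice versa.
	requests := make(chan int)
	results := make(chan int)

	go func() {
		for {
			n := <-requests  // rendezvous with the sender
			results <- n * n // rendezvous with the receiver
		}
	}()

	for i := 0; i < 3; i++ {
		requests <- i
		fmt.Println(<-results)
	}
	// Because every interaction is a visible channel event, tools can
	// model the program as a CSP process and check for deadlock.
}
[/code]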
 
MfA said:
Constructing the message passing with CSP in mind would be nice though, although again ... dunno if you could convince developers. It would make it easy for Sony to license existing tools for program analysis (to detect deadlocks/race conditions/etc.).

That's just a matter of time. When they've spent six months chasing down race conditions on a 32-APU system, they'll embrace CSP-style programming with zeal. :)

And there are already pretty good CSP libraries for both Java and C++.

Cheers
Gubbi
 
Right, question from a non-programmer here. Hopefully some people will be able to answer me...

The situation is, Cell will have multiple cores which can be used, much like PS2's VUs, to do whatever the developer wants. That is fair enough and flexible enough.
The question is: is it so hard to design libraries to automatically synchronise the cores? Or at least not leave it to the devs to MANUALLY handle synchronisation...
At the end of the day, the goal of the programmer is to have the multiple cores work in parallel, each doing different things, right? The developers would enjoy the flexibility of the architecture but still wouldn't have to worry about manually handling core synchronisation, which already sounds dreadful to someone who will never actually do it (see: ME). Something like the sketch below is what I have in mind.
Anyone get my point? Or am I babbling as usual?
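
Roughly the kind of thing being asked for: a hypothetical library call where the split across cores and the joins are handled by the library, not the developer. (Sketch in Go purely for illustration; ParallelMap and everything else in it is made up.)

[code]
package main

import (
	"fmt"
	"sync"
)

// ParallelMap is the kind of hypothetical helper being asked about:
// it fans work out across nCores and joins them, so the developer
// never touches the synchronisation machinery directly.
func ParallelMap(xs []float64, nCores int, f func(float64) float64) []float64 {
	out := make([]float64, len(xs))
	var wg sync.WaitGroup
	chunk := (len(xs) + nCores - 1) / nCores
	for c := 0; c < nCores; c++ {
		lo, hi := c*chunk, (c+1)*chunk
		if lo > len(xs) {
			lo = len(xs)
		}
		if hi > len(xs) {
			hi = len(xs)
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				out[i] = f(xs[i]) // each core owns a disjoint slice
			}
		}(lo, hi)
	}
	wg.Wait() // the library, not the dev, does the join
	return out
}

func main() {
	xs := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	fmt.Println(ParallelMap(xs, 8, func(x float64) float64 { return x * x }))
}
[/code]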
 
I thought the fact that procedurally generated material will be used more extensively in the next generation meant that the need for humongous amounts of memory won't be there...

Of course we will need enough memory, but remember that it will never be enough anyway, whatever they put in. That's the way consoles have always been, and I'm not sure the trend will stop just like that.

I think the computational power will be there to use more and more procedurally generated textures, shaders, and environments, therefore dropping considerably the need for huge amounts of memory.
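
The trade being described is storage for compute: a procedural texture is just a function of its coordinates and a seed, so it costs a few bytes of code instead of megabytes of stored pixels. A toy illustration (in Go; the formula is an arbitrary hash-like pattern, nothing console-specific):

[code]
package main

import (
	"fmt"
	"math"
)

// proceduralTexel computes a texel on the fly from (u, v) and a seed.
// The whole "texture" is this formula -- a few bytes of code standing
// in for what would otherwise be a stored bitmap.
func proceduralTexel(u, v, seed float64) float64 {
	return 0.5 + 0.5*math.Sin(seed*u*12.9898+v*78.233)
}

func main() {
	// Sample a 4x4 corner of an effectively unbounded texture; no pixel
	// data is ever stored, so memory cost is independent of resolution.
	for y := 0; y < 4; y++ {
		for x := 0; x < 4; x++ {
			u, v := float64(x)/256, float64(y)/256
			fmt.Printf("%.2f ", proceduralTexel(u, v, 1.0))
		}
		fmt.Println()
	}
}
[/code]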

But maybe that's just me.
 
MfA said:
Message passing doesn't need to be part of a language to work well when bolted on ... and imperative programming isn't going to be replaced by functional programming :)

I think I'm starting to turn into Paul Graham. =(
 
DeadmeatGA said:
Q5 : Will CELL really be a quantum leap in graphics quality?
A5 : Hard to say. One of the most accurate indicators of performance is memory capacity. The PSX2 saw a 16-fold jump over the machine it replaced, but PSX3 will see a memory-capacity jump of only 8 times over PSX2. You be the judge.

Isn't the PS2 like ~70 times more powerful? The PSone could do 300,000 textured polygons/s and the PS2 can do, let's say, 15-20 million polygons or more per second, which works out to 20,000,000 / 300,000 ≈ 67 times. And I haven't taken into account special effects like bilinear filtering, specular lighting, ... so it should be even more powerful!
 