Cell/CPU architectures as a GPU (Frostbite Spinoff)

Arwin

Which is an interesting parallel with the Cell GPU patents. Cell combined with some hardware texturing is theoretically capable of respectable rendering performance, and here we see GPU working as a texturing unit to Cell. I wonder how far explorations into the Cell GPU architecture went in terms of evaluating its performance?

Not far enough to help developers make games using this in 2005 rather than 2011 ...
 
Is Cell really a cost-effective GPU anyway? I guess only repi and company can answer that question ;) Just wondering if, with compute shaders, the usefulness of Cell for GPU functions will go the way of the dodo bird.
 
In PS3, yes... because you can use Cell for anything (graphics, motion input, security, audio, compression, video), and it can hold its own against the GPUs. In future consoles, their lighting slides say they prefer a combined CPU + GPU approach, with full programmability. So the stuff they learned here may still be useful.
 
In PS3, yes... because you can use Cell for anything (graphics, motion input, security, audio, compression, video), and it can hold its own against the GPUs. In future consoles, their lighting slides say they prefer a combined CPU + GPU approach, with full programmability. So the stuff they learned here may still be useful.

Yeah, I meant in the future (PS4, etc). Obviously the PS3 is a fixed platform, so it's just a matter of finding the best ways to use it.
 
Is Cell really a cost-effective GPU anyway? I guess only repi and company can answer that question ;) Just wondering if, with compute shaders, the usefulness of Cell for GPU functions will go the way of the dodo bird.

Looking at the DirectX11 slides, I get the impression that we don't yet fully know how well the GPU performs here, because DirectX11 still has a few major snafus that DICE is apparently working with Microsoft to solve.

The philosophical question remains whether in a console you need a CPU and a GPU, and what level of flexibility you need them for. It's still a very difficult question to answer if you ask me - current GPUs are more and more designed to do everything at once in a game, and so are already pretty much CPU + GPU in a sense. In a situation where you can't afford to waste resources (having the CPU idle, for instance, as I'm sure would happen in many PC games), what kind of CPU and GPU combination is ideal?

What DICE is saying about future consoles is that research into good use of DirectX11 is good preparation for developing for next-gen consoles. That is, by the way, the strongest hint (apart from Microsoft's job ads ;) ) that we're entering the last two to three years before the next console cycle (not that anyone would really be surprised).
 
Looking at the DirectX11 slides, I get the impression that we don't yet fully know how well the GPU performs here, because DirectX11 still has a few major snafus that DICE is apparently working with Microsoft to solve.

The philosophical question remains whether in a console you need a CPU and a GPU [you mean more CPU or more GPU power, right?], and what level of flexibility you need them for. It's still a very difficult question to answer if you ask me - current GPUs are more and more designed to do everything at once in a game, and so are already pretty much CPU + GPU in a sense. In a situation where you can't afford to waste resources (having the CPU idle, for instance, as I'm sure would happen in many PC games), what kind of CPU and GPU combination is ideal?

What DICE is saying about future consoles is that research into good use of DirectX11 is good preparation for developing for next-gen consoles. That is, by the way, the strongest hint (apart from Microsoft's job ads ;) ) that we're entering the last two to three years before the next console cycle (not that anyone would really be surprised).
A bit OT
Indeed, that's a really interesting subject and a bit of a ten-billion-dollar question. GPGPU is still tricky and may not yield great results on that many workloads, and on top of that it seems to require quite a lot of effort from coders (=> costs). I'm not sure the figures are better if we take into account the perf per watt on the same workloads. Maybe Intel were somehow right with Larrabee, but underestimated the amount of specialized hardware they needed to make the thing convenient and perform well.

For the sake of it, I would like to see Cell-like devices attached to a "buffer/RT filler" instead of a full-blown GPU; the idea is that you take the nightmare of texturing latencies away from the design of your vector processors. Actually, I believe tiny off-the-shelf GPUs could be good candidates for the job.
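To make the latency point concrete, here is a minimal double-buffering sketch (plain generic C; the names are mine and memcpy merely stands in for an asynchronous DMA, so treat it as an illustration rather than anything from the slides or an SDK). On an SPU-style core the fetch for block i+1 would be kicked off while block i is being shaded, so the vector units never sit stalled waiting on texture/memory reads.

[code]
/* Double-buffering sketch: overlap fetching the next block with
 * processing the current one. On an SPU the fetch would be an async
 * MFC DMA into local store; memcpy is only a stand-in here.
 * (Hypothetical names; write-back of results is omitted for brevity.) */
#include <stddef.h>
#include <string.h>

#define BLOCK 4096

static void fetch_block(float *local, const float *main_mem, size_t i)
{
    memcpy(local, main_mem + i * BLOCK, BLOCK * sizeof(float)); /* "DMA" */
}

static void process_block(float *local)       /* the actual SIMD shading work */
{
    for (size_t j = 0; j < BLOCK; ++j)
        local[j] *= 2.0f;                      /* placeholder math */
}

void shade_stream(const float *src, size_t nblocks)
{
    float buf[2][BLOCK];                       /* two "local store" buffers */
    if (nblocks == 0)
        return;
    fetch_block(buf[0], src, 0);               /* prime the pipeline */
    for (size_t i = 0; i < nblocks; ++i) {
        if (i + 1 < nblocks)
            fetch_block(buf[(i + 1) & 1], src, i + 1); /* prefetch next block */
        process_block(buf[i & 1]);             /* compute current block */
    }
}
[/code]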
 
You need to explain your math here. Your quote says 3 (then 5) for PC/360 vs 4 PS3 (which came later).

Also, Capcom released at least 2 successful 360 games before they released anything on PS3; they clearly spent much more time developing the 360 versions.
Finally, rendering engines are not drag-and-drop replacements. Other developers need to use them. It's not like all those Capcom games were developed by only 9 devs.

I'll continue this over PM if you don't mind, I really don't care to derail with this subject. :smile:
 
Is Cell really a cost-effective GPU anyway? I guess only repi and company can answer that question ;) Just wondering if, with compute shaders, the usefulness of Cell for GPU functions will go the way of the dodo bird.

Surely not, but it's a case of: this is what you have in the machine, it can't be changed at this point, now use it as best as possible. Which has been the story with PS3 and Cell all along. It's a sunk cost.
 
Surely not, but it's a case of: this is what you have in the machine, it can't be changed at this point, now use it as best as possible. Which has been the story with PS3 and Cell all along. It's a sunk cost.

^_^ Cell was designed to handle graphics jobs as well as general purpose work. According to the IBM folks in an interview, they profiled games when designing Cell (e.g., to determine LocalStore size). Remember they did raytracing on Cell too?

They had a fixed budget and needed to spend it appropriately. You should be able to find Kutaragi's comment that "it would be a waste to use Cell solely for graphics". It's designed to do more. Security is another key responsibility for the CPU.

EDIT: It usually takes a long time to write software, especially software for a new architecture _and_ a "different" domain (graphics on CPU). IMHO, I don't think they are done/solved yet. Much work remains.

If Kutaragi had his way, the next item would be optical interconnect to link multiple Cell (systems) together.
 
Yes, but would that have been possible with the hardware available to PS3's designers?

Of course it would have! Imagine if they had taken all the time, money and research they spent on Cell and handed it over to ATI or Nvidia to develop a new custom GPU tailored to their gaming needs; they may have ended up with something better than the Nvidia 8800 at the time, for all we know. Instead they ended up with SPUs, which were ahead of their time but at graphics tasks have difficulty competing with a years-old GPU design named RSX, which is basically a tweaked Nvidia 6 series design really. Personally, I think they should have gone with a bog-standard CPU and dropped all that time, money and research into the GPU. I think that idea is vindicated by the fact that the SPUs on PS3 are primarily used for graphics work anyway, i.e., they are mostly used as a co-GPU. As a bonus, the entire coding pipeline and tools would have been dramatically simplified.
 
^_^ Cell was designed to handle graphics jobs as well as general purpose work. According to the IBM folks in an interview, they profiled games when designing Cell (e.g., to determine LocalStore size). Remember they did raytracing on Cell too?

They had a fixed budget and needed to spend it appropriately. You should be able to find Kutaragi's comment that "it would be a waste to use Cell solely for graphics". It's designed to do more. Security is another key responsibility for the CPU.

EDIT: It usually takes a long time to write software, especially software for a new architecture _and_ a "different" domain (graphics on CPU). IMHO, I don't think they are done/solved yet. Much work remains.

If Kutaragi had his way, the next item would be optical interconnect to link multiple Cell (systems) together.

Well, this is going to drift way off topic so I'll try not to post on it anymore, but I always looked at it this way: Sony would have been a lot better off putting more money into the GPU than putting Cell in there, where it ended up being shoehorned into graphics tasks it's not really good at (or not as good as a GPU, although much better than a standard CPU) and is difficult to program for, which also ends up causing a lot of the bad PS3 ports, which you must admit are a problem that plagues the machine. It's not a case of Cell vs not Cell; it's of course a case of Cell vs not Cell plus the extra money spent elsewhere (on the GPU, I think), and which of those would have given you more return. It always seemed fairly cut and dried to me; I didn't realize there were still people who thought Cell was a good idea :p (proof seems to be how much of a non-starter it looks like in any future projects from Sony or elsewhere, going by all the PS4 rumors we have heard - I don't even think intelligent PS3 fanboys want to see Cell(s) in PS4!)

But again, it's not all bad. Cell is in PS3, devs would be dumb not to use it to its max against the main competition, the Xbox 360, and it can obviously do a lot. Everything is set in stone now and everybody has to use what they have available cleverly.

Edit: joker454's post above makes my point exactly too.
 
nVidia 8800 consumed too much power 5 years ago. And the development of PS3 started even earlier.

You can like or dislike Cell (I don't really care), but that chip was planned to do graphics work and much more in the PS3 from day one. ^_^ It's not a bad investment considering that it also helped to keep piracy at bay.

Whether the developers can take advantage of it... well, we have seen and may continue to see more effort. Naturally, not everyone will or can use it effectively for various reasons. I just don't buy their products. ^_^

If they continue to forge ahead, the software approach makes innovation more open-ended. E.g., today, GoW3's MLAA implementation is comparable to or better than modern PC GPUs' implementations.


EDIT: To answer joker454's post: the SPUs are used primarily for graphics work, but their full programmability allows for more flexibility than any GPU at that time (or even today). The other way to look at this is... even if Sony had invested the R&D in the GPU years ago, they may (or may not) have come to the conclusion that full programmability is better for a closed box. The DICE slides noted that the next console should have a combined CPU + GPU with full programmability instead of being API-based. Cell and RSX are already heading in that direction this gen (because they can!). I believe a fully programmable setup is by default more time-consuming to work with unless the software library is there. They are building up that software library and expertise now.
 
Cell was fine, it just ended up being paired with a sub-par GPU :)

Also, wasn't the initial PS3 meant to have only Cells in it, doing all the CPU and GPU work?
 
Cell was fine, it just ended up being paired with a sub-par GPU :)

Also, wasn't the initial PS3 meant to have only Cells in it, doing all the CPU and GPU work?

As versatile as Cell is, it can't beat GPUs at embarrassingly parallel jobs, especially straightforward or custom/hw-assisted ones. They probably need both the CPU and the GPU to cover the entire spectrum of work. That is why RSX (or any GPU) complements Cell.
 
Of course it would have! Imagine if they had taken all the time, money and research they spent on Cell and handed it over to ATI or Nvidia to develop a new custom GPU tailored to their gaming needs; they may have ended up with something better than the Nvidia 8800 at the time, for all we know.
Okay, that's a fair argument, but are you suggesting the only reason we got the 7800 at that time instead of the 8800 is that nVidia/ATI didn't have enough money to invest, and that with more money they could have designed something a whole generation ahead of current thinking? I don't believe any research can be accelerated just by investing money. I don't believe that if someone had handed nVidia a trillion dollars in 2000 they'd have produced the GeForce 400 series architecture. There are limits in understanding that only come with experience, and I'm not sure how much better a GPU could have been designed and manufactured for want of more investment.

Instead they ended up with SPUs, which were ahead of their time but at graphics tasks have difficulty competing with a years-old GPU design named RSX, which is basically a tweaked Nvidia 6 series design really. Personally, I think they should have gone with a bog-standard CPU and dropped all that time, money and research into the GPU. I think that idea is vindicated by the fact that the SPUs on PS3 are primarily used for graphics work anyway, i.e., they are mostly used as a co-GPU. As a bonus, the entire coding pipeline and tools would have been dramatically simplified.
Had Cell been given a proper GPU coprocessor, it would have offered a scalable architecture, not as efficient at graphics tasks as a GPU but more versatile - certainly for 2005, when GPU programmability was still pretty constrained.

But that's not a terribly exciting topic, nor one we can ever realise. I'm more interested in what is truly possible in a software renderer feature-wise, and how efficiencies and tricks can overcome power-based solutions. Like MLAA versus MSAA: MLAA has flaws, but extend it to something like SRAA and we're getting similar results to high-sample-count AA at lower workloads. And that's a grand vision for the future, working smarter rather than harder, so instead of rendering tens of millions of polys a frame, just as many as needed for the pixels. Frostbite 2 is showing some of the really clever solutions that map well to Cell, and the question I have is what other solutions are possible and how they compare, working smarter versus harder, to the GPU architectures that were realisable at the time.
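For anyone unfamiliar with how these post-process filters work, here is a very rough sketch of the first pass of an MLAA-style filter - luminance-based edge detection - in plain C. It's only an illustration under my own assumptions (thresholds, names and pixel layout are mine; the pattern-classification and blending passes that do the actual smoothing are omitted), not code from GoW3 or Frostbite.

[code]
/* MLAA-family first pass (sketch): mark pixels whose luminance differs
 * from their left/top neighbours by more than a threshold. Later passes
 * (not shown) classify the edge shapes and blend along them. */
#include <math.h>
#include <stdint.h>

static float luma(uint32_t rgba)              /* 0xRRGGBBAA packing assumed */
{
    float r = ((rgba >> 24) & 0xff) / 255.0f;
    float g = ((rgba >> 16) & 0xff) / 255.0f;
    float b = ((rgba >>  8) & 0xff) / 255.0f;
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}

/* edge mask per pixel: bit 0 = edge to the left neighbour, bit 1 = edge above */
void detect_edges(const uint32_t *img, uint8_t *edges,
                  int w, int h, float thresh)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float l = luma(img[y * w + x]);
            uint8_t e = 0;
            if (x > 0 && fabsf(l - luma(img[y * w + x - 1])) > thresh) e |= 1;
            if (y > 0 && fabsf(l - luma(img[(y - 1) * w + x])) > thresh) e |= 2;
            edges[y * w + x] = e;
        }
}
[/code]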
 
Of course it would have! Imagine if they had taken all the time, money and research they spent on Cell and handed it over to ATI or Nvidia to develop a new custom GPU tailored to their gaming needs; they may have ended up with something better than the Nvidia 8800 at the time, for all we know. Instead they ended up with SPUs, which were ahead of their time but at graphics tasks have difficulty competing with a years-old GPU design named RSX, which is basically a tweaked Nvidia 6 series design really. Personally, I think they should have gone with a bog-standard CPU and dropped all that time, money and research into the GPU. I think that idea is vindicated by the fact that the SPUs on PS3 are primarily used for graphics work anyway, i.e., they are mostly used as a co-GPU. As a bonus, the entire coding pipeline and tools would have been dramatically simplified.

I am still darn sure that Sony originally had something along the lines of the PS2 GS or the PSP GPU in mind for PS3. You know, your basic texture units, setup engine, raster and eDRAM, with Cell doing all the shading. Obviously that design would have been underpowered compared to the 360, because 512 MB of XDR would have been astronomically expensive for one, and they only ended up with one Cell where they had previously hoped to squeeze in more PEs and SPEs. Increasing the LS to 256K and their failure to get to 65nm on schedule pretty much did that.

But anyway, Sony didn't spend that much on the development of Cell; the cost was shared with Toshiba and IBM.

But looking back, they had to include PS2 hardware in PS3 for BC and eventually killed BC altogether; Sony definitely screwed up by going with RSX. They should have delayed PS3 altogether, IMO. At that point the market would probably have waited for them, since PS2 was still going strong.

So how does one turn Cell into a GPU? In their original patent they wanted to rip out 4 SPEs and put texture units, a rasteriser and eDRAM into it, but would that work? Would bolting on just texture units suffice nowadays, letting Cell do software rasterisation?
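As an illustration of what "software raster" means in practice, here is a minimal half-space (edge function) triangle fill in plain C - a hypothetical sketch under my own assumptions, not anyone's actual code. A real SPU rasteriser would SIMD-ify this inner loop, use sub-pixel precision, and work on tiles small enough to live in local store.

[code]
/* Minimal half-space triangle rasteriser: a pixel is inside the triangle
 * when it lies on the same side of all three edges. No clipping,
 * perspective or sub-pixel precision - illustration only. */
#include <stdint.h>

static int edge(int ax, int ay, int bx, int by, int px, int py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

void fill_tri(uint32_t *fb, int w, int h, uint32_t colour,
              int x0, int y0, int x1, int y1, int x2, int y2)
{
    /* bounding box of the triangle, clamped to the framebuffer */
    int minx = x0 < x1 ? (x0 < x2 ? x0 : x2) : (x1 < x2 ? x1 : x2);
    int maxx = x0 > x1 ? (x0 > x2 ? x0 : x2) : (x1 > x2 ? x1 : x2);
    int miny = y0 < y1 ? (y0 < y2 ? y0 : y2) : (y1 < y2 ? y1 : y2);
    int maxy = y0 > y1 ? (y0 > y2 ? y0 : y2) : (y1 > y2 ? y1 : y2);
    if (minx < 0) minx = 0;
    if (miny < 0) miny = 0;
    if (maxx >= w) maxx = w - 1;
    if (maxy >= h) maxy = h - 1;

    for (int y = miny; y <= maxy; ++y)
        for (int x = minx; x <= maxx; ++x) {
            int e0 = edge(x0, y0, x1, y1, x, y);
            int e1 = edge(x1, y1, x2, y2, x, y);
            int e2 = edge(x2, y2, x0, y0, x, y);
            /* accept either winding order */
            if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                (e0 <= 0 && e1 <= 0 && e2 <= 0))
                fb[y * w + x] = colour;
        }
}
[/code]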
 
EDIT: To answer joker454's post: the SPUs are used primarily for graphics work, but their full programmability allows for more flexibility than any GPU at that time (or even today). The other way to look at this is... even if Sony had invested the R&D in the GPU years ago, they may (or may not) have come to the conclusion that full programmability is better for a closed box. The DICE slides noted that the next console should have a combined CPU + GPU with full programmability instead of being API-based. Cell and RSX are already heading in that direction this gen (because they can!). I believe a fully programmable setup is by default more time-consuming to work with unless the software library is there. They are building up that software library and expertise now.

Well, the heart of the PlayStation family is actually the GPU anyway. I mean, with PS2, Sony actually made the GS before the EE. The GS was pretty much done before they contacted Toshiba to work on the EE. That's probably why the GS was so outdated by the time PS2 was released. So I wouldn't be surprised if Sony had their own GPU ready for PS3 too before they went to IBM and Toshiba for Cell. Sony invested in eDRAM, and it wasn't even used for PS3 in the end. Their 32MB GS was able to output 1080; they probably had its successor for PS3 in mind before they went with Cell. They probably just wanted to put the power of the GSCube into PS3. But I guess the world had moved on by that point.
 