Cell able to offload RSX?

MistaPi

To what extent can Cell do rendering operations (what types, etc.)? I read somewhere that Cell is doing some kind of raytracing in Warhawk.
 
Cell can do a lot, but it's a choice between using Cell's SPEs for graphics rendering or having them do physics. Personally I would choose physics.
 
Why? Easier?
Physics make for a more believable interactive world. Graphics will only take you so far, and physical interactivity and animation can bridge a lot of the believability gap.

If you don't have the SPEs at work on physics, then you've either got it on the PPU or not at all. For the most part, it would be a foolish waste of resources to have the RSX doing physics, so it's unlikely we'll see much physics processed on it (at least early on). Besides, if the CELL is so busy doing something else that it has to offload physics to the RSX, then exactly what the hell is it working on? (I'd really be interested in reading about some theoretical scenarios there.) The CELL would handle physics better, make them more easily usable in an interactive (instead of purely cosmetic) fashion, and free up the GPU to make the pretty.
 
Oh. I thought it was more powerful.

Than a full fledged GPU? No way.

The Cell is very fast at graphics compared to a CPU; compared to a high-end GPU it's slow, very slow. Unless that GPU is a GeForce 2 MX :p

A bit of history you might find interesting:

Sony originally planned to use two Cells in the PS3, one acting as a CPU and one as a GPU. Those plans quickly got shredded after every developer asked about it complained about how ridiculously hard it would be to code for, and that performance-wise it would not be worth it at all (graphically).
 
Sony originally planned to use two Cells in the PS3, one acting as a CPU and one as a GPU. Those plans quickly got shredded after every developer asked about it complained about how ridiculously hard it would be to code for, and that performance-wise it would not be worth it at all (graphically).

Total urban myth. ;)

Anyway, yes, Cell is better than any modern GPU at raytracing/casting, which may be what launched MistaPi down this topic road. But the vast majority of games are obviously not going to be raytraced; enter the GPU.
 
The Cell isn't very powerful graphically compared to a GPU. But it's excellent at physics.
Depends on what you want Cell/the GPU to do. In Warhawk, a Cell SPU calculates volumetric clouds; you can't really do stuff like that on a GPU. They're not programmable enough for that, or at least not enough to do it efficiently, or we'd have seen it in games by now. So your claim is sort of true sometimes, and not at other times.
 
From the Mike Acton interview @ psinext.com on Cell rendering tricks...

Mike Acton: Well, each developer is going to have their own tricks and ideas, and it's hard to speculate on what sort of amazing things we'll see over the lifetime of the Playstation 3. There are a few that seem like obvious choices:

o Compression; both image and geometry data.

o Improved 2D graphics. Not just user interface elements, which in my opinion don't get nearly enough attention in games, but any 2D image used in the game.

o More complex character skeletons. Transforming a character's skeleton animation data on the GPU generally means limiting the number of influences each vertex can have (see the sketch after this list).

o Lighting - I hope that we'll finally see the death of baked-in lighting.

o Particle systems and effects. And I don't mean a shower of points or sparks! Effects are especially well suited to offloading to the SPUs, so I expect this is the first place we'll see the complexity of animation, physics and graphics really come together.

o Vertex animation. There's almost certainly going to be a mountain of water effects, plasma-type effects, velocity field effects, etc.

o Terrain generation.
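To make the skinning item above concrete, here's a minimal sketch (plain C++ with hypothetical types of my own; not SPU intrinsics and not anything from the interview). A GPU vertex format typically fixes the number of bone influences per vertex, while a CPU-style loop can simply walk an influence list of whatever length the artist authored.

```cpp
// Hypothetical CPU-side skinning with an arbitrary number of bone
// influences per vertex -- the flexibility a fixed GPU vertex layout
// usually can't offer. Plain C++ sketch, not SPU code.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };           // column-major 4x4 bone matrix

struct Influence {
    std::size_t bone;                   // index into the bone palette
    float       weight;                 // weights for a vertex sum to 1
};

struct Vertex {
    Vec3                   position;    // bind-pose position
    std::vector<Influence> influences;  // any number of influences
};

// Transform a point by a 4x4 matrix (w assumed to be 1).
static Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return {
        m.m[0] * p.x + m.m[4] * p.y + m.m[8]  * p.z + m.m[12],
        m.m[1] * p.x + m.m[5] * p.y + m.m[9]  * p.z + m.m[13],
        m.m[2] * p.x + m.m[6] * p.y + m.m[10] * p.z + m.m[14],
    };
}

// Blend each vertex across all of its influences, however many there are.
void skinVertices(const std::vector<Vertex>& in,
                  const std::vector<Mat4>&   bonePalette,
                  std::vector<Vec3>&         out) {
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        Vec3 blended = {0.0f, 0.0f, 0.0f};
        for (const Influence& inf : in[i].influences) {
            const Vec3 p = transformPoint(bonePalette[inf.bone], in[i].position);
            blended.x += inf.weight * p.x;
            blended.y += inf.weight * p.y;
            blended.z += inf.weight * p.z;
        }
        out[i] = blended;
    }
}
```

The skinned positions could then be streamed to the GPU as an ordinary vertex buffer, which also ties into the point made later in the thread about the output vertex carrying fewer attributes than the input.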
 
Total urban myth. ;)

You sure about that?

Kutaragi reveals that there was once the idea of using two Cell chips in the PS3, with one used as the CPU and the other used for graphics. However, this idea was killed when it was realized that Cell isn't appropriate for the functionality required for shaders, software tools that are used to draw images to the screen. The decision to go with a separate GPU was made in order to create the most versatile architecture possible.
 
Oh. I thought it was more powerful.
Measuring what? If you're talking pixel throughput, rasterization, texture reading, raster tests like alpha/depth/stencil test, rapid framebuffer read/write operations like alpha blending, then hell no. Not within a million miles. If you're talking raw vertex processing, then yeah, for the most part. Exceptions still exist in this case, though.

There are a number of cases where Cell can just help out on the vertex side to make things go a little smoother on the GPU side. Most obvious ones are cases where the end result vertex has fewer attributes going out than coming in (e.g. skinning, shadow volume extrusion).
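A small, hedged illustration of the "fewer attributes going out than coming in" point, using shadow volume extrusion (my own simplified example, not anyone's actual engine code): the source vertex carries position, normal, UVs and skinning data, but the vertices handed to the GPU are positions only. The extrusion rule here is deliberately naive - every vertex is duplicated and pushed away from the light - where a real pass would only extrude silhouette edges.

```cpp
// Hypothetical pre-pass: consume "fat" source vertices and emit plain
// positions for the GPU, so far less per-vertex data crosses to RSX.
#include <vector>

struct Vec3 { float x, y, z; };

struct SourceVertex {            // what the mesh stores (many attributes)
    Vec3  position;
    Vec3  normal;                // unused here; shown to illustrate the
    float uv[2];                 // extra attributes that never need to
    int   boneIndex[4];          // reach the GPU for this pass
    float boneWeight[4];
};

// Output is position-only: fewer attributes going out than coming in.
void extrudeForShadowVolume(const std::vector<SourceVertex>& in,
                            const Vec3& lightDir, float pushDistance,
                            std::vector<Vec3>& out) {
    out.clear();
    out.reserve(in.size() * 2);
    for (const SourceVertex& v : in) {
        out.push_back(v.position);                        // near-cap vertex
        out.push_back({ v.position.x + lightDir.x * pushDistance,
                        v.position.y + lightDir.y * pushDistance,
                        v.position.z + lightDir.z * pushDistance });
    }
}
```

The point is just that the stream crossing to the GPU is a fraction of the source data, which is exactly the kind of vertex-side help being described.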
 

Well, for the context in which I'm saying it, yes I'm sure.

Obviously Sony considered it - it's in their patent after all - but the way Kutaragi admits to it has an 'in passing' sort of feel. Not only that, but the quote I was directly refuting indicated Sony went to devs and they gave their input on it. Yet not one dev has ever spoken of such a plan actually being set into motion at Sony.

And not for lack of this topic coming up a million times before, mind you...
 
Well, for the context in which I'm saying it, yes I'm sure.

Obviously Sony considered it - it's in their patent after all - but the way Kutaragi admits to it has an 'in passing' sort of feel. Not only that, but the quote I was directly refuting indicated Sony went to devs and they gave their input on it. Yet not one dev has ever spoken of such a plan actually being set into motion at Sony.

And not for lack of this topic coming up a million times before, mind you...

OK, I think I see what you mean. You're saying the urban myth is that they approached developers with it (perhaps a prototype or sim) and got hands-on feedback? Even though we're working with a second-hand translation, well, a summary of a translation, it still seems like it was considered more than just in passing. Perhaps they even talked about the concept with some of their internal teams (SCEI-based, probably) and got feedback on that. Not hands-on, but a discussion of it conceptually. Of course, that's just an assumption of mine. I don't think I've ever read any interview, with an internal or 3rd-party dev, mentioning that possibility. But, then again, I don't explicitly remember that being brought up either. I just get the feeling this is something Ken seriously considered. Especially since, IIRC, the original patent is actually in his name (or was at least created by him). :D
 
OK, I think I see what you mean. You're saying the urban myth is that they approached developers with it (perhaps a prototype or sim) and got hands-on feedback? Even though we're working with a second-hand translation, well, a summary of a translation, it still seems like it was considered more than just in passing. Perhaps they even talked about the concept with some of their internal teams (SCEI-based, probably) and got feedback on that. Not hands-on, but a discussion of it conceptually. Of course, that's just an assumption of mine. I don't think I've ever read any interview, with an internal or 3rd-party dev, mentioning that possibility. But, then again, I don't explicitly remember that being brought up either. I just get the feeling this is something Ken seriously considered. Especially since, IIRC, the original patent is actually in his name (or was at least created by him). :D

Yeah, that's what I'm saying in terms of the 'urban myth.' And I completely agree that - probably for a good year or two, honestly - it must've been under strong consideration internally, just because KK and SCEJ seem like that kind of people. ;)

And why not! Hell, I for sure thought it was exciting... and I still love browsing through that patent and seeing those 'visualizers' where SPEs (APUs back then) would be. :)
 
Total urban myth. ;)

Anyway, yes, Cell is better than any modern GPU at raytracing/casting, which may be what launched MistaPi down this topic road. But the vast majority of games are obviously not going to be raytraced; enter the GPU.

Well I guess Carmack bought into that urban myth too..

I think it is also in that Xbox book. What are your sources?

And if you watched his QuakeCon address: Carmack, while definitely against the idea of CPU-as-GPU, did seem to think there was some merit to the idea, and he was far more positive than I expected him to be.
 
Well I guess Carmack bought into that urban myth too..

I think it is also in that Xbox book. What are your sources?

And if you watched his QuakeCon address: Carmack, while definitely against the idea of CPU-as-GPU, did seem to think there was some merit to the idea, and he was far more positive than I expected him to be.

My sources? It's more like my lack of sources. How can you have sources for something you're claiming never occurred?

But if you search this forum, you'll find countless posts by devs that have railed against this concept ever having been taken beyond the conceptual stage.

Carmack's commentary on it doesn't really mean anything; he knew of this idea the same as I knew and anyone who had seen the patent knew. Not to mention, he knew what the deal had been with EE+GS. What he wanted in PS3 was a full GPU. And I feel he was addressing an idea and theoretical scenario, rather than an actual choice that was presented to him.

If there's something solid on this in that XBox book, by all means bring it out. It's not like I'm not interested in this.
 
Oh, I don't have the Xbox book, but I thought there was mention of this. There was also a Toshiba GPU config? I guess the only debate here is how far each project got conceptually towards reality. Anyway, it's over.
 
Oh, I don't have the Xbox book, but I thought there was mention of this. There was also a Toshiba GPU config? I guess the only debate here is how far each project got conceptually towards reality. Anyway, it's over.

Well the Toshiba GPU thing... I think honestly people usually mean the same thing by this; there was talk of a GS2/3 or whatever, but it was never clear whether that was a standalone chip or in fact was the 'Visualizer' implementation for a modified Cell.

I don't know - I admit to being a victim of the murk myself.

I used to wonder openly about what happened to these would-be designs, but after seeing enough here on this forum, I came to the conclusion that either they never took shape, or that, even among devs, it must be the absolute rarest of knowledge.

Maybe someone here will step in and shine a light on what little info there might be - Archie would seem a good candidate, though I'm sure he's posted on this very matter before.
 
If there's something solid on this in that XBox book, by all means bring it out. It's not like I'm not interested in this.

Been posted before. Hang on...

Edit - here you go:

But no one knew that inside Sony, something was going terribly wrong. Sony had created a new game system, dubbed GS Cube, with 16 Emotion Engine chips. It proved to be a technological dead end. In parallel, IBM fellow Jim Kahle had proposed Cell, a radically different computing architecture. Instead of a microprocessor and a graphics chip, the system for the PlayStation 3 was originally supposed to have two Cell microprocessors. One would handle the system while the second one would handle graphics. The game developers couldn’t make heads or tails of this non-traditional architecture. Sony scrapped that plan. Then it commissioned both Sony’s and Toshiba’s chip designers to create their own graphics chip. The graphics chip was going to be a screaming monster that relied totally on one kind of processing, dubbed fill rate, to handle the graphics. That was what Sony and Toshiba’s engineers knew how to create, based on their work on the PlayStation 2. But in the meantime, both ATI and Nvidia had pioneered the use of shaders, which were subprograms that added the nuance and texture to the surface of an object. This technique simplified the process of creating art for games. To create a new effect, the developer had to simply create a new shader. The Sony and Toshiba team were far behind on shader technology. Game developers once again objected to the solution that they were proposing. Sony had to cancel the graphics chip altogether. The console just wasn’t going to launch in 2005.

Great read, and the e-version is only 15 bucks: http://www.spiderworks.com/books/xbox360.php
 