Cell able to offload RSX?

Well, I guess Carmack bought into that urban myth too...
And he's as wrong as anyone else who believes the same myth.
The pre-RSX GPU story is completely different, and it's a story about a custom GPU called RS (can you see a pattern here?) which didn't use any Cell tech.
I think it is also in that Xbox book. What are your sources?
Better sources than Takahashi's.. :)
 
Depends on what you want Cell/the GPU to do. In Warhawk, a Cell SPU calculates volumetric clouds; you can't really do stuff like that on a GPU. They're not programmable enough for that, or at least not enough to do it efficiently, or we'd have seen it in games by now. So your claim is sort of true sometimes, and not at other times.

I'm not sure what kind of volumetric clouds Warhawk implemented, but in the case of CFD, it can be calculated on the GPU, as Mark Harris demonstrated 3 years ago (http://www.markmark.net/cloudsim/). I tested a real-time smoke simulation program about a year ago, and it took about 20~30 ms to finish the fluid simulation part with a 64x64x64 volume texture on a 6800GTX card. What's holding games back from using CFD is not computing the dynamics, but the volume rendering and the potentially huge dataset. With the massive number of blending operations required by volumetric lighting, it easily becomes overkill for fillrate, let alone for anything like Cell (I doubt anyone would use Cell to do alpha blending). I also doubt Warhawk uses real-time CFD. For flight simulation, a simple cellular automaton with an advected-texture add-on is enough, and that can easily be handled by the GPU.
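To make the "computing the dynamics is cheap" point concrete, here is a minimal sketch of one semi-Lagrangian advection step, the core building block of Stam-style stable fluids that GPU fluid sims like Harris's are built on. This is my own illustrative NumPy version (2D, nearest-neighbour sampling), not code from any of those demos:

```python
import numpy as np

def advect(field, vel, dt):
    """One semi-Lagrangian advection step (the core of Stam-style
    stable fluids). field: (N, N) scalar grid; vel: (N, N, 2) with
    vel[..., 0] = x-velocity, vel[..., 1] = y-velocity."""
    n = field.shape[0]
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    # Trace each cell backwards along the velocity field...
    src_x = np.clip(xs - dt * vel[..., 0], 0, n - 1)
    src_y = np.clip(ys - dt * vel[..., 1], 0, n - 1)
    # ...and sample the old field there (nearest-neighbour for brevity;
    # a real solver would interpolate bilinearly, which a GPU gets for
    # free from its texture units).
    return field[src_y.round().astype(int), src_x.round().astype(int)]
```

Every cell is independent, which is exactly why this maps so well onto fragment shaders: the GPU runs one "trace back and sample" per texel, with the interpolation done by the texture hardware.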

Just to clarify that. By the way, this is my first post.
 
Cal said:
With the massive number of blending operations required by volumetric lighting, it easily becomes overkill for fillrate, let alone for anything like Cell
Exactly how does volumetric lighting require blending (or an actual framebuffer, for that matter)?
 
Exactly how does volumetric lighting require blending (or an actual framebuffer, for that matter)?

I was referring to Joe Kniss's volumetric lighting algorithm, which is a general slice-based method that can be applied to heterogeneous volume data (light absorption only). GPU-based volume raytracing does not require blending, but 50~100 samples per pixel doesn't do performance any favours either. As far as I know, algorithms like per-pixel depth calculation can only be applied to homogeneous data. Well, I haven't kept up with the latest volume rendering techniques for a while, so maybe there's already one that can render CFD results with high performance?
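The reason slice-based methods need blending is the compositing itself: each slice must be blended over the accumulated result, once per slice. A minimal sketch of front-to-back "over" compositing (my own illustration of the general idea, not Kniss's specific half-angle variant):

```python
import numpy as np

def composite_slices(slice_colors, slice_alphas):
    """Front-to-back 'over' compositing of volume slices - the blend
    a slice-based volume renderer performs once per slice.
    slice_colors: (S, H, W, 3); slice_alphas: (S, H, W)."""
    color = np.zeros(slice_colors.shape[1:])
    trans = np.ones(slice_alphas.shape[1:])  # accumulated transparency
    for c, a in zip(slice_colors, slice_alphas):
        color += trans[..., None] * a[..., None] * c
        trans *= (1.0 - a)  # each slice attenuates everything behind it
    return color
```

On a GPU this loop becomes S full-screen(ish) alpha-blended passes, so with hundreds of slices you read and write every covered pixel hundreds of times. That is the fillrate cost the post above is referring to.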
 
Well you get better effects from it.

Cell isn't very powerful graphically compared to a GPU, but it's excellent at physics.
I think Phase 5 are rendering all frames on Cell (polygons only, I guess) and sending the scenes over to RSX to pretty them up (shaders, textures, etc.), so it might be the case that in some real-world graphical tasks Cell actually outperforms a GPU (or is a close enough match that it's realistic for devs to spend the extra GPU resources on other things).
 
I think Phase 5 are rendering all frames on Cell (polygons only, I guess) and sending the scenes over to RSX to pretty them up (shaders, textures, etc.)
What exactly do you mean by 'rendering'? Rendering, strictly speaking, is drawing the polygons as coloured pixels. If Cell passes the polygons to RSX to fill in the details like shaders and textures, and then draw the polygons, then RSX is rendering them. What you're describing is more the setup, which is the early stage(s) of rendering. Certainly Cell isn't rendering anything in that example if it isn't drawing triangles to the (display) buffer.
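The setup/rendering split being described can be sketched roughly like this: the CPU side (Cell, in this scenario) transforms model-space vertices into screen-ready coordinates, and the GPU (RSX) does the actual rasterizing and shading. This is a hypothetical illustration of the division of labour, not anyone's actual engine code:

```python
import numpy as np

def setup_triangles(verts, mvp):
    """The 'setup' stage: transform model-space vertices by a 4x4
    model-view-projection matrix into normalized device coordinates.
    verts: (V, 3) array of positions; mvp: (4, 4) matrix."""
    homo = np.hstack([verts, np.ones((len(verts), 1))])  # to homogeneous coords
    clip = homo @ mvp.T                                  # model -> clip space
    ndc = clip[:, :3] / clip[:, 3:4]                     # perspective divide
    return ndc  # the GPU would then rasterize and shade these
```

Everything after this point (filling triangles, texturing, blending) is what the post above counts as rendering proper, and that part stays on RSX.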
 