1st International Symposium on CELL Computing

Interesting

Mapping Deferred Pixel Shaders onto the Cell Architecture
I know at least one (big) Dev house working on that already.
 
Deferred pixel shading on Cell prolly deserves its own thread, but I suppose we'll just have to wait.

I dare say you could have a lot of fun constructing a tile based deferred renderer in software on Cell, with a deferred lighting engine used to shade each pixel in a tile.

It seems to me that you could hide texturing latency quite nicely. When you know the per-pixel materials, effects and anisotropy in advance, you should be able to construct a really efficient texture fetch stream.
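Something like this, as a very rough sketch (none of this is from the presentation; the tile size, G-buffer layout and the shade/prefetch functions are invented placeholders):

```c
/* Hypothetical sketch of a tile-based deferred shading loop on an SPU.
 * Tile size, G-buffer layout and the two extern functions are invented
 * for illustration; the actual scheme in the paper is unknown. */
#include <stdint.h>

#define TILE_W 32
#define TILE_H 32

typedef struct {
    float   depth;       /* post-visibility depth               */
    uint8_t material_id; /* selects shader + texture set        */
    uint8_t nx, ny, nz;  /* packed surface normal               */
} GBufferPixel;

typedef struct { float r, g, b; } Colour;

extern Colour shade_pixel(const GBufferPixel *p);      /* placeholder */
extern void   prefetch_textures(uint8_t material_id);  /* placeholder */

void shade_tile(const GBufferPixel gbuf[TILE_H][TILE_W],
                Colour out[TILE_H][TILE_W])
{
    /* First pass over the tile: knowing every pixel's material up
     * front lets us issue the whole texture fetch stream early and
     * hide the fetch latency behind the shading maths. */
    for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x)
            prefetch_textures(gbuf[y][x].material_id);

    /* Second pass: shade only what survived visibility. */
    for (int y = 0; y < TILE_H; ++y)
        for (int x = 0; x < TILE_W; ++x)
            out[y][x] = shade_pixel(&gbuf[y][x]);
}
```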

Jawed
 
Vysez said:
I know at least one (big) Dev house working on that already.

A game developer?

Where would RSX fit in this? I presume you're not going to idle it..

edit - just had a look, and it's a SCEA presentation, so I guess that's at least one game developer..;) And no, they don't idle RSX at all. I'm very unclear on the specifics of this though :p What kind of pixel shading is Cell doing? It talks about returning 'shadow textures' - so shadowing/lighting ala this article by DeanoC..?
 
I'm a little confused. Is this deferred shading in the sense of working out visible fragments for subsequent shading on the GPU, or is it pixel shading on Cell with certain stages (rasterisation, most obviously) deferred to the GPU? I'm guessing it's the latter given Jawed's first reply, but I'd just like to be very clear..:)
 
Vysez said:
I know at least one (big) Dev house working on that already.

Don't tell us the company, but can you or nAo explain how this will make games look better? Like, what does 'Mapping Deferred Pixel Shaders onto the Cell Architecture' actually mean?

Thanks.:D
 
pc999 said:
Can't see any presentation in the link. Anything on AI? I am curious about AI on the SPEs.
From the site's Goals page


Specific topics of interest, that will be discussed at CCS-06 include:
  • Programming models unifying local, in-process, and remote communication.
  • New languages, language constructs and programming models for leveraging the CELL.
  • New libraries, and tools for the CELL.
  • Design techniques for factoring multicore algorithms.
  • Numerical methods for the CELL.
  • SIMD, software managed cache and thread programming optimizations.
  • Intrinsic based AltiVec / SIMD programming techniques.
  • Experience reports in targeting game / high quality rendering engines to the CELL.
  • Applications perspective for leveraging next generation processor architectures.
  • Experience reports on building CELL based clusters and other peer-to-peer distributed systems.
Nothing specific about AI there or in the panel. Seems mostly focussed on multicore/CELL development and graphics, with our friend Realtime GI appearing as a topic. Looks like a very interesting couple of days. What's the likelihood of info from this being released here?
 
mckmas8808 said:
Can you answer what 'Mapping Deferred Pixel Shaders onto the Cell Architecture' actually means?

Deferred rendering uses a different order of operations to 'immediate' rendering. With immediate rendering, pixels are shaded as geometry is submitted, and each is then tested for visibility; only those that pass the Z and stencil tests are displayed. With deferred rendering, the visibility of every pixel is determined first, and only the visible pixels are then run through the pixel shaders. The key performance benefit is that no pixel shading work is wasted on occluded pixels.
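In rough C, the two orderings look something like this (a toy sketch only: Fragment, Colour and shade() are invented stand-ins, and real GPUs complicate the picture with things like early-Z):

```c
/* Toy illustration of immediate vs deferred ordering.
 * Fragment, Colour and shade() are invented for this sketch.
 * zbuf is assumed initialised to the far plane, gbuf to background. */
typedef struct { int xy; float z; /* plus interpolated attributes */ } Fragment;
typedef struct { float r, g, b; } Colour;

extern Colour shade(const Fragment *f); /* the expensive pixel shader */

/* Immediate order: shade first, test visibility after, so shading
 * spent on fragments that fail the depth test is wasted. */
void render_immediate(const Fragment *frag, int n, Colour *fb, float *zbuf)
{
    for (int i = 0; i < n; ++i) {
        Colour c = shade(&frag[i]);        /* may be thrown away */
        if (frag[i].z < zbuf[frag[i].xy]) {
            zbuf[frag[i].xy] = frag[i].z;
            fb[frag[i].xy]   = c;
        }
    }
}

/* Deferred order: resolve visibility first with a cheap z-only pass,
 * then shade exactly one surviving fragment per pixel. */
void render_deferred(const Fragment *frag, int n, Colour *fb, float *zbuf,
                     Fragment *gbuf, int npixels)
{
    for (int i = 0; i < n; ++i) {                 /* pass 1: visibility */
        if (frag[i].z < zbuf[frag[i].xy]) {
            zbuf[frag[i].xy] = frag[i].z;
            gbuf[frag[i].xy] = frag[i];           /* keep its attributes */
        }
    }
    for (int p = 0; p < npixels; ++p)             /* pass 2: shading */
        fb[p] = shade(&gbuf[p]);
}
```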

Because DX9 GPUs are organised around immediate rendering, you have to jump through hoops to make them do deferred rendering. Cell is fully programmable, so it can perform deferred rendering more naturally.

I'm not sure about performance though. Anyone care to speculate?
 
JF_Aidan_Pryde said:
I'm not sure about performance though. Anyone care to speculate?
First guess, it can't be too hot. Otherwise why is RSX there and not a Cell renderer? From the off it was speculated that Cell would be great at vertex work, not so great at pixel work. If it were good at pixel work, the idea of a purely Cell GPU would have been feasible. Though maybe it was working but the tools weren't there, and that was the sole reason for bringing in nVidia? :???:

I'm guessing this is something that isn't capable of producing whole games on its own, but can work well with RSX. Also, it may have application in alternative devices like a mobile platform, with Cell doing simpler shading work. That is an area for Cell research if it's to broaden its appeal.

There's my tu'penny-ha'penny worth!
 
My knowledge of this isn't so great at all, but here goes..

I'm presuming Cell and RSX together do the initial "generate your per-pixel attributes" and then RSX handles subsequent passes using those? That's where a second Cell chip might have fallen down?

My question is, is the shading being done here on Cell symmetric with the shading on RSX? Or are you doing some specific steps on Cell and then something else entirely on RSX?

Bah, I'm probably not making any sense :p It'd be nice if someone could outline the steps involved and the possible delegation of work on Cell.
 
Titanio said:
Bah, I'm probably not making any sense :p It'd be nice if someone could outline the steps involved and the possible delegation of work on Cell.
That's where the presentation/paper comes in. I expect it'll be public :!:

Ghost Recon: Advanced Warfighter uses a deferred lighting graphics engine for the PC version of the game. Dunno what the XB360 version uses.

I don't know why you'd use Cell to do any of this, to be honest. I don't mind believing PS3 games will use deferred lighting (i.e. like GRAW, with the graphics done by RSX) but it seems to me the only way deferred pixel shading on Cell is going to be useful is when you have a Cell-only computing environment, i.e. not PS3.

Jawed
 
Jawed said:
I don't know why you'd use Cell to do any of this, to be honest. I don't mind believing PS3 games will use deferred lighting (i.e. like GRAW, with the graphics done by RSX) but it seems to me the only way deferred pixel shading on Cell is going to be useful is when you have a Cell-only computing environment, i.e. not PS3.

If the paper's claim is true (i.e. Cell+GPU > GPU for this), presumably it'd be useful if you're using deferred shading and are bottlenecked somewhere Cell could offer relief, or perhaps if you wish to use more complicated lighting evaluation (though that might depend on the extent of Cell's involvement).
 
Jawed said:
:???: How do you know what the paper claims? All we have is a title...

Jawed

Well, the abstract too:

This paper studies a deferred pixel shading algorithm implemented on a Cell-based computer entertainment system. The pixel shader runs on the Synergistic Processing Units (SPUs) of the Cell and works concurrently with the GPU to render images. The system's unified memory architecture allows the Cell and GPU to exchange data through shared textures. The SPUs use the Cell DMA list capability to gather irregular fine-grained fragments of texture data generated by the GPU. They return resultant shadow textures the same way. The shading computation ran at up to 85 Hz at HDTV 720p resolution on 5 SPUs and generated 30.72 gigaops of performance. This is comparable to the performance of the algorithm running on a state of the art high end GPU. These results indicate that a hybrid solution in which the Cell and GPU work together can produce higher performance than either device working alone.
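(Back-of-envelope on those numbers, assuming 720p means the full 1280x720: that's ~0.92M pixels, so 85 Hz works out to ~78M shaded pixels per second, and 30.72 gigaops over that is roughly 390 ops per pixel, which sounds like a fairly meaty shader.)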

I guess it's a question of whether they follow up on that 'indication' and provide numbers on GPU, Cell (if you could bench Cell on its own..again, a question of how far Cell is involved) and GPU+Cell. But I presume/hope they will..(!) I'm guessing that sentence isn't coming out of nowhere, and they've tested that.
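For anyone wondering about the 'DMA list capability' the abstract mentions: on the SPU side, a gather of scattered texture fragments looks roughly like this (mfc_getl and mfc_list_element_t are the standard Cell SDK names, but everything else, sizes and addresses included, is invented for illustration and not taken from the paper):

```c
/* Sketch of the kind of DMA-list gather the abstract describes: one
 * mfc_getl call pulls many small, scattered pieces of texture data
 * from main memory into SPU local store. Fragment count and size are
 * invented for illustration. */
#include <spu_mfcio.h>
#include <stdint.h>

#define NFRAGS    64
#define FRAG_SIZE 128   /* bytes per fragment; must be a legal DMA size */

static uint8_t ls_buf[NFRAGS * FRAG_SIZE]
        __attribute__((aligned(128)));    /* local store destination */
static mfc_list_element_t gather_list[NFRAGS]
        __attribute__((aligned(8)));      /* the DMA list itself     */

void gather_fragments(const uint32_t frag_ea[NFRAGS], unsigned tag)
{
    /* Build one list entry per scattered fragment in main memory. */
    for (int i = 0; i < NFRAGS; ++i) {
        gather_list[i].notify = 0;
        gather_list[i].size   = FRAG_SIZE;
        gather_list[i].eal    = frag_ea[i];   /* effective address (low) */
    }

    /* Kick off the whole gather with a single DMA-list command. */
    mfc_getl(ls_buf, 0, gather_list,
             NFRAGS * sizeof(mfc_list_element_t), tag, 0, 0);

    /* Wait for completion before the shader touches ls_buf. */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}
```

Returning the resultant shadow textures "the same way" would presumably just be the mirror image of this with a put-list (mfc_putl) instead.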
 