Deferred rendering

Where can I learn about deferred rendering? I had an idea for improving rendering speed, but then I heard some scraps about deferred rendering and I think maybe I just reinvented it too late instead :cry:
 
Interesting, thanks for the links. Seems similar to what I had in mind except I wasn't thinking of splitting into tiles - isn't that a separate issue from the deferred shading part? I just wanted to store the pixel shader and inputs used for each pixel in a huge buffer and overwrite that if the depth test passes, then blast through the screen and execute each shader at time of SwapBuffers/Present. If you require the current pixel colour for blending then you need to execute the shader immediately though.
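Something like this is what I mean -- just a rough sketch, all the names (PixelRecord, resolveAtPresent, the 16-float input block) are made up for illustration:

```cpp
#include <cstdint>
#include <vector>

// Rough sketch of the idea above: instead of shading immediately, store which
// shader and which interpolated inputs won the depth test for each pixel,
// then shade every surviving pixel exactly once at Present time.
// Sizes and names here are illustrative, not any real API.

struct PixelRecord {
    float    depth;        // depth of the currently visible fragment
    uint32_t shaderId;     // which pixel shader to run for this pixel
    float    inputs[16];   // interpolated shader inputs (texcoords, normals, ...)
};

struct DeferredBuffer {
    int width, height;
    std::vector<PixelRecord> pixels;   // one record per screen pixel

    DeferredBuffer(int w, int h)
        // shaderId 0 is assumed to be a trivial background/clear shader
        : width(w), height(h), pixels(w * h, PixelRecord{1.0f, 0, {}}) {}

    // Called during rasterisation: overwrite the record only if the
    // incoming fragment passes the depth test.
    void writeFragment(int x, int y, const PixelRecord& frag) {
        PixelRecord& dst = pixels[y * width + x];
        if (frag.depth < dst.depth)
            dst = frag;
    }
};

// At SwapBuffers/Present time, run each pixel's stored shader once.
using PixelShaderFn = uint32_t (*)(const float inputs[16]);

void resolveAtPresent(const DeferredBuffer& buf,
                      const std::vector<PixelShaderFn>& shaders,
                      std::vector<uint32_t>& framebuffer)   // width*height entries
{
    for (size_t i = 0; i < buf.pixels.size(); ++i) {
        const PixelRecord& p = buf.pixels[i];
        framebuffer[i] = shaders[p.shaderId](p.inputs);
    }
}
```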

Seems to be even more important now, if we're all going to be executing massively long "cinematic" shaders for every pixel. No point running it if it's just going to be covered by something else... why aren't Nvidia & ATI doing this then?

Edit: Dave, the text on the first page of the 1st article is a little screwed up!
 
PowerVR chips split the scene into tiles so the framebuffer can fit into on-chip memory so it can be accessed very fast.
 
Myrmecophagavir said:
splitting into tiles - isn't that a separate issue from the deferred shading part?

Yes. For example, Bitboys (yeah, I know, I know) used tiles to split the rendering workload between multiple chips, but there wasn't any deferring.

And I think the venerable Voodoo already used a tiled framebuffer to make texturing more cache-friendly.

The PowerVR tiling (rendering smartly to a small on-chip tile buffer) of course wouldn't make much sense without the PowerVR deferring and sorting/binning of polygons first -- it makes sure that all the stuff ending up in a tile is in the bag, so you can safely render each tile only once (one after another, from top left to bottom right); otherwise you'd still need to randomly access a full (off-chip) buffer. (AFAIU, the Kyros have the full buffers too, but mostly for compatibility reasons. Not sure of this.) Note that this is a separate benefit from the elimination of (opaque) overdraw!
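Roughly like this -- made-up names, C++ pseudocode, not how the real chip is implemented:

```cpp
#include <algorithm>
#include <vector>

// Rough illustration of the bin-then-render-per-tile idea described above.
// The Triangle contents and all names are assumptions for illustration.

constexpr int TILE = 32;   // assumed tile size in pixels

struct Triangle {
    // screen-space bounding box; a real triangle would also carry
    // vertices, texture/shading state, etc.
    int minX, minY, maxX, maxY;
};

struct Tile {
    std::vector<const Triangle*> bin;   // everything that can touch this tile
};

// Phase 1: defer. Walk the whole frame's geometry once and drop each
// triangle into the bin of every tile its bounding box overlaps.
void binScene(const std::vector<Triangle>& scene,
              std::vector<Tile>& tiles, int tilesX, int tilesY) {
    for (const Triangle& tri : scene) {
        int tx0 = std::max(tri.minX / TILE, 0);
        int ty0 = std::max(tri.minY / TILE, 0);
        int tx1 = std::min(tri.maxX / TILE, tilesX - 1);
        int ty1 = std::min(tri.maxY / TILE, tilesY - 1);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                tiles[ty * tilesX + tx].bin.push_back(&tri);
    }
}

// Phase 2: render each tile exactly once. Because the bin already holds
// everything that can land in the tile, depth and colour for the tile can
// live in a small on-chip buffer and be written out to memory a single time.
void renderTiles(std::vector<Tile>& tiles) {
    for (Tile& tile : tiles) {
        // clear the on-chip tile buffer (cheap, it's tiny)
        for (const Triangle* tri : tile.bin) {
            // rasterise tri into the tile buffer, resolving visibility
            // before any expensive pixel shading is spent on hidden pixels
            (void)tri;
        }
        // flush the finished tile to the framebuffer once
    }
}
```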

Seems to be even more important now, if we're all going to be executing massively long "cinematic" shaders for every pixel. No point running it if it's just going to be covered by something else... why aren't Nvidia & ATI doing this then?

This should be what the Doom 3 engine does, and what S3 appears to plan for their comeback part. Do a depth & visibility run first, then get down to the heavy PS business, but now selectively.
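In rough OpenGL terms it would look something like this (the two draw functions are placeholders for the application's own code):

```cpp
#include <GL/gl.h>

// Sketch of the "depth & visibility run first, heavy shading second" idea.
// drawSceneGeometryOnly() and drawSceneWithShaders() are placeholders.

void drawSceneGeometryOnly();   // positions only, cheapest possible shading
void drawSceneWithShaders();    // the long "cinematic" pixel shaders bound

void renderFrame() {
    glEnable(GL_DEPTH_TEST);

    // Pass 1: lay down depth only; no colour writes, no expensive shaders.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneGeometryOnly();

    // Pass 2: shade. Only fragments matching the stored depth survive
    // (GL_LEQUAL is sometimes used instead for precision), so the heavy
    // pixel shaders run roughly once per visible pixel.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    drawSceneWithShaders();
}
```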

Funny, I once imagined something like this: creating only a Z-buffer first, and while at it, creating some coarser versions of it. (I posted something in the old B3D Boards.) When ATI later introduced the "hierarchical Z-buffer" I had trouble understanding what that meant... perhaps because I couldn't believe that something I had thought of privately could actually be good :p
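The "coarser versions" part would be something along these lines -- just an illustration of the idea, not ATI's actual hardware:

```cpp
#include <algorithm>
#include <vector>

// Each coarse level stores the farthest (maximum) depth of the 2x2 block
// below it, so a whole block can be rejected conservatively: if an incoming
// primitive's nearest depth is farther than the stored maximum, nothing in
// that block can be visible.

std::vector<float> buildCoarserLevel(const std::vector<float>& z, int w, int h) {
    std::vector<float> coarse((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            float z00 = z[(2 * y)     * w + 2 * x];
            float z01 = z[(2 * y)     * w + 2 * x + 1];
            float z10 = z[(2 * y + 1) * w + 2 * x];
            float z11 = z[(2 * y + 1) * w + 2 * x + 1];
            coarse[y * (w / 2) + x] = std::max(std::max(z00, z01),
                                               std::max(z10, z11));
        }
    return coarse;
}
```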
 
Myrmecophagavir said:
Seems similar to what I had in mind except I wasn't thinking of splitting into tiles

Think about it some more :)

isn't that a separate issue from the deferred shading part?

In theory it is; in practice, deferred shading is infeasible without tiling (you can do 2-pass rendering, filling the Z-buffer before doing a rendering pass with shading, but I don't consider that deferred shading).

I just wanted to store the pixel shader and inputs used for each pixel in a huge buffer and overwrite that if the depth test passes

Huge buffer is right, just consider how much data you need per pixel.
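To put a rough (purely illustrative) number on it: if each pixel stored a shader ID plus, say, eight 4-component float interpolants, that's already around 130 bytes per pixel; at 1024x768 that works out to roughly 100 MB, and at 1600x1200 closer to 250 MB, before blending or antialiasing enter the picture.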
 