more memory-efficient deferred rendering idea?

Inane_Dork said:
I've read in several places that Epic argued against mandatory MSAA on the X360 because their shadowing runs as a deferred shadowing pass. So, if true, why can't they do 1/N the quality of shadow mapping per sample, where N is the number of AA samples per pixel? Most shadow map shaders I've seen can easily be split into smaller chunks, and reading from different parts of the shadow map per AA sample should not be too difficult.
Unfortunately that's called supersampling (it's slooooow) :)
If you're using multisampling, all your per-primitive samples belonging to one pixel would read the same value from your non-multisampled shadow/occlusion buffer -> edges in some cases will use the wrong occlusion term!
Mintmaster said:
I still think it's more efficient to do it all in one pass, with dynamic branching (DB) for limiting lights if you really want to, but nAo has repeatedly said that shadow mapping is much more efficient this way. I can see how that's the case if the stencil culling helps, but otherwise I don't see the reason for it. If anything, deferred shadow mapping would hurt coherency.
Theoretically speaking you're right: deferred shadow mapping would (and will!) hurt coherency more than a standard approach, but you wouldn't believe how good these modern GPUs are at caching texels :) (and I'm talking about huge shadow maps), so this is a non-issue.
Another good reason to go deferred (regarding shadows) is shading efficiency:
running a shader as a full-screen pass is much more efficient than trying to fill every pixel with a lot of primitives. In the first case all your quads are working all the time; in the second case quad scheduling is more complex and a lot of quads are only partially filled with pixels to shade (primitive edges :( )
There's even a third reason: your shadow map sampling shader can be fairly complex, and if you don't go deferred all your colour pass shaders get more complex -> they use more registers, especially if you want to use DB. This can hurt performance even more, so multipass shading can be the right thing to do if it lets you extract more performance from the underlying hardware.

Marco
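[Editor's note: to make the flow above concrete, here is a minimal HLSL sketch of that kind of full-screen deferred shadow pass. The sampler/matrix names and the view-ray position reconstruction are illustrative assumptions, not Marco's actual code.]

    // Deferred shadow pass: one full-screen quad, output = occlusion term.
    // Assumes normalized linear view-space depth was laid down earlier
    // in an R32F texture (names below are hypothetical).
    sampler2D sceneDepth;       // normalized linear view-space depth
    sampler2D shadowMap;        // light-space depth
    float4x4  viewToLightClip;  // view space -> light clip space
    float     shadowBias;

    struct VSOut
    {
        float4 pos     : POSITION;
        float2 uv      : TEXCOORD0;
        float3 viewRay : TEXCOORD1;  // per-pixel ray to the far plane
    };

    float4 DeferredShadowPS(VSOut i) : COLOR
    {
        // Reconstruct view-space position by scaling the per-pixel ray
        // with the stored normalized depth.
        float  depth   = tex2D(sceneDepth, i.uv).r;
        float3 viewPos = i.viewRay * depth;

        // Project into light space and compare against the shadow map.
        float4 lp       = mul(float4(viewPos, 1.0f), viewToLightClip);
        float2 shadowUV = lp.xy / lp.w * float2(0.5f, -0.5f) + 0.5f;
        float  stored   = tex2D(shadowMap, shadowUV).r;
        float  occlusion = (lp.z / lp.w - shadowBias <= stored) ? 1.0f : 0.0f;

        // The colour pass later just reads this one value, so its
        // shaders stay short and register-light.
        return occlusion.xxxx;
    }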
 
I just want to clear up one little thing that is bothering me

depth textures:
My understanding is that these are basically useless unless you want to do greater/less-than comparisons against known values (such as with shadow mapping). You cannot simply read the value back and get the depth? (Correct? That's my experience on my NVIDIA laptop; I haven't tried my ATI desktop.)
So the only way to get the depth into a texture is to use an R16F/R32F render-target texture and have your shaders output the depth by passing the transformed vertex via a texture coordinate (or similar)?

The original idea relied upon being able to read back depth to approximate position.

Some of the comments here suggest you can do this another way? (I may be reading this wrong again; it's 2am here atm :)
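[Editor's note: for reference, the R16F/R32F fallback described in the question usually looks roughly like the HLSL sketch below; the names and the far-plane normalization are assumptions, not anyone's actual code.]

    struct DepthVSOut
    {
        float4 pos   : POSITION;
        float  depth : TEXCOORD0;
    };

    float4x4 worldViewProj;
    float4x4 worldView;
    float    farPlane;

    DepthVSOut DepthVS(float4 pos : POSITION)
    {
        DepthVSOut o;
        o.pos = mul(pos, worldViewProj);
        // The hardware depth buffer can't be read back here, so carry
        // a linear view-space depth down through an interpolator.
        o.depth = mul(pos, worldView).z / farPlane;
        return o;
    }

    float4 DepthPS(DepthVSOut i) : COLOR
    {
        return i.depth.xxxx;  // lands in the R32F (or R16F) target
    }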
 
nAo said:
Unfortunately that's called supersampling (it's slooooow) :)

If you're using multisampling, all your per-primitive samples belonging to one pixel would read the same value from your non-multisampled shadow/occlusion buffer -> edges in some cases will use the wrong occlusion term!
Yes, without the occlusion buffer being supersampled, that would happen. I don't know how noticeable the effect would be, though.

If we're doing 1/N the work per sample, though, then N supersamples shouldn't affect performance.
 
Graham said:
depth textures:
My understanding is that these are basically useless unless you want to do greater/less-than comparisons against known values (such as with shadow mapping). You cannot simply read the value back and get the depth? (Correct? That's my experience on my NVIDIA laptop; I haven't tried my ATI desktop.)
So the only way to get the depth into a texture is to use an R16F/R32F render-target texture and have your shaders output the depth by passing the transformed vertex via a texture coordinate (or similar)?
On ATI hardware that supports depth textures, if you sample the texture you get the depth back, so you should not need to render into a colour buffer.
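[Editor's note: in other words, on those parts the deferred pass can read the depth buffer directly, something like the HLSL sketch below; the sampler name is made up.]

    sampler2D sceneDepthTex;  // the depth buffer itself, bound as a texture

    float4 ReadDepthPS(float2 uv : TEXCOORD0) : COLOR
    {
        // On ATI parts that expose depth textures this returns the
        // stored depth itself, so no separate R32F colour pass is needed.
        float depth = tex2D(sceneDepthTex, uv).r;
        return depth.xxxx;
    }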
 
Graham said:
I just want to clear up one little thing that is bothering me

depth textures:
My understanding is that these are basically useless unless you want to do greater/less-than comparisons against known values (such as with shadow mapping). You cannot simply read the value back and get the depth? (Correct? That's my experience on my NVIDIA laptop; I haven't tried my ATI desktop.)
So the only way to get the depth into a texture is to use an R16F/R32F render-target texture and have your shaders output the depth by passing the transformed vertex via a texture coordinate (or similar)?

Yes, on NVIDIA hardware, if you map something as a depth texture you get back 0 or 1, or the PCF value if you have bilinear filtering on.

But you can still render it as a depth texture and then map it as a different type for the read, although I have no idea how they expose this functionality in D3D or OpenGL on a PC.
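[Editor's note: a sketch of the NVIDIA behaviour being described, with made-up names: when the texture is bound as a depth texture, the projective fetch performs the depth comparison for you, so you get the shadow test result rather than the raw depth.]

    sampler2D shadowMap;  // bound as a depth texture on NVIDIA hardware

    float4 HardwareShadowPS(float4 lightUV : TEXCOORD0) : COLOR
    {
        // The hardware compares lightUV.z/lightUV.w against the stored
        // depth: each tap yields 0 or 1, or a filtered PCF value when
        // bilinear filtering is enabled. The raw depth is never visible.
        float lit = tex2Dproj(shadowMap, lightUV).r;
        return lit.xxxx;
    }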
 