Why is Deferred Rendering so important?

It's not a myth and it's completely possible as many games do it.
Shadows are computed before lighting, so that when you're lighting your scene you have already available a shadowing term (typically sampled from a texture)
And where, then, are the 'deferred shadows' in this?
In a canonical renderer shadows are computed in the color pass like all the other lighting terms: you render a mesh with a single shader/pass and this shader takes care of everything.
In this deferred shadowing case shadows are computed BEFORE the lighting pass and stored in some texture(s).
It works like a deferred renderer, but just for shadows.
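To make the structure concrete, here's a minimal CPU-side sketch of the three phases (the buffer names and the toy shadow test are made up for illustration; on the GPU, 'depth' would be the z-buffer, 'shadowMask' a screen-sized texture, and each loop a rendering pass).
Code:
// Minimal CPU sketch of deferred shadowing's three phases.
#include <cstdio>
#include <vector>

const int W = 4, H = 4; // a tiny "screen"

// Phase 1 stand-in: filtering the light-space shadow map. Returns 1 = lit,
// 0 = occluded. Hypothetical test: everything below y = 0.5 is in shadow.
float SampleShadowMap(float /*wx*/, float wy, float /*wz*/)
{
    return wy < 0.5f ? 0.0f : 1.0f;
}

int main()
{
    std::vector<float> depth(W * H, 0.3f); // scene depth (already rendered)
    std::vector<float> shadowMask(W * H);  // phase 2 output
    std::vector<float> color(W * H);       // phase 3 output

    // Phase 2: screen-space shadow pass. Exactly one shadow evaluation per
    // pixel, no matter how complex the geometry that produced that pixel.
    for (int i = 0; i < W * H; ++i) {
        // Reconstruct a position from the depth buffer (placeholder math).
        float wx = float(i % W) / W, wy = float(i / W) / H, wz = depth[i];
        shadowMask[i] = SampleShadowMap(wx, wy, wz);
    }

    // Phase 3: color pass. The lighting shader never touches the shadow map;
    // the whole shadowing term is one cheap fetch from the mask.
    for (int i = 0; i < W * H; ++i) {
        float lighting = 1.0f; // stand-in for the full lighting computation
        color[i] = lighting * shadowMask[i];
    }
    printf("pixel 0: %.1f, last pixel: %.1f\n", color[0], color[W * H - 1]);
    return 0;
}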
 
And what's the advantage of doing so?
Decoupling shadow map filtering from geometry complexity, working around GPU inefficiencies (GPUs shade 2x2 quads, and quads along triangle edges are often not completely filled with fragments to shade), reducing the combinatorial explosion of shaders, enabling your engine to fetch a per-pixel dynamic number of samples from the shadow map without using dynamic branching, etc.
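A back-of-the-envelope illustration of the 2x2 quad point (the sliver-triangle numbers are illustrative, not measured from any game).
Code:
#include <cstdio>

int main()
{
    // A 1-pixel-tall, 64-pixel-long sliver in an ordinary mesh pass:
    // every 2x2 quad it touches has only 2 of its 4 fragments covered.
    int visible = 64;
    int quads   = 32;        // 64 pixels / 2 covered pixels per quad
    int shaded  = quads * 4; // the GPU still shades all 4 lanes per quad
    printf("mesh pass: %d invocations for %d visible pixels (%d%% wasted)\n",
           shaded, visible, 100 * (shaded - visible) / shaded);

    // The same pixels shaded by a full-screen deferred shadow pass: the quads
    // of a screen-aligned quad are always fully covered (except at edges).
    printf("screen-space pass: %d invocations for %d pixels (0%% wasted)\n",
           visible, visible);
    return 0;
}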
I don't believe this.
I personally developed this tech for a PS3 title, so yeah, games do that.
A lot of developers discovered on their own, more or less in the same time frame, how useful this technique can be.
If they would do it like that, then there would be no explanation for not supporting AA under Direct3D 9.
Of course there is, and it's related to the fact that you can't read back subsamples of your multisampled z-buffer in DX9 (and DX10.0 as well). The only way to work around this problem on PC is to supersample your shadows, and this can be fairly slow.
On consoles we can do much better than that since we work closer to the metal.
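To make that workaround concrete, here's a toy sketch of the supersample-then-resolve idea (SampleShadowMask and the KxK factor are hypothetical stand-ins, not anyone's actual implementation).
Code:
#include <cstdio>
#include <vector>

// Hypothetical stand-in for one screen-space shadow lookup.
float SampleShadowMask(int x, int y) { return (x + y) % 2 ? 1.0f : 0.0f; }

int main()
{
    const int w = 4, h = 4, K = 2; // render the mask at K*K the resolution

    // Shade the mask at high resolution: K*K times the shadow-sampling work.
    std::vector<float> hi(w * K * h * K);
    for (int y = 0; y < h * K; ++y)
        for (int x = 0; x < w * K; ++x)
            hi[y * w * K + x] = SampleShadowMask(x, y);

    // Box-filter down to screen resolution, like an MSAA resolve. Edge pixels
    // now get a sensible blend instead of one wrong per-pixel value.
    std::vector<float> lo(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < K; ++sy)
                for (int sx = 0; sx < K; ++sx)
                    sum += hi[(y * K + sy) * w * K + (x * K + sx)];
            lo[y * w + x] = sum / (K * K);
        }
    printf("resolved pixel 0: %.2f\n", lo[0]); // 0.5: half lit, half shadowed
    return 0;
}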
 
That's a typical statement for standard deferred shading. You have to do the lighting calculations and stuff per subsample.

I just checked Medal of Honor Airborne with forced AA by the driver and the results are typical deferred shading artifacts at edges (wrong vectors).

When I have my Bioshock copy I will do further investigations, but until then I highly doubt the "only shadows are deferred" stuff. That makes no sense at all to me anyway.

Is there a paper on the technique you're talking about? Seems interesting.
 
So I can call it traditional shadow mapping then? ;)
No, you can't, because I'm talking about shadow map sampling, not shadow map rendering.
Traditional shadow map implementations render shadow maps first and sample them later,
within the color pass; with a deferred shadowing approach there's a third phase in between
the aforementioned passes that takes care of sampling the shadow maps.
 
[...] with a deferred shadowing approach there's a third phase in between
the aforementioned passes that takes care of sampling the shadow maps.
OK, now I get the difference. Now I also understand the problem with AA and this technique... thanks!
 
I agree that deferred shadowing does indeed have some benefits, although I'm personally a proponent of fully deferred lighting; that may be a few more years down the road, though...

Still, I'm interested in the particular benefits that you see:
Decoupling shadow maps filtering from geometry complexity
Do you mean just in terms of overdraw? Theoretically a pre-Z pass will accomplish the same thing (which is effectively what you're doing anyways), no?

working around GPU inefficiencies (GPUs work on 2x2 quads, and often quads are not completely filled with fragments to shade)
Fair enough, but do you actually see a real speed improvement from this? Furthermore it's precisely this step that forfeits being able to compute the shadow map coordinate derivatives and thus do "proper" filtering (unless you compute the derivatives analytically, which is actually pretty cheap in this case and maybe the way to go...).

reducing shaders combinatorial explosion
Definitely a very compelling reason, although fully deferred rendering does much more than just deferred shadowing to this end.

enabling your engine to fetch a per-pixel dynamic number of samples from the shadow map without using dynamic branching
Here I'm a bit confused... how is this accomplished? Predication? Stencil? Multiple passes to generate the screen-space shadow buffer?

Thanks in advance.
 
Do you mean just in terms of overdraw? Theoretically a pre-Z pass will accomplish the same thing (which is effectively what you're doing anyways), no?
I see this in terms of "my color pass shader is now shorter, I removed a cost that was also linked to geometric complexity in camera view".

Fair enough, but do you actually see a real speed improvement from this?
Oh yes, big speed improvements! At least on one architecture... :)

Furthermore it's precisely this step that forfeits being able to compute the shadow map coordinate derivatives and thus do "proper" filtering (unless you compute the derivatives analytically, which is actually pretty cheap in this case and maybe the way to go...).
Not having used VSM with cascaded shadow maps, I obviously never had any issue since I couldn't use any hw filtering, but I see your point here :)

Here I'm a bit confused... how is this accomplished? Predication? Stencil? Multiple passes to generate the screen-space shadow buffer?
Multiple passes + early depth bounds test -> as fast as a single pass (the overhead is really minimal), but I can change the shader per pass... thus I can change the number of samples (or other things) per shadow-split shadow map.
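For illustration, a CPU-side toy of that pass structure (the split boundaries and per-split sample counts are made-up numbers; on hardware the early rejection would be the depth bounds test, e.g. EXT_depth_bounds_test on PC).
Code:
#include <cstdio>

// Made-up policy: more shadow samples for near splits, fewer for far ones.
int SamplesForSplit(int split) { return 16 >> split; } // 16, 8, 4, 2

int main()
{
    const int N = 8;
    float depth[N]  = { 0.1f, 0.2f, 0.45f, 0.5f, 0.75f, 0.8f, 0.95f, 0.99f };
    float splitZ[5] = { 0.0f, 0.4f, 0.7f, 0.9f, 1.0f }; // cascade boundaries
    int samples[N]  = { 0 };

    for (int split = 0; split < 4; ++split) {   // one full-screen pass each
        for (int i = 0; i < N; ++i) {
            // The "depth bounds test": pixels outside this split are rejected
            // before any shading runs, so the extra passes are nearly free.
            if (depth[i] < splitZ[split] || depth[i] >= splitZ[split + 1])
                continue;
            // The sample count is a compile-time constant of *this* pass's
            // shader, so no dynamic branching is ever needed.
            samples[i] = SamplesForSplit(split);
        }
    }
    for (int i = 0; i < N; ++i)
        printf("pixel %d (z=%.2f): %2d samples\n", i, depth[i], samples[i]);
    return 0;
}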
 
Not having used VSM with cascaded shadow maps, I obviously never had any issue since I couldn't use any hw filtering, but I see your point here :)
Well I'm actually starting to think that one should just compute the derivatives analytically anyways (it's just a few ALU ops), since this is necessary for shadow mapping with deferred shading as well. Furthermore even forward-rendered VSM+CSM requires either analytically computed derivatives or some pixel quad hacks (if you remember the PSVSM thread :)), so losing derivatives isn't really critical in this case.
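For the curious, here's a sketch of what that analytic computation could look like under the locally-planar-surface assumption (the camera setup and the orthographic ShadowUV are simplifications, not anyone's production code): intersect the neighbouring pixel's view ray with the tangent plane at P, project both points into shadow space, and difference the UVs.
Code:
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Hypothetical shadow projection (an orthographic light keeps the sketch
// short; a real one would apply the light's view-projection and divide by w).
static void ShadowUV(Vec3 p, float uv[2]) { uv[0] = p.x; uv[1] = p.y; }

// Analytic d(uv)/d(pixel): assume the surface is locally planar with normal n,
// intersect the *neighbouring* pixel's view ray with that tangent plane,
// project both positions into shadow space, and difference the UVs.
static void ShadowUVGradient(Vec3 camPos, Vec3 rayNext, Vec3 P, Vec3 n,
                             float dUV[2])
{
    float t = dot(n, sub(P, camPos)) / dot(n, rayNext); // ray/plane intersect
    Vec3 Pnext = add(camPos, mul(rayNext, t));
    float uv0[2], uv1[2];
    ShadowUV(P, uv0);
    ShadowUV(Pnext, uv1);
    dUV[0] = uv1[0] - uv0[0]; // change in shadow U per one-pixel step
    dUV[1] = uv1[1] - uv0[1];
}

int main()
{
    Vec3 cam = { 0, 0, 0 };
    Vec3 rayNext = { 0.01f, 0, 1 };    // view ray of the pixel to the right
    Vec3 P = { 0, 0, 10 };             // position reconstructed from depth
    Vec3 n = { 0, 0.7071f, -0.7071f }; // surface normal (45-degree slope)
    float dUV[2];
    ShadowUVGradient(cam, rayNext, P, n, dUV);
    printf("dU/dx = %.3f, dV/dx = %.3f\n", dUV[0], dUV[1]);
    return 0;
}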

Multiple passes + early depth bounds test -> as fast as a single pass (the overhead is really minimal), but I can change the shader per pass... thus I can change the number of samples (or other things) per shadow-split shadow map.
Ah I was wondering if that's what you were doing. I'm not surprised that there's minimal overhead as early-Z pass is pretty fast nowadays... then again so is dynamic branching on "most" new graphics architectures ;)

Thanks for the info - definitely clears a few things up for me. I'm eagerly awaiting the release of your game :)
 
Why is it possible to force AA with the G80 series via the control panel in e.g. Bioshock or MoH: Airborne?

It's always possible - you just have to grab the right buffer to write to.

edit:
Of course, it's not that easy. But choosing the right buffer from the plethora floating around is one of the main reasons it's quite difficult to force FSAA nowadays. But since at least Bioshock uses the UE3 engine, chances are that the correct buffer(s) have already been singled out by devtech.
 
That's a myth and not possible. You need the shadows when you do the lighting calculation. You can't add shadows in another pass.

The movie VFX industry has been doing exactly that for ages. I think a simple multiply operation is enough in most cases.
Obviously they also have to be used as masks for the appropriate specular passes, so it requires heavy multipassing... Edit: or as nAo has mentioned, you can pre-calculate shadow maps and use them in the shader, too. Big VFX houses render out shadow maps (even for every frame if lights/objects are animated) and store them on disk, so it's also a possible option.

Reason: ability to re-use shadows for many test renders, ability to blur/transform the shadows in screen space etc.
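A toy version of that compositing math (the pass names are illustrative, not any particular pipeline's).
Code:
#include <cstdio>
#include <vector>

// The renderer outputs separate beauty passes plus a shadow matte (1 = lit,
// 0 = shadowed); the comp multiplies the matte into the diffuse term and
// reuses it as a mask on the specular pass.
std::vector<float> Composite(const std::vector<float>& diffuse,
                             const std::vector<float>& specular,
                             const std::vector<float>& shadowMatte)
{
    std::vector<float> out(diffuse.size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = diffuse[i]  * shadowMatte[i]  // shadowed diffuse
               + specular[i] * shadowMatte[i]; // matte masks the highlight too
    return out;
}

int main()
{
    std::vector<float> diff  = { 0.8f, 0.8f }, spec = { 0.3f, 0.3f };
    std::vector<float> matte = { 1.0f, 0.2f }; // pixel 1 is mostly in shadow
    std::vector<float> img = Composite(diff, spec, matte);
    printf("lit: %.2f, shadowed: %.2f\n", img[0], img[1]);
    return 0;
}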
 
You have no idea how many games use that; hey, even Crysis does!
Oh I do have an idea, and IMO Crysis' shadows aren't exactly perfect... they use very high resolution shadow maps, but many of the screenshots reveal rather poor filtering. Furthermore they use a bizarre texture-space jittering scheme as well, which I'm not entirely sure is a good idea... they display the results as if they are impressive, but they just look ugly to me ;)

I guess screen-space-blurred shadows aren't *that* much worse than screen-space DOF, but the artifacts are still very visible and distracting IMHO. It's too bad that people consider minor light bleeding a deal-breaker in some cases (admittedly it can get bad in some scenes) but tolerate screen-space blurs...

I believe World in Conflict uses some sort of screen-space blurring for their shadows and while it looks reasonable in screenshots it looks *terrible* ingame with moving cameras/objects/etc. I can't begin to describe the severity of the artifacts that appear (try the demo/beta... it's *very* obvious). While WiC may be worse than some implementations, there's just no real way to make it look good.

Anyways, that sort of extremely hacked "solution" is a particular pet-peeve of mine... maybe I don't want to work in the games industry after all ;)
 
The thing with screen space blur is that it has a mask, which is a simple gradient tracked to the real shadow in a compositing app, and its strength is used to simulate real soft shadows: the further away from the shadow caster, the more the shadow gets blurred. Now that's a bit harder to reproduce in a realtime 3D environment, though I think it might be possible...
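A 1D toy of that idea, assuming the gradient mask directly drives the blur radius (the names and the box filter are just for illustration).
Code:
#include <algorithm>
#include <cstdio>
#include <vector>

// A gradient mask (0 at the caster's contact point, 1 far away) drives a
// variable-radius blur of the hard shadow term.
std::vector<float> BlurByDistance(const std::vector<float>& hardShadow,
                                  const std::vector<float>& distanceMask,
                                  int maxRadius)
{
    const int n = (int)hardShadow.size();
    std::vector<float> out(n);
    for (int i = 0; i < n; ++i) {
        // Further from the caster -> larger kernel -> softer penumbra.
        int r = (int)(distanceMask[i] * maxRadius);
        float sum = 0.0f;
        int count = 0;
        for (int j = std::max(0, i - r); j <= std::min(n - 1, i + r); ++j) {
            sum += hardShadow[j];
            ++count;
        }
        out[i] = sum / count;
    }
    return out;
}

int main()
{
    // A hard shadow edge, and a mask that grows with caster distance.
    std::vector<float> shadow = { 0, 0, 0, 0, 1, 1, 1, 1 };
    std::vector<float> mask   = { 0, 0.15f, 0.3f, 0.45f, 0.6f, 0.75f, 0.9f, 1 };
    std::vector<float> soft = BlurByDistance(shadow, mask, 4);
    for (float v : soft) printf("%.2f ", v);
    printf("\n");
    return 0;
}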
 