Why is Deferred Rendering so important?

The thing with screen-space blur is that it has a mask, a simple gradient tracked to the real shadow in a compositing app, and its strength is used to simulate real soft shadows: the further away from the shadow caster, the more the shadow gets blurred.
Regardless of how the blur is applied, it should be done in *light space*, not screen space. The latter is just so terribly wrong...
 
With a mask it ends up working exactly like a DOF effect in image space; I guess we need some fast bilateral filtering implementations then :)
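For reference, here is a minimal 1-D bilateral filter sketch in plain Python (all names here are hypothetical, and a real-time version would of course be a 2-D, GPU-side approximation; this just shows the core weighting idea):

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.1):
    """1-D bilateral filter: spatial Gaussian weights are scaled by a
    range Gaussian on the value difference, so the blur does not bleed
    across large jumps (e.g. depth discontinuities or shadow edges)."""
    out = []
    for i, center in enumerate(signal):
        total, weight_sum = 0.0, 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(signal) - 1)  # clamp at the borders
            w_spatial = math.exp(-(k * k) / (2 * sigma_s ** 2))
            w_range = math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2))
            w = w_spatial * w_range
            total += w * signal[j]
            weight_sum += w
        out.append(total / weight_sum)
    return out

# A step edge survives while the small noise on each side is smoothed:
edge = [0.0, 0.0, 0.05, 0.0, 1.0, 0.95, 1.0, 1.0]
print(bilateral_1d(edge))
```

The range term is exactly what keeps the "DOF-style" blur from haloing across depth edges, which is why fast bilateral (or bilateral-approximate) filters come up in this context.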
 
Right, and doing DOF in image space is also pretty wrong, but generally less objectionable IMO since at least in that case you're blurring in something close to the proper axes. With shadows not only do you have haloing problems at edges (even with fancier filtering), but your blur isn't related to the proper geometric arrangement of the light/occluder/receiver at all! Indeed moving the camera around will have a noticeable warping effect on the so-called "penumbra" of the shadow. It's wrong to the point that I wouldn't even bother scaling filter widths based on occluder/receiver ratios because what you get isn't even "plausible", let alone physically correct.

The fundamental problem is that the occlusion that is relevant for soft shadows comes from the light's point of view, not the camera's. So while DOF remains plausible as long as the blurs aren't too large (they're a reasonable approximation to what you can see anyways), the same is not true for lights. Moreover it's not just the distance ratios that are relevant (which can be "masked" certainly from the original geometric data), it's the actual geometric projection in light space.
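To make the geometric dependence concrete: the usual similar-triangles estimate, as used by percentage-closer soft shadows (PCSS), derives the penumbra width from the light size and the occluder/receiver distances measured in light space — quantities a screen-space blur never sees. A hedged sketch:

```python
def penumbra_width(light_size, d_occluder, d_receiver):
    """Similar-triangles penumbra estimate (the PCSS heuristic):
    w = light_size * (d_receiver - d_occluder) / d_occluder,
    with both distances measured from the light along its axis."""
    return light_size * (d_receiver - d_occluder) / d_occluder

# Receiver twice as far as the occluder -> penumbra equals the light size:
print(penumbra_width(0.5, 10.0, 20.0))  # -> 0.5

# Receiver touching the occluder -> hard shadow, zero penumbra:
print(penumbra_width(0.5, 10.0, 10.0))  # -> 0.0
```

Note that every input here lives in light space; there is simply no way to recover these from camera-space screen distances, which is the poster's point.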

I dunno, perhaps I'm being too hard on the technique, but the fact that "so many games use it", regardless of the existence of much more correct (and comparatively cheap) algorithms, makes me depressed and less motivated to continue researching as it probably won't be used anyways... :(
 
Well, for shadows you could set all surfaces to black, and treat the calculated color value as the light modifier. For soft borders and AA you would want an edge detect modifier for that as well.
 
I have only read the first and last page, but I can't believe this hasn't been discussed more. How are GG achieving AA with DR in KZ2?
 
The article about deferred shading and the S.T.A.L.K.E.R. engine, by Oles Shishkovtsov in GPU Gems 2, can be found via Oles's blog:

http://oles-rants.blogspot.com/

http://www.4a-games.com/209_gems2_ch09.pdf

9.7 Conclusion

Deferred shading, although not appropriate for every game, proved to be a great rendering architecture for accomplishing our goals in S.T.A.L.K.E.R. It gave us a rendering engine that leverages modern GPUs and has lower geometry-processing requirements, lower pixel-processing requirements, and lower CPU overhead than a traditional forward shading architecture. And it has cleaner and simpler scene management to boot.

Once we worked around the deficiencies inherent in a deferred shader, such as a potentially restricted material system and the lack of antialiasing, the resulting architecture was both flexible and fast, allowing for a wide range of effects. See Figure 9-8 for an example. Of course, the proof is in the implementation. In S.T.A.L.K.E.R., our original forward shading system, despite using significantly less complex and interesting shaders, actually ran slower than our final deferred shading system in complex scenes with a large number of dynamic lights. Such scenes are, of course, exactly the kind in which you need the most performance!
Oles Shishkovtsov's new engine for METRO 2033 also uses deferred rendering.

What is deferred supersampling?
 
You render the scene multiple times, each with a slight, sub-pixel offset, and average.
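A toy sketch of that idea, assuming a hypothetical `render(dx, dy)` callback standing in for a full scene render with a sub-pixel camera offset:

```python
def supersample(render, offsets):
    """Average several renders taken at sub-pixel offsets.
    `render(dx, dy)` must return a 2-D list of pixel values."""
    frames = [render(dx, dy) for dx, dy in offsets]
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]

# A 4-sample grid pattern of quarter-pixel offsets:
offsets = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

# A trivial "renderer": a vertical edge whose coverage shifts with the offset.
def render(dx, dy):
    return [[1.0 if (x + dx) < 1.0 else 0.0 for x in range(3)] for y in range(2)]

print(supersample(render, offsets))  # -> [[1.0, 0.5, 0.0], [1.0, 0.5, 0.0]]
```

The middle column lands on 0.5 because half the jittered frames cover it, which is exactly the antialiased edge value you'd expect.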
 
In a number of engineering programs I've written, I've implemented supersampling by just rendering the scene to a higher-resolution back-buffer than was going to be displayed (multiples of 2× width by 2× height) and, after rendering was complete, progressively down-sampling the target to 1/4 its size each time until it was the same size as the back buffer. Not sure what they meant by "deferred supersampling", but I think this... somewhat... fits the description.


Not the most elegant solution in the world, but it was actually surprisingly efficient both in terms of development and in execution. The whole process only required adding a couple of lines of code and about 5 minutes of development, and you could drop it into pretty much any application the same way. As far as execution goes, you only have to draw the scene once, and it works on all hardware with bilinear filtering (even DX7 and below cards); after that, the downsampling is extremely cheap. Most of the graphics cards at the company were around GeForce2 MX level, but all were still able to run everything at 64-256+ samples and still stay within the range of acceptable framerates for engineering concerns.
I still use this technique in any application that has small render windows (like, for instance, the level editor I use for my side projects) as I'd much rather be able to see a clear, crisp image of what I'm working with at 60fps than a jagged, unusable image at 400fps.
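The progressive downsampling described above amounts to repeated 2x2 box filtering, which is what bilinear hardware gives you nearly for free. A CPU-side sketch with hypothetical names:

```python
def downsample_2x(img):
    """Average each 2x2 block into one pixel (assumes even dimensions)."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def supersample_by_downsampling(img, passes):
    """Render at 2^passes times the target size, then halve repeatedly."""
    for _ in range(passes):
        img = downsample_2x(img)
    return img

# 4x4 -> 1x1: the single output pixel is the mean of all 16 samples.
big = [[float(x + y) for x in range(4)] for y in range(4)]
print(supersample_by_downsampling(big, 2))  # -> [[3.0]]
```

Doing it in halving steps rather than one big resample is what lets a bilinear fetch do the averaging exactly, which is why it ran well even on DX7-class cards.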
 
A few months ago I hadn't tried Stalker yet. This is with nHancer and the HDR+AA compatibility profile for Oblivion. I had tried this method in August/September with the Medal of Honor: Airborne demo, but it didn't work in Stalker; neither MSAA nor SSAA nor any other method. Later I also used it in U3 and GOW in DX9.

MSAA 4X

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/mhjgkjhg.jpg

NO AA

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/pytit-copia.jpg

MSAA 4X

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0400-57-12-72.jpg

NO AA

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0400-56-25-99.jpg


MSAA 2X 1080P

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0405-23-54-45.jpg
 
The latest ForceWare has support for MSAA in Stalker with its deferred shading, from the NV Control Panel.


My drivers are 169.25, but it has been there since 169.xx. It's simply the control panel AA, overriding any application setting.



http://forum.beyond3d.com/showpost.php?p=1113108&postcount=558


Click to enlarge after opening the link. MSAA 2X 1080p:

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0601-46-56-38.jpg

http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0509-22-59-33.jpg


http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0509-22-24-03.jpg


http://i4.photobucket.com/albums/y117/jonelo/capturas 2/XR_3DA2008-01-0509-22-25-19.jpg
 
Hi again,
just to clear some things up for me:
So if I understand this correctly, you do your shadow-map creation per light as before, but then in a second pass you render this shadow map into a screen-sized map, so that the shadow information is precalculated per pixel. This is done with a prerendered depth texture (a depth-to-light-space transformation plus the shadow compare).

You can pack four lights' shadows like this into one texture. Do I understand this correctly?
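If that reading is right, the packing step would amount to writing each light's screen-space shadow term into a different channel of a single RGBA target. A toy sketch (names hypothetical; a GPU version would just write to different color channels):

```python
def pack_shadow_masks(masks):
    """Pack up to 4 per-pixel shadow terms (one per light) into the
    R, G, B, A channels of a single screen-sized texture."""
    assert len(masks) <= 4
    h, w = len(masks[0]), len(masks[0][0])
    packed = [[[0.0, 0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for channel, mask in enumerate(masks):
        for y in range(h):
            for x in range(w):
                packed[y][x][channel] = mask[y][x]
    return packed

# Two lights on a 1x2 "screen": light 0 shadows pixel 1, light 1 pixel 0.
m0 = [[0.0, 1.0]]
m1 = [[1.0, 0.0]]
print(pack_shadow_masks([m0, m1]))
# -> [[[0.0, 1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]]]
```

The lighting pass then only needs one texture fetch to get all four lights' occlusion terms for a pixel.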
 