Unreal Engine 3's Shadowing Tech

poly-gone

Newcomer
I was wondering about UE3's shadowing techniques after watching that 170 MB trailer. In the scene where the lantern moves through the corridors of the castle, the narrator says that soft shadows are generated by interpolating between crisp and blurred cube maps along with an attenuation sweep to generate soft shadows. This doesn't sound right to me.

If they're doing shadow mapping (which is what I think they're doing in that scene), blurring a shadow map will simply destroy all the depth information. So, this wouldn't be possible at all. I think what they're doing is simply using 2 projective cubemaps for the lantern's own shadow, to achieve that soft shadowing effect, whereas the rest of the scene casts soft (if they say so) shadows using shadow maps with higher order PCF. This would explain why the shadows cast by all the objects in the scene, like that ragdoll model or the pipe for example, remain hard.

Blurring the shadowed scene in screen space and re-projecting it back would give much softer shadows.
 
We pretty much concluded the same thing in the office just the other day about the lanterns.

Not sure about screen space blur approach, surely shadow leakage would be a problem?
 
Not sure about screen space blur approach, surely shadow leakage would be a problem?
Not if you use high resolution blur maps. We've implemented this for both shadow maps and shadow volumes in our engine using variable sized blur maps. The resolution depends on screen size. You can actually blur the maps based on light-to-fragment distance to achieve a "depth-dependent" blurring. Works like a charm, though the framerates dip a lot on current generation hardware.
 
Maybe the crisp shadow is a single shadow map and the blurred shadow is several jittered shadow maps averaged.
 
Could be, but then that doesn't explain why the shadows cast by all the other objects are hard, including that walking creature in front of the colored rotating light. The narrator claims they're soft, but they don't look that way to me at all.
 
Heh, I thought that part of the video was pretty clear. Guess it's just me. What you describe is exactly what I thought: it's simply 2 cube map masks that it lerps between, one sharp, the other blurry. From the more I read about it, I'm getting the feeling they aren't even doing real soft shadows. In that scene specifically, the real shadows coming from the walls and such are without a doubt stencil shadow volumes. The Unreal tech site says that's what they use for moving lights.

The character with the colored light projecting on him would be using the 16x oversampled buffers, whatever that is supposed to mean. All the pictures and the video have characters that cast mildly soft shadows, but all blurred by about the same amount. I now think they're merely a 16-sample PCF that isn't based on distance from the occluder, which means they aren't real soft shadows but merely non-sharp shadow buffers. While technically anything that isn't hard counts as soft shadows, to me (and I'm sure to other people) "soft shadows" means shadows with a penumbra. It makes sense for them to use buffers for the characters instead of volumes so they can put the animation in the vertex shader. The non-character pics with soft shadows must just be static light maps, because otherwise they would be stencil shadows. So that pic on their site showing off how good the soft shadows are is basically just "Hey, we can do lightmaps." Big deal. I wish they would push the other features they have instead of something they most likely don't have, and even if they do, isn't used everywhere.
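As an aside, a uniform-kernel N-sample PCF of the kind speculated about here (blur amount independent of occluder distance) can be sketched on the CPU like this. The function name and the random jitter scheme are assumptions for illustration, not UE3's actual filter:

```python
import random

def pcf_shadow_term(shadow_map, u, v, frag_depth, samples=16, radius=1.5, seed=0):
    """Percentage-closer filtering: average of binary depth tests at
    jittered offsets around (u, v). The kernel radius is fixed, so the
    penumbra width does NOT grow with occluder distance."""
    h = len(shadow_map)
    w = len(shadow_map[0])
    rng = random.Random(seed)  # deterministic jitter for this sketch
    lit = 0
    for _ in range(samples):
        su = min(w - 1, max(0, int(u + rng.uniform(-radius, radius))))
        sv = min(h - 1, max(0, int(v + rng.uniform(-radius, radius))))
        if shadow_map[sv][su] >= frag_depth:  # stored depth not closer -> lit
            lit += 1
    return lit / samples
```

Averaging the binary tests (rather than blurring the depths themselves) is what keeps this valid: the depth comparison happens per sample, before the averaging.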

edited to include link:
http://www.unrealtechnology.com/html/technology/ue30.shtml

poly-gone, I saw your post on GameDev talking about this soft shadow thing you're doing yourself as well. I'm curious as to how that's working. I looked at that RenderMonkey shadow example, and from what I could tell it only works when projecting onto a plane, with no way to have self-shadowing. Is this what yours does, or does it work in a fully general environment?
 
Vogel said:
Static light, static object:
precomputed light occlusion term either per vertex or texel (if mesh is uniquely unwrapped)

Static light, dynamic object:
per object shadow depth buffer, resolution based on the screenspace size of the shadow bounding sphere

Dynamic lights:
stencil shadow volumes

The different approaches work together nicely, as each only contributes the light occlusion term at a given pixel, which is then used by the lighting shader, so the only visual difference you see is the amount of fuzziness in the shadow. The reason for mixing the three approaches is that none of them alone would be sufficient for the kind of vivid environments we want to create.

-- Daniel, Epic Games Inc.
- http://www.beyond3d.com/forum/viewtopic.php?t=12288&postdays=0&postorder=asc&start=80
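Vogel's description (each technique just produces a light-occlusion term that the lighting shader consumes) can be sketched like this in Python. This is a hedged sketch: combining terms by product and the function name `shade` are my assumptions, since the quote doesn't say how multiple terms are merged:

```python
def shade(albedo, light_intensity, occlusion_terms):
    """Combine per-technique occlusion terms (lightmap, per-object depth
    buffer, stencil volume), each in [0, 1], into one factor consumed by
    the lighting computation. Product is one plausible combiner
    (an assumption; the post doesn't specify)."""
    occlusion = 1.0
    for term in occlusion_terms:
        occlusion *= term
    return albedo * light_intensity * occlusion
```

The key property the quote relies on is that the lighting shader sees only a scalar occlusion per pixel, so it never needs to know which technique produced it.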
 
Interesting, there are a few things in this thread I haven't understood.

Do you mean blurring the shadow map directly, or blurring when compositing the shadow map and the scene?

How can you blur a shadow map? What are blur maps?

And what do you mean by:

" You can actually blur the maps based on light-to-fragment distance to achieve a "depth-dependent" blurring"

Any answer or link to information would be greatly appreciated :)
 
 
poly-gone, I saw your post on GameDev talking about this soft shadow thing you're doing yourself as well. I'm curious as to how that's working. I looked at that RenderMonkey shadow example, and from what I could tell it only works when projecting onto a plane, with no way to have self-shadowing. Is this what yours does, or does it work in a fully general environment?
Yes, that was a technique we were using. Since it couldn't handle self-shadowing, we don't use it anymore. Now we have 2 dynamic shadowing techniques in the engine (apart from PRT for static objects).

The first is based on stencil shadowing. We render 2 shadow volumes into a buffer: the first shadow volume is generated from an unjittered light position and rendered into the red channel, whereas the second is generated from a jittered position and rendered into the green channel. Then we blur the red channel based on the green channel to achieve soft shadows. This technique is described in ShaderX2: Shader Programming Tips and Tricks...

The second method is based on shadow mapping. The shadow map is generated as usual, but after doing the depth comparison, we "capture" the shadow term at each pixel into a buffer. This is blurred using a separable Gaussian filter (you could even use a variable Poisson filter) and projected back onto the scene in screen space. This results in better looking shadows than what is done by Unreal Engine 3.
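The separable Gaussian blur step can be sketched on the CPU like this. The shadow-term buffer is just a 2D list here, and clamped edge sampling is my assumption, since edge handling isn't described:

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def separable_blur(buf, radius=2, sigma=1.0):
    """Blur a 2D shadow-term buffer with a separable Gaussian:
    one horizontal pass, then one vertical pass, which costs
    O(n*k) per pixel instead of O(n*k^2) for a full 2D kernel."""
    k = gaussian_kernel(radius, sigma)
    h, w = len(buf), len(buf[0])

    def pass1d(src, horizontal):
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for i, kv in enumerate(k):
                    o = i - radius
                    sx = min(w - 1, max(0, x + o)) if horizontal else x
                    sy = y if horizontal else min(h - 1, max(0, y + o))
                    acc += kv * src[sy][sx]  # clamped edge sampling
                out[y][x] = acc
        return out

    return pass1d(pass1d(buf, True), False)
```

Because the buffer holds the already-resolved shadow term (the result of the depth test), blurring it is safe; blurring raw depths would destroy the depth information, as noted at the top of the thread.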

The non-character pics with soft shadows must just be static light maps, because otherwise they would be stencil shadows.
They're using spherical harmonic lighting for static geometry.

EDIT: "the first shadow volumes is generated from an unjittered light" corrected to "the first shadow volume is generated from an unjittered light position".
 
Do you mean blurring the shadow map directly, or blurring when compositing the shadow map and the scene?
The technique is explained above.

" You can actually blur the maps based on light-to-fragment distance to achieve a "depth-dependent" blurring"
The texel sampling distance used in the blur filter is "modified" using the light-to-fragment distance. In code, it would look something like this :-

float4 vSampleCoord = vTexCoord + fLightToFragDist * vTexelOffset;

So, the further a fragment gets from the light, the more the shadow map gets blurred in the region around that fragment.
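As a CPU-side illustration of that HLSL line, here is a small Python sketch in which the sampling distance of a simple 5-tap cross filter grows with light-to-fragment distance. The kernel shape and the function name are assumptions for illustration, not the poster's actual filter:

```python
def depth_dependent_blur(shadow_terms, x, y, light_dist, base_offset=1.0):
    """Average the shadow term over a cross-shaped neighborhood whose
    sampling step grows with light-to-fragment distance, mirroring
    vSampleCoord = vTexCoord + fLightToFragDist * vTexelOffset."""
    h, w = len(shadow_terms), len(shadow_terms[0])
    step = max(1, int(round(light_dist * base_offset)))  # distance-scaled offset
    acc, n = 0.0, 0
    for dx, dy in [(0, 0), (step, 0), (-step, 0), (0, step), (0, -step)]:
        sx = min(w - 1, max(0, x + dx))
        sy = min(h - 1, max(0, y + dy))
        acc += shadow_terms[sy][sx]
        n += 1
    return acc / n
```

A fragment far from the light samples texels spread further apart, so its shadow edge averages over a wider region and appears softer.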
 
poly-gone said:
They're using spherical harmonic lighting for static geometry.

I disagree partly. I don't disagree that they are using spherical harmonics, because they have said they are. What I disagree with is that it's used for the static shadows. The exact quote from their site is "Ultra high quality and high performance pre-computed shadow masks allow offline processing of static light interactions, while retaining fully dynamic specular lighting and reflections." It's just a mask, meaning it's a lightmap like the ones that have been used for years. Obviously from the pictures it's very high res, so it looks nice. There would be no reason to use SH for static shadows; you'd use SH if things were changing, not if they were static. The site does say, "Support for all modern per-pixel lighting and rendering techniques including ... pre-computed shadow masks; and pre-computed bump-granularity self-shadowing using spherical harmonic maps." So it's for self-shadowing bump maps, instead of using horizon maps, and not for the gross object shadows. The masks do the static object shadows. I would guess they'd use 4-component SH so it'd fit in one RGBA texture.

Which do you find to be faster of your two techniques? Why use one of these methods over the other? In other words, why have 2 if one works fine? The buffer you capture to, is that in screen space? I don't see how you could have it in light space, because you can't see any shadows there, but you say you project it back onto the scene, which makes me think it's not in screen space. If it were in screen space, how would you prevent shadow bleeding? Is there some place that explains this? From the way you explain it, it still seems to me that this won't do self-shadowing.

I am very interested in your second method. From the research I've done, all the practical methods rely in some way on the geometry, such as the one from ShaderX2, or smoothies. I would rather have an image-space method, but I haven't found any that work in general environments other than this one you explain, so I would like to try it myself.
 
What I disagree with is that it's used for the static shadows.
Not static shadows, static geometry. The geometry CANNOT deform in any way. Granted, the new DX9 Summer Update 2004 adds Local Deformable PRT, but with the current technique employed in the Unreal Engine, PRT cannot be done on dynamic geometry.

And yeah, they're probably using 4th order spherical harmonics.

Which do you find to be faster of your two techniques?
Shadow mapping, of course. The stencil shadow algorithm works fine for simple models (< 5-10K polys), but for complex geometry like characters, the shadow mapping technique gives far better results.

Why use one of these methods over the other? In other words, why have 2 if one works fine?
Because the stencil shadow technique produces visually accurate soft shadows but is slower, whereas the shadow mapping technique produces almost uniformly soft shadows but is much faster.

The buffer you capture to, is that in screen space? I don't see how you could have it in light space, because you can't see any shadows there, but you say you project it back onto the scene, which makes me think it's not in screen space. If it were in screen space, how would you prevent shadow bleeding?
Yup, that's done in screen space. The screen-space buffer size scales with the screen resolution. This solves shadow bleeding, but with a performance penalty.

From the way you explain it, it still seems to me that this won't do self-shadowing.
Believe me, it does handle self-shadowing. That's the whole point of shadow mapping. Here are a couple of screenshots.

SoftShadows-HighRes.PNG

SoftStencils-HighRes.PNG
 
Shouldn't it be occluder-receiver distance instead of light-receiver distance?

Because it's that distance which defines the amount of "soft" shadowing you get.
 
Well, I guess I disagree with that too :D. From what their site says, it seems they just use it for texture-space shadows that come from the bump map, not from static geometry. Since the textures themselves are static, that could even be used on dynamic meshes. In fact, I don't think they are doing PRT at all. The site only says SH, which does not mean PRT. They just use the SH basis to store bump map shadows.
The screen-space buffer size scales with the screen resolution. This solves shadow bleeding,
Sounds to me like it doesn't really solve the bleeding, just makes it small enough to not be noticeable. I don't quite understand how that matters, though, unless the actual blur scales with resolution. Couldn't you save that stuff to the alpha channel, unless you've got something else there?
whereas the shadow mapping technique produces almost uniformly soft shadows
So, it doesn't get softer as the distance from the occluder to the occludee increases? That would just be nice antialiased shadows, which I think is exactly what UE3 is doing. I was expecting the two methods to have similar output. I guess the question I really want answered at this point is: does this fake what real soft shadows would look like, with a penumbra that starts at zero width at the occluder and increases? If not, then I guess it's not really what I was looking for, so I won't bug you anymore for the details :). Btw thanks a lot for the pics, that helped a lot, and they look purty.
 
Shouldn't it be occluder-receiver distance instead of light-receiver distance?
Yes it should, but this is quite hard to do, since the occluder already covers the receiver. Light-to-receiver distance is an approximation. But I'm experimenting with a technique that samples all the texels around the current texel and chooses the one with the "greatest" depth as the approximate distance to the receiver. By subtracting the depth value at the center from this texel, one can get the approximate occluder-to-receiver distance.

just makes it small enough to not be noticeable.
Which in other words solves the problem.

If not, then I guess it's not really what I was looking for, so I won't bug you anymore for the details
I could let you know once I've solved that "problem".

So, it doesn't get softer as the distance from the occluder to the occludee increases?
As I mentioned earlier, I'm working on that.

Btw thanks a lot for the pics, that helped a lot, and they look purty.
You're welcome, and thanks.
 
OK, tonight my girlfriend is away, so I'll have time to modify the shadow mapping in my engine to test that :)

Just one last question: what is the shadow term?
 
Just one last question: what is the shadow term?
The result of the depth comparison test :-

Code:
float fShadowTerm = tex2Dproj( ShadowSampler, IN.vProjCoord ).r < IN.fDepth ? 0.0f : 1.0f;
 
poly-gone said:
Shouldn't it be occluder-receiver distance instead of light-receiver distance?
Yes it should, but this is quite hard to do, since the occluder already covers the receiver. Light-to-receiver distance is an approximation. But I'm experimenting with a technique that samples all the texels around the current texel and chooses the one with the "greatest" depth as the approximate distance to the receiver. By subtracting the depth value at the center from this texel, one can get the approximate occluder-to-receiver distance.
I was experimenting with this sort of thing about a year ago with shadow maps, but had a crapload of image artifacts from edge cases I didn't foresee.

A problem with searching for an occluder is that it's discrete in nature, and you only have the topmost depth value in your shadow map. If something is creeping along behind an occluder, once a single pixel of this thing is visible, your nearest occluder suddenly changes, and you get a sudden change in a large part of your shadow since the blur radius suddenly changes.

Another problem with artificially blurring shadows is when you have shadows behind shadows, but I was trying to blur in the same space as the shadow map. If you're blurring in screen space, how do you get the blur right? Does the blur get bigger as you get closer to it? If you were looking at a floor from a glancing angle, screen space blurring would make the shadow blur too far, or you might blur it onto another object. The larger buffer just seems to make the blur small enough that it doesn't affect the image too much, which really mutes the soft shadow effect. Sounds like it would look like a halo (as used in HDR techniques) for shadows.

I've sort of given up on the idea of variably soft shadow maps without doing it the correct slow way with multiple samples, but this thread has sort of inspired me to give it another go. I'm also a fan of SH PRT with neighborhood transfer, but the performance cost seems pretty high.
 