SIGGRAPH 2009: Advances in Real-Time Rendering in 3D Graphics and Games Slides

Noticed nAo and Andrew got some big "thanks" from Bungie concerning shadows--nice job guys :)
 
Indeed - that is a really awesome presentation IMHO... it explains and lays out a ton of options and details that come up when actually implementing modern shadow mapping techniques. If only this presentation had been available earlier, I could have answered about 100 threads with just a link to it ;) I expect to be spamming the link from now on, though!

Kudos to Bungie on laying this all out so clearly!

I also highly recommend checking out the LBP one... lots of clever ideas in there, and it's neat to finally see how they pulled off a lot of the cool stuff they did. I'm sure the other presentations are good too, but those are the two I've read so far :)
 
So Bungie can render atmosphere with a continuous transition into space... now why would they need such a capability? :)
 
Makes you wonder what was left out of the game due to time constraints... hopefully Reach allows them to deploy some more ambitious gameplay ideas and designs.
 
Very good presentation indeed!

In my engine I am currently using cascaded VSM (Andy, you might remember me asking so many questions some months ago ;))

Anyway, I am considering adding the exponential part (I haven't assimilated the paper yet, though!).

The code given by Bungie is the following:

Code:
float ComputeEVSM( float2 vShadowMapUVs, float fReceiverDepth, float fCascadeIndex )
{
    // Depth should be in the [0, 1] range.
    float2 warpedDepth = WarpDepth( fReceiverDepth );
    float  posDepth = warpedDepth.x;
    float  negDepth = warpedDepth.y;
    float4 occluder = tCascadeShadowMaps.Sample( sShadowLinearClamp,
                          float3( vShadowMapUVs, fCascadeIndex ) );
    float2 posMoments = occluder.xz;
    float2 negMoments = occluder.yw;

    // Compute the derivative of the warping function at the pixel's depth
    // and use it to scale the minimum variance.
    float posDepthScale  = fESMExponentialMultiplier * posDepth;
    float posMinVariance = VSM_MIN_VARIANCE * posDepthScale * posDepthScale;
    float negDepthScale  = fESMExponentialMultiplier2 * negDepth;
    float negMinVariance = VSM_MIN_VARIANCE * negDepthScale * negDepthScale;

    // Compute two Chebyshev bounds, one for the positive and one for the
    // negative warp, and take the minimum.
    float shadowContrib1 = ComputeChebyshevBound( posMoments.x, posMoments.y, posDepth, posMinVariance );
    float shadowContrib2 = ComputeChebyshevBound( negMoments.x, negMoments.y, negDepth, negMinVariance );

    return min( shadowContrib1, shadowContrib2 );
}
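
(For reference, I assume ComputeChebyshevBound() is just the standard one-tailed Chebyshev inequality from VSM; the body below is my guess at it, not Bungie's actual code.)

Code:
float ComputeChebyshevBound( float fMean, float fMeanSq, float fDepth, float fMinVariance )
{
    // Variance from the first two moments, clamped to avoid numerical
    // problems when the occluder distribution is (nearly) degenerate.
    float fVariance = max( fMeanSq - fMean * fMean, fMinVariance );

    // One-tailed Chebyshev inequality: an upper bound on the fraction of
    // the occluder distribution lying beyond the receiver depth.
    float fD    = fDepth - fMean;
    float fPMax = fVariance / ( fVariance + fD * fD );

    // Fully lit when the receiver is in front of the mean occluder depth.
    return ( fDepth <= fMean ) ? 1.0f : fPMax;
}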

Now, I don't see where any exp() calls are executed; are they in the WarpDepth() function? Any idea what this function actually does otherwise?

Also, I was wondering what kind of values fESMExponentialMultiplier (and likewise fESMExponentialMultiplier2) would have.

Finally, I guess they use a kind of R16fG16fB16fA16f RT, or does one need a full 32-bit target per channel?

Thanks for your help!

Cheers,
Gregory
 
Now, I don't see where any exp() calls are executed; are they in the WarpDepth() function?
Yup. The other exp() calls are run when rendering the ESMs themselves (either during the rendering of the shadow maps or during their post-processing).
 
Here's mine:
Code:
// Input depth should be in [0, 1]
float2 WarpDepth(float depth, float2 exponents)
{
    // Rescale depth into [-1, 1]
    depth = 2.0f * depth - 1.0f;
    float pos =  exp( exponents.x * depth);
    float neg = -exp(-exponents.y * depth);
    return float2(pos, neg);
}

You really need fp32 per component though, or else your exponents have to be so small that you'll get tons of problems. I believe the two "fESMExponentialMultiplier" values (1 and 2) are just the "positive" and "negative" exponents being passed to this function in the second parameter. Normally you'll want something like 40 and -20 respectively (often scaled to the range of your partition), but play with it.
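
To see why fp32 matters and where the 40 comes from, here's a rough budget check (my back-of-the-envelope reasoning, not from the slides): the blurred texture stores the squared warped depth exp(c * z)^2 = exp(2 * c * z), and z reaches 1 after the [-1, 1] remap.

Code:
// fp32 overflows near exp(88.7), so we need 2 * c < 88.7, i.e. c < ~44;
// hence the typical 40. fp16 tops out around 65504 ~= exp(11.1), which
// would force c < ~5.5: far too flat a warp to be useful.
static const float MAX_POS_EXPONENT = 44.0f; // hypothetical safety clamp

float2 ClampExponents( float2 vExponents )
{
    return min( vExponents, float2( MAX_POS_EXPONENT, MAX_POS_EXPONENT ) );
}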
 
Thanks Andy.

When generating the EVSM shadow map, before the blurring actually occurs, what do you output to the shadow texture?
Simply the following:
Code:
OUT.Color.xy = WarpDepth(...);
OUT.Color.z  = OUT.Color.x * OUT.Color.x;
OUT.Color.w  = OUT.Color.y * OUT.Color.y;
and then blur it?

Also, another question: I am not sure how they perform their shadow map blur when using EVSM. They talk about log-space filtering, but I am not sure what exactly they mean.

Does anyone know some pseudo-code for that?
 
Simply the following:
Yes.

Also, another question: I am not sure how they perform their shadow map blur when using EVSM. They talk about log-space filtering, but I am not sure what exactly they mean.
Just blur it normally. The log-space filtering stuff is mainly useful in the context of ESM, where the fp32 range isn't really large enough to bump your exponents up high enough for large scenes.
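
If it helps, here's a minimal sketch of what "blur it normally" means: one horizontal pass of a separable box blur over all four EVSM components (the names tShadowMap, sPointClamp and g_vTexelSize are mine, not Bungie's).

Code:
Texture2D    tShadowMap;
SamplerState sPointClamp;
float2       g_vTexelSize; // 1 / shadow map resolution

float4 BlurEVSMHorizontalPS( float2 vUV : TEXCOORD0 ) : SV_Target
{
    const int KERNEL_RADIUS = 2; // 5-tap kernel
    float4 vSum = 0.0f;

    [unroll]
    for ( int i = -KERNEL_RADIUS; i <= KERNEL_RADIUS; ++i )
    {
        // Scale each sample as it is accumulated; the warped values are
        // large, so keeping the running total small helps fp32 precision.
        vSum += tShadowMap.Sample( sPointClamp, vUV + float2( i, 0 ) * g_vTexelSize )
              / ( 2.0f * KERNEL_RADIUS + 1.0f );
    }
    return vSum; // run again with a (0, i) offset for the vertical pass
}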
 
Just blur it normally. The log-space filtering stuff is mainly useful in the context of ESM, where the fp32 range isn't really large enough to bump your exponents up high enough for large scenes.
Even in the case of using EVSM?

I am asking this because the shadowed area is much "wider" than it should be. The wider the filter kernel, the wider the shadow.
I found out that if I don't comment out the [0..1] to [-1..1] remap line in the WarpDepth() function you gave me, the shadow is even larger...

I might have an issue in my shader code. The blur is a standard separable filter (linear).

Below is the shader code I wrote. Do you see any issue with it (or with the signs of the numbers)?


Code:
#define g_fShadowExponentialPos		40.0f
#define g_fShadowExponentialNeg		20.0f

float2 WarpDepth(float depth, float fExponentPos, float fExponentNeg)
{
    // Rescale depth into [-1, 1]
    //depth = 2.0f * depth - 1.0f; // commented out otherwise shadow is even larger ??!
    float pos =  exp(fExponentPos * depth);
    float neg = -exp(-fExponentNeg * depth);
    return float2(pos, neg);
}


Shadow map generation:

Code:
OUT.Color.xy = WarpDepth(fDepthAdj, g_fShadowExponentialPos, g_fShadowExponentialNeg);
OUT.Color.z = OUT.Color.x*OUT.Color.x;
OUT.Color.w = OUT.Color.y*OUT.Color.y;



Shadow test:

Code:
float4 occluder = oShadowTexture.Sample( SamplerShadowMap, vProjCoords[iSplit].xyz );
float2 warpedDepth = WarpDepth( fDepthAdj, g_fShadowExponentialPos, g_fShadowExponentialNeg );
float  posDepth = warpedDepth.x;
float  negDepth = warpedDepth.y;
float2 posMoments = occluder.xz;
float2 negMoments = occluder.yw;

// Compute the derivative of the warping function at the pixel's depth
// and use it to scale the minimum variance.
float posDepthScale  = g_fShadowExponentialPos * posDepth;
float posMinVariance = g_VSMMinVariance[iSplit] * posDepthScale * posDepthScale;
float negDepthScale  = g_fShadowExponentialNeg * negDepth;
float negMinVariance = g_VSMMinVariance[iSplit] * negDepthScale * negDepthScale;

// Compute two Chebyshev bounds, one for the positive and one for the
// negative warp, and take the minimum.
float shadowContrib1 = ChebyshevUpperBound( posMoments, posDepth, posMinVariance );
float shadowContrib2 = ChebyshevUpperBound( negMoments, negDepth, negMinVariance );
fShadowContrib = min( shadowContrib1, shadowContrib2 );

Blur width = 5 and 15; it doesn't look right to me...
[Attachment: evsm.png]


Also, I sometimes get some hard edges, no idea why:
[Attachment: evsm2.png]
 
Yeah, that's clearly not right, but I don't have time to debug the code. You should probably take it to a separate thread in any case. The only thing off the top of my head: for precision reasons, make sure your blur kernel accumulates (1/N) * sample, rather than summing all the samples and then multiplying by 1/N.
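
Something like this (a sketch of the accumulation order; N and FetchSample() are hypothetical stand-ins for the kernel size and the texture fetch):

Code:
static const int N = 5;
float4 vSum = 0.0f;
[unroll]
for ( int i = 0; i < N; ++i )
{
    // Scale each sample as it is accumulated...
    vSum += ( 1.0f / N ) * FetchSample( i );
}
// ...rather than summing all the large exp()-range values first and
// multiplying by 1/N at the end, where the running total loses precision.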
 
I have found my issue: I forgot to update the blur shader and was actually blurring 2 components instead of 4 :(

Anyway, now I have (still!) another issue. See the screenshots below: it seems I have a problem where several objects overlap. I have tried increasing the LBR (light-bleeding reduction) up to 0.9, but the artefacts are still visible... I didn't have this issue when using the upper bound only.
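
(By LBR I mean the usual light-bleeding reduction remap of the Chebyshev result, roughly like the sketch below; the function names are mine.)

Code:
float LinStep( float fMin, float fMax, float fV )
{
    return saturate( ( fV - fMin ) / ( fMax - fMin ) );
}

// Cut off the light-bleeding-prone tail of pMax and rescale the rest,
// e.g. fLBRAmount = 0.9 maps [0.9, 1] back onto [0, 1].
float ReduceLightBleeding( float fPMax, float fLBRAmount )
{
    return LinStep( fLBRAmount, 1.0f, fPMax );
}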

Any idea what can be causing this ?

[Attachment: evsm.jpg]

[Attachment: evsm2.jpg]


Thanks !
 
If everything is implemented properly, the "negative" bound will only ever make things darker (hence the min). Thus you'll still have issues even if you test your code with just the positive bound. This looks like it may well be a "non-planar receiver" issue. Your options here are to increase your negative exponent (i.e. make it more negative, say -20 -> -40), decrease your blur kernel size, or possibly decrease your positive exponent (although this will make any typical light bleeding worse).

Also make sure that you're tightly clamping your depth function in light space to the view frustum that it covers, and per partition if you're using cascades. See the presentation from Bungie for more info, but this is pretty important, as it both eliminates problems with offscreen objects and increases the effective steepness of the exponentials in your view.
 
Thanks for your help Andy.

I have tried what you suggested. I am not totally sure what you mean by "clamping your depth function"; I guess you mean mapping it to [0..1], which is what I do before computing the moments.

The only thing that seems to help, apart from reducing the blur kernel size (which is set to 5), is reducing the absolute value of the negative exponent, from -20.0 to -10.0. This is the opposite of what you suggested, so I am not sure why I get this effect...

So right now my positive exp. is 40, negative is -10.

Am I hitting some floating-point limit? My depth range before remapping is quite large, about 2000 m across 4 slices...
 
I have tried what you suggested. I am not totally sure what you mean by "clamping your depth function"; I guess you mean mapping it to [0..1], which is what I do before computing the moments.
As per Bungie's presentation, I mean mapping it such that the [0, 1] range covers just the parts of the depth range that are visible in the camera frusta, i.e. everything outside the current partition should get clamped to 0 or 1 (depending on which side of the frusta it is relative to the light).
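
A minimal sketch of that clamp (my naming, not Bungie's): fLightDepth is the raw light-space depth, and vPartitionBounds holds the [min, max] light-space depths visible in the current partition.

Code:
float ClampPartitionDepth( float fLightDepth, float2 vPartitionBounds )
{
    // Depths outside the partition collapse to 0 or 1, so the whole
    // [0, 1] range (and thus the steepness of the exponential warp) is
    // spent on depths the camera can actually see.
    return saturate( ( fLightDepth        - vPartitionBounds.x ) /
                     ( vPartitionBounds.y - vPartitionBounds.x ) );
}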

Otherwise yeah just play around with the values and see what works best for you. Still, the clamping should give you better results if you're not already doing it.
 
OK, thanks Andy, very kind. I will give the clamp a try; I don't do it yet (actually, all my values should already be in the [0..1] range, "in theory" :)).
 