Problem with Variance Shadow Map and directional light

RicarDog

Newcomer
Hello,

I've been trying to implement variance shadow maps with a directional light for my terrain viewer (OpenGL + CgFX), but I'm getting the following result:

[image: vsmft8.jpg]


The terrain is 128x128 with the origin at the center, so it ranges from -64 to 64. A traditional depth-based shadow map works fine, and my VSM implementation also worked when I had a point light with a perspective projection for it.

I have checked the depth output of the shadow map generation pass and the render pass, as well as the texture coordinates transformed by the texture matrix; they all look fine. The artifacts appear only in the values fetched from the shadow map texture.

I set the framebuffer as follows:

Code:
glGenTextures(1, &texShadowMap);
glBindTexture(GL_TEXTURE_2D, texShadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, shadowMapSize, shadowMapSize, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffersEXT(1, &fboShadowMap);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboShadowMap);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, texShadowMap, 0);

Light projection matrix:

Code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-65, 65, -65, 65, -200.0, 200.0);

Texture matrix:

Code:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.5f, 0.5f, 0.5f);
glScalef(0.5f, 0.5f, 0.5f);
glMultMatrixf(lightProjection);
glMultMatrixf(lightModelView);

Shadow map generation setup:

Code:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboShadowMap);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, shadowMapSize, shadowMapSize);
land->Draw();
glPopAttrib();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Shadow map generation shader (snippets):

Code:
//vertex
float lightSpaceZ = mul(lightModelView, position).z;
float nearClipPlane = 200;
float farClipPlane = -200;
oDepth = (lightSpaceZ - nearClipPlane) / (farClipPlane - nearClipPlane);

//fragment

float dx = ddx(depth);
float dy = ddy(depth);
return float4(depth, depth * depth + 0.25*(dx*dx + dy*dy), 0.0f, 1.0f);

Terrain rendering shader:

Code:
//vertex	
float lightSpaceZ = mul(lightModelView, position).z;
float nearClipPlane = 200;
float farClipPlane = -200;
OUT.lightDepth = (lightSpaceZ - nearClipPlane) / (farClipPlane - nearClipPlane);
OUT.texCoordProj = mul(textureMatrix, position);

//fragment

float chebyshev_upper_bound(float2 moments, float t)
{
    float p = (t<=moments.x);
    float variance = moments.y - (moments.x*moments.x);
    variance = max(variance, 0.00001);
    
    float d = t - moments.x;
    float p_max = variance/(variance + d*d);
    
    return max(p, p_max);
}

float2 moments = tex2Dproj(shadowMap, IN.texCoordProj).xy;
float shadowCoeff = chebyshev_upper_bound(moments, IN.lightDepth);

Tested on GeForce 8800 GTS. Any help is appreciated. Thanks!
 
I've got VSM working with an orthographic projection just fine. One thing I noticed between the article in GPU Gems 3 and the example code on the DVD was that they didn't end up using the biasing at all. It was enough to just clamp the minimum variance. I found the same thing while implementing it. It doesn't seem like it should matter, but maybe try taking the biasing out and see what it does.

Now, if I can just get my frustum parameters for the directional light to work for most scenes, I'd be in great shape. ;)
 
That looks a bit odd... have you tried outputting various intermediate values to see where those discontinuities are coming from? Is it the texture coordinates? The data read from the texture? The reference depth?

One thing to note is that you shouldn't really need tex2Dproj for an orthographic projection. It won't necessarily hurt you, but you don't need the perspective divide at all in this case (just scale and bias x/y), so there's no need for the projection.

Another unrelated note is that you should be able to get away with a 2-component texture for VSM on the GeForce 8 with the latest drivers. If you render to "luminance alpha", it seems to work properly now, while perhaps being a bit convoluted in typical OpenGL legacy style ;)

As Brian Richardson notes, you also probably don't need the pixel variance computation in the shadow rendering pass in most cases. I believe I mentioned this in the chapter, but it just tends to cause as many problems as it solves unless you're dealing with really high-detail geometry, low-resolution shadow maps and bad LOD... in which case your shadows are gonna look terrible anyways ;) In practice the minimum variance clamp is generally sufficient.
 
That looks a bit odd... have you tried outputting various intermediate values to see where those discontinuities are coming from? Is it the texture coordinates? The data read from the texture? The reference depth?
Yes, the discontinuities only appear in the data read from the texture.

Here is the output of the shadow map pass:

[image: vsm2qv7.jpg]


And the projective texture coordinates:

[image: vsm3sn5.jpg]


I've removed the ddx/ddy part and also tried different texture filtering modes (GL_NEAREST and GL_LINEAR_MIPMAP_LINEAR), but there were no significant changes.

I've also tried luminance-alpha mode by using the call
Code:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA32F_ARB, shadowMapSize, shadowMapSize, 0, GL_LUMINANCE_ALPHA, GL_FLOAT, NULL);
but then the shadows disappeared completely. Am I missing something?

Thanks for your help!
 
Are your clip-plane constants flipped?

float nearClipPlane = 200;
float farClipPlane = -200;

Your glOrtho call has -200, 200.
This is on purpose: glOrtho's near/far values are distances along -z, so it effectively negates them internally in order to have nearplane < farplane. Using the same values as the glOrtho call makes the shadow grow to the opposite side.
 
The shadow data and projective texture coordinates do appear correct (assuming that your shadow map actually covers the whole texture and is just being displayed in the bottom corner of the window for visualization purposes).

So that means something odd is going on with the lookup. Have you tried using a standard tex2D as I suggested? Also ensure (for testing) that you're using nearest neighbour lookup, no aniso and no mipmaps. Once that works you can add back in those features. (And of course make sure that if you're using mipmaps that you generate them after you render and optionally blur your shadow map!)


I've also tried luminance-alpha mode [...] but then the shadows disappeared completely. Am I missing something?
Make sure you're writing to (and probably reading from?) the red and *alpha* components in your shader. Odd - I know - but I seem to remember that's how GL is set up for legacy reasons.
 
So that means something odd is going on with the lookup. Have you tried using a standard tex2D as I suggested? Also ensure (for testing) that you're using nearest neighbour lookup, no aniso and no mipmaps. Once that works you can add back in those features. (And of course make sure that if you're using mipmaps that you generate them after you render and optionally blur your shadow map!)
Yes, I'm now using tex2D and GL_NEAREST. No deal :cry:

Make sure you're writing to (and probably reading from?) the red and *alpha* components in your shader. Odd - I know - but I seem to remember that's how GL is setup for legacy reasons.
It worked now, but with the same incorrect results as with RGBA.

I'm starting to think that this may be a driver issue (tested with 174.74 and 169.44) or a problem with the Cg compiler (I've already had some problems with it before). On the other hand, the problem only happens when I use glOrtho for the texture matrix. I think I'll just keep a point light far away simulating the directional light; it's better than having no shadows at all :rolleyes:
 
An update: when I set the glOrtho extents to exactly double the shadow map size, the shadows look almost correct, but only when GL_LINEAR is enabled for GL_TEXTURE_MAG_FILTER. With GL_NEAREST, I get correct shadows plus artifacts everywhere else.

For instance, I've set shadow map size to 512, and the projection as:
Code:
glOrtho(-512, 512, -512, 512, -200.0, 200.0);

With linear filtering for texture magnification I get:

[image: vsm4oi8.jpg]


With nearest I get:

[image: vsm5nz1.jpg]



But I still have no idea what's causing this.
 
Interesting - you may want to look at your matrices again and make sure that the same ones are getting into your shader as were used to render the shadow map. Honestly I'd use general uniforms nowadays rather than the named GL matrices as it just makes everything a hell of a lot simpler.

With nearest I get:
But I still don't have any ideas about the cause of this issue.
Actually that's the expected result with nearest filtering and the light at such a low angle. Without interpolation the discretization of the depth buffer causes those sorts of bands. Linear filtering is indeed what you want to be using anyways... I just asked you to turn it off to try and debug what was wrong.

So yeah, looks to be kind of working now modulo debugging the matrix issues. Next thing you'll want to do is generate mipmaps for the shadow map and enable full trilinear and anisotropic filtering, and enjoy your filtered shadows :)
 