Perspective shadow maps -- the good and the bad?

MfA said:
That sounds rather strange; the perspective projection in PSM-type methods is there exactly to distribute samples like that for the ground plane (note the logarithmic progression in the distances). Carmack's method doesn't seem to tackle the problem of the sample distribution not being optimal for other planes any better than PSM does (you can use multiple buffers for that, but not in the way he describes), so I don't quite see the point in abandoning it.

I think a lot of people here are thinking in an optimization-oriented way ("how many samples per pixel might you get?", etc), but need to drop that for a second and think about fundamentals.

The most important thing is, "does the technique work, all the time? Can I depend on it?" Cascading shadow maps are rock solid. They just work. PSM can't even dream of being rock solid. It is squishy.

It doesn't matter how many samples per pixel you can get from PSM sometimes. What matters is the basic quality of user experience that you are able to guarantee. PSM gives you very feeble guarantees. It's not worth the algorithmic complexity. Throw it away.


Also, nothing about this technique is about calibrating samples for the ground plane. In my version (and I assume in John C's also), the shadow maps are set up as plain orthographic projections aligned with the light direction. You're setting up the maps to provide resolution based on distance from the viewpoint, regardless of what the ground plane is.
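To make that concrete, here is a rough, untested HLSL sketch of what the per-pixel map selection could look like -- the three-map split, the names, and the assumption that each light matrix maps world space straight to that map's UV and depth are mine, not from the thread:

    // Sketch only: three cascades for a single parallel (directional) light.
    // Assumes the application provides one orthographic light matrix per
    // cascade, each taking world space directly to that map's UV (0..1) in
    // xy and light-space depth in z, plus two view-space split distances.
    sampler2D ShadowNear;                 // highest resolution, closest range
    sampler2D ShadowMid;
    sampler2D ShadowFar;
    float4x4  LightMatNear, LightMatMid, LightMatFar;
    float2    Splits;                     // view-space distances where maps switch
    float     Bias;                       // small depth offset against acne

    float ShadowTerm(float3 worldPos, float viewDist)
    {
        float3 uvz;
        float  stored;
        if (viewDist < Splits.x) {
            uvz    = mul(float4(worldPos, 1.0), LightMatNear).xyz;
            stored = tex2D(ShadowNear, uvz.xy).r;
        } else if (viewDist < Splits.y) {
            uvz    = mul(float4(worldPos, 1.0), LightMatMid).xyz;
            stored = tex2D(ShadowMid, uvz.xy).r;
        } else {
            uvz    = mul(float4(worldPos, 1.0), LightMatFar).xyz;
            stored = tex2D(ShadowFar, uvz.xy).r;
        }
        return (uvz.z - Bias < stored) ? 1.0 : 0.0;   // 1 = lit, 0 = shadowed
    }

On shader model 2.0 hardware of the kind discussed here, the compiler would presumably flatten those branches (all three maps get sampled and one result picked), which is part of why the instruction budget comes up later in the thread.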
 
The PSM-alikes are cheaper (and occasionally patented). I'm just not quite sure using orthogonal views from the light source will be very efficient, even in the cascaded scheme... I don't see why it has to be either-or.

Feel free to prove me wrong :) (although I guess that will take a while). I'm hesitant about the "it just works" thing for now; I'll believe there aren't any nasty edge cases when I see it.
 
MfA said:
The PSM-alikes are cheaper (and occasionally patented). I'm just not quite sure using orthogonal views from the light source will be very efficient, even in the cascaded scheme... I don't see why it has to be either-or.

Feel free to prove me wrong :) (although I guess that will take a while). I'm hesitant about the "it just works" thing for now; I'll believe there aren't any nasty edge cases when I see it.


That's fair. But the best way to see it is to implement it. Cascading shadow maps are really pretty easy to implement (especially if you just stick to a parallel light source, as I am in my current game). They're a lot easier than PSM, so if you've tried PSM, it's easy to do both and compare.

I'll just say that I've tried many variations of PSM/TSM, and given up on them all. So you know where I'm coming from.

I was actually surprised at how fast it is to render the multiple shadow maps. It's really not a problem, even on my lightweight dev machine (a Radeon Mobility FireGL T2, which is basically a Mobility 9600).

About nasty edge cases... you already know that PSM has a lot of nasty edge cases. So I'm not sure why you'd hammer on that point. But so far I have had zero problems with the cascading shadow maps. They're great, and the only outstanding issues are things you would have with any kind of shadow map (i.e. what happens when the surfaces come at a grazing angle to the light direction).

I am hoping to try out the stencil thing next week, and I will put up screenshots when I do. But I put it to you that even without the stencil thing, it works a lot better than PSM -- you just can't do PSM on a scene like the one I put up there, and have it work (lots of objects casting shadows from behind the viewpoint, frustum full of casters so that you can't do any frustum-trimming tricks, etc).
 
Jonathan Blow said:
All the shading for the surface occurs within one pixel shader, on development hardware that has a 64-instruction pixel shader limit. Most of the instructions are already used up by the rest of the shader, which does a lot of stuff.

Yeah, I didn't realize you were trying to do it with the rest of the lighting equation all in one pass. For lots of samples it will definitely have to be multipass for anything but the latest video cards.

Jonathan Blow said:
Also, you are missing something about "true" PCF. Actually a lot of people don't quite get PCF, and ATI even has a demo on their web site that claims to be doing PCF but isn't actually. What you posted above gives you highly quantized results -- if you have 4 samples per pixel, then your possible values are 0/4, 1/4, 2/4, 3/4, 4/4. That is ass. (And it's what the ATI demo does). In order to get better results you want to at least bilerp between the samples in order to get the right answer. That's at least 3 lerp instructions (for 4 samples), plus however many instructions you need in order to compute the lerp factors, which I haven't sat down and thought hard about, but hey, it's more instructions.

Curious, how are you planning to use those lerps? Every PCF algorithm I've seen has used an average. Nvidia graphics cards as far as I know use the same PCF algorithm as well (for the hardware shadow testing). There's no really appropriate way I can see to lerp without some additional knowledge of the shadows (and in the papers I've read, a 2x2 PCF average filter looks decent as long as it's not trying to make soft shadows get larger with occluder distance).

Jonathan Blow said:
Plus, I am doing this in HLSL so God knows what the HLSL compiler is doing.

Hehe, noted
 
Cryect said:
Curious, how are you planning to use those lerps? Every PCF algorithm I've seen has used an average. Nvidia graphics cards as far as I know use the same PCF algorithm as well (for the hardware shadow testing).

The core idea behind PCF is that you can't lerp between depth values in the shadow texture and get anything appropriate -- clearly, the result of the lerp is going to depend on how far apart two texels are in Z, which is only a little bit related to whether they are both in shadow or not.

PCF is just about replacing those depth values with 0 or 1, meaning "in shadow or not", and then doing whatever texture filter you would have been doing.

So, just imagine that you took the entire shadow map texture and replaced all the depth values with 0s and 1s. Then you could use this texture just like you would use a plain color texture, with bilinear interpolation turned on. That is what PCF is trying to replicate. You use the lerps to reproduce the bilinear interpolation that the hardware usually does for you.

Though I will admit that I didn't actually read the PCF paper very carefully, since I understood the basic idea readily. If the original PCF doesn't actually do those lerps, then it is ass also.

You can extend this indefinitely -- instead of doing a small bilinear filter, you can have a big Gaussian filter kernel that covers a bigger area, or something like that, all depending on what you want.
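For what it's worth, a rough untested HLSL sketch of the 4-sample version of that (my names; it assumes the shadow map stores light-space depth in the red channel with point filtering, and that uvz carries the receiver's shadow-map UV in xy and its light-space depth in z):

    float PCF2x2(sampler2D shadowMap, float mapSize, float3 uvz)
    {
        // Find the 2x2 block of texels the lookup falls between, and the
        // fractional position inside it (these are the bilinear lerp factors).
        float2 texelPos = uvz.xy * mapSize - 0.5;
        float2 f        = frac(texelPos);
        float2 base     = (floor(texelPos) + 0.5) / mapSize;   // lower-left texel center
        float  t        = 1.0 / mapSize;

        // Binary shadow tests at the four nearest texels (1 = lit, 0 = shadowed).
        float s00 = (uvz.z < tex2D(shadowMap, base).r)                ? 1.0 : 0.0;
        float s10 = (uvz.z < tex2D(shadowMap, base + float2(t, 0)).r) ? 1.0 : 0.0;
        float s01 = (uvz.z < tex2D(shadowMap, base + float2(0, t)).r) ? 1.0 : 0.0;
        float s11 = (uvz.z < tex2D(shadowMap, base + float2(t, t)).r) ? 1.0 : 0.0;

        // Three lerps reproduce the bilinear filter the hardware would apply
        // to a plain color texture holding those 0s and 1s.
        return lerp(lerp(s00, s10, f.x), lerp(s01, s11, f.x), f.y);
    }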
 
Cryect said:
Curious, how are you planning to use those lerps? Every PCF algorithm I've seen has used an average.

The way I understood the algorithm is that, effectively, the results (0 or 1) of the depth comparisons at the 4 nearest samples are bilinearly filtered.
So it's a weighted average based on the sampling position between those 4 nearest samples in the u and v directions, just like normal texture filtering does.
Because these weights can have arbitrary precision, you can get better results than a regular average where all 4 weights are 0.25.
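A quick worked example with made-up numbers: if the lookup lands 25% of the way toward the next texel in u and 75% in v, the four weights come out to 0.1875, 0.0625, 0.5625 and 0.1875 (they always sum to 1). If three of the four depth tests pass you might then get 0.9375 instead of the flat 0.75 a plain average would give, and the value slides smoothly as the lookup moves across the texel instead of jumping in steps of 0.25.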

edit: oh, I was just too late :)
 
The original PCF algorithm did just what the name says: the amount of shadowing was proportional to the percentage of samples that passed. The bilinear step wasn't in it. The bilinear filter after the compare does offer pretty good results for a small performance hit, though -- especially on Nvidia hardware, where it's free.
 
Ahh, my mistake then!

I can't say I like the non-lerped version. You can spend a lot of samples and still get very banded-looking results.
 
Just wanted to bring this back up, since Gary (who hadn't written to me in a long time) finally got back to me, and I thought you guys might want to read what he wrote (I'm sure Gary wouldn't mind... always good to hear from him):

Gary Tarolli said:
Sorry for the delay -- I've been out of the application realm for a long time, so I don't have any hands-on experience. My first reaction while reading your email was mip-maps, but they often have their own set of problems when used for surface attributes (or shadows). I see Carmack suggested that also. Much depends on what you do with the shadow map values, i.e. what equation or hardware you feed the values into.

The only other things I can think of are these:
1) if you directly blend the shadow map, you might be able to use alpha to soften the edges a little
2) if you feed the shadow map into a shader, then you probably need to use multiple samples to try to soften things, or average over some parameter, a la the T-buffer, motion blur, soft shadows, etc.
 
Jonathan Blow said:
Ahh, my mistake then!

I can't say I like the non-lerped version. You can spend a lot of samples and still get very banded-looking results.
I think if you do good jittering, you get more dithered than banded results. Carmack seems to like shadow maps a lot better this way than with bilinear PCF, and there are others on this forum who also mentioned how bilinear PCF sort of sucks.

I can see where his argument's coming from. With good jittering, you can get an aliased edge pretty smooth (although noisy), but with bilinear filtering an aliased edge still has very visible steps.
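As a rough illustration of the jittered alternative (untested HLSL sketch, my names and tap count; the Poisson-disc offsets and the small tiling noise texture of per-pixel rotations would have to come from the application):

    #define NUM_TAPS 8
    float2    TapOffsets[NUM_TAPS];   // precomputed poisson-disc offsets, in texels
    sampler2D ShadowMap;
    sampler2D NoiseTex;               // tiny tiling texture of random (cos a, sin a)
    float     MapSize;
    float     FilterRadius;           // filter width, in texels

    float JitteredPCF(float3 uvz, float2 screenUV)
    {
        // Per-pixel random rotation so the undersampling shows up as
        // high-frequency noise rather than banding.
        float2 cs = tex2D(NoiseTex, screenUV).rg * 2.0 - 1.0;

        float lit = 0.0;
        for (int i = 0; i < NUM_TAPS; i++)
        {
            // Rotate the tap, scale it to the filter radius, convert to UV.
            float2 o = float2(TapOffsets[i].x * cs.x - TapOffsets[i].y * cs.y,
                              TapOffsets[i].x * cs.y + TapOffsets[i].y * cs.x);
            o *= FilterRadius / MapSize;
            lit += (uvz.z < tex2D(ShadowMap, uvz.xy + o).r) ? 1.0 : 0.0;
        }
        return lit / NUM_TAPS;        // average of binary tests: dithered, not banded
    }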
 