Bug in shadow mapping algorithm?

hoom

Veteran
I've been quite intrigued lately by all the talk of shadow mapping being the future of game shadowing, and of stencil shadowing having no future.

I'm intrigued because there seems to be an error in the typically used algorithm when applied to self-shadowing of objects that are not on the ground.

When light hits a nearly parallel surface, you get this kind of artifacting: (banding & patchiness of shadow in the lower left quarter of the image)
Wolverine.jpg


This is from the soon to be released Nexus: The Jupiter Incident.
I've also seen much the same in Homeworld2 (there were plenty of complaints about it too) and also in screenshots of LoMac.

It seems to me that it's some kind of oddity that perhaps doesn't show up often in FPSes (presumably the type of game for which the algorithm was originally developed), but it is a fairly common occurrence in air & space games.

Any thoughts? Is there a solution?
Or since this is the future of gaming, do I just have to learn to live with it? :rolleyes:
 
I honestly can't see shit from the screenshot you posted, it's too dark and filled with compression artifacts, but it sounds like you're talking about "shadow acne", which is caused by quantisation errors when rendering the shadow map.

It can be helped somewhat by rendering back faces into the shadow map instead of front faces, and by using a bias. Too large a bias pushes the shadow away from the shadow caster though, which makes objects look like they're floating above the ground. For some extra cost you can render the average depth of front and back faces into the shadow map, which pretty much removes all of these issues, but of course is roughly twice the cost.
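To make the acne/bias trade-off concrete, here's a minimal sketch (my own illustration, not code from any of the games mentioned) of the standard shadow-map depth test. The depth values and bias sizes are made-up numbers chosen to show the effect:

```python
# Illustrative sketch of why a depth bias hides "shadow acne": the shadow
# map stores quantised depths, so a lit surface can compare as being
# fractionally behind its own stored depth and shadow itself.

def in_shadow(stored_depth, fragment_depth, bias=0.0):
    """Standard shadow-map test: the fragment is shadowed if it lies
    farther from the light than the depth stored in the map."""
    return fragment_depth > stored_depth + bias

# A surface nearly parallel to the light rays: its true depth is 0.500,
# but the map quantised it to 0.499, so the surface shadows itself.
print(in_shadow(0.499, 0.500))              # True  (acne: self-shadowed)

# A small bias absorbs the quantisation error...
print(in_shadow(0.499, 0.500, bias=0.005))  # False (correctly lit)

# ...but too large a bias detaches real shadows from their casters
# ("peter-panning"): a point just behind a caster now tests as lit.
print(in_shadow(0.500, 0.510, bias=0.02))   # False (shadow lost)
```

The nearly-parallel-surface case in the first comment is exactly the situation in the original screenshot: the quantisation error along a grazing surface is large, so a fixed bias that works on the ground plane can still leave acne on the hull.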
 
Uhm, this is a shadow map resolution problem. There are ways to alleviate the problem a bit (increasing resolution, using the resolution where it's needed), but resources are finite, and pushing the problem as far back as to being invisible isn't quite possible yet.
 
The problem with shadowmapping in general is getting the shadowmaps to map nicely (evenly distributing the resolution in the shadowmap over the screen) on the scene with regard to the camera. When the mapping is not good, you'll get coarse, blocky samples in some areas while you waste precision in other areas (undersampling and oversampling problems).

This problem is not solved yet. There are a few different ways of projecting the shadowmaps, but they all have issues in certain cases.
I don't know if anyone will ever find a completely robust solution, but as video cards continue getting more powerful and gain more memory, it will become less of a problem. Just increasing the resolution of the shadowmaps everywhere reduces the badly sampled areas (that's the approach used for non-realtime shadowmapping by Pixar and such).
 
arrrse said:
Any thoughts? Is there a solution?

I have found that offsetting the sampling position slightly along the normal solves the problem most of the time.
 
IMO using multiple shadow maps (like this guy's work) represents one of the best solutions to perspective aliasing possible at the moment... not quite adaptive shadowmapping, but the best you can do with present-day hardware.
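The core idea behind the multiple-map approach can be sketched like this (my own illustration with made-up split distances, not the linked author's code): split the view depth range into slices and give each slice its own shadow map, so nearby geometry gets far more texels per metre than distant geometry:

```python
# Cascade selection sketch: each entry in split_distances is the far
# boundary of one shadow map's slice of the view frustum; nearer slices
# are smaller, so their maps have much finer effective resolution.

def pick_cascade(view_depth, split_distances):
    """Return the index of the first cascade whose far boundary covers
    this view-space depth (the last cascade catches everything beyond)."""
    for i, far in enumerate(split_distances):
        if view_depth <= far:
            return i
    return len(split_distances) - 1

splits = [10.0, 40.0, 160.0]        # three cascades, increasingly coarse
print(pick_cascade(5.0, splits))    # 0: nearest, highest-resolution map
print(pick_cascade(25.0, splits))   # 1: middle cascade
print(pick_cascade(500.0, splits))  # 2: beyond all splits -> last cascade
```

This directly attacks the over/undersampling problem described above: instead of one map stretched over the whole scene, the resolution is redistributed toward where the camera can actually see it.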
 