Adaptive shadow maps

I remember reading about these while browsing through a PDF back when I was downloading papers about shadow maps. I really don't remember anything about them besides the name, and that the pictures seemed to show the technique reducing the blockiness of the shadows in some cases. With all the talk on these boards about shadow maps lately (yes, I've been lurking) since the release of 3DMark05, I was wondering why nobody has mentioned them. Are PSMs just that much better? Are ASMs not easily implementable in hardware? Are they patented? Oh, and if you could explain how they work, that would be nice as well.

Thanks in advance for the info.
 
:oops: Yup, that would be the one; it's still sitting here on my computer waiting for me to read. I was just wondering: in the world of shadow maps, where exactly do ASMs stand?
 
The problem with them is that nobody has figured out how to implement them efficiently in hardware. They need variable-resolution render targets, which have to be emulated by slicing the render target into tens or hundreds of pieces and re-rendering everything...
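If I remember the paper right, ASM builds a quadtree of fixed-size shadow-map pages over the light's view and refines it wherever the eye view shows aliasing, which is where the readback comes in. A very rough sketch of that structure (names, thresholds and page size are mine, not the paper's):

```cpp
// Minimal sketch of the adaptive quadtree ASM maintains over the light's
// view: each node owns a fixed-size shadow-map page, and a node is split
// only where the eye needs more resolution.  Illustrative only.
#include <array>
#include <memory>

struct ShadowPage { /* handle to one fixed-size (say 64x64) depth tile */ };

struct AsmNode {
    ShadowPage page;                                  // depth samples for this region
    std::array<std::unique_ptr<AsmNode>, 4> child;    // null until refined

    // Split when the projected texel size for this region is too big
    // compared to a screen pixel (the aliasing estimate comes from the
    // eye-view analysis / readback mentioned above).
    void refine(float screenSpaceError, float maxError, int depth, int maxDepth) {
        if (screenSpaceError <= maxError || depth >= maxDepth)
            return;                                   // resolution here is enough
        for (auto& c : child)
            if (!c) c = std::make_unique<AsmNode>();  // allocate finer pages
        // ...re-render the light's depth into each child's page, then recurse
        // with per-child error estimates.
    }
};
```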

Basically, game developers achieve the same effect by creating individual shadow-map pieces for different objects, caching them, recalculating them lazily, and stitching them together at rendering time.
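In caricature, that caching scheme looks something like this (every name here is invented, just to show the lazy-recalculation idea):

```cpp
// Sketch of a per-object shadow-map cache of the kind described above:
// each caster keeps its own small depth map, re-rendered only when the
// object or the light actually changes.  Names are illustrative.
#include <unordered_map>

struct DepthTexture { /* GPU depth target for one caster */ };

struct CachedShadowPiece {
    DepthTexture map;
    unsigned lastObjectVersion = ~0u;   // object transform/animation version
    unsigned lastLightVersion  = ~0u;   // light state when this was rendered
};

class ShadowPieceCache {
public:
    // Return the shadow-map piece for this object, re-rendering lazily.
    const DepthTexture& get(int objectId, unsigned objectVersion, unsigned lightVersion) {
        CachedShadowPiece& piece = cache_[objectId];
        if (piece.lastObjectVersion != objectVersion ||
            piece.lastLightVersion  != lightVersion) {
            // renderDepthFromLight(objectId, piece.map);  // hypothetical; only when stale
            piece.lastObjectVersion = objectVersion;
            piece.lastLightVersion  = lightVersion;
        }
        return piece.map;   // receivers project/stitch these pieces at render time
    }
private:
    std::unordered_map<int, CachedShadowPiece> cache_;
};
```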

Google for Tom Forsyth, shadowmaps and Startopia, or search for Yann L's shadowmap posts at gamedev.net for more info.
 
Thanks for your input, assen; I am in the middle of reading the paper now. It seems kind of short. When I first got it I was just trying to grasp the concept of shadow mapping, so I left it in my 'in' pile. But with a quick scan I already noticed that the CPU needs to read back data, so I have my answer as to why nobody is using them. Any takers on the "is it patented" question?


Oh, and thanks for the Google keywords, assen. I'll look into that. :D
 
Well, I really am beginning to think that the only realistic way to render shadow maps for high performance would be to do it in screen space. Unfortunately, this would require the use of programmable z-buffer positions, and will further require the use of some sort of sorting algorithm on the GPU.
 
Chalnoth said:
Well, I really am beginning to think that the only realistic way to render shadow maps for high performance would be to do it in screen space.

Well, I'm going to look like a fool here, but maybe it's just terminology: isn't screen space the same thing as image space? If so, you have in one sentence thoroughly confused me, undoing a good two hours of drawing pictures on paper napkins and relating the matrices to said pictures. :oops:

Chalnoth said:
Unfortunately, this would require the use of programmable z-buffer positions, and will further require the use of some sort of sorting algorithm on the GPU.

I'm not even going to try to fathom that statement. :oops: Well, thank you for making me want to read every shadow mapping and texture projection paper I have all over again... maybe I need some hand puppets to help me. :D

Seriously, I am now confused, so I'm going to brush up on all those papers, because I really thought I had a good grasp of the fundamentals, but now I really doubt that.
 
Look up "irregular z-buffer".

It's been discussed on these forums (and it's where I got the idea).

Anyway, the basic issue is that for the best possible shadow mapping you would really want to do one z-compare for each pixel on-screen. But there's a problem: the z-compare direction for those pixels (toward the light) is not orthogonal to the screen. This essentially means that a ray from the light source to an object will potentially pass through multiple objects. So this forces you to treat the screen as a sort of table of the various places where you want to take z-compares. When those sample positions are looked at from the point of view of the light, they will not lie on a grid the way standard rendering gives you.

So it turns out that instead of using a grid for the z-buffer, you just keep a list of directions from the light source along which you want to do a depth compare. When you render a triangle, you need a reasonable method of finding which of these points, which can be at arbitrary positions, happen to fall within the triangle.

Once you're done, though, this method has the benefit that each value in the shadow buffer corresponds directly to a pixel on-screen, meaning you could get the same accuracy as stencil shadowing. Unfortunately, it seems like you lose the easy soft shadows of normal shadow mapping, but there may be clever ways around this.
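To make that a bit more concrete, here's a very rough CPU-side sketch of what such an irregular z-buffer could look like: the "buffer" is just a list of light-space sample points (one per screen pixel) binned into a coarse grid, so a triangle only has to test the samples near it. Everything below (the names, the grid, the plain linear depth interpolation) is my own illustration, not any particular implementation:

```cpp
#include <vector>

struct Sample {
    float x, y;          // sample position on the light's 2D image plane
    float depth = 1.0f;  // nearest occluder depth so far (start at the far plane)
    int   screenPixel;   // the screen pixel this sample will shadow
};

// Signed edge function for edge p->q versus point (x, y); doubles as an
// unnormalised barycentric weight.
static float edge(const float p[3], const float q[3], float x, float y) {
    return (q[0] - p[0]) * (y - p[1]) - (q[1] - p[1]) * (x - p[0]);
}

struct IrregularZBuffer {
    std::vector<std::vector<int>> cells;   // per-grid-cell indices into 'samples'
    std::vector<Sample> samples;

    // "Rasterize" one occluder triangle already projected into light space
    // (x, y, depth per vertex).  A real version would only visit the cells
    // under the triangle's bounding box; this sketch walks them all.
    void rasterize(const float v0[3], const float v1[3], const float v2[3]) {
        for (const std::vector<int>& cell : cells) {
            for (int i : cell) {
                Sample& s = samples[i];
                float w0 = edge(v1, v2, s.x, s.y);
                float w1 = edge(v2, v0, s.x, s.y);
                float w2 = edge(v0, v1, s.x, s.y);
                bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                              (w0 <= 0 && w1 <= 0 && w2 <= 0);
                float area = w0 + w1 + w2;
                if (!inside || area == 0.0f)
                    continue;                    // sample not covered by this triangle
                float z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area;
                if (z < s.depth)
                    s.depth = z;                 // keep the nearest occluder
            }
        }
    }
};
```

Shadowing a pixel then just means comparing its own depth as seen from the light against the depth stored in its sample: exactly one compare per screen pixel.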
 
For shadow mapping, what you really want to do is project each pixel from screen space back into the shadow map, note the exact sampling position, and then render the shadow-map depths only at exactly those sample points; that way there is no over- or undersampling.
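Concretely, that back-projection is just the usual transform chain run in reverse. A small sketch using GLM, with made-up names for the matrices and function, and assuming GL-style [-1,1] NDC:

```cpp
#include <glm/glm.hpp>

// uv is the pixel position in [0,1]^2 across the screen, depth is the value
// already sitting in the camera's depth buffer for that pixel ([0,1]).
glm::vec2 shadowMapSampleFor(glm::vec2 uv, float depth,
                             const glm::mat4& invCameraViewProj,
                             const glm::mat4& lightViewProj) {
    // Screen position + depth -> camera clip space.
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);

    // Camera clip space -> world space.
    glm::vec4 world = invCameraViewProj * ndc;
    world /= world.w;

    // World space -> light clip space -> shadow-map texture coordinates;
    // this is the one position where that pixel's depth compare needs to
    // happen, so ideally the shadow map is rendered only at such points.
    glm::vec4 lightClip = lightViewProj * world;
    return glm::vec2(lightClip) / lightClip.w * 0.5f + 0.5f;
}
```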

Of course to do that you have to render in a radically different fashion.

ASM does some of this with a feedback loop, but there is no efficient way to use it currently.

Stencil shadows do resolve exactly in screen space, but they have their own set of trade-offs. Most of the interesting work I've seen lately seems to involve using shadow maps for the bulk fast rejection, plus an additional technique where they traditionally fail.
 
Small update:
It may be just as easy to do soft shadows with screen-space shadow mapping. Now that I think about it, using screen-space shadow maps with an irregular z-buffer shouldn't introduce any additional problems, though it may require some additional calculations to get the soft shadows just right.
 