Perspective shadow maps -- the good and the bad?

Reverend

This is related to 3DMark05 and the use (pros/cons) of PSM in a variety of scenarios (strictly indoors, strictly outdoors, a mix) in an actual game (as opposed to a non-retail app). This was touched on in the "legit 3DMark05 thread" here in this same forum, but I thought PSM deserved a dedicated thread.

The only other reason I'm starting this is that I have J.Carmack's thoughts on PSM (which I won't reveal, for now).

I am sure Patric/FM would appreciate thoughts on this. Discuss.
 
Reverend said:
The only other reason I'm starting this is that I have J.Carmack's thoughts on PSM (which I won't reveal, for now).
Is this an update on his statements about PSM from the last QuakeCon?

Here's a pretty good summary of the algorithm, in case anyone wants it.

From my point of view, PSM is too much work to "just do" and see the effects of. I'd have to be sold on the tech before I set out to implement it. And so far, I have not heard any game programmer sing its praises. So as of now, I'm not in favor of PSM.
 
My 2c...
While PSMs solve or improve upon a certain subset of problems associated with using regular shadow maps, they still have tons of issues of their own, some of which are difficult or nigh impossible to solve reliably.
From an implementation standpoint, PSMs have proven to be more of a source of frustration for most people who tried them than something that actually yields good results, which spawned the cute saying "Friends don't let friends implement PSM" ;).
And while variations on the PSM theme, like TSM, improve some issues, none of these linear projections offer a complete solution to the problem.

On current hw, robust solutions all involve multiple buffers per light, additional scene distribution/management etc. That said, I do believe now (I didn't use to) that they offer more flexibility for use in any type of scenario than competing shadow solutions (volumes, to be specific). Also, in more advanced uses, like implementing soft shadows, they seem to be decisively more efficient than volumes could ever be.



Anyway, if you're interested in the topic of shadowmaps, I highly recommend you browse the gdalgorithms archives.
http://sourceforge.net/mailarchive/forum.php?forum=gdalgorithms-list

There have been several good discussions on the topic just recently; I recommend you check "General-purpose shadowbuffer implementation" and "Filtering shadow map edges" in particular (the second thread discusses the advantages of varying shadow solutions in different conditions quite a bit).
 
TSM is patent encumbered BTW.

Some variations on PSM which aren't patent encumbered are LiSPSM and perspective optimal shadow maps.
 
If I understood Carmack's QuakeCon keynote correctly, he is not using perspective shadowmaps, but regular shadowmaps in a cubemap configuration.
 
I suppose it's been said in this thread already... PSMs basically introduce at least as many problems as they solve. So they're not a good unified (oh god, there's that word again) solution.
The cubemap approach basically only has one problem, and that is sampling. If you can use maps of high enough resolution, and/or apply enough filtering, this may not be a problem.

I believe this is pretty much what Carmack's thoughts on shadowmaps were in the QuakeCon keynotes, but it's been a while since I've seen them.
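
To be concrete about the cubemap approach: the map stores, for each direction, the distance from the light to the nearest occluder, and shading just compares against that. A minimal sketch of the compare step, with made-up names (not any particular engine's API):

```cpp
#include <cmath>

// The cube map is assumed to store, per direction, the distance from the
// light to the nearest occluder in that direction.
struct Vec3 { float x, y, z; };

static float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// sampleCube is a stand-in for the actual cube map fetch (direction only,
// the length of the lookup vector doesn't matter).
bool inShadow(const Vec3& worldPos, const Vec3& lightPos,
              float (*sampleCube)(const Vec3& dir), float bias)
{
    const Vec3 toPoint = { worldPos.x - lightPos.x,
                           worldPos.y - lightPos.y,
                           worldPos.z - lightPos.z };
    const float distToPoint     = length(toPoint);
    const float nearestOccluder = sampleCube(toPoint);
    return distToPoint - bias > nearestOccluder;
}
```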
 
He also talks about dual-buffered maps with interpolation between back and front faces and dynamically changing sampling depending on light/occluder/surface position.
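
If I read that right, it's the midpoint idea: render the back faces into one depth buffer, the front faces into another, and test the receiver against a value in between the two instead of a hand-tuned bias. A minimal sketch, with made-up names:

```cpp
// frontZ/backZ are the light-space depths of the nearest front-facing and
// back-facing occluder stored at this shadow map texel.
bool inShadowMidpoint(float receiverZ, float frontZ, float backZ)
{
    // Testing against the halfway point between the two faces is what makes
    // the depth comparison robust without per-scene bias tweaking.
    const float midZ = 0.5f * (frontZ + backZ);
    return receiverZ > midZ;
}
```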

The reason why he's not too keen on PSM is:

John Carmack said:
There’s been some work recently by people exploring perspective shadow mapping where you try and use a perspective warp to get more detail from a given shadow map resolution where you are, and I don’t think it’s going to be a really useable solution for games because there will always be a direction that you can turn into the light where the perspective warping has either very little benefits or even makes it worse where you wind up with more distorted pixel grain issues on there.
 
The problem with perspective shadow maps is that they make the sampling evenly distributed for the ground plane, in screen space. So it's only really suited for shadows of objects on terrains, not indoor scenes with vertical walls. Plain and simple: PSM is suited for terrain, cubemaps are best for indoor. No silver bullet.

Maybe we can warp shadow maps in such a way that big polygons in screen space are always mapped to big polygons in the shadowmap. :rolleyes:
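
To make the warp concrete: PSM just pushes everything (the light included) through the camera's own view-projection and perspective divide first, then builds an ordinary shadow map in that post-perspective space, which is where the even ground-plane distribution comes from. A minimal sketch, with made-up names and a row-vector convention assumed:

```cpp
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Row-vector convention assumed: p' = p * M.
Vec4 mulPoint(const Vec4& p, const Mat4& m)
{
    const float v[4] = { p.x, p.y, p.z, p.w };
    float out[4] = { 0, 0, 0, 0 };
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[col] += v[row] * m.m[row][col];
    return Vec4{ out[0], out[1], out[2], out[3] };
}

Vec4 toPostPerspective(const Vec4& worldPoint, const Mat4& cameraViewProj)
{
    const Vec4 clip = mulPoint(worldPoint, cameraViewProj);
    // The perspective divide is the warp: geometry near the eye occupies a large
    // share of the unit cube, distant geometry a small share, so the shadow map
    // spends more texels near the viewer.
    return Vec4{ clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}
// The light is warped the same way, and a regular shadow map is rendered from
// the warped light over the warped scene, then used with the usual depth compare.
```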
 
Nick said:
The problem with perspective shadow maps is that they make the sampling evenly distributed for the ground plane, in screen space. So it's only really suited for shadows of objects on terrains, not indoor scenes with vertical walls. Plain and simple: PSM is suited for terrain, cubemaps are best for indoor. No silver bullet.

But if you rotate the scene 90 degrees, aren't those walls now floors? :)
 
Yes, the more generic statement would be: perspective shadowmap samples are aligned to be evenly distributed on a single plane (which inspired the 'plane of interest' idea, I suppose).
But that demonstrates a problem: anything that is orthogonal to that plane will receive no samples at all.
 
Heh, well, ideally for an FPS you'd want several PSMs generated for several different planes of interest, and use the one with the highest dot product with the surface normal. But now you have a huge amount of poly pushing to do for something like that :p
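Something like this for picking the map, just to make it concrete (the plane set and all names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the index of the PSM whose plane of interest is most parallel to
// the surface being shaded.
int pickShadowMap(const Vec3& surfaceNormal, const Vec3* planeNormals, int numMaps)
{
    int best = 0;
    float bestAlignment = -1.0f;
    for (int i = 0; i < numMaps; ++i)
    {
        const float a = std::fabs(dot(surfaceNormal, planeNormals[i]));
        if (a > bestAlignment) { bestAlignment = a; best = i; }
    }
    return best; // index into the per-plane shadow maps / shadow matrices
}
```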
 
For the most part PSMs do what you want in outdoor-type environments, but even there there are a number of special cases that you need to catch while rendering, most notably the caster-behind-the-viewer issue, or the worse case of a caster straddling the eye plane.

Using multiple maps for multiple planes of interest is attractive and not that expensive. For the most part the quality issue is only really noticeable close to the camera; if you can resolve it in the first 30 ft, you've pretty much solved it.

I think this is why Carmack is suggesting segmenting the frustum along the view vector and using different resolutions as you move away from the viewer. The problem with this approach is the rate of change of pixels/sample close to the viewer.
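
Roughly like this for the split distances (the log/linear blend is my own assumption, not something from the keynote):

```cpp
#include <cmath>

// Fills splits[0..numSlices] with distances along the view vector; each slab
// [splits[i], splits[i+1]] then gets its own shadow map, with resolution
// dropping as you move away from the viewer.
void computeSplits(float nearZ, float farZ, int numSlices, float blend,
                   float* splits /* array of numSlices + 1 */)
{
    for (int i = 0; i <= numSlices; ++i)
    {
        const float t    = static_cast<float>(i) / numSlices;
        const float logZ = nearZ * std::pow(farZ / nearZ, t); // clusters splits near the viewer
        const float linZ = nearZ + (farZ - nearZ) * t;        // even spacing
        splits[i]        = blend * logZ + (1.0f - blend) * linZ;
    }
}
```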

The fundamental problem is that ideally you need to sample in screen space, not light space, which is basically what stencil volumes give you; the problem is that their fill requirements can become extreme.

As I've said before all current shadow algorithms suck to varying degrees.
 
MfA said:
With programmable tessellation you could in theory implement adaptive shadow mapping.

The only problem I see with that solution is that in the absolute worst case (parallel light and receiver close to parallel with the beams), one texel in the shadow map will cover an almost unbounded number of pixels. I've used shadow maps on relatively small areas with resolutions up to 4Kx4K and this type of aliasing is still an issue.

What you almost want to do is have a mapping that varies based on the screen-space size of the receiving polygon; I'm just not sure I see a way of accomplishing it.
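
A back-of-the-envelope way to see the worst case (my own arithmetic, not from the thread): estimate how many screen pixels one shadow-map texel smears across on the receiver.

```cpp
#include <cmath>

// thetaLight is the angle between the surface normal and the light direction,
// thetaEye the angle between the normal and the view direction; fovY is in radians.
float pixelsPerShadowTexel(float lightFrustumWidth, float shadowMapSize,
                           float viewDistance, float fovY, float screenHeight,
                           float thetaLight, float thetaEye)
{
    const float texelWorld = lightFrustumWidth / shadowMapSize;                          // texel size, world units
    const float pixelWorld = 2.0f * viewDistance * std::tan(0.5f * fovY) / screenHeight; // pixel size at that depth
    // As thetaLight approaches 90 degrees (receiver nearly parallel to the beams),
    // cos(thetaLight) goes to zero and one texel covers an almost unbounded number
    // of pixels, no matter how big the map is.
    return (texelWorld / pixelWorld) * (std::cos(thetaEye) / std::cos(thetaLight));
}
```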
 
ERP said:
What you almost want to do is have a mapping that varies based on the screen-space size of the receiving polygon; I'm just not sure I see a way of accomplishing it.
Well, it's easy, isn't it? You just need pixel pipelines that can optionally output Z values to arbitrary addresses instead of on a regular grid :p:p
 
What I had in mind was basically to use the software raytracing method people have described for pixel shaders, but to use the tessellation unit to determine which tiles of rays/samples to test geometry against ... instead of having a huge number of unnecessary intersection tests.

The downside would be that the CPU would have to build the textures the tessellation unit could use to find the correct tiles of samples for the shadow map, from the viewspace z-buffer, and after the shadowmap was filled the CPU would again have to step in and do a forward mapping of the shadowmap samples to viewspace pixels. Both are just operations on 2D buffers without dealing with geometry at all, but still a pain of course. It would be pixel precise just like shadow volumes though.
 
Fafalada said:
ERP said:
What you almost want to do is have a mapping that varies based on the screen-space size of the receiving polygon; I'm just not sure I see a way of accomplishing it.
Well, it's easy, isn't it? You just need pixel pipelines that can optionally output Z values to arbitrary addresses instead of on a regular grid :p:p

Hum, actually...
What if you render these addresses to a texture in the first pass, and then use vertex shader texturing to calc the proper pixel in the second pass?
Don't feel like giving it a lot of thought right now, but perhaps we already have the tools.
 
Scali said:
Fafalada said:
ERP said:
What you almost want to do is have a mapping that varies based on the screen-space size of the receiving polygon; I'm just not sure I see a way of accomplishing it.
Well, it's easy, isn't it? You just need pixel pipelines that can optionally output Z values to arbitrary addresses instead of on a regular grid :p:p

Hum, actually...
What if you render these addresses to a texture in the first pass, and then use vertex shader texturing to calc the proper pixel in the second pass?
Don't feel like giving it a lot of thought right now, but perhaps we already have the tools.

Yes, I'd thought of that; the problem is that you would still have to render each occluder for each pixel, which is basically ray tracing the shadow. The problem isn't so much the arbitrary writes as it is the arbitrary queries. With non-linear outputs, none of the iterators work.

At the moment my thought is to store the edge equations and coverage of the occluders in the shadow map and then transform them forward into screen space, but again the problem is I don't have the edge equations. ARRGGGHHH!

Unless I store extra data obviously.
 
Actually, thinking about this, it might work pretty well using a two-pass mechanism.

For tristrips I might even be able to do it with no additional data (although not on PC), using the trick of setting the stride to be less than the vertex size.
 