VSM Technical Discussion (was part of R600 thread)

I don't think we're talking about the same "z" here... what I use for point lights as the depth metric is "distance to light". i.e. length(PositionInViewSpace), potentially scaled and biased to fall into [-0.5, 0.5] for fp formats. That has a uniform distribution over the "world" (spatially), unlike Z which has a lot of precision near the light, and very little as you move away from it. The difference between these two metrics in my experience is quite pronounced.
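To make that concrete, something along these lines (just a rough sketch of what I mean; the struct and function names are illustrative, not from any particular engine):
Code:
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance-to-light depth metric for a point light, as stored in the shadow map.
// positionInLightViewSpace = shaded point in the light's view space.
// The scale/bias remaps [0, maxDistance] into [-0.5, 0.5] so an fp render target
// keeps its precision spread evenly over the light's range.
float DistanceDepthMetric(const Vec3& positionInLightViewSpace, float maxDistance)
{
    float dist = std::sqrt(positionInLightViewSpace.x * positionInLightViewSpace.x +
                           positionInLightViewSpace.y * positionInLightViewSpace.y +
                           positionInLightViewSpace.z * positionInLightViewSpace.z);
    // Uniform over the light's range, unlike post-projection Z, which piles up
    // precision near the light and loses it as you move away.
    return dist / maxDistance - 0.5f;
}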
Well, I'd call that "distance", then. :D
I'm talking about storing 1/Zview in the depth buffer as FP32. Unless the FOV is extremely large, Z shouldn't be too far off from distance, and 1/Z has about the same precision distribution.

How is the scale and bias supposed to help?

As a small footnote to this, you need to store 1 - z/w. I've found that the precision is worse than 24-bit fixed point otherwise, which itself is not accurately reversible in practical situations.
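A quick illustration of why the flip matters for fp32 (not from the thread, just the usual float-spacing argument): the gap between adjacent floats near 1.0 is no better than 24-bit fixed point, while near 0.0 the exponent buys you a lot of extra precision, and a standard projection crams most of the scene near z/w = 1.
Code:
#include <cmath>
#include <cstdio>

int main()
{
    float nearOne  = 0.999f;
    float nearZero = 0.001f;
    // Spacing between adjacent representable floats at each value:
    printf("ULP near 1.0: %g\n", std::nextafterf(nearOne, 2.0f) - nearOne);   // ~6e-8
    printf("ULP near 0.0: %g\n", std::nextafterf(nearZero, 1.0f) - nearZero); // ~1.2e-10
    // Storing 1 - z/w moves the far part of the scene down toward 0,
    // where the float spacing is much finer.
    return 0;
}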
Which w and which z is that? ;)
Using 1/Zview (or more accurately: Znear / Zview) is perfectly fine for floating point Z.
 
No I haven't actually - I've played mostly with spot/point lights. Have you tried?
I've only tried with Z24S8 depth buffers (it was not accurate enough); I don't have a DX10 card right now, so I can't use an fp32 Z buffer :)

It seems to me that it may be possible to do such a thing with directional lights (where you can linearly interpolate the distance metric properly), but not with spot lights. Is this your experience as well?
Not from experience, but yes... I'd expect it to work well, though I can't count the times I expected something to work well when in the end it did not ;)
 
Which w and which z is that? ;)

Hush! :)

Using 1/Zview (or more accurately: Znear / Zview) is perfectly fine for floating point Z.

As I said, from experience you need to subtract from one, which counteracts the normal distribution of precision. See:
http://portal.acm.org/citation.cfm?id=311579

Funnily enough, there seems to be a patent on this too.

Edit: Also, I think I misread your post, so don't interpret mine as a direct response about z vs. 1/z in FP, but rather as being about accurately recovering linear z.
 
As I said, from experience you need to subtract from one, which counteracts the normal distribution of precision. See:
http://portal.acm.org/citation.cfm?id=311579
You only need to invert the depth range if you use the typical default projection matrix which looks something like this:
Code:
w  0      0      0         Zclip = Zview * f/(f-n) - nf/(f-n)
0  h      0      0    ->   Wclip = Zview
0  0   f/(f-n)   1         Zscreen = Zclip / Wclip
0  0  -nf/(f-n)  0                 = (f - nf/Zview)/(f-n)
This means that the near and far clipping planes in view space map to 0 and 1, respectively. That's bad for a float depth buffer, so you can modify the matrix or invert the depth range.
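As a rough C++ sketch of the two Z-row variants (illustrative only; row-vector convention as in the matrix above, with "reversed" being the depth-range-inverted version that maps near to 1 and far to 0):
Code:
struct Mat4 { float m[4][4]; };

Mat4 MakeProjection(float w, float h, float n, float f, bool reversedZ)
{
    float a = reversedZ ? n / (n - f) : f / (f - n);            // Z scale
    float b = reversedZ ? -n * f / (n - f) : -n * f / (f - n);  // Z bias
    Mat4 p = {};
    p.m[0][0] = w;
    p.m[1][1] = h;
    p.m[2][2] = a;  p.m[2][3] = 1.0f;  // Wclip = Zview
    p.m[3][2] = b;
    return p;
    // Standard:  Zscreen = (f - n*f/Zview) / (f - n)  -> near = 0, far = 1
    // Reversed:  Zscreen = (n - n*f/Zview) / (n - f)  -> near = 1, far = 0
}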

If you have a float depth buffer and don't care about the far clip plane you can use this instead:
Code:
w  0  0  0         Zclip = n
0  h  0  0    ->   Wclip = Zview
0  0  0  1         Zscreen = Zclip / Wclip
0  0  n  0                 = n / Zview
This way 1 in screen space maps to the near plane in view space while 0 is infinitely far away.
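In the same sketch form (again just illustrative, same layout as above): Zclip is the constant n, Wclip is Zview, so Zscreen = n / Zview, i.e. 1 at the near plane and 0 at infinity.
Code:
struct Mat4 { float m[4][4]; };  // same layout as the sketch above

Mat4 MakeInfiniteProjection(float w, float h, float n)
{
    Mat4 p = {};
    p.m[0][0] = w;
    p.m[1][1] = h;
    p.m[2][3] = 1.0f;   // Wclip = Zview
    p.m[3][2] = n;      // Zclip = n
    return p;
}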
 
This means that the near and far clipping planes in view space map to 0 and 1, respectively. That's bad for a float depth buffer, so you can modify the matrix or invert the depth range.

Yes, that was all I was getting at.

If you have a float depth buffer and don't care about the far clip plane you can use this instead:
Code:
w  0  0  0         Zclip = n
0  h  0  0    ->   Wclip = Zview
0  0  0  1         Zscreen = Zclip / Wclip
0  0  n  0                 = n / Zview
This way 1 in screen space maps to the near plane in view space while 0 is infinitely far away.

Good point.
 
This means that the near and far clipping planes in view space map to 0 and 1, respectively. That's bad for a float depth buffer, so you can modify the matrix or invert the depth range.
Ah yes I've seen such a technique. However I haven't played with floating point depth buffers yet... maybe they will work just fine (all of the previous info that I posted applies to typical fixed-point depth buffers of course). That said, are we sure that they get the same double-speed rendering, etc. benefits as fixed-point buffers?

Even if they do, I still maintain that typical shadow maps are too inflexible for modern shadowing implementations, and most of the alternatives require storing more data.
 
As I said, from experience you need to subtract from one, which counteracts the normal distribution of precision. See:
http://portal.acm.org/citation.cfm?id=311579
I haven't looked through this in detail, but all generations of PowerVR (e.g. including Dreamcast) used (or had an option to use) a depth value that was 1/w. This is the ideal mapping for 'sensible scenes' for a floating-point depth value, which the DC also used.
Funnily enough, there seems to be a patent on this too.
What date? It's unlikely to be valid given the date of that paper and that there would therefore be prior art.
 