Newbie Linear depth buffer question

pinkfish

Newcomer
I am playing around with shadow maps using XNA and C#.
I wrote a shader to debug the depth map, which works OK.


http://img88.imageshack.us/my.php?image=depthmapdebugia8.png


Now I wanted to try implementing a Linear Z depth buffer.

I based my implementation on the linear Z article: http://www.mvps.org/directx/articles/linear_z/linearz.htm

but I feel there is something wrong with the way it is done.

The problem shows in this screenshot:


http://img88.imageshack.us/my.php?image=depthmapdebugproblemlinnk9.png


The 'small' objects look OK, but the grid, which is a large cube, is wrong. If I take the camera behind the grid, it also looks OK...

I looked at the calculations in detail, and I think it cannot be done the way the article suggests.

Normally, using a LHS projection matrix, and projecting a point x,y,z,1
we get a point x',y',z',w'

where:
z' = zf * (z - zn) / (zf-zn)
w' = z

if we multiply this by w' as the article suggests we get
z' = (zf * (z-zn) * z) / (zf - zn)
(ignoring for now that the projection matrix m33 and m34 are divided by zf)

Now the problem is that I think this cannot be interpolated across the triangle, because it is not linear in z (it contains z^2). I think this is the reason the 'big' object looks wrong; the smaller objects have the same error, but it is not noticeable (I suppose).
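Spelling out that multiplication with the terms above (nothing new, just the same formulas expanded):

$$ z'_{out} = z' \cdot w' = \frac{z_f (z - z_n)}{z_f - z_n} \cdot z = \frac{z_f}{z_f - z_n} z^2 - \frac{z_f z_n}{z_f - z_n} z $$

At the vertices the later divide by w' = z gives back the intended linear depth, but in between the rasterizer only has linearly interpolated per-vertex values to work with, which is what makes me think the error grows with the size of the triangle.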

I tried an alternative method: output a float depth from the vertex shader
with the value depth = z' / zf = (z - zn) / (zf - zn)

and then, in the pixel shader, output the resulting interpolated depth using shader model 3, which worked correctly:


http://img88.imageshack.us/my.php?image=depthmapdebuglinearzxp2.png
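Roughly what I did, as a sketch (the parameter and technique names are made up for illustration, and it assumes the left-handed setup from the derivation above, i.e. positive view-space z):

```hlsl
// A minimal sketch, not my actual effect file; names are placeholders.
float4x4 WorldViewProjection;
float4x4 WorldView;
float NearClip;   // zn
float FarClip;    // zf

struct VSOutput
{
    float4 Position : POSITION;
    float  Depth    : TEXCOORD0;   // linear depth in [0,1], gets perspective-correct interpolation
};

VSOutput DepthVS(float4 position : POSITION)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);

    // View-space z remapped to [0,1] between the clip planes: (z - zn) / (zf - zn)
    float viewZ = mul(position, WorldView).z;
    output.Depth = (viewZ - NearClip) / (FarClip - NearClip);
    return output;
}

// Needs ps_3_0; writing oDepth is exactly what the MSDN note below warns about.
float4 DepthPS(float depth : TEXCOORD0, out float oDepth : DEPTH) : COLOR0
{
    oDepth = depth;
    return float4(depth, depth, depth, 1.0f);   // also visualize the depth for debugging
}

technique LinearDepth
{
    pass P0
    {
        VertexShader = compile vs_3_0 DepthVS();
        PixelShader  = compile ps_3_0 DepthPS();
    }
}
```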


but this has some performance disadvantages, according to MSDN:
http://msdn.microsoft.com/en-us/library/bb172915(VS.85).aspx
"It is important to be aware that writing to oDepth causes the loss of any hardware-specific depth buffer optimization algorithms (i.e. hierarchical Z) which accelerate depth test performance."

* am I right about the problem with the article (I feel not)? or am I doing something wrong?
* is it normal to change the depth in the pixel shader? is it a bad thing?
 
* am I right about the problem with the article (I feel not)? or am I doing something wrong?

Your analysis is correct. That article is a bunch of crap that has fooled a lot of people, including myself. It just plainly doesn't work.

* is it normal to change the depth in the pixel shader? is it a bad thing?

Writing depth has its uses, but is generally considered to be a bad thing. It has awful performance implications, and anything rendered with depth output won't be antialiased with multisampling because you can only output one depth value which replaces the interpolated depth for all samples.

Are you looking for linearity, or mainly for a better distribution of depth values for better precision? If the latter, I'd recommend you use a float depth buffer and reverse the depth range so that the far plane is at 0.0 and the near plane is at 1.0. The easiest way to achieve that is to simply swap the near and far parameters when you compute the projection matrix. While this technique doesn't give you linear depth, the two non-linearities of z and of the floating-point representation cancel each other out, and you get precision comparable to linear depth.
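For reference (using the same left-handed projection terms as earlier in the thread), swapping zn and zf in the z terms of the matrix gives, after the divide by w:

$$ z_{ndc} = \frac{z_n (z_f - z)}{z (z_f - z_n)} $$

which is 1.0 at the near plane and 0.0 at the far plane. You also need to flip the depth compare to GREATEREQUAL and clear the depth buffer to 0.0 instead of 1.0. Most of the scene then ends up on small depth values, which is where a floating-point format has the most precision, so the two curves roughly cancel.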
 
Thanks a lot for the reply!

The intent behind linear depth was just to have a shot at making it 'work', so I can use it in later stages when needed.

Thanks for the precision tips, they will be useful for me later on.

But to get back to the article: is my analysis correct about the reason for its failure? (I don't have enough self-confidence to be sure about my analysis.)
 
But to get back to the article: is my analysis correct about the reason for its failure? (I don't have enough self-confidence to be sure about my analysis.)

The perspective divide in the vertex shader does not work properly, as the perspective-correct interpolation of the precalculated Z/W does not equal the Z/W division done for each pixel. This works fine for small, far-away objects, as the error does not become that large. The same problem was visible in the older fixed-function projected texture mapping of some DX5-era hardware. At least the Matrox chips calculated the perspective divide at vertex precision, making projected texture sampling look pretty bad near the camera (on large polygons).
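To put the same thing as a formula: interpolating an already-divided value is not the same as dividing the interpolated values. Along a single edge with parameter t,

$$ \mathrm{lerp}\!\left(\frac{z_0}{w_0}, \frac{z_1}{w_1}, t\right) \neq \frac{\mathrm{lerp}(z_0, z_1, t)}{\mathrm{lerp}(w_0, w_1, t)} $$

unless w0 = w1, and the mismatch grows the more w (view-space depth) varies across the polygon, which is exactly the large, close-to-camera case.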
 
Thank you for explaining it, and sorry for not letting go yet.

the perspective correct interpolation of the precalculated Z/W does not equal the Z/W division done for each pixel

Isn't the reason for this the fact that z/w is a function of z^2, as shown in my derivation earlier, and therefore cannot be interpolated linearly (irrespective of perspective correctness)?



or is there another reason?
 
Yeah, the author should have realized that it didn't make a lot of sense purely for the reason that if it actually worked, why would anyone have used a non-linear Z-buffer in the first place? If a simple projection matrix tweak could have produced a W-buffer, I'm pretty sure no one would have started using Z-buffers ;)
 