I am playing around with shadow maps in XNA and C#.
I wrote a shader to debug the depth map, which works OK:
http://img88.imageshack.us/my.php?image=depthmapdebugia8.png
Now I wanted to try implementing a Linear Z depth buffer.
I based my implementation on the linear-z article at:
http://www.mvps.org/directx/articles/linear_z/linearz.htm
but I feel there is something wrong with its approach.
The problem shows in this screenshot:
http://img88.imageshack.us/my.php?image=depthmapdebugproblemlinnk9.png
The 'small' objects look OK, but the grid, which is a large cube, is wrong. Oddly, if I move the camera behind the grid, it also looks OK...
I looked at the calculations in detail, and I think it cannot be done the way the article suggests.
Normally, projecting a point (x, y, z, 1) with a left-handed projection matrix gives a point (x', y', z', w') where:
z' = zf * (z - zn) / (zf-zn)
w' = z
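For reference, these two formulas come from the standard left-handed perspective matrix (row-vector convention, as documented for D3DXMatrixPerspectiveFovLH):

    | xScale    0             0             0 |
    |   0     yScale          0             0 |
    |   0       0       zf / (zf - zn)      1 |
    |   0       0    -zn * zf / (zf - zn)   0 |

so z' = z * zf / (zf - zn) - zn * zf / (zf - zn) = zf * (z - zn) / (zf - zn), and w' = z.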
If we multiply this by w', as the article suggests, we get:
z' = (zf * (z-zn) * z) / (zf - zn)
(ignoring for now that the article also divides the projection matrix entries m33 and m43 by zf, which removes that zf factor)
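If I read the article correctly, the net effect of its changes is equivalent to this vertex-shader snippet (a sketch of mine with placeholder names like WorldViewProj and FarPlane, not the article's actual code):

    float4 pos = mul(position, WorldViewProj); // standard LH projection: z' = zf*(z-zn)/(zf-zn), w' = z
    pos.z *= pos.w / FarPlane;                 // z' becomes z * (z - zn) / (zf - zn)
    output.Position = pos;                     // after the divide by w' = z, depth = (z - zn) / (zf - zn)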
Now, the problem as I see it: this value cannot be interpolated linearly across the triangle, because it is not linear in z; it contains a z^2 term. I think this is why the 'big' object looks wrong, while the smaller objects are wrong too but the error is too small to notice (I suppose).
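To make this concrete with some made-up numbers: take zn = 1, zf = 101, and a triangle edge spanning view-space z = 1 to z = 101, so the linear depths at the two vertices are 0 and 1. The rasterizer interpolates depth linearly in screen space, so at the screen-space midpoint of that edge it produces 0.5. But the surface point that actually projects to that screen midpoint satisfies 1/z = (1/1 + 1/101) / 2, i.e. z ≈ 1.98, whose correct linear depth is (1.98 - 1) / 100 ≈ 0.01. The vertex values themselves are exact, which would explain why small triangles look fine while the big grid does not.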
I tried an alternative method: outputting a float depth from the vertex shader
with the value depth = z' / zf = (z - zn) / (zf - zn),
and then in the pixel shader writing out the interpolated depth (using shader model 3), which worked correctly:
http://img88.imageshack.us/my.php?image=depthmapdebuglinearzxp2.png
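In case it helps, this is roughly what that version looks like (a simplified sketch, the names are placeholders, not my exact shader):

    float4x4 WorldViewProj;
    float4x4 WorldView;
    float NearPlane;
    float FarPlane;

    struct VSOut
    {
        float4 Position : POSITION;
        float  Depth    : TEXCOORD0;  // interpolated with perspective correction
    };

    VSOut VS(float4 position : POSITION)
    {
        VSOut output;
        output.Position = mul(position, WorldViewProj);
        float viewZ = mul(position, WorldView).z;                    // view-space z
        output.Depth = (viewZ - NearPlane) / (FarPlane - NearPlane); // (z - zn) / (zf - zn)
        return output;
    }

    // Shader model 3: the pixel shader overrides the hardware depth value.
    float4 PS(VSOut input, out float depth : DEPTH) : COLOR
    {
        depth = input.Depth;
        return float4(input.Depth, input.Depth, input.Depth, 1);
    }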
but according to MSDN this has a performance disadvantage:
http://msdn.microsoft.com/en-us/library/bb172915(VS.85).aspx
"It is important to be aware that writing to oDepth causes the loss of any hardware-specific depth buffer optimization algorithms (i.e. hierarchical Z) which accelerate depth test performance."
* Am I right about the problem with the article (I suspect not), or am I doing something wrong?
* Is it normal to modify the depth in the pixel shader? Is it a bad thing?