24-bit Z-buffer vs 32-bit Z-buffer

MrB

Newcomer
With the recent Derek Smart thread about the loss of the 32-bit Z-buffer in the 9xxx line of ATI cards, I became interested in what exactly that means in terms of negative effects in games and apps.

Could anyone refer me to an article on Z-buffers and what is gained with a larger one?

Thank you!
 
More bits = more precision = less quirkiness in games which use truly expansive fields of view.
 
I think it's worth mentioning that there is currently NO (gaming) card out there that supports a 32-bit Z-buffer, so "loss" isn't really the right word. I'd rather say that ATI's reluctance to include it in their next-generation card is what Mr Smart is ranting about.
 
BRiT said:
More bits = more precision = less quirkiness in games which use truly expansive fields of view.

field of view doesn't get involved here, it's almost exclusively about the near clipping distance; you could have as large a FOV as you like with any clipping distance*.

apparently derek's problem originates from the fact that in his game universe the player can look from the POV of very small and very large objects (relative to each other, that is) alike, e.g. from the cockpit of a battlecruiser just as well as from the visor of a helmet. problem is, in the case of looking from the helmet's visor, visible objects may come much closer to your POV than in the battlecruiser's cockpit case (again, relative to each other). now, this really matters if you want to preserve the same universe resolution as seen from the marine's and from the battlecruiser's viewpoints alike, i.e. you indeed have to move the near clipping distance closer to the POV for the marine case.

of course, precision of the z-buffer is something people have been tackling since the very invention of the beast. the final 'efficiency' of the z-buffer (i.e. how well it serves its occlusion purpose) is a function of just two arguments: the raw precision of the buffer (i.e. it having n bits per entry), and the near and far clipping distances of the (perspective**) projection transform of your pipeline (the near clipping distance being by far the more important of the two). usually when neither of those factors can be effectively controlled (i.e. when you can neither set a higher z-buffer bitness nor push the near clipping distance further ahead) some tricks come into play - most commonly said tricks come down to some depth-partitioning scheme which divides the scene into depth 'layers' and renders those back-to-front, clearing the z-buffer between layers so that nearer layers correctly overdraw farther ones. this provides each of those layers with the full 'precision' of the z-buffer. of course, such a trick costs performance, as it can be thought of as imposing extra 'passes' to render the scene. (see the sketch below.)
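a minimal sketch of such a depth-partitioning scheme (fixed-function OpenGL assumed; renderDepthPartitioned, drawScene and the parameter names are made up for illustration, and a real renderer would also cull each layer's geometry to its slab):

#include <GL/gl.h>
#include <math.h>

typedef void (*DrawSceneFn)(void); // hypothetical: draws the whole scene

// render the scene in depth layers, back-to-front, clearing the
// z-buffer between layers so each layer gets its full precision
void renderDepthPartitioned(DrawSceneFn drawScene,
                            double nearest, double farthest, int numLayers,
                            double tanHalfFovX, double tanHalfFovY)
{
    // split [nearest, farthest] geometrically, so every layer has
    // the same far/near ratio (hence the same depth precision)
    double ratio = pow(farthest / nearest, 1.0 / numLayers);

    for (int i = numLayers - 1; i >= 0; --i) { // back-to-front
        double n = nearest * pow(ratio, (double)i);
        double f = n * ratio;

        // same FOV for every layer: frustum extents scale with n
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-tanHalfFovX * n, tanHalfFovX * n,
                  -tanHalfFovY * n, tanHalfFovY * n, n, f);

        glClear(GL_DEPTH_BUFFER_BIT); // full z range for this layer
        drawScene();                  // nearer layers overdraw farther ones
    }
}

the geometric split is just one choice among several, but it's a natural one here since it gives every layer the same far/near ratio and thus the same effective z precision.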

the bottom line of it all being: striving for higher bitness of the z-buffer is good, but no matter how many z-bits the latest-'n'-greatest hardware has, there will always be large-enough universes populated with small and large 'characters' of sufficient 'disproportion' to make that z-buffer bitness look bad.


* FOV - in 2d (for clarity), the angle subtended by a certain view-span at a certain view-distance.

** of course, this only matters if the projection transform is perspective; with a parallel projection the z-buffer would be completely equi-precise across its whole range.
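to illustrate that footnote, a quick check (d3d-style [0,1] depth mapping assumed, example numbers made up) of how eye-space distance maps to depth-buffer values under the two kinds of projection:

#include <stdio.h>

int main()
{
    double n = 1.0, f = 1000.0; // example clipping distances
    double zs[] = { 1.0, 2.0, 10.0, 100.0, 500.0, 1000.0 };

    for (int i = 0; i < 6; ++i) {
        double z = zs[i];
        double persp = (f / (f - n)) * (1.0 - n / z); // non-linear
        double ortho = (z - n) / (f - n);             // linear
        printf("z = %6.1f   persp = %.6f   ortho = %.6f\n", z, persp, ortho);
    }
    return 0;
}

with n = 1 and f = 1000 the perspective mapping spends roughly half of the [0,1] depth range on distances between 1 and 2, while the parallel mapping spreads it evenly - which is exactly why pushing the near plane out buys so much precision.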
 
PeterT said:
I think it's worth mentioning that there is currently NO (gaming) card out there that supports a 32-bit Z-buffer, so "loss" isn't really the right word. I'd rather say that ATI's reluctance to include it in their next-generation card is what Mr Smart is ranting about.

actually there are gaming cards which support a 32-bit z-buffer, and they are not even that few. of course, all of them share the z-buffer with the stencil buffer, so using the z-buffer and stencil buffer simultaneously would automatically deprive you of the full 'bitness' of the z-buffer, but nevertheless.
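a small sketch (Direct3D 8 assumed, as was current for those cards) of how an app can ask whether the adapter exposes a pure 32-bit depth format versus the shared 24+8 depth/stencil one; the desktop format choice is an assumption:

#include <d3d8.h>
#include <stdio.h>

int main()
{
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);
    if (!d3d) return 1;

    const D3DFORMAT adapterFmt = D3DFMT_X8R8G8B8; // assumed desktop format

    struct { D3DFORMAT fmt; const char* name; } fmts[] = {
        { D3DFMT_D32,   "D32 (pure 32-bit z)" },
        { D3DFMT_D24S8, "D24S8 (24-bit z + 8-bit stencil)" },
    };

    for (int i = 0; i < 2; ++i) {
        // asks the driver whether this depth format is usable with
        // a HAL device on the given adapter/display format
        HRESULT hr = d3d->CheckDeviceFormat(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFmt,
            D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, fmts[i].fmt);
        printf("%s: %s\n", fmts[i].name,
               SUCCEEDED(hr) ? "supported" : "not supported");
    }

    d3d->Release();
    return 0;
}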
 
Check this out

http://www.beyond3d.com/forum/viewtopic.php?p=41326#41326

And the Radeon 7x00 and 8x00 (for those who won't bother checking my link) do support a "real" 32-bit Z-buffer (i.e. not just 24+8).

You can use it in 3DMark 2001, which will only recognize 24+8 as 24. The same thing goes for Project Eden, where you can choose between a "24+8 bit Z-Buffer (Recommended)" and a "32 bit Z-buffer" on the Radeon cards. (You have to enable support in the driver though, as it's not enabled by default.)
 
From MSDN documentation:
Depth-buffer precision is affected by the values specified for znear and zfar. The greater the ratio of zfar to znear is, the less effective the depth buffer will be at distinguishing between surfaces that are near each other. If

r = zfar / znear

roughly log2 r bits of depth buffer precision are lost. Because r approaches infinity as znear approaches zero, you should never set znear to zero.

As Derek uses znear = 0.001f and zfar = 20000, I can certainly see why the problem occurs.

log2(20000 / 0.001) ≈ 24.3
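A quick sanity check of that rule of thumb (just the arithmetic above, in code; the 24 and 32 are the buffer bit depths being discussed):

#include <math.h>
#include <stdio.h>

int main()
{
    double znear = 0.001, zfar = 20000.0;
    double lost = log(zfar / znear) / log(2.0); // log2(r)

    printf("bits lost:        %.1f\n", lost);        // ~24.3
    printf("left in 24-bit z: %.1f\n", 24.0 - lost); // below zero: nothing left
    printf("left in 32-bit z: %.1f\n", 32.0 - lost); // ~7.7 usable bits
    return 0;
}

So by that rule of thumb, with those clipping distances a 24-bit buffer has no effective precision left at all, and even a full 32-bit one keeps only about 7.7 bits.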
 