Colour precision has increased, but why not depth precision?

K.I.L.E.R

Wouldn't higher depth precision help get rid of aliasing over distances and texture shimmering?

I would also like to know why the stencil buffer hasn't got increased precision?
 
K.I.L.E.R said:
Wouldn't higher depth precision help get rid of aliasing over distances and texture shimmering?

I would also like to know why the stencil buffer hasn't got increased precision?

Offhand I don't know if it would fix texture shimmering, since I thought the texture co-ordinates were calculated on the fly rather than from the depth value.
 
24 bits of depth buffer precision are roughly enough IMO. There's no real correlation between scene complexity and required depth precision. E.g. you can render a 2M+ triangle Stanford Bunny with 16 bits of depth buffer precision (many have done so) and you won't suffer any artifacts. That's because the bunny is a closed mesh.

It's more a question of enforcing certain rules on your data once you start throwing together many meshes to form scenes. If you render closed meshes without intersections, most of the possible issues with low depth precision simply don't apply.

More bits in the depth buffer are really only marginally useful. You'd "benefit" the most with intersecting geometry and layered decals. The former is best avoided, and very few things actually rely on it -- it is more of an artifact in itself as it happens mostly due to imperfect collision detection/response, such as arms sticking inside walls -- and the latter, while being quite an issue with 16 bits of depth, is pretty much a solved problem now AFAICS.

There's another use case for higher depth precision, which comes up frequently for authors of astronomical-scale renderers, where there are vast ranges in depth between objects. What's really biting these people is not depth buffer precision in itself, but rather the non-linear distribution that results from the perspective divide. There are very effective techniques for avoiding any issues there once you wrap your head around it.
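To put some rough numbers on that non-linearity, here's a quick C sketch (the near/far planes are just example values I picked) that prints how much eye-space distance a single depth buffer step covers at various distances, for 16 and 24 bits:

```c
#include <stdio.h>

/* Post-projective depth in [0,1] for eye-space distance z,
   D3D-style convention: d(z) = f/(f-n) * (1 - n/z).        */
static double projected_depth(double z, double n, double f)
{
    return (f / (f - n)) * (1.0 - n / z);
}

int main(void)
{
    const double n = 1.0, f = 10000.0;  /* example near/far planes */

    for (double z = 10.0; z <= f; z *= 10.0) {
        /* dd/dz is the depth resolution remaining at distance z;
           dividing one quantization step by it gives the span of
           eye space that maps to a single depth buffer value.     */
        double dd_dz = (f / (f - n)) * (n / (z * z));
        printf("z=%8.0f  d=%.8f  16-bit step=%12.4f  24-bit step=%12.6f\n",
               z, projected_depth(z, n, f),
               (1.0 / 65535.0) / dd_dz, (1.0 / 16777215.0) / dd_dz);
    }
    return 0;
}
```

With n=1 and f=10000, a single 16-bit step near the far plane spans well over a thousand eye-space units while near the viewer it's a fraction of a unit -- that's the perspective divide at work, regardless of how many bits you store.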

Depth buffer precision has no influence whatsoever on texture quality. Texture shimmering occurs because graphics vendors ignore age-old recognized good practices to maximize performance. I.e. they fuck you on purpose, not because it would be hard to do better.
 
K.I.L.E.R said:
I would also like to know why the stencil buffer hasn't got increased precision?
The stencil buffer holds integer values, and it's usually used either for masking or for counting. Since you rarely have hundreds of overlapping surfaces, a range of 0-255 is plenty, and 8 bits per sample allow for an efficient memory organization.
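To make the counting case concrete, here's a rough OpenGL-style C sketch of the classic z-fail stencil shadow volume counting passes (the function name and state assumptions are mine; assumes GL 1.4's wrapping stencil ops and an 8-bit stencil buffer):

```c
#include <GL/gl.h>

/* Two-pass stencil counting (z-fail shadow volumes). Assumes
   the scene's depth buffer has already been laid down.        */
void count_shadow_volume_overlaps(void (*draw_volumes)(void))
{
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);                 /* don't touch depth   */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    /* back faces: increment where the depth test fails */
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    draw_volumes();

    /* front faces: decrement where the depth test fails */
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    draw_volumes();

    /* The stencil now holds the per-pixel overlap count; pixels
       with a non-zero count are in shadow. Even deep volumes
       rarely get anywhere near the 0-255 range.                 */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}
```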
 
geo said:
Trying to summon Derek Smart here, are we? ;)
Haha, nice one, I didn't even think about it before you mentioned it ;)
Something that always kind of confused me is that, back in the GFFX days, a joint NVIDIA/ATI optimization presentation implied that on NVIDIA hardware it was more important to use a 16-bit Z buffer rather than a 24-bit one. I wonder if that has improved since then, I'd say most likely so... And I'd assume that to be a compression algorithm limitation. heh.

Uttar
 
geo said:
Trying to summon Derek Smart here, are we? ;)

I'm surprised he hasn't poked his head in to share his thoughts on this yet. I'm sure BattleCruiser could make use of expanded depth precision...
 
rwolf said:
Can you enlighten those of us who are slow on the uptake?

Oh, try searching for posts by member "Derek Smart [3000AD]" and "w buffer". Much fun and adrenaline had by all, roughly following R300 release. :LOL: Whatever else Derek is, he always provides much entertainment. :p
 
Oh god, I just went back and looked at that thread. Even more fun than I recalled. :p

http://www.beyond3d.com/forum/showthread.php?t=1706

Didn't it used to be the case a few gens ago that you could even tell the driver in the control panel settings to do 16-bit Z instead of 24? Or was it 24 instead of 32? Something like that, and it did make a difference in performance --I recall some semi-scandal/accusation type things about it. Like maybe one of the IHV's was substituting lower Z depth in a benchmark or something?

Ah yes, good times. :LOL:
 
geo said:
Oh god, I just went back and looked at that thread. Even more fun than I recalled. :p

http://www.beyond3d.com/forum/showthread.php?t=1706

Didn't it used to be the case a few gens ago that you could even tell the driver in the control panel settings to do 16-bit Z instead of 24? Or was it 24 instead of 32? Something like that, and it did make a difference in performance --I recall some semi-scandal/accusation type things about it. Like maybe one of the IHV's was substituting lower Z depth in a benchmark or something?

Ah yes, good times. :LOL:

Pre-pixel-shader nVidia cards needed the Z-buffer depth to match the color depth (16-bit with 16-bit, 32-bit with 32-bit), although the driver might accept other combinations and adjust the Z-buffer depth behind your back.

Starting with the GeForce 3, the depths no longer need to match, but you get higher performance when they do. Of course, since the GF3 no longer took a large hit from 32-bit rendering the way previous cards did, there was no point in using 16-bit Z anymore.
(Although DST still had the matching-depth limitation.)
 
One common issue with 16-bit depth in games is decals. Since decals are extra geometry placed on top of other surfaces, with low depth precision the hardware can have a hard time telling whether the decal is above or below the surface, and the decals will tend to flicker.

For some games, even 24-bit depth isn't enough for the decals. Take UT, for instance. If you play on CTF-Face, you'll notice that the decals near the opponent's base flicker even with 24-bit depth.
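For what it's worth, the usual workaround is to bias the decal's depth rather than throw more depth bits at the problem. A minimal OpenGL-style C sketch (the offset values are just typical starting points, not gospel):

```c
#include <GL/gl.h>

/* Draw a decal on top of already-rendered geometry without
   z-fighting by nudging its depth values toward the viewer.  */
void draw_decal(void (*draw_decal_geometry)(void))
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -2.0f);  /* slope-scaled + constant bias */
    glDepthMask(GL_FALSE);          /* decal shouldn't write depth  */

    draw_decal_geometry();

    glDepthMask(GL_TRUE);
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```

With the default GL_LESS depth test, the negative offset pulls the decal's fragments slightly closer than the coplanar surface beneath, so they win the depth test regardless of buffer precision.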
 
geo said:
Oh, try searching for posts by member "Derek Smart [3000AD]" and "w buffer". Much fun and adrenaline had by all, roughly following R300 release. :LOL: Whatever else Derek is, he always provides much entertainment. :p
That was pretty good entertainment. Thanks :)
If the posts weren't so old, I'd feel inclined to reply there that not even Voodoo cards (which are the reason why w buffering existed in Direct3D in the first place) support w buffering ... only something that's called w buffering but really isn't ... a per-vertex preprocessing trick but with plain old linear integer interpolation afterwards. And now I've done it here.
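For the curious, here's roughly what I understand that per-vertex trick to look like, as a C sketch (the function and names are mine, so take it as an illustration, not chip documentation):

```c
/* "Fake" w buffering as a per-vertex preprocessing step.
   Normally clip.z/clip.w gives a hyperbolically distributed
   depth. Overwriting clip.z with a linear function of eye-space
   depth times clip.w makes the post-divide value linear in w at
   the vertices -- but the rasterizer still interpolates plain z
   linearly in screen space, so interior pixels only approximate
   a true per-pixel w value.                                     */
typedef struct { float x, y, z, w; } vec4;

void fake_w_buffer(vec4 *clip, float eye_z, float znear, float zfar)
{
    /* linear [0,1] depth at this vertex (eye_z is the positive
       distance along the view axis; znear/zfar span the range)  */
    float linear_depth = (eye_z - znear) / (zfar - znear);

    /* after the perspective divide, depth == linear_depth
       exactly at the vertex, approximately in between           */
    clip->z = linear_depth * clip->w;
}
```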
 
The Baron said:
Well, that decides it. I'm getting a PhD in CS for the sole reason that it's a prerequisite for being the biggest forum troll ever.

Hmm. You've got the basic skillz, me lad. Just crank the ego by an order of magnitude (well, okay, maybe two orders of magnitude --I don't know that current tech has actually been able to accurately measure Derek's) and you could be a contenda! After all, every college-age person should have a dream, no? :LOL:
 