Z-compression articles?

Nick

Veteran
Hi all,

I'm looking for articles, papers, tutorials or other online detailed information about z-compression theory and implementation.

Recently I got interested (again) in hidden surface removal and occlusion culling. So I'd like to understand more about how it's implemented by modern software (not necessarily for real-time rendering) and hardware. I found lots of information about hierarchical z-buffers and many variants of occlusion maps, but not about z-compression and/or its interaction with existing techniques. Knowing the details could help a lot to efficiently render complex scenes.

So if you know any resources or would like to share some knowledge, please inform me. Other ideas about improving z-buffer performance are welcome as well. Thanks!

Nick
 
DDPCM (differential differential pulse code modulation) is the general method used in hardware, I believe...
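For what it's worth, here's a minimal 1-D sketch of the idea (the function names are my own, and the hardware variant is 2-D and per-tile): for planar data the second-order differences collapse to zeros, which is exactly why double differencing suits z-buffers full of flat triangles.

```python
# Minimal 1-D sketch of DDPCM (differencing applied twice). Hardware uses a
# 2-D per-tile variant; this just shows why it suits planar data: a linear
# z ramp (a flat triangle) leaves nothing but zero residuals to encode.

def ddpcm_encode(z):
    d1 = [z[0]] + [z[i] - z[i - 1] for i in range(1, len(z))]       # DPCM pass
    return d1[:2] + [d1[i] - d1[i - 1] for i in range(2, len(d1))]  # second pass

def ddpcm_decode(d2):
    # Undo the two differencing passes in reverse order.
    d1 = list(d2[:2])
    for i in range(2, len(d2)):
        d1.append(d2[i] + d1[i - 1])
    z = [d1[0]]
    for i in range(1, len(d1)):
        z.append(z[-1] + d1[i])
    return z

row = [1000 + 7 * i for i in range(8)]    # linear ramp: z along a flat surface
residuals = ddpcm_encode(row)             # only the first two entries are non-zero
```

The residuals for a flat surface are tiny, so a simple entropy coder behind this gets very high ratios on typical depth data.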
 
I dunno, it can't be that hard. I mean, triangles are flat. So break up your dataset into tiles (you still want a location-based structure). Within each tile, have two types of objects: pixels, and planes. Each pixel would just store depth (in the case that compression adds data). Each plane would store depth and the slope in two directions. There would be a coverage map of the tile which contains pointers to the relevant data for each pixel.
 
krychek said:
Real time rendering (2nd ed) has some info about z-compression.
Ah, I got the first edition. :? Is it explained in detail or just some general information?

I've seen those ATI slides before, just totally forgot about them! DDPCM and the 'entropy encoder' are rather vague though. I know DPCM (for voice compression) but couldn't find anything about the 2D variant and how to use it here. Anyway, they're talking about 1/2 and 1/4 compression, while nowadays it's up to 1/24 lossless...

Thanks!
 
Chalnoth said:
I dunno, it can't be that hard. I mean, triangles are flat. So break up your dataset into tiles (you still want a location-based structure). Within each tile, have two types of objects: pixels, and planes. Each pixel would just store depth (in the case that compression adds data). Each plane would store depth and the slope in two directions. There would be a coverage map of the tile which contains pointers to the relevant data for each pixel.
Sounds really advanced! So, if I get this right, you propose to have:

NxN pixel tiles, a full-resolution z-buffer, M buffers for storing plane information (per tile), and log2(M + 1) bits per pixel indicating whether it uses one of the planes, or the regular z-buffer.

So for example we could work with 8x8 pixel tiles, 3 plane buffers, and 2 bits per pixel indicating 'coverage'. With 32-bit values, each plane takes 12 bytes (a z value plus two slopes) and the coverage map takes 16 bytes, so reading the 3 planes plus the coverage information costs 52 bytes per tile instead of the 256 bytes of a raw 8x8 tile. That's a compression of nearly 1:5 and it allows three triangles per tile.

Awesome algorithm. Thanks!
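Out of curiosity I tried writing the scheme down. A rough sketch under the assumptions above (the names are mine, and I stored the slopes at full 32-bit precision):

```python
# Hypothetical sketch of the tile layout discussed above (names and the
# choice of 32-bit slopes are my own assumptions): an 8x8 tile holds up to
# 3 plane equations (z0, dz/dx, dz/dy) plus a 2-bit-per-pixel coverage map
# selecting plane 0..2, with code 3 as an escape to the raw z-buffer.

TILE = 8
PLANES = 3
BITS_PER_PIXEL = 2

def tile_bytes():
    plane_bytes = PLANES * 3 * 4                      # z0 + two slopes, 32-bit each
    coverage_bytes = TILE * TILE * BITS_PER_PIXEL // 8
    return plane_bytes + coverage_bytes               # 36 + 16 = 52

def decode_z(planes, coverage, fallback, x, y):
    # Reconstruct the depth of pixel (x, y) inside one tile.
    code = coverage[y][x]
    if code == PLANES:                                # escape: raw z-buffer
        return fallback[y][x]
    z0, dzdx, dzdy = planes[code]
    return z0 + dzdx * x + dzdy * y

# One plane covering the whole tile: a single flat triangle.
planes = [(10.0, 1.0, 2.0)]
coverage = [[0] * TILE for _ in range(TILE)]
```

With the slopes included that's 52 bytes per tile against 256 uncompressed, i.e. roughly 4.9:1 under these assumptions.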
 
Nick, the Z3 antialiasing paper contains some information about storing Z-values.

To make best use of CPU caches, the information for one compressed tile should be the same size as one cache line. The information about the coding of each tile should be stored in a separate memory block. If you want, you can add a clear state; that gives you a very fast Z-clear.
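Something like this, perhaps (the state names and the clear value are made up for illustration): a small per-tile state table kept apart from the z data, so clearing only touches the table.

```python
# Sketch of the separate per-tile "coding" block suggested above, assuming
# three hypothetical states: CLEAR, COMPRESSED (tile fits one cache line)
# and UNCOMPRESSED. Clearing the z-buffer then only touches this small table.

CLEAR, COMPRESSED, UNCOMPRESSED = 0, 1, 2
FAR_Z = 0xFFFFFFFF                        # assumed clear value

class TileStateTable:
    def __init__(self, tiles_x, tiles_y):
        self.state = [[UNCOMPRESSED] * tiles_x for _ in range(tiles_y)]

    def fast_clear(self):
        # O(tile count): never touches the z data itself.
        for row in self.state:
            for tx in range(len(row)):
                row[tx] = CLEAR

    def read_z(self, tile_x, tile_y, fetch_tile):
        # A CLEAR tile needs no memory traffic at all.
        if self.state[tile_y][tile_x] == CLEAR:
            return FAR_Z
        return fetch_tile(tile_x, tile_y)

table = TileStateTable(160, 120)          # e.g. 1280x960 at 8x8 tiles
table.fast_clear()
```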
 
Nick said:
krychek said:
Real time rendering (2nd ed) has some info about z-compression.
Ah, I got the first edition. :? Is it explained in detail or just some general information?

I've seen those ATI slides before, just totally forgot about them! DDPCM and the 'entropy encoder' are rather vague though. I know DPCM (for voice compression) but couldn't find anything about the 2D variant and how to use it here. Anyway, they're talking about 1/2 and 1/4 compression, while nowadays it's up to 1/24 lossless...

Thanks!

It doesn't cover the compression itself; it just mentions it and refers to two publications.
I guess you already found the "ATI Radeon HyperZ Technology" paper by Steve Morein.

The other one is "Digital Image Processing" by Gonzalez & Woods.
[ot]
There actually seems to be something wrong in the reference for that book, because it says it's the 3rd edition from 1992, but the only one I can find is the 2nd edition from 2002.
[/ot]
The ISBN for that book is 0201180758, however.
 
ATI has some patents about compressing framebuffer values. For example this:

US20030038803: System, method, and apparatus for compression of video data using offset values.
 
video data != framebuffer data. That sounds like it's talking about movies, and is probably related to what MPEG does.
 
Have you even bothered to take a look at it?

I don't remember MPEG using depth and Z buffers though ...

The abstract, for the non-believers:

A system, method, and apparatus for compression of video data is presented. The compressed block includes a plurality of offset values, each indicating an offset between a corresponding one among the plurality of pixel values and a reference value. Exemplary methods are described wherein minimum and maximum reference values are derived from the block of pixel values, and a flag associated with each offset value indicates an appropriate reference value. Application of embodiments of the invention to the transfer of depth (Z) information are discussed.
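From that abstract, the scheme seems to be something like the following (bit widths and names are guesses on my part): store a min and a max reference per block, then per pixel a 1-bit flag plus a small offset from whichever reference is closer.

```python
# Rough reading of the abstract: keep a min and a max reference value per
# block, then per pixel a 1-bit flag (which reference) plus a small offset.
# Picking the nearer reference means no offset ever exceeds half the block's
# value range; the actual bit widths in the patent are unknown to me.

def encode_block(z):
    lo, hi = min(z), max(z)
    out = []
    for v in z:
        if v - lo <= hi - v:
            out.append((0, v - lo))       # flag 0: offset above the minimum
        else:
            out.append((1, hi - v))       # flag 1: offset below the maximum
    return lo, hi, out

def decode_block(lo, hi, out):
    return [lo + off if flag == 0 else hi - off for flag, off in out]

block = [100, 102, 101, 140, 139, 100, 141, 103]
```

So depth values clustered near either extreme of a block compress into very few bits per offset.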
 
Thanks all for the information! I'll keep you informed if any of it is successfully used in my project(s).
 