R 9700Pro : Q about Framebuffer and Z-Buffer compression

OpenGL guy said:
So when you have data that doesn't compress, then you are actually using more space than you would if the data wasn't compressed at all.

Yes, but it's very unlikely to happen in real-world applications.

OpenGL guy said:
But since you have to plan for the worst-case scenario, you have to reserve memory so you can add more blocks. In other words, you aren't saving any real space at all.

Not if you handle all memory allocation on the graphics card dynamically in hardware. It's not an easy task, but it's probably doable.
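For illustration, here is a minimal C sketch of the worst-case problem being debated: every block needs some metadata (a mode flag, and for a variable-length scheme a pointer to where the payload lives), so an incompressible tile stored raw actually costs more than in a plain uncompressed buffer. All names and sizes here are assumptions, not any actual GPU's layout.

    #include <stdint.h>

    #define BLOCK_PIXELS   (8 * 8)              /* assumed 8x8 tile       */
    #define RAW_BLOCK_SIZE (BLOCK_PIXELS * 4)   /* 32-bit Z: 256 bytes    */

    typedef struct {
        uint32_t mode;    /* 0 = stored raw, nonzero = a compression mode */
        uint32_t offset;  /* where the (possibly compressed) payload lives */
    } BlockDescriptor;

    /* Worst case: an incompressible tile is stored raw, so the descriptor
     * is pure overhead: 256 + 8 bytes where a plain Z-buffer needs 256. */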
 
darkblu said:
Dio said:
.. and if that includes 2-pixel wide quads that go from the nearplane to the farplane the hardware has to handle them, even if they are completely useless.
huh? what do you mean by 'completely useless' - extreme view angles at large planes just happen! the fact that huge planes can come down to stripes of 2 (or even 1) pixels of width is the fault of the discrete math - don't blame coders!
That's my point. These cases would be very unlikely to occur in a real-world application (when did you last see a single triangle clipped by both the near and far planes simultaneously in a real-world app?) but they have to be handled because they are possible, no matter how unlikely.
 
Humus said:
Seriously though, we don't want a full malloc() implementation in hardware. But by applying some restrictions we might get a good enough solution to cut down storage space.
So far, those restrictions haven't been enough to make variable-length compression possible. I know that when DXTC was designed, other schemes with a variable compression ratio were considered, but DXTC ended up with fixed compression, and there's a very good reason for it.

Even then it still doesn't get around the problem that in the worst case, your Z-buffer can still be bigger than it was originally, so you still have to allocate for the worst case.
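As a rough illustration of that "very good reason": with a fixed ratio such as DXT1's 8 bytes per 4x4 block, a block's address is a pure function of its index, while a variable ratio forces every access through an indirection table, costing extra memory and a dependent fetch. A hedged C sketch (the variable-ratio layout is hypothetical, not any shipping hardware's):

    #include <stdint.h>

    /* Fixed ratio (DXT1: 8 bytes per 4x4 block): the address of any
     * block is directly computable, no lookup table needed. */
    uint32_t dxt1_block_addr(uint32_t base, uint32_t block_index)
    {
        return base + block_index * 8;
    }

    /* Variable ratio (hypothetical layout): every access first goes
     * through an indirection table, an extra dependent read per block. */
    uint32_t var_block_addr(const uint32_t *block_table, uint32_t block_index)
    {
        return block_table[block_index];
    }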

Humus said:
Yes, but it's very unlikely to happen in real-world applications.
Is it acceptable to generate a bad frame because it did happen?
 
OpenGL guy said:
arjan de lumens said:
we need 8*8*4*2*4/2.5 = 819 bytes per block on average. In addition to this memory, we waste on average ~64 bytes because our compressed data are forced into 128-byte blocks, plus about 15-20 bytes of pointer data, for a total of about 80 bytes of overhead, reducing the compression ratio from 2.5:1 to about 2.25:1.
So when you have data that doesn't compress, then you are actually using more space than you would if the data wasn't compressed at all.
That's true with any compression algorithm, including the current ones by nvidia/ATI.
So addressing of a dynamically allocated compressed framebuffer isn't that hard to solve, even without killing off the effect of compression.
But since you have to plan for the worst-case scenario, you have to reserve memory so you can add more blocks. In other words, you aren't saving any real space at all.
Well, dynamic allocation makes worst-case memory usage worse (by about 100 bytes per block with the stated method, which could probably be reduced to ~50 with multiple block sizes). So it only really makes sense in a scenario where you can predict the amount of framebuffer memory actually needed (e.g. by taking the actual memory use of the previous frame plus a safety margin) and then, if your prediction is off, throw a page fault or something similar and stall the renderer to allocate additional memory. If the renderer is too inflexible to allow its framebuffer memory pool to be expanded at run-time, then, yes, you have to allocate for the worst-case scenario, and dynamic allocation stops making sense.
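For reference, arjan de lumens' overhead arithmetic above works out as follows; this small C program just reproduces the quoted figures (the 2048-byte raw block follows from the 8*8*4*2*4 product in the quote):

    #include <stdio.h>

    int main(void)
    {
        double raw        = 8 * 8 * 4 * 2 * 4; /* 2048 bytes per block       */
        double compressed = raw / 2.5;         /* 819.2 bytes on average      */
        double padding    = 64.0;              /* avg waste, 128-byte rounding */
        double pointers   = 17.5;              /* the quoted 15-20 bytes       */

        /* overhead totals ~80 bytes, dropping 2.5:1 to roughly 2.25:1 */
        printf("effective ratio: %.2f:1\n",
               raw / (compressed + padding + pointers));
        return 0;
    }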
 
Dio said:
That's my point. These cases would be very unlikely to occur in a real-world application (when did you last see a single triangle clipped by both the near and far planes simultaneously in a real-world app?) but they have to be handled because they are possible, no matter how unlikely.

dio, i was jesting. yet, purely for the sake of argument, there's nothing wrong with primitives puncturing both the near and far clip planes in real-world apps, e.g.:
a) when a depth partitioning scheme is used - a larger primitive could exceed a depth layer at both ends (see the sketch after this list).
b) even w/o artificially narrowing the view volume's depth, a flattish outdoor ground area could be composed of a minimal number of tris, a single one of which could stretch across quite some of the view volume's depth (using either a tiled or a larger base texture).
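To make case (a) concrete, here is a hedged C sketch of a depth-partitioned render loop; set_projection() and draw_scene() are hypothetical helpers, and the geometric slice split is just one possible scheme. Any primitive longer than one slice pierces both of that slice's planes:

    #include <math.h>

    void set_projection(float near_plane, float far_plane); /* hypothetical */
    void draw_scene(void);                                  /* hypothetical */

    void render_depth_partitioned(float n0, float f0, int slices)
    {
        for (int i = 0; i < slices; ++i) {
            /* geometric split of [n0, f0] into contiguous depth slices */
            float n = n0 * powf(f0 / n0, (float)i / slices);
            float f = n0 * powf(f0 / n0, (float)(i + 1) / slices);
            set_projection(n, f);
            draw_scene(); /* a long ground tri can cross both n and f */
        }
    }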
 
Dio said:
Humus said:
Seriously though, we don't want a full malloc() implementation in hardware. But by applying some restrictions we might get a good enough solution to cut down storage space.
So far, those restrictions haven't been enough to make variable-length compression possible. I know that when DXTC was designed, other schemes with a variable compression ratio were considered, but DXTC ended up with fixed compression, and there's a very good reason for it.

Even then it still doesn't get around the problem that in the worst case, your Z-buffer can still be bigger than it was originally, so you still have to allocate for the worst case.

Humus said:
Yes, but it's very unlikely to happen in real-world applications.
Is it acceptable to generate a bad frame because it did happen?

No, the whole idea of what's been proposed here is that it doesn't need to allocate for the worst case; if that particular worst case happens, you just get more memory allocated during that worst-case frame. I understand there are problems with these kinds of approaches, and maybe it isn't as feasible as I initially thought, but it's not impossible.
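A minimal C sketch of that allocate-on-demand policy, assuming a pool sized from the previous frame's actual usage plus a safety margin; all names are hypothetical:

    #include <stddef.h>

    typedef struct {
        size_t pool_size; /* bytes currently reserved           */
        size_t used;      /* bytes actually consumed this frame */
    } FramebufferPool;

    static void begin_frame(FramebufferPool *p, size_t last_frame_used)
    {
        size_t predicted = last_frame_used + last_frame_used / 8; /* +12.5% */
        if (predicted > p->pool_size)
            p->pool_size = predicted;  /* grow ahead of time, no stall */
        p->used = 0;
    }

    static int alloc_block(FramebufferPool *p, size_t bytes)
    {
        int stalled = 0;
        if (p->used + bytes > p->pool_size) {
            /* the "page fault" case: stall the renderer and grow the pool;
             * slow, but it only happens on a mispredicted worst-case frame */
            p->pool_size = p->used + bytes;
            stalled = 1;
        }
        p->used += bytes;
        return stalled;
    }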
 
Dio said:
darkblu said:
Dio said:
.. and if that includes 2-pixel wide quads that go from the nearplane to the farplane the hardware has to handle them, even if they are completely useless.
huh? what do you mean by 'completely useless' - extreme view angles at large planes just happen! the fact that huge planes can come down to stripes of 2 (or even 1) pixels of width is the fault of the discrete math - don't blame coders!
That's my point. These cases would be very unlikely to occur in a real-world application (when did you last see a single triangle clipped by both the near and far planes simultaneously in a real-world app?) but they have to be handled because they are possible, no matter how unlikely.
I think this might happen fairly often with shadow volumes. Maybe not clipped at both planes, but very long, skinny, and with steep slopes.
 
3dcgi said:
I think this might happen fairly often with shadow volumes. Maybe not clipped at both planes, but very long, skinny, and with steep slopes.

But the Z values of the shadow volumes don't get written to the Z-buffer, so it doesn't matter.
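In OpenGL terms, this is because the stencil passes of a shadow volume run with depth writes masked off, so the volume geometry is depth-tested against the Z-buffer but never updates it. A sketch of a standard z-pass stencil setup (draw_shadow_volumes() is a hypothetical helper):

    #include <GL/gl.h>

    void draw_shadow_volumes(void); /* hypothetical: submits the volume quads */

    void stencil_shadow_pass(void)
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);           /* no Z writes from the volumes */
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_CULL_FACE);
        glStencilFunc(GL_ALWAYS, 0, ~0u);

        glCullFace(GL_BACK);             /* z-pass: increment on front faces */
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        draw_shadow_volumes();

        glCullFace(GL_FRONT);            /* z-pass: decrement on back faces */
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        draw_shadow_volumes();
    }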
 