Why not Compression to cut board cost?

What they need now is instantaneous, lossless, super-nifty hardware compression. Proprietary or not doesn't matter (licensable is a good thing): something that happens on the GPU or in a special "RAM controller chip", etc. Something that can compress pretty much everything sent to RAM on a GPU. That's the real way to combat the rising costs of high-end hardware.

Let's say 4:1 lossless compression of everything that passes into GPU RAM.

Is there any reason this type of solution wouldn't be feasible?
 
Compressing data means you assume a lot of things. ;) The most important assumption is of course that only a part of the available values will actually occur. For example, only 65,536 different colors out of the 16M possible might be used in a given picture. You can't really losslessly compress _everything_ at a constant 4:1 ratio. Data close to random will be impossible to compress, while with more uniform data you might get even higher compression ratios.
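To make that concrete, here's a minimal sketch (using Python's zlib purely as a stand-in; real GPU compressors are very different beasts) showing how the same amount of data compresses completely differently depending on its statistics:

```python
import os
import zlib

N = 1 << 20  # 1 MiB of input

random_data = os.urandom(N)       # high entropy: essentially incompressible
uniform_data = bytes([0x42]) * N  # low entropy: compresses extremely well

for name, data in [("random", random_data), ("uniform", uniform_data)]:
    compressed = zlib.compress(data, 9)
    print(f"{name:8s}: {len(data)} -> {len(compressed)} bytes "
          f"(ratio {len(data) / len(compressed):.2f}:1)")

# The random buffer comes out slightly *larger* than it went in, while the
# uniform buffer shrinks to roughly a kilobyte. No fixed 4:1 guarantee exists.
```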
 
Also, trying to compress data that is already compressed can actually increase the size of the data...
 
For every lossless compression algorithm there exists a data set (useful or not) that cannot be compressed by that algorithm. For data that CAN be compressed by the algorithm you choose, you will normally want to transfer it in as large blocks as possible - most compression algorithms work better the larger the blocks they are allowed to work on. The downside of large blocks is that they make random access within the block very expensive.
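A rough illustration of that block-size tradeoff, again with zlib as a stand-in and arbitrary block sizes of my own choosing:

```python
import zlib

# Moderately compressible input: a repeating attribute-name pattern, ~128 KiB.
data = b"vertex position normal texcoord " * 4096

def compressed_size(buf: bytes, block_size: int) -> int:
    """Compress buf independently in fixed-size blocks and sum the results."""
    return sum(len(zlib.compress(buf[off:off + block_size], 9))
               for off in range(0, len(buf), block_size))

for block in (256, 4096, len(data)):
    print(f"block size {block:6d}: {compressed_size(data, block)} bytes total")

# Bigger blocks give better ratios, but reading one byte inside a block then
# means decompressing the whole block first - the random-access penalty.
```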
 
But we already have framebuffer and Z-buffer compression.

Also, let's not forget texture compression (DXTC and 3Dc).

So we are already compressing where it makes sense (i.e. where there are acceptable tradeoffs).

Are there other areas that could also benefit from some form of compression? If so, where?
 
It is impossible to have a lossless algorithm that guarantees a given compression ratio. In fact, I believe that a lossless algorithm will almost always increase BW requirements over no compression (less than 1:1) in worst-case scenarios. At the very least, you need to anticipate the worst case, and so cannot take advantage of area savings, so the overall cost of the system is not lowered (in fact, it is increased by the cost of the compression/decompression HW).

Current-generation ATI products can compress significantly (Z can go down to 24:1 and color to 12:1, depending on AA levels and a few other things), and in a lossless way. DXT compression (lossy) can give 6:1 and 8:1, if I remember right. These two data sets are the heaviest hitters in BW consumption, and are already compressed beyond your desire. I believe that competitive products also offer similar features.
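For reference, those DXT figures fall straight out of the block layout; here is the back-of-envelope arithmetic (my own, not anything vendor-specific):

```python
# DXT1 packs a 4x4 texel block into 64 bits (two 16-bit endpoint colors plus
# sixteen 2-bit indices); DXT5 uses 128 bits (an extra 64 bits for alpha).
TEXELS_PER_BLOCK = 4 * 4

def ratio(uncompressed_bits_per_texel: int, block_bits: int) -> float:
    return TEXELS_PER_BLOCK * uncompressed_bits_per_texel / block_bits

print("DXT1 vs 24-bit RGB  :", ratio(24, 64))   # 6.0 -> the 6:1 figure
print("DXT1 vs 32-bit RGBA :", ratio(32, 64))   # 8.0 -> the 8:1 figure
print("DXT5 vs 32-bit RGBA :", ratio(32, 128))  # 4.0
```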

In the end, the BW savings are already there, and cost savings will never occur.
 
The remaining data would then be per-vertex or per-polygon data. There are a number of highly efficient compression algorithms for vertex/polygon connectivity information, some of which can be used to compress vertex coordinates as well (not losslessly, but with little enough loss not to be noticeable). Normal vectors can be compressed to some extent too, using e.g. a polar coordinate representation. Other per-vertex data generally do not compress very nicely.
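As a sketch of the polar-coordinate idea for normals: a unit normal only has two degrees of freedom, so the two angles can be stored instead of three floats. The 8+8-bit quantization below is a made-up example, not any shipping format:

```python
import math

def encode_normal(nx: float, ny: float, nz: float) -> tuple:
    """Quantize a unit normal to two 8-bit spherical angles (theta, phi)."""
    theta = math.acos(max(-1.0, min(1.0, nz)))   # inclination, [0, pi]
    phi = math.atan2(ny, nx) % (2.0 * math.pi)   # azimuth, [0, 2*pi)
    return (round(theta / math.pi * 255), round(phi / (2.0 * math.pi) * 255))

def decode_normal(t: int, p: int) -> tuple:
    theta = t / 255 * math.pi
    phi = p / 255 * 2.0 * math.pi
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# 2 bytes instead of 12 (three 32-bit floats) per normal: 6:1, paid for with a
# small angular error from the quantization - lossy, but hard to notice.
n = (0.267, 0.535, 0.802)
print(decode_normal(*encode_normal(*n)))  # close to the original normal
```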
 
arjan de lumens said:
The remaining data would then be per-vertex or per-polygon data. There are a number of highly efficient compression algorithms for vertex/polygon connectivity information, some of which can be used to compress vertex coordinates as well (not losslessly, but with little enough loss not to be noticeable). Normal vectors can be compressed to some extent too, using e.g. a polar coordinate representation. Other per-vertex data generally do not compress very nicely.

Not quite. There are lots of different kinds of textures, both now and coming in the future, that will require aggressive compression. Obviously normal maps, but there are others too; and the current ones haven't been exhausted either. These things can make a huge visual and performance difference.

As far as vertex data goes, its BW is generally still not significant compared to texture/pixel data. Having said that, there are lots of compression techniques, such as HOS or indexing or various other methods. None of these have been super popular, probably since they aren't really required and can interfere with other parts of the program (such as collision detection). I assume that in the future, as poly counts increase, this will change.
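To put a toy number on the indexing part (made-up mesh sizes, just to show where the saving comes from): an indexed mesh stores each shared vertex once and reuses it through small indices.

```python
# Hypothetical closed triangle mesh: each vertex shared by roughly 6 triangles.
num_vertices = 10_000
num_triangles = 2 * num_vertices   # roughly true for closed meshes
bytes_per_vertex = 32              # e.g. position + normal + one UV set

# Non-indexed: every triangle stores three full vertices.
flat = num_triangles * 3 * bytes_per_vertex

# Indexed: unique vertices stored once, plus a 16-bit index per corner.
indexed = num_vertices * bytes_per_vertex + num_triangles * 3 * 2

print(f"non-indexed: {flat / 1e6:.2f} MB")
print(f"indexed    : {indexed / 1e6:.2f} MB ({flat / indexed:.1f}x smaller)")
```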
 
Let me clarify...

In my brilliance I thought that saying something like 4:1 was more realistic to get a less lossy output. But clearly my brilliance was rather, um... dull...

To clean that up, let's just say "smart compression": something that targets blocks of data going into RAM, regardless of what that data is destined for. You would have some kind of buffer that tracked all the current blocks of data in RAM with some kind of intelligent tracking system. Or something like that.

Of course my lack of knowledge in this area may have gotten me into even deeper doodoo with this suggestion.
 
Hellbinder said:
Let me clarify...

In my brilliance I thought that saying something like 4:1 was more realistic to get a less lossy output. But clearly my brilliance was rather, um... dull...

To clean that up, let's just say "smart compression": something that targets blocks of data going into RAM, regardless of what that data is destined for.

Of course my lack of knowledge in this area may have gotten me into even deeper doodoo with this suggestion.

Hum. Let me say it again too: if you want to save on the amount of memory, then you cannot use lossless compression. Or you have to deal with really, really bad performance characteristics.

If you want to save BW into the memories, then you can use lossless, and we already do, for all the bandwidth-intensive items (and more than you're asking for). But this won't really save any money (well, you could use cheaper memories, but wouldn't you just run faster instead?).
 
sireric said:
Hellbinder said:
Let me clarify...

In my brilliance I thought that saying something like 4:1 was more realistic to get a less lossy output. But clearly my brilliance was rather, um... dull...

To clean that up, let's just say "smart compression": something that targets blocks of data going into RAM, regardless of what that data is destined for.

Of course my lack of knowledge in this area may have gotten me into even deeper doodoo with this suggestion.

Hum. Let me say it again too: if you want to save on the amount of memory, then you cannot use lossless compression. Or you have to deal with really, really bad performance characteristics.

If you want to save BW into the memories, then you can use lossless, and we already do, for all the bandwidth-intensive items (and more than you're asking for). But this won't really save any money (well, you could use cheaper memories, but wouldn't you just run faster instead?).
There has to be a way to create a compression system that would work not based on current limitations..

But.. I am just pondering here...
 
Compression isn't magic. A rough guideline for lossless compression is to expect something between 1.5:1 and 2:1.

But the data may not be compressible. I can pretty easily generate a pixel shader that will output approximately flat random noise: guaranteed zero compression (or worse, negative compression). As a result, no matter what happens you have to allocate at least the memory for the uncompressed version (if the compressed data is being generated on-the-fly). Therefore, we use compression on dynamic data to reduce bandwidth but not to reduce memory usage.
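One way to picture why that saves bandwidth but not memory is a tile scheme along these lines (a hypothetical sketch of my own, not a description of any vendor's hardware): the full uncompressed buffer is always allocated, and a small side table just records how much of each tile actually had to be read or written.

```python
TILE_BYTES = 256  # hypothetical tile size

class CompressedSurface:
    def __init__(self, num_tiles: int):
        # Worst-case storage is reserved up front, because any tile may turn
        # out to be incompressible (e.g. it holds random noise).
        self.storage = bytearray(num_tiles * TILE_BYTES)
        # None means "stored raw"; otherwise the compressed size actually
        # transferred - this is where the bandwidth saving comes from.
        self.tile_compressed_bytes = [None] * num_tiles

    def bytes_transferred(self) -> int:
        return sum(TILE_BYTES if c is None else c
                   for c in self.tile_compressed_bytes)

surf = CompressedSurface(num_tiles=4)
surf.tile_compressed_bytes = [32, None, 64, 32]  # three tiles compressed, one raw
print("memory allocated :", len(surf.storage), "bytes")        # unchanged: 1024
print("bytes transferred:", surf.bytes_transferred(), "bytes") # reduced: 384
```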

When it comes to reducing memory usage, we (particularly I) encourage everyone to use DXTC and its derivatives, because for 90%+ of texture data it is a superb solution.
 
I like the pigeonhole way of looking at lossless compression.

The easiest example is trying to compress a two-bit value so that it takes up one bit.

The compressed value is either a 1 or a 0. It can therefore only stand for 2 possible combinations out of an original 4. Unless you store data somewhere else (such as in a compressor's dictionary), which totally negates the savings or makes things worse, you will lose data.

Reducing any set of data by even one bit cuts down the number of possible combinations that can be represented by half. You just have to hope that somehow half of the values aren't going to show up, or you really don't mind losing a bunch of the information through close values being set to the same result.
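The counting version of that argument, for shaving off even a single bit, is just arithmetic:

```python
# Distinct inputs of exactly n bits vs. distinct outputs available if every
# input had to shrink to at most n-1 bits.
n = 16
inputs = 2 ** n
outputs = sum(2 ** k for k in range(n))  # all strings of length 0..n-1 = 2**n - 1

print(f"{inputs} possible inputs, only {outputs} shorter outputs")
# At least two inputs must map to the same output, so a decoder cannot tell
# them apart: either some inputs have to grow, or information is lost.
```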
 
Hellbinder said:
There has to be a way to create a compression system that would work not based on current limitations..

So, what... it would work based on future limitations instead?

Compression is a trade-off: Typically trading space for time. Sure, you can find a way to compress data further than it is being compressed now, but it ends up slowing down everything else because you're trading off data transfer time for compression time. And, of course, this is unacceptable.

If you or I can think of a scheme that would improve rendering or data speeds on a processor, you can be 99.999999999% certain that someone has already thought of it, discussed it, modeled it, prototyped it, and tested it. The chances of any of us dreaming up a unique refinement at this point is rather small.

Nothing has to be possible. There are those things that are possible, and those that are not. Breaking the rules, so to speak, typically occurs when someone develops a completely new way to approach the problem. Revolutionary, not evolutionary. Technicians of all breeds have been trying to compress data signals for over 20 years. What makes you feel it "must be possible" if the brightest minds haven't developed a suitable method thus far?

I know that all eyes appear to be on "quantum computing"... but I get the feeling that it's not going to be so easy to make a backwards-compatible IA32 quantum processor. If you're going to reap the benefits of a new paradigm, you typically have to discard the technological baggage that shackles the old model.

So, yes, such compression might be feasible, but not with how data transmission occurs today.

Did anyone read Robert X. Cringely's "pulpit" article about how the optic nerve carries less data than a 56K modem? Sure, it could possibly be true, but that doesn't conversely mean that you can compress a "reality fidelity" signal into 56K. It ignores the amount of temporal processing the brain does to fill in the data not provided. His article was positing that the major telcos with massive copper infrastructure could develop a way to push HDTV down a POTS-capable line, given the measurements of data rate on the optic nerve. Pleasing fantasy? Assuredly. Technical fantasy? Definitely.

As Dio points out, compression isn't magic. Compression is actually just a lot of very fast math operations, and one thing you cannot do is change how math works. You can possibly find an elegant shortcut, but you aren't going to change the basic functionality of the base binary math operations. So unless someone gets a lot cleverer than the geniuses of the last 20 (more like 50) years, we are stuck with the rules we have. So all we have left is refinement of technology so we have faster signalling, faster buses, fatter pipes, and parallel operations.

Unless that magical quantum processor arrives soon, we're stuck with the rules we have for a while.
 
flf said:
If you or I can think of a scheme that would improve rendering or data speeds on a processor, you can be 99.999999999% certain that someone has already thought of it, discussed it, modeled it, prototyped it, and tested it. The chances of any of us dreaming up a unique refinement at this point is rather small.
On this point I might disagree. I think there are still 'obvious' improvements to be discovered. I have seen too many quality ideas come up in my time in this industry that prompt you to think 'Why the hell didn't I think of that!' to believe it won't happen again.

Of course, the majority of these are tiny little things, but I'm sure there are a few big things out there.

One interesting thing is that sometimes these things come up in multiple different groups nearly simultaneously, which I think is usually because there's some seed idea in a previous architecture that prompts people to spot something extra for the next one.
 
Just out of curiosity...

what is a

"hellbinder whogivesabeep mark"???

I am afraid I am completely unfamiliar with that one.
 
flf said:
Compression is actually just a lot of very fast math operations, and one thing you cannot do is change how math works. You can possibly find an elegant shortcut, but you aren't going to change the basic functionality of the base binary math operations. So unless someone gets a lot cleverer than the geniuses of the last 20 (more like 50) years, we are stuck with the rules we have. So all we have left is refinement of technology so we have faster signalling, faster buses, fatter pipes, and parallel operations.

Unless that magical quantum processor arrives soon, we're stuck with the rules we have for a while.
Disagree.

Faster processors lead to better available compression.

Eventually, data processing will be (could be) dirt cheap compared to bandwidth. The "geniuses" of the past 20-50 years have not had that to work with.
 
Just give me non-destructive quantum reads (which would break the laws of physics as they currently stand) and I'll be happy. Just think: a 256-qubit register would hold more data than there are atoms in the universe.
 