Talking about compression

Can't be too good to be true if people are already using it, though I guess the algorithm is very complex and no use in gaming hardware. It'd be nice to have 20:1 compression of textures etc though :)
 
Shifty Geezer said:
Can't be too good to be true if people are already using it, though I guess the algorithm is very complex and no use in gaming hardware. It'd be nice to have 20:1 compression of textures etc though :)

Unless you put some dedicated hardware in of course...

Could be something for the next next gen. :)

It sure sounds good.
 
I wonder if the compression is only that effective for certain types of pictures. The medical origins of the tech suggest to me that it may not be as general as JPEG is.

And, of course, there's the question of the time needed for decompression. But if it's anything within reason, it could still be very useful to X360 developers in stuffing textures onto disk.
 
It's snake oil. History is full of compression snake oil that duped investors and customers.

A lossless compression algorithm that guarantees to compress every input, even by a single bit, is impossible. This algorithm and patent have been analyzed on comp.compression. It doesn't come close to the hyperbolic PR claims and, in fact, is worse than PNG on most input images.
 
Once upon a time a postdoc fellow at MIT, of all places, posted a comment on Usenet about his new wonderful lossless random data compression algorithm. After months of back and forth nothing ever came of it, mostly because it's impossible. You can't create a bijective mapping between sets with different cardinality (size, roughly); that's a very simple fact and yet people seem to easily forget this.
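
To make the counting argument concrete, here's a tiny throwaway C program (purely illustrative, nothing from the patent in question): it tallies how many n-bit files exist versus how many strictly shorter files exist, which is why a compressor that shrinks every input cannot be losslessly invertible.

```c
/* Pigeonhole illustration: there are 2^n distinct n-bit inputs but only
 * 2^n - 1 files that are strictly shorter, so no lossless compressor can
 * shrink every possible input. */
#include <stdio.h>

int main(void)
{
    for (int n = 1; n <= 16; ++n) {
        unsigned long inputs  = 1UL << n;        /* 2^n n-bit files           */
        unsigned long shorter = (1UL << n) - 1;  /* 2^0 + 2^1 + ... + 2^(n-1) */
        printf("n=%2d: %6lu inputs, only %6lu shorter outputs\n",
               n, inputs, shorter);
    }
    /* Since inputs > shorter outputs for every n, any "always smaller"
     * mapping must collide, i.e. it cannot be decoded unambiguously. */
    return 0;
}
```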

Anyway, for texture compression you need a scheme where random access to any datapoint is relatively cheap computationally and doesn't require you to decompress a huge block of data to determine one texel's value. Most DCT/frequency space or Wavelet (JPEG and JPEG2000) type algorithms are therefore inappropriate for texture compression.
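
As a rough sketch of what "cheap random access" means in practice (this is just the addressing math for a fixed-size 4x4 block format like DXT1, not real decode code): locating the compressed block for any texel is constant-time arithmetic, with no dependence on neighboring data.

```c
/* Minimal addressing sketch for a fixed-size block format (DXT1-style,
 * 64 bits per 4x4 block). Finding the block that holds texel (x, y) is
 * pure arithmetic -- nothing else has to be decompressed first. */
#include <stdint.h>
#include <stddef.h>

#define BLOCK_BYTES 8u  /* 64-bit blocks, as in DXT1 */

static const uint8_t *block_for_texel(const uint8_t *base,
                                      unsigned tex_width,
                                      unsigned x, unsigned y)
{
    unsigned blocks_per_row = (tex_width + 3) / 4;
    size_t   offset = ((size_t)(y / 4) * blocks_per_row + (x / 4)) * BLOCK_BYTES;
    return base + offset;
}
/* With an entropy-coded stream (JPEG, JPEG2000) the bit offset of a given
 * region isn't known up front, which is what breaks this O(1) lookup. */
```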
 
DemoCoder said:
It's snake oil. History is full of compression snake oil that duped investors and customers.
You think these firms are being duped? I guess it wouldn't be the first time. I certainly have trouble believing you can get lossless compression like that, but with advances in representing data in different ways I can well believe someone has invented a miraculous compression model I don't understand! Also, if it's working on specific image structures, specifically medical imaging as the given example, how much could assumptions about the data type help?

Actually that's not worth replying to. This obviously has no bearing on consoles!
 
Yes, they are being duped. MatrixView has been examined by people who got their hands on the binary. It turns out they are using JBIG, JPEG, LZW, and their own "stupid" compression underneath. They sell to the medical imaging industry. JBIG is a lossless image format that works great on black-and-white medical scans.

Here's how you dupe someone.

#1 claim massive gains over existing technology
#2 demonstrate your new "algorithm" on a few test images of your choosing (your algorithm doesn't work on real data)
#3 then run your software on customer images (B&W MRI scans); your compressor quietly drops your busted algorithm and uses JBIG instead.
#4 you then perform a bait-and-switch and compare the output of your compressor (secretly JBIG) against TIFF. Voilà. Required marketing point established, customer sold.

Most people are ignorant of JPEG2000, JBIG, and other lossless DCT/Wavelet schemes, so it is relatively easy to repackage existing technology with a few tweaks and claim amazing breakthroughs.

The algorithm described in the MatrixView patent was implemented by a coder on comp.compression. It's *terrible*. It works only on a few bogus test images and makes the vast majority of other images larger.

When someone (the MatrixView founder) starts off by saying "I never studied compression", but their marketing material claims to have solved what educated people know to be impossible problems in information theory, you know something is wrong.
 
What's happening with JPEG2000? I remember the first examples of it and was very impressed, but I have the feeling they were charging a premium that has restricted its uptake. It's certainly not a standard by any stretch that I've seen. Is there any likelihood wavelet-based compression will be used in console games (streaming from disc), or will these use standard compressors?
 
akira888 said:
Anyway, for texture compression you need a scheme where random access to any datapoint is relatively cheap computationally and doesn't require you to decompress a huge block of data to determine one texel's value. Most DCT/frequency space or Wavelet (JPEG and JPEG2000) type algorithms are therefore inappropriate for texture compression.

I don't think DCT would be that bad, as it is usually used in blocks (e.g. 8x8), especially as filter kernels get bigger. The problem there, as I see it, is rather the need for fixed-size encoding (for random access). For wavelet compression you are of course right, although I still want to get around to using a shader to accelerate the transform itself for my home-cooked wavelet image compression (which still leaves the entropy coding...).
 
Shifty Geezer said:
You think these firms are being duped? I guess it wouldn't be the first time. I certainly have trouble believing you can get lossless compression like that, but with advances in representing data in different ways I can well believe someone has invented a miraculous compression model I don't understand! Also, if it's working on specific image structures, specifically medical imaging as the given example, how much could assumptions about the data type help?

Shannon's Law governs all of these compression cases.

Shannon's Law hasn't been broken yet.
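
For reference, the bound being invoked here (my reading is the source coding theorem for lossless compression): the average code length of any uniquely decodable lossless code cannot go below the source entropy.

```latex
% Source coding theorem (lossless case): average code length is bounded
% below by the entropy of the source.
H(X) = -\sum_{x} p(x)\,\log_2 p(x), \qquad \mathbb{E}[\ell(X)] \ge H(X)
```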
 
For wavelet compression you are of course right, although I still want to get around to using a shader to accelerate the transform itself for my home-cooked wavelet image compression (which still leaves the entropy coding...).
I've tried this, and the general speedup hasn't been that impressive. Although if by some miracle I one day have the money to upgrade to a card that supports SM3.0 (or for that matter, a motherboard that has PCI-E slots), the speedup could be more significant, not just because of the raw power difference but because I would be able to afford using larger filter banks (more allowed samples). That probably won't happen, though.

Also, people have suggested block-level wavelet techniques, but I haven't heard of any actual results or architectural designs for implementing it as a hardware texture compression scheme. Maybe I'm just not looking hard enough.
 
DemoCoder said:
The first digital cameras with JPEG2000 just shipped.

Cool, which are they? BTW, what happened to that early Analog Devices codec IC? Has anybody used it, by any chance?
 
The DCT isn't the problem, since the DCT block size is fixed and the DCTs of neighboring blocks don't depend on each other. The problem is the entropy encoding of the resultant DCT data. If you paired the DCT with a fixed-ratio compressor somehow, you'd achieve most of what you need.
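
A hedged sketch of what "DCT plus a fixed-ratio compressor" could look like (this is hypothetical, not a description of any shipping format): keep only the first K zig-zag coefficients of each 8x8 block at a fixed precision, so every block occupies exactly the same number of bits and blocks stay independently addressable.

```c
/* Hypothetical fixed-ratio packing of an 8x8 DCT block: keep the first KEEP
 * zig-zag coefficients at 8-bit precision, discard the rest. Every block
 * then compresses to the same size, preserving random access. */
#include <stdint.h>

#define KEEP 10  /* assumed per-block coefficient budget, chosen arbitrarily */

static void pack_block_fixed_rate(const int16_t coeff_zigzag[64],
                                  int8_t out[KEEP])
{
    for (int i = 0; i < KEEP; ++i) {
        int c = coeff_zigzag[i];
        if (c >  127) c =  127;   /* crude fixed-precision clamp */
        if (c < -128) c = -128;
        out[i] = (int8_t)c;
    }
    /* The discarded high-frequency coefficients are the lossy price paid
     * for a fixed compressed block size. */
}
```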
 
With a bit of magic you can do the 4x4 integer transform from H.264 in 2 cycles (for 8x8 this kind of magic really explodes the area you need though). The problem is doing quantization and entropy coding in few enough cycles. You might be able to make a transform based compression fast enough for external access, but for use inside texture caches it would probably be too slow.
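
For context on the transform being referred to, here is the standard software butterfly for the H.264 4x4 forward core transform; the 2-cycle hardware trick itself isn't shown, this is just the reference structure.

```c
/* Reference (non-SIMD) H.264 4x4 forward core transform: a horizontal
 * butterfly pass over rows followed by the same pass over columns. */
#include <stdint.h>

static void h264_forward_4x4(const int16_t in[4][4], int16_t out[4][4])
{
    int16_t tmp[4][4];

    /* Horizontal pass (rows). */
    for (int i = 0; i < 4; ++i) {
        int e0 = in[i][0] + in[i][3];
        int e1 = in[i][1] + in[i][2];
        int e2 = in[i][1] - in[i][2];
        int e3 = in[i][0] - in[i][3];
        tmp[i][0] = (int16_t)(e0 + e1);
        tmp[i][2] = (int16_t)(e0 - e1);
        tmp[i][1] = (int16_t)(2 * e3 + e2);
        tmp[i][3] = (int16_t)(e3 - 2 * e2);
    }
    /* Vertical pass (columns). */
    for (int j = 0; j < 4; ++j) {
        int e0 = tmp[0][j] + tmp[3][j];
        int e1 = tmp[1][j] + tmp[2][j];
        int e2 = tmp[1][j] - tmp[2][j];
        int e3 = tmp[0][j] - tmp[3][j];
        out[0][j] = (int16_t)(e0 + e1);
        out[2][j] = (int16_t)(e0 - e1);
        out[1][j] = (int16_t)(2 * e3 + e2);
        out[3][j] = (int16_t)(e3 - 2 * e2);
    }
}
```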

I'm not a big fan of transform-based coders though ... it's just so roundabout and complex.
 
DemoCoder said:
The DCT isn't the problem, since the DCT block size is fixed and the DCTs of neighboring blocks don't depend on each other. The problem is the entropy encoding of the resultant DCT data. If you paired the DCT with a fixed-ratio compressor somehow, you'd achieve most of what you need.
A big problem is that some patterns in blocks (patterns that often appear in textures) become far more difficult to represent after the DCT or a similar transform has been applied.
 
Well, then have the compressor calculate an error rate, and punt back to S3TC or uncompressed. :)

How about a 4x4 lossless integer DCT combined with a DXTC-style interpolation method? Compute the DCT, then calculate a lookup table of interpolants for the coefficients.

You start with 16 32-bit pixels, or 512 bits of data. Subsample chroma at 4:2:0 in YUV. Apply the 4x4 DCT. Output is 10-bit precision integer coefficients. Quantize. Afterwards, we'll have 480 bits of information.

Now, instead of entropy encoding, we use interpolation to store the resulting coefficients (sort of a second-level quantization). Let's say we pick 8 intervals and store 2 coefficients per component. So we store 2 10-bit coefficients per component (60 bits) plus 16 9-bit interpolants (3 bits per component). That yields 204 bits, with another 52 to spare (alpha channel?). I'd assign one interpolant the special "0" value for coefficients that quantize to zero.


So we end up with 2:1 compression, from 512 bits to 256 bits. Random accessibility is preserved. Image quality vs. DXTC? I dunno. But I have a feeling that errors in interpolating coefficients may be hidden more effectively. Downside? Blocking artifacts.
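
Quick sanity check of that bit budget (using my per-component reading of the numbers above; the constants here are just for the arithmetic):

```c
/* Sanity check of the bit budget sketched above, under the assumption that
 * "2 10-bit coefficients" means 2 per YUV component and each texel stores
 * a 3-bit interpolant per component. */
#include <stdio.h>

int main(void)
{
    int uncompressed = 16 * 32;           /* 4x4 texels at 32 bpp   = 512 bits */
    int target       = uncompressed / 2;  /* 2:1 goal               = 256 bits */
    int endpoints    = 2 * 3 * 10;        /* 2 coeffs x 3 comps x 10 = 60 bits */
    int interpolants = 16 * 3 * 3;        /* 16 texels x 3 x 3 bits = 144 bits */
    int used         = endpoints + interpolants;

    printf("payload %d bits, %d spare in a %d-bit block\n",
           used, target - used, target);  /* prints: payload 204 bits, 52 spare */
    return 0;
}
```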
 