Will Microsoft ever adopt a new compression method?

Basic said:
Btw, I know that 3dfx's FXT1 encoder didn't attempt to do any dithering; did S3's compressor do that?
I would say that targeting things like dithering and error diffusion is barking up the wrong tree a bit. The errors introduced by the compression process don't really lend themselves to an error-diffusion type model as far as I can see.

Concentrate on a really good block compressor and you'll see much better results. (Any kind of extensive search, by the way, will never be fast.) After that, look at advanced stuff!

Of course, that's not easy :)
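
To make the block-compression point concrete, here is a minimal sketch of what a naive DXT1-style block encoder does (illustrative only, not S3's or 3dfx's actual method): pick endpoints, quantize them to 565, derive the two interpolated colours, and map each texel to the nearest of the four.

```cpp
// A minimal sketch (not S3's method): naive DXT1-style encoding of one
// 4x4 block. A real compressor searches far harder than this.
#include <array>
#include <cstdio>

struct RGB { int r, g, b; };

static int quant565(int v, int bits) {            // round to 5- or 6-bit precision
    int maxq = (1 << bits) - 1;
    int q = (v * maxq + 127) / 255;
    return (q * 255 + maxq / 2) / maxq;           // expand back to 8-bit
}

static int dist2(const RGB& a, const RGB& b) {
    int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

int main() {
    std::array<RGB, 16> block;                    // fill with a simple gradient as test data
    for (int i = 0; i < 16; ++i) block[i] = { i * 16, i * 16, 255 - i * 16 };

    // 1. Choose endpoints: darkest and brightest texel by rough luminance.
    int lo = 0, hi = 0;
    auto luma = [](const RGB& c) { return 2 * c.r + 5 * c.g + c.b; };
    for (int i = 1; i < 16; ++i) {
        if (luma(block[i]) < luma(block[lo])) lo = i;
        if (luma(block[i]) > luma(block[hi])) hi = i;
    }

    // 2. Quantize endpoints to 565 and derive the two interpolated colours.
    RGB c0 = { quant565(block[hi].r, 5), quant565(block[hi].g, 6), quant565(block[hi].b, 5) };
    RGB c1 = { quant565(block[lo].r, 5), quant565(block[lo].g, 6), quant565(block[lo].b, 5) };
    std::array<RGB, 4> pal = {
        c0, c1,
        RGB{ (2 * c0.r + c1.r) / 3, (2 * c0.g + c1.g) / 3, (2 * c0.b + c1.b) / 3 },
        RGB{ (c0.r + 2 * c1.r) / 3, (c0.g + 2 * c1.g) / 3, (c0.b + 2 * c1.b) / 3 }
    };

    // 3. Assign each texel the nearest palette entry (the 2-bit indices).
    for (int i = 0; i < 16; ++i) {
        int best = 0;
        for (int j = 1; j < 4; ++j)
            if (dist2(block[i], pal[j]) < dist2(block[i], pal[best])) best = j;
        std::printf("%d ", best);
    }
    std::printf("\n");
    return 0;
}
```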
 
andypski said:
It should be possible to write an even better version than the original S3 version, but I think it would take significant time and research to beat it by any meaningful margin. I know that Simon F. thought that his own S3TC compressor gave better results on some images at least - he commented on this on his home page, although we would respectfully disagree (on the limited basis we have for comparison). ;)
In my defense, it was only on "some images" and it was a long time ago :D.

I think the problem I found with the S3 compressor was that it had some default weights that emphasised the green channel over the red and blue. (IIRC, according to Poynton's colour FAQ, although the eye is less spatially sensitive to blue than to the other primaries, it is still quite sensitive to overall blue levels.) On one of the two test images I tried, the S3 compressor produced an almost monochromatic green result, although I suspect that in general the S3 compressor would be superior.
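
As an aside, a channel-weighted error metric of the kind being described looks something like the sketch below; the weights are illustrative luma-style values, not the S3 tool's actual defaults.

```cpp
// Illustrative channel-weighted squared error. The weights here are
// generic luma-style values, NOT the S3 compressor's real defaults.
#include <cstdio>

struct RGB { double r, g, b; };

// Emphasising green means a green mismatch costs more than the same
// mismatch in red or blue.
double weightedError(const RGB& a, const RGB& b,
                     double wr = 0.299, double wg = 0.587, double wb = 0.114) {
    double dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return wr * dr * dr + wg * dg * dg + wb * db * db;
}

int main() {
    RGB orig{ 128, 128, 128 }, greenish{ 128, 130, 128 }, blueish{ 128, 128, 130 };
    // The same 2-level shift is penalised more in green than in blue.
    std::printf("green shift: %.3f  blue shift: %.3f\n",
                weightedError(orig, greenish), weightedError(orig, blueish));
    return 0;
}
```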

FWIW, having looked at the description of the compression process in the S3 patent, I noticed some of the techniques described (e.g. principal vector analysis) are probably similar to those I used in the VQ tool (which I'd based on the work by Wu).
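
For illustration, the principal-vector step can be done with a covariance matrix and a few power iterations, roughly as sketched below. The details here are assumptions for the example, not the actual S3 or VQ tool code.

```cpp
// Sketch: dominant axis of a block's colour cloud via power iteration.
#include <array>
#include <cmath>
#include <cstdio>

int main() {
    std::array<std::array<double, 3>, 16> px;   // 16 texels, made-up test data
    for (int i = 0; i < 16; ++i) px[i] = { double(i * 10), double(i * 12), double(200 - i * 5) };

    // Mean colour of the block.
    double mean[3] = { 0, 0, 0 };
    for (auto& p : px) for (int c = 0; c < 3; ++c) mean[c] += p[c] / 16.0;

    // 3x3 covariance matrix.
    double cov[3][3] = {};
    for (auto& p : px)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                cov[i][j] += (p[i] - mean[i]) * (p[j] - mean[j]) / 16.0;

    // Power iteration: repeatedly multiply a vector by the matrix and
    // renormalise; it converges to the principal axis of the colour cloud.
    double v[3] = { 1, 1, 1 };
    for (int it = 0; it < 20; ++it) {
        double w[3] = {};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) w[i] += cov[i][j] * v[j];
        double len = std::sqrt(w[0] * w[0] + w[1] * w[1] + w[2] * w[2]);
        if (len < 1e-12) break;                 // degenerate block (all texels equal)
        for (int i = 0; i < 3; ++i) v[i] = w[i] / len;
    }
    std::printf("principal axis: %.3f %.3f %.3f\n", v[0], v[1], v[2]);
    return 0;
}
```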

I suppose I could update my compression comparison page to use the S3 compression tool, but I'm a bit busy (or lazy). Besides, the old version I have has a 'feature' that makes it crash immediately after you output a decompressed .bmp file from a .s3tc input file! :oops: It's a bit tedious.
 
Simon F said:
I think the problem I found with the S3 compressor was that it had some default weights that emphasised the green channel over the red and blue. (IIRC, according to Poynton's colour FAQ, although the eye is less spatially sensitive to blue than to the other primaries, it is still quite sensitive to overall blue levels.) On one of the two test images I tried, the S3 compressor produced an almost monochromatic green result, although I suspect that in general the S3 compressor would be superior.

As I recall I think your analysis is pretty correct here - there can be some rare instances where it doesn't do so well (although we had a large test suite for the compressor to try to avoid this). Certainly I believe that on regions of completely random colour noise (essentially incompressible) it would tend to bias heavily towards green (so blocks that had an overall 'neutral' character in the original would end up greenish), but on real-world images this effect didn't show up.

Actually there was a bug/feature that could cause a very slight (1 lsb) green shift in some circumstances, but I don't think this was ever tracked down and fixed (it can sometimes be seen when compressing greyscales).

Where the S3 compressor was pretty good was in manipulating the endpoints and clustering to squeeze out a better signal-to-noise ratio.
 
andypski said:
Actually there was a bug/feature that could cause a very slight (1 lsb) green shift in some circumstances, but I don't think this was ever tracked down and fixed (it can sometimes be seen when compressing greyscales).
Isn't that almost an inevitable side-effect of using 565 base colours? (Though, frankly, I don't think it's at all important)
Where the S3 compressor was pretty good was in manipulating the endpoints and clustering to squeeze out a better signal-to-noise ratio.
I tried something along those lines as well. It's annoying how you can't analytically compute the optimum end points. :(
 
Simon F said:
andypski said:
Actually there was a bug/feature that could cause a very slight (1 lsb) green shift in some circumstances, but I don't think this was ever tracked down and fixed (it can sometimes be seen when compressing greyscales).
Isn't that almost an inevitable side-effect of using 565 base colours? (Though, frankly, I don't think it's at all important)

I think it may be pretty much inevitable in a compressor that is trying to make the best use of the available colour precision. I have seen compressors that didn't exhibit this overall shift, but they weren't as good at representing the shallow gradients.

Where the S3 compressor was pretty good was in manipulating the endpoints and clustering to squeeze out a better signal-to-noise ratio.
I tried something along those lines as well. It's annoying how you can't analytically compute the optimum end points. :(

Yes - this is an irritating problem with getting an optimal compression solution - balancing error by manipulation of the endpoints, and finding the global minimum rather than a local one.
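
To make the "almost analytic" nature of the problem concrete: once every texel's 2-bit index (i.e. its interpolation weight t in {0, 1/3, 2/3, 1}) is fixed, the best endpoints fall out of a small linear least-squares solve per channel, as in the generic sketch below (not anyone's actual tool). The difficulty is that the assignments themselves jump as the endpoints move, and the 565 rounding adds further discontinuities.

```cpp
// Sketch: optimal endpoints for ONE colour channel, given fixed weights
// t_i, by minimising sum_i ((1 - t_i)*A + t_i*B - p_i)^2.
#include <cstdio>
#include <vector>

struct Fit { double a, b; };                        // endpoint values for one channel

// Solve the 2x2 normal equations for the channel.
Fit fitEndpoints(const std::vector<double>& p, const std::vector<double>& t) {
    double s00 = 0, s01 = 0, s11 = 0, r0 = 0, r1 = 0;
    for (size_t i = 0; i < p.size(); ++i) {
        double u = 1.0 - t[i], v = t[i];
        s00 += u * u; s01 += u * v; s11 += v * v;
        r0 += u * p[i]; r1 += v * p[i];
    }
    double det = s00 * s11 - s01 * s01;
    if (det == 0) return { p.empty() ? 0 : p[0], p.empty() ? 0 : p[0] };
    return { (r0 * s11 - r1 * s01) / det, (r1 * s00 - r0 * s01) / det };
}

int main() {
    // Toy data: one channel of a 4x4 block and a guessed index assignment.
    std::vector<double> chan = { 10, 12, 30, 33, 60, 64, 90, 95, 11, 31, 62, 91, 12, 32, 63, 92 };
    std::vector<double> t    = { 0, 0, 1/3., 1/3., 2/3., 2/3., 1, 1, 0, 1/3., 2/3., 1, 0, 1/3., 2/3., 1 };
    Fit f = fitEndpoints(chan, t);
    std::printf("best endpoints for this assignment: %.2f .. %.2f\n", f.a, f.b);
    return 0;
}
```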
 
Simon F said:
It's annoying how you can't analytically compute the optimum end points. :(
Actually, I did find an analytical solution to most of the problem (everything except the rounding to 565). But I never managed to make it work - which means it is possible (likely?) it wasn't actually a solution :)
 
Dio said:
Simon F said:
It's annoying how you can't analytically compute the optimum end points. :(
Actually, I did find an analytical solution to most of the problem (everything except the rounding to 565). But I never managed to make it work - which means it is possible (likely?) it wasn't actually a solution :)

You've got me very intrigued :?. I don't understand at all how you can do it analytically with all the discontinuities.

For example, with the DC VQ compressor, given a set of image vectors and a chosen partition of that set into two subsets (on either side of a plane perpendicular to the principal axis), you can easily compute two representative vectors for those subsets that give a local minimum in the error. The problem is that you might be able to move the plane (at which point the sets suddenly change, i.e. the error function is discontinuous) and get a lower error.

I just chose to sweep the partition plane through the entire set to find all possible minima (which, luckily, is a linear operation in the number of vectors).
Although it'd be more complicated, I suppose you could do the same with the S3TC encoding, whereby you have "4" subsets.
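
A sketch of the sweep being described (toy data, illustrative only): project the vectors onto the principal axis, sort, then try every split point while keeping running sums, so each candidate split's error comes out in constant time.

```cpp
// Sketch: exhaustive sweep of a 1D split point over sorted projections.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Projections of the image vectors onto the principal axis (toy data).
    std::vector<double> proj = { 0.1, 0.9, 0.15, 0.85, 0.2, 0.8, 0.5, 0.45 };
    std::sort(proj.begin(), proj.end());

    double total = std::accumulate(proj.begin(), proj.end(), 0.0);
    double totalSq = 0;
    for (double p : proj) totalSq += p * p;

    double bestErr = 1e300; size_t bestSplit = 0;
    double leftSum = 0, leftSq = 0;
    // Split k means proj[0..k-1] go to the "left" representative, the rest right.
    for (size_t k = 1; k < proj.size(); ++k) {
        leftSum += proj[k - 1]; leftSq += proj[k - 1] * proj[k - 1];
        double rightSum = total - leftSum, rightSq = totalSq - leftSq;
        size_t nl = k, nr = proj.size() - k;
        // Sum of squared distances to each subset's mean: sum(x^2) - n*mean^2.
        double err = (leftSq - leftSum * leftSum / nl) + (rightSq - rightSum * rightSum / nr);
        if (err < bestErr) { bestErr = err; bestSplit = k; }
    }
    std::printf("best split after %zu elements, error %.4f\n", bestSplit, bestErr);
    return 0;
}
```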
 
I suppose I still had to check a bunch of cases and pick the best, but it was reduced to a maximum of 16, and it was guaranteed to give me the minimum error, so I call it analytical :)

I also had to make a pretty big bunch of assumptions :D but most of them I was making anyway in the progressive refinement version I was trying to replace - reduction to a 1D problem, midpoint is centre of extremities, etc.

Anyway, it never worked, and the progressive refinement was both faster (it rarely executed more than a couple of iterations) and great quality, so I dumped it.
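
For reference, one plausible shape for such a progressive-refinement loop (not necessarily the actual implementation) alternates between assigning values to the nearest of the four palette levels and refitting the endpoints to those assignments. Reduced to one channel for brevity, with the 565 rounding omitted:

```cpp
// Sketch: Lloyd-style progressive refinement of two endpoints, 1D case.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> vals = { 5, 8, 40, 44, 70, 75, 120, 125 };
    double a = vals.front(), b = vals.back();          // initial endpoints: the extremes
    std::vector<int> idx(vals.size(), 0);

    for (int iter = 0; iter < 16; ++iter) {
        // (a) Assign each value to the nearest of the 4 interpolated levels.
        bool changed = false;
        for (size_t i = 0; i < vals.size(); ++i) {
            int best = 0; double bestD = 1e300;
            for (int j = 0; j < 4; ++j) {
                double level = a + (b - a) * j / 3.0;
                double d = std::fabs(vals[i] - level);
                if (d < bestD) { bestD = d; best = j; }
            }
            if (best != idx[i]) { idx[i] = best; changed = true; }
        }
        if (!changed && iter > 0) break;               // converged

        // (b) Refit endpoints by least squares for the current assignment.
        double s00 = 0, s01 = 0, s11 = 0, r0 = 0, r1 = 0;
        for (size_t i = 0; i < vals.size(); ++i) {
            double t = idx[i] / 3.0, u = 1 - t;
            s00 += u * u; s01 += u * t; s11 += t * t;
            r0 += u * vals[i]; r1 += t * vals[i];
        }
        double det = s00 * s11 - s01 * s01;
        if (det != 0) {
            a = (r0 * s11 - r1 * s01) / det;
            b = (r1 * s00 - r0 * s01) / det;
        }
    }
    std::printf("endpoints: %.2f .. %.2f\n", a, b);
    return 0;
}
```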
 
andypski said:
Dave B(TotalVR) said:
Personally I think VQ compression is great. Sure, compressing the textures is an overnight job, which is a pain, but you get so much better compression ratios than with S3TC, especially as the texture gets larger. You can also read and decompress a VQ-compressed texture quicker than you can read an uncompressed texture, which is stunning if you ask me.

You can get about twice the compression ratio of DXTC (for colour-only images), but the difference between 2bpp and 4bpp is not really of much interest in the video card market. Smaller than 4bpp is of interest mainly in areas where memory is at a huge premium (handheld devices etc). As far as video cards go, if VQ's higher compression ratio couldn't win the day back when it was first introduced (when devices might typically have only about 8-16 MB of onboard RAM), then it is hardly likely to be a convincing argument now.

In the consumer 3D space the most interesting aspect of compression is increasing the efficiency of texturing, and DXTC solves that problem just fine - the added benefits of dropping to 2bpp vs. 4bpp in overall texturing efficiency are generally pretty marginal (considering you've already dropped from 24bpp->4bpp, and effectively from 32bpp->4 bpp, since most 3D hardware does not use packed texel formats).

Whether the image quality of VQ at 2bpp is equivalent to DXTC at 4bpp is a long and involved discussion in and of itself, but in most typical cases I believe it to be somewhat lower quality overall (although in the same ballpark). Of course each compression method has different strong and weak points in terms of IQ, and therefore the exact situation varies from image to image. I know that Simon had a comparison of some aspects of this on his homepage where he made some interesting observations on quality/bit.

VQ compression is also not great for hardware, as Simon has touched upon, since you need to hide an additional indirection. Also, for properly orthogonal support you have to be able to use N different sets of VQ palettes, where N is the number of simultaneous textures you support.
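
To illustrate the indirection point: a VQ texture fetch needs two dependent lookups, first the per-block index and then the codebook entry itself. The sketch below uses Dreamcast-style parameters (a 256-entry codebook of 2x2 texels) purely as an example.

```cpp
// Sketch of the extra indirection in VQ texture decoding.
#include <array>
#include <cstdint>
#include <cstdio>

struct RGB565 { uint16_t v; };

// Codebook: 256 entries, each a 2x2 patch of 565 texels.
using Codebook = std::array<std::array<RGB565, 4>, 256>;

RGB565 fetchTexel(const Codebook& cb, const uint8_t* indices,
                  int texWidth, int x, int y) {
    int bw = texWidth / 2;                       // width in 2x2 blocks
    int block = (y / 2) * bw + (x / 2);          // first lookup: which codebook entry
    int within = (y % 2) * 2 + (x % 2);          // position inside the 2x2 patch
    return cb[indices[block]][within];           // second lookup: the codebook itself
}

int main() {
    Codebook cb{};                               // trivial test data
    cb[7] = { RGB565{1}, RGB565{2}, RGB565{3}, RGB565{4} };
    uint8_t indices[4] = { 7, 7, 7, 7 };         // a 4x4 texture = 2x2 blocks
    std::printf("texel(1,1) = %u\n", (unsigned)fetchTexel(cb, indices, 4, 1, 1).v);
    return 0;
}
```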

What about Light Field mapping which gains from both VQ and Texture Compression? Any merit in going this direction?

[Attached images: math31.jpg, math32.jpg]

Light Field Mapping analysis


Intel Research on Light Field Mapping
 
Humus said:
S3TC has served us well for a while now, but I feel the time for the retirement of texture compression is closing in. In the future, with long shaders etc., I think texture compression will be more or less a forgotten feature, as shader execution will be the performance-determining factor and not the memory bandwidth.

I agree, with the increasing flexibility and performance of pixel shaders, procedural textures are the way to go IMO.
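
As a toy illustration of the procedural idea (not a claim about any particular shader model): the surface detail is computed per pixel from the coordinates alone, so no texture memory or bandwidth is spent on it.

```cpp
// Toy "procedural texture": a checker-plus-gradient pattern evaluated
// per fragment from (u, v), standing in for a real shader function.
#include <cmath>
#include <cstdio>

double proceduralShade(double u, double v) {
    int checker = (int(std::floor(u * 8)) + int(std::floor(v * 8))) & 1;
    double gradient = 0.5 + 0.5 * std::sin(6.28318 * u);
    return 0.7 * checker + 0.3 * gradient;
}

int main() {
    for (double v = 0; v < 1.0; v += 0.25) {
        for (double u = 0; u < 1.0; u += 0.25)
            std::printf("%.2f ", proceduralShade(u, v));
        std::printf("\n");
    }
    return 0;
}
```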
 
Bandwidth isn't the only thing. Textures are getting bigger too - I know of applications that consider 2048x2048 restrictive.

For 80% of textures, even DXTC compression makes no visible difference to image quality. Why use 32-bit for those?

Texture compression will become more, not less, important in the future - particularly on highly memory-restricted platforms such as budget cards, laptops, PDAs, phones and consoles... but even when more memory is available, a game developer who chooses not to use compression will make their game look worse, because they will not be able to use textures of the same resolution.
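
Some back-of-the-envelope numbers behind the resolution argument (straightforward arithmetic, nothing vendor-specific):

```cpp
// The same memory budget holds a much larger DXT1 texture than an
// uncompressed one.
#include <cstdio>

int main() {
    const double MiB = 1024.0 * 1024.0;
    int w = 2048, h = 2048;
    double uncompressed = w * h * 4.0 / MiB;      // 32bpp RGBA8 -> 16 MiB
    double dxt1         = w * h * 0.5 / MiB;      // 4bpp = 0.5 bytes per texel -> 2 MiB
    std::printf("2048x2048 RGBA8: %.1f MiB, DXT1: %.1f MiB\n", uncompressed, dxt1);
    return 0;
}
```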
 
gkar1 said:
I agree, with the increasing flexibility and performance of pixel shaders, procedural textures are the way to go IMO.
But how would you code a procedural texture for, say, the earth's surface viewed from space? (and I don't mean something that looks like a random blue-green planet). Sometimes a stored texture is an easier approach.

Having said that, the "Wang Tiles for Image and Texture Generation" paper presented at Siggraph this year would potentially offer a good compromise.
 
Simon F said:
As others have said, there are probably a few times when S3 is better, however I think the biggest problem is that at least one of the FXT1 modes probably infringes S3's patent and so adoption could be risky.

I agree the differences aren't worth adoption of FXT1 at this stage in the API. But I would think that once M$ included the S3 technique in the API, anybody could use it under M$'s license. Who bought S3--VIA? Can't recall at the moment.
 
WaltC said:
I agree the differences aren't worth adoption of FXT1 at this stage in the API. But I would think that once M$ included the S3 technique in the API, anybody could use it under M$'s license.

Which is fine for D3D, but could be an issue with OpenGL.
 
Joe DeFuria said:
Which is fine for D3D, but could be an issue with OpenGL.
I believe you do have to license it for any use outside of D3D.
 