Will Microsoft ever adopt a new compression method?

Brimstone

From what I understood, FXT1 by 3dfx is better than S3TC. Microsoft stuck with S3's technology because they wanted one standard. It seems algorithms are getting improved upon all the time for techniques like AA. Why not texture compression? At some point, will Microsoft have a compelling reason to look for a better technology than S3's?
 
Brimstone said:
From what I understood, FXT1 by 3dfx is better than S3TC. Microsoft stuck with S3's technology because they wanted one standard. It seems algorithms are getting improved upon all the time for techniques like AA. Why not texture compression? At some point, will Microsoft have a compelling reason to look for a better technology than S3's?

The advantages of FXT1 over S3TC are marginal - certainly not worth Microsoft dividing what is otherwise a simple standard and making things harder to get right. After all, it's apparently difficult for some vendors to even implement the current DXTC standard correctly (or, at least, in what could be regarded as an intelligent manner...) ;)

More seriously, new compression formats will certainly have to have tangible and demonstrable advantages over DXTC in either compression ratio or quality to have a chance of being included.
 
Well, a normal-map-friendly compressed format would be nice.

A high-dynamic-range compressed format would also be a good thing.
 
SimonF mentioned that MS will not introduce a new compression format in the foreseeable future. IMHO IMG have tried to "sell" their 2-bit/texel PVRTC to MS.
 
What about this:
PVR-TC is not the same compression scheme as implemented in KYRO (DXTC), nor is it the same as was used in Dreamcast (VQ) - it is instead one which achieves up to twice the compression rates of DXTC - a very important factor for embedded systems where bandwidth is so limited - while maintaining high image quality. It is a compression scheme that we will be incorporating in future products.
(Kristof Beets (PowerVR) in a PowerVR Generations interview)

This is an interesting new texture compression :)!

CU ActionNews
 
mboeller said:
SimonF mentioned that MS will not introduce a new compression format in the foreseeable future. IMHO IMG have tried to "sell" their 2-bit/texel PVRTC to MS.

Sorry, I wasn't fast enough :)! I had to search for the interview :)! I didn't realise you had already posted about PVRTC in the meantime.

CU ActionNews
 
S3TC has served us well for a while now, but I feel the time for texture compression's retirement is closing in. In the future, with long shaders etc., I think texture compression will be more or less a forgotten feature, as shader execution will be the performance-determining factor rather than memory bandwidth.
 
And what about memory space? I don't see compression going anywhere. I can eat 128 MB on current cards even with DXT1 compression; it'll be many, many years before space is no longer an issue. Surely you aren't proposing we stream all our textures over the AGP bus every frame?
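
To put rough numbers on that, here's a back-of-the-envelope sketch in C (my own arithmetic, not anything official; it also ignores DXT1's minimum of one 8-byte block per mip level, which slightly understates the compressed size):

```c
#include <stdio.h>

/* Bytes for a square mip-mapped texture: uncompressed RGBA8
   (32 bits/texel) vs DXT1 (4 bits/texel).  The mip chain adds
   roughly a third on top of the base level. */
static double mip_chain_bytes(int size, double bits_per_texel)
{
    double bytes = 0.0;
    for (int s = size; s >= 1; s /= 2)
        bytes += (double)s * s * bits_per_texel / 8.0;
    return bytes;
}

int main(void)
{
    const double MB = 1024.0 * 1024.0;
    double raw  = mip_chain_bytes(1024, 32.0);
    double dxt1 = mip_chain_bytes(1024,  4.0);
    printf("1024x1024 RGBA8 + mips: %.2f MB\n", raw / MB);  /* ~5.33 */
    printf("1024x1024 DXT1  + mips: %.2f MB\n", dxt1 / MB); /* ~0.67 */
    printf("1024x1024 DXT1 textures per 128 MB: ~%.0f\n", 128.0 * MB / dxt1);
    return 0;
}
```

Even at 4 bits/texel you only fit around 190 mip-mapped 1024x1024 textures in 128 MB, and that's before the framebuffer and vertex data take their share.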
 
I agree that space is the most important part (while being faster IS a nice thing as well).

And you can no longer rely on AGP memory either, as you'll soon run out of base memory as well.

Especially with texture management - where you have a copy of everything in system memory.

And while we are at it, vertex compression is also important for the same reasons. I think it is very important that R300 supports many packed vertex formats. It is the way to go.
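
For illustration, a hypothetical packed layout in C - the field choices here are mine, not anything R300 is confirmed to support:

```c
#include <stdint.h>

/* A "full fat" vertex: everything as 32-bit floats. */
typedef struct {
    float pos[3];      /* 12 bytes */
    float normal[3];   /* 12 bytes */
    float uv[2];       /*  8 bytes */
} VertexFat;           /* 32 bytes per vertex */

/* The same data with packed formats: normals rarely need float
   precision, and texcoords are often fine as normalised 16-bit
   values.  37.5% less space - and bandwidth - per vertex. */
typedef struct {
    float    pos[3];    /* positions kept at full precision     */
    int8_t   normal[4]; /* signed-normalised xyz, 4th byte pads */
    uint16_t uv[2];     /* unsigned-normalised texcoords        */
} VertexPacked;         /* 20 bytes per vertex */
```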
 
Nvidia and ATI both want to be everywhere pixels are utilized. The PowerVR interview brings up a good point about the value of PVR-TC. As you move away from desktops into other markets like embedded systems, bandwidth is going to come at a premium, as will ample amounts of system memory. Cell phones are starting to get some good displays, and when they get around to doing 3D graphics, I would assume texture compression will be of benefit.
 
3D textures are screaming for a storage scheme tailored to them ... not even so much pure compression, since that would only be of marginal help; adaptive-resolution storage would make 3D textures vastly more useful.

Hell, I'd love to see adaptive-resolution 2D textures and render targets too ... given the choice I would take it over better compression any day. Because you have to be able to decode the textures efficiently in real time, pure compression won't give much gain over DXTC; the gains from adaptive-resolution textures would be an order of magnitude larger for things such as lightmaps (orders of magnitude in the case of 3D).
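
To make the idea concrete, here's a rough sketch in C of what adaptive-resolution storage could look like - the layout and names are mine, purely illustrative: a coarse grid where each cell points at a tile stored at whatever resolution that region of the texture actually needs.

```c
#include <stdint.h>

typedef struct {
    uint32_t offset;   /* where this cell's texels start in the pool */
    uint8_t  log2_res; /* tile is (1 << log2_res) texels on a side   */
} TileEntry;

typedef struct {
    int        cells_x, cells_y; /* coarse grid dimensions         */
    TileEntry *grid;             /* cells_x * cells_y entries      */
    uint32_t  *texels;           /* all tiles packed into one pool */
} AdaptiveTexture;

/* Point-sample at normalised (u, v): one read to find the cell's
   tile, a second to fetch the texel at that tile's own resolution.
   Flat regions of a lightmap can use a 1x1 tile; detailed regions
   a full-resolution one - hence the large potential savings. */
static uint32_t sample(const AdaptiveTexture *t, float u, float v)
{
    int cx = (int)(u * t->cells_x);
    int cy = (int)(v * t->cells_y);
    const TileEntry *e = &t->grid[cy * t->cells_x + cx];

    int res = 1 << e->log2_res;
    int tx  = (int)((u * t->cells_x - cx) * res); /* within-cell x */
    int ty  = (int)((v * t->cells_y - cy) * res); /* within-cell y */
    return t->texels[e->offset + (uint32_t)(ty * res + tx)];
}
```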
 
Brimstone said:
From what I understood, FXT1 by 3dfx is better than S3TC.
As others have said, there are probably a few times when S3 is better, however I think the biggest problem is that at least one of the FXT1 modes probably infringes S3's patent and so adoption could be risky.

mboeller said:
SimonF mentioned that MS will not introduce a new compression format in the foreseeable future. IMHO IMG have tried to "sell" their 2-bit/texel PVRTC to MS.
I don't think I said exactly that. :eek:
What I believe I did say was that the last time I demonstrated PVR-TC to a leading member of the DX research team, he said something equivalent to them being reluctant to introduce a new compression scheme unless it had a significant increase in compression rate.
 
It's possible to tweak FXT1 so that the only drawback compared to S3TC is that it selects compression mode per 8x4 pixel block instead of per 4x4 pixel block. Or in other words, the 15 bit vs 16 bit issue can be removed. However, that would bring it even closer to S3's patent.

Still, it would only be a marginal gain.
 
Basic said:
It's possible to tweak FXT1 so that the only drawback compared to S3TC is that it selects compression mode per 8x4 pixel block instead of per 4x4 pixel block. Or in other words, the 15 bit vs 16 bit issue can be removed. However, that would bring it even closer to S3's patent.

Still, it would only be a marginal gain.

Overall it might not be a gain at all - block->block noise (low-frequency noise created by colour choice mismatches at block intersections) is one of the key problems that a high-quality DXTC compressor needs to deal with, and is very difficult to solve optimally. In addition to this, the low-frequency and structured nature of this noise makes it one of the most noticeable artifacts caused by the compression. With a larger block size, the errors from block to block are likely to get larger.
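
To make that concrete, here's a minimal sketch in C (mine, not from any shipping compressor) of the extra boundary term a compressor would have to fold into its error metric. The seam shows up where the compression *error* jumps across a block edge, so that jump is what gets penalised:

```c
#include <stdlib.h>

typedef unsigned char Texel[3]; /* r, g, b */

/* Score the seam between two horizontally adjacent 4x4 blocks:
   compare the compression error (decoded minus source) on the
   right column of the left block against the error on the left
   column of the right block.  Minimising only per-block error
   ignores this term, which is what creates the visible seams.
   Each array holds a 4x4 block in row-major order. */
static long seam_error(const Texel dec_l[16], const Texel src_l[16],
                       const Texel dec_r[16], const Texel src_r[16])
{
    long err = 0;
    for (int y = 0; y < 4; y++) {
        int il = y * 4 + 3; /* texel facing the edge, left block  */
        int ir = y * 4 + 0; /* texel facing the edge, right block */
        for (int c = 0; c < 3; c++)
            err += labs(((long)dec_l[il][c] - src_l[il][c])
                      - ((long)dec_r[ir][c] - src_r[ir][c]));
    }
    return err;
}
```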
 
As a general rule of thumb, the maximum compression you can achieve is dependent largely on the block size, larger blocks giving better compression.

However, blocking artifacts then generally dominate at high compression ratios (this is clearly shown by JPEG, MPEG etc.).

DXTC is a very finely chosen tradeoff...
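
The rule of thumb is easy to put numbers on. Assuming a hypothetical S3TC-like scheme - two 16-bit base colours per block plus a 2-bit index per texel:

```c
#include <stdio.h>

/* The fixed 32-bit base-colour overhead is amortised over more
   texels as the block grows - which is exactly why larger blocks
   compress better, and also why their two base colours then fit
   the block's content worse. */
int main(void)
{
    for (int side = 4; side <= 16; side *= 2) {
        int texels = side * side;
        double bpt = (32.0 + 2.0 * texels) / texels;
        printf("%2dx%-2d block: %.3f bits/texel\n", side, side, bpt);
    }
    return 0; /* prints 4.000, 2.500, 2.125 */
}
```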
 
Dio said:
As a general rule of thumb, the maximum compression you can achieve is dependent largely on the block size, larger blocks giving better compression.
Of course the downside (at least with a scheme such as S3TC) is that the quality goes down as it gets very much harder to represent the increased number of pixels in the block with a limited number of 'base' colours.

The big advantage of compression methods such as JPEG is the fact that the amount of data per block is variable - which, in turn, is exactly what makes them rather unsuitable for texturing, since you lose cheap random access. I suppose in a way, VQ was a 'variable rate' compression method in the sense that areas of low detail would be assigned, on average, a lower number of bits. The only problem is that the HW guys didn't like implementing it because it needed a 2nd cache to hide the indirection.
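
For anyone who hasn't seen it, the indirection looks roughly like this in C (Dreamcast-style VQ; the real hardware twiddles the layout, so treat this as illustrative only):

```c
#include <stdint.h>

/* Dreamcast-style VQ texturing, roughly: one byte per 2x2 texel
   block indexes into a 256-entry codebook of 2x2 blocks of 16-bit
   texels (a 2 KB codebook).  Asymptotically 2 bits/texel, but
   every sample needs two dependent reads - hence the 2nd cache. */
typedef struct {
    uint16_t codebook[256][4]; /* 256 entries of 2x2 16-bit texels */
    uint8_t *indices;          /* (w/2)*(h/2) codebook indices     */
    int      w, h;
} VQTexture;

static uint16_t vq_fetch(const VQTexture *t, int x, int y)
{
    /* Read 1: which codebook entry covers this 2x2 block? */
    uint8_t idx = t->indices[(y / 2) * (t->w / 2) + (x / 2)];
    /* Read 2: the texel itself, from the codebook. */
    return t->codebook[idx][(y & 1) * 2 + (x & 1)];
}
```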
Dio said:
However, blocking artifacts then generally dominate at high compression ratios (this is clearly shown by JPEG, MPEG etc.).
Have you seen the MPEG 4 spec? It has some horrendous post-processing to remove the block artefacts. Effective, I suppose, but not exactly elegant. In many ways it'd be nicer to avoid blocks entirely, but I guess backward compatibility is essential.

Dio said:
DXTC is a very finely chosen tradeoff...
But I wonder what really drove the decisions. I have a feeling it may have been influenced by the number of bits needed to store each block. 64 bits is a nice granularity (which would also fit the typical bus widths of the time), which then equates to the two 16-bit colours plus the 16x2 bits of indexing. The next logical size is 128 bits, which would have required two transactions from the external bus and a cache that was twice as wide.
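
That 64-bit layout is simple enough to show in full. A minimal C decoder for one block, four-colour mode only (the colour0 <= colour1 punch-through mode is omitted for brevity):

```c
#include <stdint.h>

/* One 64-bit DXT1 block: two little-endian 16-bit RGB565 base
   colours, two more interpolated between them, then 16 x 2-bit
   indices (LSB first) select one of the four per texel. */
typedef struct { uint8_t r, g, b; } RGB;

static RGB expand565(uint16_t c)
{
    RGB o;
    o.r = (uint8_t)(((c >> 11) & 31) * 255 / 31);
    o.g = (uint8_t)(((c >>  5) & 63) * 255 / 63);
    o.b = (uint8_t)(( c        & 31) * 255 / 31);
    return o;
}

static void decode_dxt1(const uint8_t block[8], RGB out[16])
{
    uint16_t c0   = (uint16_t)(block[0] | block[1] << 8);
    uint16_t c1   = (uint16_t)(block[2] | block[3] << 8);
    uint32_t bits = (uint32_t)block[4]       | (uint32_t)block[5] << 8
                  | (uint32_t)block[6] << 16 | (uint32_t)block[7] << 24;
    RGB pal[4];
    pal[0] = expand565(c0);
    pal[1] = expand565(c1);
    pal[2].r = (uint8_t)((2 * pal[0].r + pal[1].r) / 3); /* 2/3 c0 + 1/3 c1 */
    pal[2].g = (uint8_t)((2 * pal[0].g + pal[1].g) / 3);
    pal[2].b = (uint8_t)((2 * pal[0].b + pal[1].b) / 3);
    pal[3].r = (uint8_t)((pal[0].r + 2 * pal[1].r) / 3); /* 1/3 c0 + 2/3 c1 */
    pal[3].g = (uint8_t)((pal[0].g + 2 * pal[1].g) / 3);
    pal[3].b = (uint8_t)((pal[0].b + 2 * pal[1].b) / 3);

    for (int i = 0; i < 16; i++)          /* i = y * 4 + x, row-major */
        out[i] = pal[(bits >> (2 * i)) & 3];
}
```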
 
I do think variable-rate compression is interesting and possibly has some potential, but if it starts needing index blocks and the like it's messy - and the surface simplicity of DXTC was a big help in its adoption, I think.

I will correct my statement to 'horrendous blocking artifacts unless you put a horrendous amount of effort into trying to get rid of them' :). I did work with the MPEG2 and preliminary MPEG4 specs a couple of years back but I'd forgotten about the deblocking.

As to what drove the decision - well, only the mathemagician knows that :). I think DXTC is a great format myself...
 
Personally I think VQ compression is great. Sure, compressing the textures is an overnight job, which is an ass, but you get so much better compression ratios than with S3TC, especially as the texture gets larger. You can also read and decompress a VQ-compressed texture quicker than you can read an uncompressed texture, which is stunning if you ask me.

IMO the 4:1 compression ratio of alpha textures with S3TC is pitiful; I remember VQ getting 8:1 compression ratios. There was some comparison article written ages ago - I'll see if I can find it.
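
The quoted ratios work out as follows (assuming 32-bit RGBA source textures for the S3TC alpha formats and 16-bit texels for Dreamcast-style VQ):

```c
#include <stdio.h>

/* DXT3/5 alpha blocks are 128 bits per 4x4 texels; Dreamcast-style
   VQ is one byte of index per 2x2 block (the fixed 2 KB codebook is
   ignored here, as it vanishes for large textures). */
int main(void)
{
    double dxt5_bpt = 128.0 / 16.0; /* 8 bits/texel */
    double vq_bpt   = 8.0 / 4.0;    /* 2 bits/texel */
    printf("DXT5 vs 32-bit RGBA: %.0f:1\n", 32.0 / dxt5_bpt); /* 4:1 */
    printf("VQ   vs 16-bit RGB : %.0f:1\n", 16.0 / vq_bpt);   /* 8:1 */
    return 0;
}
```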


Simon, are you allowed to hint at exactly how PVR-TC works? Is it an extension of VQ, or a whole new thing, or what?
 