Future of texture compression (GPU compression in general)

DemoCoder

Veteran
Now that textures are getting even larger (offline-rendering scale), can be FP, and more and more textures are going to be functional data, it seems to me that the industry is in dire need of new texture compression algorithms. I see a need for multiple types of compression for multiple types of data (e.g. specialized for functional data, normal maps, light maps, etc.).

Moreover, I think as geometry loads increase, we're going to need geometry compression beyond the typical triangle mesh methods now used on GPUs today.

The question is: should we standardize the compression algorithms and let them be implemented in hardware, or should we include a programmable unit attached to the texture units that can implement user-definable compression algorithms? Perhaps combining both would allow the greatest performance and flexibility (standard algorithms for known common texture formats, plus the ability to do your own if needed).

To some extent, this may all be mitigated by the adoption of procedural shaders for a lot of stuff. But it seems to me, there still needs to be a replacement for S3TC.
 
I have a strong recollection of a new texture compression standard mentioned at R300 launch. I'm pretty sure it was part of an ATi presentation (though there is a small chance it may have been part of the nVidia siggraph presentations I was reading at the same time), and my impression was it was intended to address exactly what you bring up. I don't know if it is restricted to DirectX or not, because I was also reading about ATi's proposed shader extensions to OpenGL offering the functionality of DX PS and VS 2.0.

Do you have any info on this new compression standard?
 
I think the method of developing a compression algorithm and letting hardware vendors implement it worked well with DXTC (S3TC), even though that wasn't really S3's intent.

I think implementing it in hardware is much more efficient, so this should be the way to go (assuming an agreement can be made).

The problem with DXTC is that it limits the quality to 16-bit. A higher quality compressed format is needed.
The targets could be:
- standard 24/32bit textures
- high dynamic range textures
- normalized / non-normalized vector maps

Can some of you point me to some research material about texture compression? (Assuming such a thing exists.)
 
The main thing to consider is the time it takes for artists to make these large textures. Hardware folks can come up with all sorts of new TC algorithms but it'll matter not a bit if the developers don't have the time. We need better (=less time consuming) software in the first place.
 
It's called VQ texture compression, and it's much better than S3TC. The tech has been around for ages and was in hardware on the ATi Rage 128 and the Neon 250. It never took off because compression times were so long (though decompression was faster than reading the uncompressed texture).
 
Mulciber said:
Whatever happened to 3dfx's texture compression format?

The only cards to support it (V4, V5) were out of production about six months after being released... so it just faded away without ever really being used. I don't think any upcoming chips (R300, NV30) support it either, so realistically, it is dead.
 
Reznor007 said:
Mulciber said:
Whatever happened to 3dfx's texture compression format?

The only cards to support it (V4, V5) were out of production about six months after being released... so it just faded away without ever really being used. I don't think any upcoming chips (R300, NV30) support it either, so realistically, it is dead.

That's incorrect; the latest Intel integrated graphics chipsets support 3dfx FXT1, which would be the i845G chipset IIRC. They are exposed under OGL.

K~
 
Kristof said:
Reznor007 said:
Mulciber said:
Whatever happened to 3dfx's texture compression format?

The only cards to support it (V4, V5) were out of production about six months after being released... so it just faded away without ever really being used. I don't think any upcoming chips (R300, NV30) support it either, so realistically, it is dead.

That's incorrect; the latest Intel integrated graphics chipsets support 3dfx FXT1, which would be the i845G chipset IIRC. They are exposed under OGL.

K~

So did anyone here ever use it successfully? Is it possible nVidia might incorporate this somehow into their next hardware, i.e. would it even be worth it?
 
Not to bring up the whole FXT1 vs. S3TC debate, but I'm not sure the benefits of FXT1 over S3TC were enough to justify the effort of adding it to the standard set. Plus, I'm sure there were plenty of political reasons why NVIDIA never added the technology.

We may see the same thing with other vendor sponsored technologies... (no names, since there's already a ton of threads on that topic;))
 
FXT1 is not a solution; it's only marginally better than S3TC. It doesn't address the need for higher quality formats, formats that can handle 128-bit FP data, or other functional data.
 
DemoCoder said:
FXT1 is not a solution; it's only marginally better than S3TC. It doesn't address the need for higher quality formats, formats that can handle 128-bit FP data, or other functional data.

Interestingly enough (from what I understand, I never really looked into it), Gigapixel had developed a TC format that was apparently superior to FXT1. I believe it would have appeared as FXT2 or something like that, most likely in Fear. It is possible that NVIDIA could end up pushing this format in the future, possibly with NV30.
 
Well, for some textures, it may well be possible to use software TC on the NV30, simply by using the pixel shaders.

One example would be very useful for any smoothly-varying texture: Fourier series. Since the PS in the NV30 support sines and cosines, it's not at all out of the question to procedurally generate a 2D texture from a few 1D textures (or, for example, generate a 1024x1024 out of a 1024x4 texture). You could simply use the original texture data as the coefficients for the Fourier series. This would produce excellent results for any smoothly-varying texture (think fog, sky, water, etc.).

Where this could be really cool is to use a 1024x1024x4 texture (or something of similar size) to represent a Fourier series for fog filling a level, and have the hardware procedurally generate the 3D texture from the Fourier representation.
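
To make the idea concrete, here is a minimal CPU-side sketch of reconstructing one row of a texture from a handful of Fourier coefficients. The coefficient layout (a few cosine/sine pairs per row, matching the "1024x4" suggestion) is just an assumption for illustration; on real hardware this evaluation would of course live in the pixel shader, not in C.

Code:
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define WIDTH       1024
#define N_HARMONICS 4    /* e.g. the "1024x4" coefficient texture idea */

/* coeffs[k][0] = cosine weight, coeffs[k][1] = sine weight for harmonic k+1 */
void reconstruct_row(const float coeffs[N_HARMONICS][2], float out[WIDTH])
{
    for (int x = 0; x < WIDTH; ++x) {
        float t = (float)x / WIDTH;   /* normalised texture coordinate */
        float v = 0.0f;
        for (int k = 0; k < N_HARMONICS; ++k) {
            float w = 2.0f * (float)M_PI * (float)(k + 1) * t;
            v += coeffs[k][0] * cosf(w) + coeffs[k][1] * sinf(w);
        }
        out[x] = v;   /* smoothly varying, so well suited to sky/fog/water */
    }
}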
 
Yes, procedural textures are going to be big on NV30/R300. However it would still be nice to have a data based compression algorithm, since not everything can be easily procedurally generated.


For example, one of the R300 demos used Debevec's lightfield approach which requires sampling real world scenes. The procedural approach would require radiosity in the shader.
 
Yes, I certainly agree. Didn't the OpenGL 2.0 spec call for a texture processor that would handle all compression/decompression? It would be quite cool if the NV30 had such a processor, and that it was programmable...
 
Ahh texture compression... one of my favourite topics (well, it keeps me off the street) :)

DaveB: I wouldn't say that VQ texture compression was "slow". A 1kx1k texture took about 30sec on my ageing P2-300 to compress to 2bpp (+2k LUT). Quality-wise, it was generally quite reasonable - perhaps a bit less than DXTC on natural images, but then DXTC is twice the size.
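
For anyone unfamiliar with it, decompression in a scheme like that is just a table lookup. A rough sketch, assuming a layout of one 8-bit index per 2x2 block into a 256-entry codebook (256 entries x 4 texels x 2 bytes is the ~2k LUT, and one byte per four texels gives the 2bpp figure); the exact layout here is an assumption for illustration, not necessarily the one the hardware used.

Code:
#include <stdint.h>

typedef struct { uint16_t texel[2][2]; } CodebookEntry;   /* 2x2 cell of 16bpp texels */

/* Fetch one texel of a VQ-compressed texture: the per-block index selects a
   codebook entry, and the low bits of x/y pick a texel inside that 2x2 cell. */
uint16_t vq_fetch(const uint8_t *indices,        /* one 8-bit index per 2x2 block */
                  const CodebookEntry *codebook, /* 256 entries, ~2k              */
                  int width, int x, int y)
{
    int bx = x / 2, by = y / 2;
    const CodebookEntry *e = &codebook[indices[by * (width / 2) + bx]];
    return e->texel[y & 1][x & 1];
}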

re FXT1: Now IANAPL, but two of the modes in FXT1 looked to me to infringe the S3TC patent. It may be a big risk for a company to use it, unless they paid a license fee for the S3 technology.
 
I'm a big texture compression advocate. DXTC is actually much better than you might think - it's not really that restricted to 16-bit, and by the time you take the ability to double the resolution into account it is very impressive. On the majority of images there really isn't a noticeable difference.
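
To see why the "16-bit" criticism is only half true, here is a sketch of decoding one texel of a DXT1 block in its opaque (four-colour) mode: the two endpoints are RGB565, but the two interpolated colours land between them at 8-bit precision, so within a block you get finer gradations than a flat 16-bit format would. This follows the publicly documented S3TC layout, written as plain C purely for illustration.

Code:
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Rgb;

static Rgb expand565(uint16_t c)   /* RGB565 endpoint -> 8-bit channels */
{
    Rgb o = { (uint8_t)(((c >> 11) & 31) * 255 / 31),
              (uint8_t)(((c >>  5) & 63) * 255 / 63),
              (uint8_t)(( c        & 31) * 255 / 31) };
    return o;
}

/* Decode texel (px, py), both in 0..3, of an 8-byte DXT1 block (opaque mode). */
Rgb dxt1_texel(const uint8_t block[8], int px, int py)
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));
    Rgb e0 = expand565(c0), e1 = expand565(c1);

    uint32_t bits = (uint32_t)block[4] | ((uint32_t)block[5] << 8) |
                    ((uint32_t)block[6] << 16) | ((uint32_t)block[7] << 24);
    int idx = (int)(bits >> (2 * (py * 4 + px))) & 3;

    Rgb out;
    switch (idx) {
    case 0:  out = e0; break;
    case 1:  out = e1; break;
    case 2:  out.r = (uint8_t)((2 * e0.r + e1.r) / 3);   /* 2/3 e0 + 1/3 e1 */
             out.g = (uint8_t)((2 * e0.g + e1.g) / 3);
             out.b = (uint8_t)((2 * e0.b + e1.b) / 3); break;
    default: out.r = (uint8_t)((e0.r + 2 * e1.r) / 3);   /* 1/3 e0 + 2/3 e1 */
             out.g = (uint8_t)((e0.g + 2 * e1.g) / 3);
             out.b = (uint8_t)((e0.b + 2 * e1.b) / 3); break;
    }
    return out;
}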

The pity is that most game developers up to now haven't taken the time and effort to go through their textures: compress the ones that show no significant IQ difference, leave uncompressed the ones that don't compress well, and use higher resolution versions of the ones that are compressed with DXTC.

Remember that with DXTC you can double both the width and height of your textures and STILL have them render substantially faster (due to improved efficiency in the GPU) while taking half the space they did at 32-bit. I've hardly seen any texture that doesn't look far better at 2x resolution with DXTC than at 32-bit (that's not to say they don't exist; they do; but they aren't commonly used in games).
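
A quick back-of-the-envelope check of that claim, assuming DXT1 at 4 bits per texel versus uncompressed 32-bit texels:

Code:
#include <stdio.h>

int main(void)
{
    unsigned base = 1024;
    unsigned uncompressed = base * base * 4;              /* 32bpp: 4 bytes/texel */
    unsigned doubled_dxt1 = (2 * base) * (2 * base) / 2;  /* DXT1: 0.5 byte/texel */

    printf("1024x1024 @ 32bpp      : %u bytes\n", uncompressed);  /* 4 MB */
    printf("2048x2048 @ DXT1 (4bpp): %u bytes\n", doubled_dxt1);  /* 2 MB */
    return 0;
}

So even at double the width and height, the DXT1 version is half the footprint of the original 32-bit texture, which is where the bandwidth win comes from.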
 
Well, the textures that look poor with S3TC are generally ones with smooth gradients, such as the infamous Quake3 sky, as well as many other skies in different games.
 
ATI mentioned a neat compression method for their bump maps in one of their papers. Basically they use 16 bits apiece for the x and y components of a bump map, and the pixel shader figures out the z component as z = +sqrt(1 - x*x - y*y) (the z component of a bump map is assumed to be positive, and the normal to be unit length). This gives you nice, high quality bump maps in only 32 bits.
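
Written out as plain C rather than shader code, the reconstruction is just the unit-length constraint solved for z; this is a sketch of the idea described above, not ATI's actual shader.

Code:
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Rebuild a unit normal from its stored x and y components.
   z is assumed positive, so z = +sqrt(1 - x*x - y*y). */
Vec3 decode_normal(float x, float y)
{
    Vec3 n = { x, y, 0.0f };
    float zz = 1.0f - x * x - y * y;
    n.z = sqrtf(zz > 0.0f ? zz : 0.0f);   /* clamp against quantisation/rounding error */
    return n;
}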
 