Now that textures are getting even larger (especially for offline rendering), can be FP (floating-point), and more and more textures are going to hold functional data, it seems to me that the industry is in dire need of new texture compression algorithms. I see a need for multiple formats, each specialized for a type of data (e.g. functional data, normal maps, light maps, etc.).
Moreover, I think as geometry loads increase, we're going to need geometry compression beyond the typical triangle mesh methods (indexed lists and strips) used on GPUs today.
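For reference, here's a minimal sketch of the baseline I mean; the function is made up for illustration, but it shows the index sharing that strips buy you (N triangles from N+2 indices), which is about all the "compression" current mesh submission gives:

```c
#include <stddef.h>
#include <stdint.h>

/* Expand an indexed triangle strip into individual triangles.
   A strip encodes N triangles with only N+2 indices, because each
   new index forms a triangle with the previous two. */
void strip_to_triangles(const uint32_t *strip, size_t n_indices,
                        uint32_t (*tris)[3], size_t *n_tris)
{
    size_t t = 0;
    for (size_t i = 2; i < n_indices; i++, t++) {
        if (i % 2 == 0) {               /* alternate the winding so     */
            tris[t][0] = strip[i - 2];  /* every triangle faces the     */
            tris[t][1] = strip[i - 1];  /* same way                     */
        } else {
            tris[t][0] = strip[i - 1];
            tris[t][1] = strip[i - 2];
        }
        tris[t][2] = strip[i];
    }
    *n_tris = t;
}
```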
The question is: should we standardize the compression algorithms and implement them in hardware, or should we include a programmable unit, attached to the texture units, that can implement user-definable compression algorithms? Perhaps combining both would allow the greatest performance and flexibility: standard algorithms for the common texture formats, plus the ability to roll your own if needed.
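To make the programmable option concrete, here's a rough sketch of the kind of hook I'm imagining. None of these names are a real API; it's just one way a texture unit could hand a compressed block to user code and get a fixed-size texel patch back, so the cache and filtering hardware never have to see the compressed bits:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: the texture unit hands user code one compressed block
   and expects a 4x4 patch of RGBA8 texels back, so everything after
   decode (caching, filtering) can stay fixed-function. */
typedef void (*tex_block_decoder)(const uint8_t *block, size_t block_size,
                                  uint8_t out_rgba[16][4]);

struct texture_desc {
    uint32_t width, height;    /* texels */
    size_t   block_size;       /* bytes per 4x4 block */
    tex_block_decoder decode;  /* user decoder; a real design would also
                                  offer built-in ones (e.g. S3TC) */
    const uint8_t *blocks;     /* compressed blocks, row-major */
};

/* What a texel fetch would reduce to: find the block, run the user
   decoder, pick the texel out of the decoded patch. */
void fetch_texel(const struct texture_desc *t, uint32_t x, uint32_t y,
                 uint8_t rgba[4])
{
    uint32_t blocks_per_row = (t->width + 3) / 4;
    const uint8_t *block =
        t->blocks + ((y / 4) * blocks_per_row + (x / 4)) * t->block_size;

    uint8_t patch[16][4];
    t->decode(block, t->block_size, patch);

    const uint8_t *p = patch[(y % 4) * 4 + (x % 4)];
    rgba[0] = p[0]; rgba[1] = p[1]; rgba[2] = p[2]; rgba[3] = p[3];
}
```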
To some extent, this may all be mitigated by the adoption of procedural shaders for a lot of stuff. But it seems to me there still needs to be a replacement for S3TC.
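For comparison, this is roughly what the fixed-function S3TC (DXT1) path does per 8-byte block. The function names are mine, but the format itself (two RGB565 endpoints plus sixteen 2-bit indices) is the real thing, and it shows why the scheme doesn't extend to FP or functional data: every texel is forced through a 4-entry 8-bit palette:

```c
#include <stdint.h>

/* Expand a 16-bit RGB565 value to 8-bit-per-channel RGB. */
static void rgb565_to_rgb888(uint16_t c, uint8_t rgb[3])
{
    rgb[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);
    rgb[1] = (uint8_t)(((c >>  5) & 0x3F) * 255 / 63);
    rgb[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);
}

/* Decode one 8-byte DXT1 block into a 4x4 patch of RGBA8 texels. */
void dxt1_decode_block(const uint8_t block[8], uint8_t out[16][4])
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));
    uint32_t bits = (uint32_t)block[4]        | ((uint32_t)block[5] << 8)
                  | ((uint32_t)block[6] << 16) | ((uint32_t)block[7] << 24);

    uint8_t palette[4][4];
    rgb565_to_rgb888(c0, palette[0]);
    rgb565_to_rgb888(c1, palette[1]);
    palette[0][3] = palette[1][3] = 255;

    if (c0 > c1) {
        /* Opaque mode: two interpolated colors at 1/3 and 2/3. */
        for (int ch = 0; ch < 3; ch++) {
            palette[2][ch] = (uint8_t)((2 * palette[0][ch] + palette[1][ch]) / 3);
            palette[3][ch] = (uint8_t)((palette[0][ch] + 2 * palette[1][ch]) / 3);
        }
        palette[2][3] = palette[3][3] = 255;
    } else {
        /* Punch-through mode: one midpoint color plus transparent black. */
        for (int ch = 0; ch < 3; ch++)
            palette[2][ch] = (uint8_t)((palette[0][ch] + palette[1][ch]) / 2);
        palette[2][3] = 255;
        palette[3][0] = palette[3][1] = palette[3][2] = palette[3][3] = 0;
    }

    /* Each texel selects one of the four palette entries with 2 bits. */
    for (int i = 0; i < 16; i++) {
        int idx = (bits >> (2 * i)) & 3;
        out[i][0] = palette[idx][0];
        out[i][1] = palette[idx][1];
        out[i][2] = palette[idx][2];
        out[i][3] = palette[idx][3];
    }
}
```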