DXT issue NOT fixed in GF4 Series ?

Should NVIDIA have fixed this properly ?


LLB said:
I saw this topic discussed in another forum, I'm running a Ti-4600 with the 28.32 drivers and I can't seem to get the texture compression errors to manifest themselves.
The sky in Q3A looks the same whether TC is on or off.

Try rendering lightmaps only (r_lightmap 1 in the Q3 console). You should see some weird colors on certain walls (DM9 was a good example, if I remember correctly).

If the artifacts are truly and completely gone, then I would guess that nvidia is not compressing lightmaps as there is no way to avoid artifacts with DXTC on some of the lightmaps.
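
A rough sketch of why those artifacts are unavoidable with DXTC on some lightmaps: a DXT1 block stores two endpoint colours plus 2-bit indices, so every texel in a 4x4 block is forced onto one of only four values on the line between the endpoints, and a smooth lightmap gradient inside one block turns into visible steps. The greyscale values below are made up for illustration, and the 5:6:5 quantisation of the endpoints (which only makes things worse) is ignored.

    /* Illustrative sketch only, not anyone's actual code. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int block[16];
        for (int i = 0; i < 16; ++i)
            block[i] = 100 + i;              /* gentle gradient: 16 distinct levels */

        /* Best case for DXT1: the two endpoints plus the two interpolants,
         * i.e. only four representable levels for the whole block. */
        int lo = block[0], hi = block[15];
        int palette[4] = { lo, hi, (2 * lo + hi) / 3, (lo + 2 * hi) / 3 };

        for (int i = 0; i < 16; ++i) {
            int best = 0;                    /* snap each texel to the nearest palette entry */
            for (int p = 1; p < 4; ++p)
                if (abs(block[i] - palette[p]) < abs(block[i] - palette[best]))
                    best = p;
            printf("%3d->%3d%c", block[i], palette[best], (i % 4 == 3) ? '\n' : ' ');
        }
        return 0;
    }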
 
Simon F said:
Sorry Marco, I understand what you meant now. FWIW I tried some experiments averaging 2x2 pixel groups, storing that 'base' colour per block, and then VQ compressing the delta (with fewer code book entries). It certainly helped with "low rate of change gradients", but I seem to remember I still ran out of VQ codes pretty rapidly. I might go back and try it again some time.
Mean-removed VQ helps a bit, but that's really a very limited form of two-stage residual VQ... it's easy to see that putting restrictions on the first stage by only allowing vectors for which all the components have the same value (in essence that's what you are doing if you separately encode the mean) is very much non-optimal, even for only two stages, from a coding-efficiency point of view (the codebook is smaller of course, so it might be a net win when rendering anyway... I don't know).

With more stages and entropy coding (which of course means the codebooks for each stage would have to be non-maximum-entropy, or entropy coding would be futile), some amazing results are attainable (http://dikkiedik.student.utwente.nl/~marco/tarkin/papers/Miscellaneous/00861005.pdf). This might be unsuited for 3D hardware, but otherwise multistage VQ seems ideal for hardware implementation. I'm curious how many stages and how large a codebook per stage you would need for a general compressor using a single shared codebook for all the textures, and whether you could get much compression at all that way.
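
For anyone trying to picture it, here is a minimal sketch of mean-removed VQ on 2x2 blocks. The tiny codebook, the block values and the brute-force nearest-neighbour search are all made up for illustration; this is not Simon's or MfA's actual setup.

    #include <stdio.h>
    #include <float.h>

    #define BLOCK   4      /* 2x2 block = 4 samples            */
    #define CB_SIZE 8      /* tiny illustrative delta codebook */

    /* Encode one block: store the mean separately, then pick the
     * codebook entry closest to the mean-removed residual. */
    static int encode_block(const float px[BLOCK],
                            const float codebook[CB_SIZE][BLOCK],
                            float *mean_out)
    {
        float mean = (px[0] + px[1] + px[2] + px[3]) / 4.0f;
        float best_err = FLT_MAX;
        int best = 0;

        for (int c = 0; c < CB_SIZE; ++c) {
            float err = 0.0f;
            for (int i = 0; i < BLOCK; ++i) {
                float d = (px[i] - mean) - codebook[c][i];
                err += d * d;
            }
            if (err < best_err) { best_err = err; best = c; }
        }
        *mean_out = mean;
        return best;   /* index into the delta codebook */
    }

    int main(void)
    {
        /* Toy codebook of residual patterns (all roughly zero-mean). */
        const float codebook[CB_SIZE][BLOCK] = {
            { 0,  0,  0,  0}, { 8, -8,  8, -8}, { 8,  8, -8, -8}, { 8, -8, -8,  8},
            {16,  0,  0,-16}, { 0, 16,-16,  0}, {24, -8, -8, -8}, {-24, 8,  8,  8},
        };
        const float block[BLOCK] = {100, 116, 84, 100};  /* a 2x2 block, values made up */

        float mean;
        int idx = encode_block(block, codebook, &mean);
        printf("mean=%.1f codeword=%d\n", mean, idx);
        return 0;
    }

The restriction MfA describes is visible here: coding the mean separately is equivalent to a first VQ stage that only allows flat vectors, which is why an unconstrained two-stage residual VQ can, in principle, do better.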
 
I'm running a Ti-4600 with the 28.32 drivers and I can't seem to get the texture compression errors to manifest themselves.
The sky in Q3A looks the same whether TC is on or off.

From your screenshots, something is incorrect. There will be a very noticeable change, especially in Q3, with TC OFF and ON.

The most common mistake is having the S3TC tweak set in NVMax or RivaTuner, or having a similar tweaker disabling TC in OpenGL entirely.

The second most common mistake is using the incorrect TC cvar in Quake3. id Software did change the cvar between patches, so without knowing which cvar your version uses, it's easier to just throw both in an autoexec.cfg so it will work under all conditions:
set r_ext_compressed_textures "0" // Texture compression OFF
set r_ext_compress_textures "0" // Texture compression OFF

//set r_ext_compressed_textures "1" // Texture compression ON
//set r_ext_compress_textures "1" // Texture compression ON

Just paste those 4 lines into your autoexec.cfg and move the "//" comment markers to the pair you want ignored (shown above with TC OFF). Quake3 will ignore whichever cvar doesn't apply to your version.

jb-
You can bind a key for screenshots: just hit shift-~ for the console and type something like BIND F12 SCREENSHOT. FRAPS screenshots seem to work well, as does HyperSnap. Just keep in mind that with the 8500, some AA modes will result in only a partial buffer capture: some of the newer ATI drivers hand screenshot proggies a pointer to the AA super-bitmap rather than the final display buffer, so what you get is the upper-left 1/4 of the display.
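
A toy sketch of why that comes out as the upper-left quarter (illustrative only, not actual driver or FRAPS code): copying a display-resolution rectangle out of a 2x2-supersampled surface only ever touches its top-left corner.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        enum { W = 4, H = 4 };              /* display resolution (tiny, for demo) */
        char aa[2 * H][2 * W];              /* 2x2 supersampled back buffer        */
        char shot[H][W];                    /* what the capture tool writes out    */

        /* Label the supersampled buffer by quadrant so the result is obvious. */
        for (int y = 0; y < 2 * H; ++y)
            for (int x = 0; x < 2 * W; ++x)
                aa[y][x] = "ABCD"[(y >= H) * 2 + (x >= W)];

        /* The buggy path: copy W pixels per row for H rows straight from the
         * super-bitmap instead of from the downsampled final buffer. */
        for (int y = 0; y < H; ++y)
            memcpy(shot[y], aa[y], W);

        for (int y = 0; y < H; ++y)
            printf("%.*s\n", W, shot[y]);   /* prints only 'A's: the upper-left 1/4 */
        return 0;
    }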
 
The new dithering present in the GeForce4 isn't always better than what I had on my old GeForce DDR. The best example for this is in UT's KGalleon level. You can see it to some extent in this screenshot posted by LLB:

http://webpages.charter.net/madman11/UT-S3TC1.jpg

But you really have to see it in person to know just how bad it actually is. Personally, I really don't understand why nVidia is still decompressing the textures in 16-bit. It most certainly looks worse than decompressing them in 32-bit, and the hardware is definitely available that would decompress them in 32-bit. After all, DXT3 uses the exact same compression algorithm as DXT1, but with a 4-bit uncompressed alpha channel. Since DXT3 is decompressed in 32-bit, the hardware is capable of decompressing DXT1 in 32-bit.
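
As a rough illustration of what 16-bit versus 32-bit decompression means for the interpolated colours; this is a simplified sketch with made-up endpoint values, not nVidia's actual decompression path:

    #include <stdint.h>
    #include <stdio.h>

    /* Expand an RGB565 endpoint to 8-bit channels (the "32-bit" path). */
    static void rgb565_to_888(uint16_t c, int *r, int *g, int *b)
    {
        *r = ((c >> 11) & 0x1F) * 255 / 31;
        *g = ((c >>  5) & 0x3F) * 255 / 63;
        *b = ( c        & 0x1F) * 255 / 31;
    }

    int main(void)
    {
        uint16_t c0 = 0x8410, c1 = 0x7BEF;  /* two mid-grey RGB565 endpoints */

        /* "32-bit" style: expand endpoints to 8:8:8, then interpolate. */
        int r0, g0, b0, r1, g1, b1;
        rgb565_to_888(c0, &r0, &g0, &b0);
        rgb565_to_888(c1, &r1, &g1, &b1);
        printf("8-bit interpolant: %d %d %d\n",
               (2 * r0 + r1) / 3, (2 * g0 + g1) / 3, (2 * b0 + b1) / 3);

        /* "16-bit" style: interpolate per 5:6:5 channel, so the result is still
         * quantised to 5/6/5 bits before being expanded for display. */
        int r16 = (2 * ((c0 >> 11) & 0x1F) + ((c1 >> 11) & 0x1F)) / 3;
        int g16 = (2 * ((c0 >>  5) & 0x3F) + ((c1 >>  5) & 0x3F)) / 3;
        int b16 = (2 * ( c0        & 0x1F) + ( c1        & 0x1F)) / 3;
        printf("565 interpolant:   %d %d %d\n",
               r16 * 255 / 31, g16 * 255 / 63, b16 * 255 / 31);

        return 0;
    }

Once the interpolants are quantised back to 5:6:5, neighbouring blocks of a smooth gradient land on visibly different levels, which is presumably the banding the dithering is then trying to hide.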

Now, the question remains as to whether or not it is possible through the driver to have the hardware decompress DXT1 in 32-bit, or whether it is hard-wired to only work in 16-bit.

It does really get to me that nVidia chose to implement the rather crappy texture-space dithering algorithm, as it looks terrible for textures whose texels are large in comparison to screen pixels (usually sky textures, but it's also possible in lightmap textures).
 
Sharkfood said:
I'm running a Ti-4600 with the 28.32 drivers and I can't seem to get the texture compression errors to manifest themselves.
The sky in Q3A looks the same whether TC is on or off.

From your screenshots, something is incorrect. There will be a very noticeable change, especially in Q3, with TC OFF and ON.

For what it's worth, I noticed this; note the Q3A version numbers.

LLB

http://webpages.charter.net/madman11/Q3A1.11.jpg
http://webpages.charter.net/madman11/Q3A1.31.jpg
 
Chalnoth said:
But you really have to see it in person to know just how bad it actually is. Personally, I really don't understand why nVidia is still decompressing the textures in 16-bit. It most certainly looks worse than decompressing them in 32-bit, and the hardware is definitely available that would decompress them in 32-bit. After all, DXT3 uses the exact same compression algorithm as DXT1, but with a 4-bit uncompressed alpha channel.
Earlier in this thread, OpenGL guy suggested that Nvidia chips might have a texel cache that stores decompressed textures and that the DXT1 (externally 4bpp) textures are only decompressed to 16bpp accuracy. It would seem that the DXTn (n>1) formats (externally 8bpp) decompress to 32-bit.
If that is the case, then I presume they do it to increase the hit rate on this decompressed cache when using DXT1 and thus maintain the overall bandwidth advantage of the 4bpp format over the 8bpp. (Shrug)
If this is true, then it's a bit counterproductive, because it only forces developers to use an 8bpp format and thus increase external and internal bandwidth usage.
Now, the question remains as to whether or not it is possible through the driver to have the hardware decompress DXT1 in 32-bit, or whether it is hard-wired to only work in 16-bit.
If it is a hardware issue then I suppose the drivers could easily convert the DXT1 to DXTn textures on-the-fly, but perhaps the doubling of memory usage may be an issue?

All IMHO of course.
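
To put some back-of-the-envelope numbers on the cache-footprint trade-off discussed above (assuming, purely for illustration, a texel cache organised around 4x4 blocks):

    #include <stdio.h>

    int main(void)
    {
        const int texels = 4 * 4;   /* one 4x4 block */

        printf("compressed   DXT1 : %d bytes/block\n", 8);
        printf("compressed   DXT3 : %d bytes/block\n", 16);
        printf("decompressed 16bpp: %d bytes/block\n", texels * 2);  /* 32 */
        printf("decompressed 32bpp: %d bytes/block\n", texels * 4);  /* 64 */
        return 0;
    }

On those numbers, decompressing DXT1 to 32bpp doubles its footprint in such a cache, and converting DXT1 to a DXTn format in the driver doubles the external memory usage in the same way.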
 
Simon F said:
Earlier in this thread, OpenGL guy suggested that Nvidia chips might have a texel cache that stores decompressed textures and that the DXT1 (externally 4bpp) textures are only decompressed to 16bpp accuracy.

By the looks of this thread concerning XBox & NGC compression, it would seem that NV2A only stores uncompressed textures in the cache, so I would assume this would be similar for all the NV2x series.
 
LLB-
For what it's worth, I noticed this; note the Q3A version numbers.

Go double check the two items I previously mentioned. ALL the GF4's I have here look like this:
q3gf4.txt


with texture compression on. This includes the Gainward, PNY, Visiontek and Leadtek boards. (V1.27g of Q3)

It would be interesting if, somewhere in the 1.31 betas, id Software had fudged the TC settings to completely disable use of TC, but somehow I doubt it.

Cheers,
-Shark
 