Future of texture compression (GPU compression in general)

ATI mentioned a neat compression method for their bump maps in one of their papers. Basically they use 16 bits each to express the x & y components of a bump map, and the pixel shader figures out the z component with z = +sqrt(1 - x*x - y*y) (the z component of a bump map is assumed to be positive). This gives you nice high quality bump maps in only 32 bits.

Actually NV2X supports this. I'm not sure if this is exposed through D3D though.
 
That would make sense, and it is essentially lossless compression, since normals in a normal map are normalized (they always have a length of one). You only need two numbers to fully describe the direction of a normal, as long as the sign of z is known.

That's a fairly nice 2:1 compression ratio for completely lossless compression.
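To make the reconstruction concrete, here is a minimal sketch in Python. The unsigned 16-bit encoding convention (mapping [0, 65535] to [-1, 1]) is an assumption for illustration; real hardware formats may use signed or differently-scaled values.

```python
import math

def decode_normal(x16: int, y16: int) -> tuple:
    """Decode a two-component 16-bit normal back to a unit 3-vector.

    x16/y16 are unsigned 16-bit values mapped from [0, 65535] to
    [-1.0, 1.0]; z is reconstructed as +sqrt(1 - x*x - y*y), i.e.
    z is assumed positive, as in tangent-space normal maps.
    """
    x = x16 / 65535.0 * 2.0 - 1.0
    y = y16 / 65535.0 * 2.0 - 1.0
    # max() clamps tiny negative values caused by quantization error
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

On hardware with this format the same math happens for free on the texture read; in a pixel shader you would do the subtract and square root yourself.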
 
ERP said:
ATI mentioned a neat compression method for their bump maps in one of their papers. Basically they use 16 bits each to express the x & y components of a bump map, and the pixel shader figures out the z component with z = +sqrt(1 - x*x - y*y) (the z component of a bump map is assumed to be positive). This gives you nice high quality bump maps in only 32 bits.

Actually NV2X supports this. I'm not sure if this is exposed through D3D though.

Wouldn't have to be exposed. You could just pass it in as a 32bit texture and do the math in the pixel shader.

Now that I think of it, couldn't you also do 8-bit palettized textures with a dependent read? That could be quite a space savings.
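The idea above can be sketched as two chained lookups: the first read fetches an 8-bit index, and a dependent read uses that index as a coordinate into a 256-entry palette texture. This is a hypothetical illustration, not any real API; the function and data layout are made up for clarity.

```python
def sample_palettized(index_texture, palette, u, v):
    """Emulate an 8-bit palettized texture via a dependent read.

    First read: fetch the 8-bit index at integer texel (u, v).
    Second (dependent) read: use that index to look up the final
    RGBA color in a 256-entry palette.
    """
    idx = index_texture[v][u]   # 8-bit index, 0..255
    return palette[idx]         # palette: list of 256 RGBA tuples

# Space savings: W*H bytes of indices plus 256*4 bytes of palette,
# versus W*H*4 bytes for a raw 32-bit texture -- roughly 4:1 for
# anything larger than a few texels.
```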
 
The format in question does all the math for you on the texture read.
You supply a texture with two 16-bit elements (I think there is an 8-bit version also) and the texture read returns a normalized 3-vector.
It's exposed in the XBox version of DX, but I'm unsure if it's directly available in DX8 on a PC.
 
ERP said:
The format in question does all the math for you on the texture read.
You supply a texture with two 16-bit elements (I think there is an 8-bit version also) and the texture read returns a normalized 3-vector.
It's exposed in the XBox version of DX, but I'm unsure if it's directly available in DX8 on a PC.
NV20, NV25, and NV2a (Xbox) have 2x16 bit normal formats. NV25 and NV2a also have a 2x8 bit normal format, which is enough precision for most game uses. These are not exposed in DX8 on the PC, though they are available through OpenGL extensions.

--Grue
 
fresh said:
ATI mentioned a neat compression method for their bump maps in one of their papers. Basically they use 16 bits each to express the x & y components of a bump map, and the pixel shader figures out the z component with z = +sqrt(1 - x*x - y*y) (the z component of a bump map is assumed to be positive). This gives you nice high quality bump maps in only 32 bits.
Nothing new there. The Dreamcast compressed normals to 2 dimensions as well.
 