3Dc

I would prefer it if normal maps from a PolyBump tool were used; they're more likely to show any real differences. A noisy, randomly generated normal map makes differences harder to judge.

If you go to ati.com/developer, you can download their NormalMapper tool, which includes a sample hi-res polybump-style normal map (carnormal) plus exporters for Maya and 3DS Max to export hi-res geometry models as low-res models with normal maps.
 
One of the problems with storing just x and y is that you only use about 78% of the available range of values: in effect, (x, y) can only fall within a circle of radius 0.5. You also lose precision towards the extremes, i.e. the more forward-facing normals get more of the range (you could overcome this by storing angles instead).
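The ~78% figure is just the area of a circle of radius 0.5 relative to the unit square the channels can address; a quick check (sketch in C, function name is mine):

```c
/* Fraction of the [0,1]x[0,1] storage range actually reachable when the
   x,y components of a unit normal are remapped into a circle of radius 0.5. */
double range_used(void)
{
    const double pi = 3.14159265358979323846;
    return pi * 0.5 * 0.5;   /* area of the circle / area of the square ~= 0.785 */
}
```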

zscale is used to push the usable range out a bit, into the corners as it were.

Implementing 3Dc-style decompression in a pixel shader should be possible. You could split the 3Dc texture into an A8R8G8B8 texture containing the four min/max values for each block and a separate A4L4 (or whatever?) texture containing two 4-bit index values for each pixel.
That's two texture samples and some maths to calculate XYZ. This is 2 bits per pixel more expensive than 3Dc.

Rob
 
Colourless said:
For a purely quality comparison I would 'emulate' 3dc by putting the X and Y components into the alpha channels of 2 separate DXT5 textures.
That's probably nearly a practical approach for games - you could put the usual colour texture map in the RGB part of the DXT5 and then stuff half the data, e.g. the "X" component, of your normal map into the alpha. The pixel shader can be used to extract the components.

Since you're often going to have a 1:1 relationship between normal map and colour map, this would be fairly texture cache friendly.

I'm not sure, yet, what you do to get the Y component, but I'm sure someone can think of something. :)
 
How about:
Code:
   float3 SiComputeNormalATI2N(float4 t, float zscale)
   {  
      t = t*2.0-1.0;

      float3 v;
      v.x = t.a;
      v.y = t.g + t.b/64;        // <== This is the interesting part.
      v.z = sqrt(saturate(1.0 - dot(v.xy, v.xy)));  // clamp guards quantisation error

      v.z *= zscale;
      return normalize(v);
   }
Then store the least significant bits of the Y channel in B.
This would increase the precision of the base value for the Y component to 5+6=11 bits, provided that the decompression is done at high enough precision. You would need a compressor designed for this use, though.
 