2 New Hi-Rez Ruby pictures

Basic,
<thinking stream of consciousness>, here's an idea

Assume 4x4 block, 128-bit output.

Store 2 base normals "losslessly" @ 16 bits each (I use quotes because I mean: store with 0.01-0.02 radian resolution, quantized on the unit sphere)

That leaves 96 bits for 16 texels, or 6 bits per texel. Use spherical interpolation to map 3 bits to each major sphere axis between the two end-points. This yields 64 different normal directions per block, uniformly distributed over a cross section of the sphere, hopefully one that is small.
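
Roughly, the per-texel decode could look like the sketch below (in C). To be clear, this is just my reading of the idea: it assumes the two 16-bit base normals have already been dequantized to unit vectors, that the 6 bits split into one 3-bit coordinate along the arc between the endpoints and one across it, and that the sideways spread is half the endpoint separation. All of those are arbitrary choices, not a spec.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float vdot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 vcross(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

static vec3 vnormalize(vec3 a)
{
    float l = sqrtf(vdot(a, a));
    vec3 r = { a.x/l, a.y/l, a.z/l };
    return r;
}

/* Spherical interpolation between two unit vectors. */
static vec3 slerp(vec3 a, vec3 b, float t)
{
    float d = vdot(a, b);
    if (d > 0.9999f) return a;                    /* nearly parallel */
    float ang = acosf(d);
    float wa = sinf((1.0f - t) * ang) / sinf(ang);
    float wb = sinf(t * ang) / sinf(ang);
    vec3 r = { wa*a.x + wb*b.x, wa*a.y + wb*b.y, wa*a.z + wb*b.z };
    return r;
}

/* n0, n1: the two dequantized base normals.  index6: the texel's 6-bit code. */
vec3 decode_texel(vec3 n0, vec3 n1, unsigned index6)
{
    float u = (float)( index6       & 7) / 7.0f;         /* position along the n0->n1 arc */
    float v = (float)((index6 >> 3) & 7) / 3.5f - 1.0f;  /* signed offset across the arc  */

    vec3 along = slerp(n0, n1, u);

    /* Pole of the great circle through n0 and n1; slerping toward it
       moves the sample perpendicular to the arc.  Degenerates if n0 == n1. */
    vec3 pole = vnormalize(vcross(n0, n1));
    if (v < 0.0f) { pole.x = -pole.x; pole.y = -pole.y; pole.z = -pole.z; v = -v; }

    /* Sideways spread = half the endpoint separation (an arbitrary choice),
       expressed as a fraction of the 90 degrees between 'along' and 'pole'. */
    float spread = acosf(vdot(n0, n1)) / 3.14159265f;
    return slerp(along, pole, v * spread);
}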

If you're willing to make assumptions about frequency, you can do better and support a 64-bit method.

Store 1 16-bit normal. That leaves 48 bits for 16 texels, or 3 bits per texel. Define a neighborhood around the central normal over which you expect 7 different normals to fall, and use the 3-bit index to look up into an implicit table around this center.
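
For the lookup itself, something like the sketch below: given the dequantized center normal, entry 0 of the implicit table is the center itself, and entries 1..7 are seven directions tilted away from it by a fixed angle, evenly spaced around it. The tilt angle and that particular layout are my assumptions; the real table would presumably be tuned.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float vdot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  vcross(vec3 a, vec3 b) { vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static vec3  vnormalize(vec3 a) { float l = sqrtf(vdot(a, a)); vec3 r = { a.x/l, a.y/l, a.z/l }; return r; }

/* center: dequantized 16-bit base normal.  index3: the texel's 3-bit code.
   tilt_radians: how far the seven off-center entries lean away from the center. */
vec3 decode_neighborhood(vec3 center, unsigned index3, float tilt_radians)
{
    if (index3 == 0) return center;

    /* Tangent basis around the center normal. */
    vec3 up = (fabsf(center.z) < 0.9f) ? (vec3){0.0f, 0.0f, 1.0f} : (vec3){1.0f, 0.0f, 0.0f};
    vec3 t  = vnormalize(vcross(up, center));
    vec3 b  = vcross(center, t);

    /* Entries 1..7: a small circle of radius tilt_radians around the center. */
    float phi = (float)(index3 - 1) * (2.0f * 3.14159265f / 7.0f);
    float s = sinf(tilt_radians), c = cosf(tilt_radians);
    vec3 r = {
        c*center.x + s*(cosf(phi)*t.x + sinf(phi)*b.x),
        c*center.y + s*(cosf(phi)*t.y + sinf(phi)*b.y),
        c*center.z + s*(cosf(phi)*t.z + sinf(phi)*b.z),
    };
    return r;  /* unit length, since center/t/b are orthonormal */
}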

The latter is only useful if you have good tools to detect when it can and can't be used to produce good results.

Using the 'neighborhood' approach with the 128-bit block, you could also store 1 16-bit normal and be left with 7 bits per texel, i.e. 128 normals in your defined neighborhood (11-22.5 degrees?)

Of course, all of these methods overlook redundancy in the data topology, which might be better served by using VQ-like techniques.
 
DemoCoder said:
Of course, all of these methods overlook redundancy in the data topology, which might be better served by using VQ-like techniques.

apropos, speaking of redundancies, any unit-sphere (i.e. polar) representation of normals would suffer from redundancies, as long as the normal's uncompressed storage format is represented by an even number of bits*. that's just a side note to your discussion, though.

* it stems from the fact that in a polar representation, using angular components of full range ( [0, 2pi) ) introduces a redundancy by itself: you can represent every point on the surface of the unit sphere by exactly two different pairs of azimuth and declination. to get rid of that redundancy you need to restrict one of the polar coords to half the range of the other (i.e. [0, pi) ), hence if you want equal precision for both coords (very likely) one of your coordinates would need to be one bit less than the other, i.e. one of the coordinates would be of odd bitness, hence the whole (raw) storage format would need to be of odd bitness itself.
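
to make that concrete, with the usual convention (whichever convention a given format actually uses, the counting argument is the same):

x = \sin\varphi\cos\theta, \quad y = \sin\varphi\sin\theta, \quad z = \cos\varphi

(\theta,\ \varphi) \quad\text{and}\quad \big( (\theta+\pi)\bmod 2\pi,\ \ 2\pi-\varphi \big)

land on the same (x, y, z), since \sin(2\pi-\varphi) = -\sin\varphi and \cos(\theta+\pi) = -\cos\theta (the two sign flips cancel in x, and likewise in y), while \cos(2\pi-\varphi) = \cos\varphi leaves z unchanged.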
 
At last, the computer I wrote this message on can reach the net again. Sorry for the delay.

(My interpretation of) your first method:
Two 16-bit normals per block, the distance between them selects the size of a 2D "cloud" of normals (approximately?) on the unit sphere. A 6-bit index per pixel selects a normal from the "cloud".

Should work well, but it's of course double the size of 3Dc.
Using two normals instead of a center normal + standard deviation leaves a feeling that more could be done.


Second method:
You have a constant std dev on the "normal cloud". That's a rather strong limitation.
Maybe the std dev could be deduced from the diff to neighbouring blocks' base normals, but that would create dependencies between blocks, which makes everything harder (compression and decompression).

I think it's possible to "steal" one bit each from three normal indices, to form a std dev factor. This can magnify the std dev in eight steps (in powers of two). The three texels that only get two-bit indices can use them to select one of the neighbouring pixels' indices.
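
Something like the sketch below, maybe. All the block-layout choices in it are arbitrary (which three texels donate their top bit, which texels a 2-bit code may copy from, and the base tilt); it's only meant to show the bit accounting.

#include <stdint.h>

static const int donor[3]        = { 5, 10, 3 };   /* hypothetical donor texels          */
static const int copy_from[3][4] = {               /* per-donor copy sources, picked so  */
    { 1, 4,  6,  9 },                              /* they exist in the 4x4 block and    */
    { 6, 9, 11, 14 },                              /* are never donors themselves        */
    { 2, 7,  6,  1 },
};

/* stored[16]: the per-texel 3-bit fields as read from the block.
   out_index[16]: the effective 3-bit neighbourhood index per texel.
   Returns the block's std-dev-like tilt in radians. */
float decode_block_indices(const uint8_t stored[16], uint8_t out_index[16])
{
    /* Reassemble the three stolen top bits into a 3-bit exponent. */
    unsigned scale = ((stored[donor[0]] >> 2) & 1)
                   | ((stored[donor[1]] >> 2) & 1) << 1
                   | ((stored[donor[2]] >> 2) & 1) << 2;

    for (int i = 0; i < 16; i++)
        out_index[i] = stored[i] & 7;

    /* Donors only kept their low 2 bits: use them to borrow another
       texel's index instead of having one of their own. */
    for (int d = 0; d < 3; d++)
        out_index[donor[d]] = out_index[copy_from[d][stored[donor[d]] & 3]];

    /* Eight power-of-two steps around some base tilt (the base is a guess). */
    const float base_tilt = 0.01f;                  /* radians */
    return base_tilt * (float)(1u << scale);        /* 0.01 .. 1.28 rad */
}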

But this all gets rather complex. And there's the question of whether it will give much benefit over 3Dc.


VQ compression:
I don't think that will work so well on a two-channel texture like a normal map. VQ compression (at least as done by PVR, with a LUT) works because in a three-channel colour texture, big parts of the colour volume are completely unused. But a normal map often has normals distributed over its whole 2D range, which means that the LUT would need ~all possible values.


darkblu:
It's actually possible to use an even number of bits to address samples evenly distributed over a 2:1 rectangular area. (And in the same way, use an odd number of bits for a square area.) But I'd say that's not so important here, because an even angular distribution (in "yaw" & "pitch") would be a too uneven distribution of the normals. (Unnecessarily high precision around the poles.)
 
Basic said:
But I'd say that's not so important here, because an even angular distribution (in "yaw" & "pitch") would be a too uneven distribution of the normals. (Unnecessarily high precision around the poles.)

naturally. is there actually a normal-map representation in any of the APIs which achieves an even linear distribution of the normals stored in polar space (that's a genuine question, as i have not been following the standards closely lately)?
 
The best I've seen is probably the Java3D one that DemoCoder linked to.

It uses polar coordinates, but just on a segment that doesn't come near the "pole". And then some extra bits to say where on the unit sphere this segment is placed.

It does of course have a different drawback though: 25% of the symbol space is unused :?
 
Hats off to you DemoCoder. :)
Except for exactly how the 8x8 interpolated samples would be placed, you were correct with your first method. I don't think I would call that 4:1 compression though, more like 2:1 (16->8 bpp).

With a high quality input to the compressor, high enough precision in the decompression, and a favourable map, they could squeeze in 10-11 bits per component though. So I could agree with a bit more than 2 times compression.
 
First off - this is all subjective. To each his/her own. That being said, I'm not impressed.

I found the Dusk image much more impressive. Just my opinion...
 