darkblu said:
hmm, it just caught my attention: the format either does not have a designated integer bit or it does not span the [-2, 2] range. if it had a designated integer bit the format's range would be (-2, 2), or precisely, given the 10 fraction bits, [-1.9990234375, 1.9990234375].
Actually, the range is [-2, 2). From the OpenGL Extension Specification for CineFX:

Additionally, many arithmetic operations can also be carried out at 12-bit fixed point precision (fx12), where values in the range [-2,+2) are represented as signed values with 10 fraction bits.
In the 12-bit fixed-point (fx12) format, numbers are represented as signed 12-bit two's complement integers with 10 fraction bits. The range of representable values is [-2048/1024, +2047/1024].
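To make that concrete, here is a minimal sketch in C of the fx12 encoding: signed 12-bit two's complement with 10 fraction bits, so the representable values run from -2048/1024 = -2.0 up to 2047/1024 = 1.9990234375 (the number darkblu computed). The helper names fx12_from_float and fx12_to_float are made up for illustration, not part of any spec or driver API:

/* Sketch of the fx12 format: signed 12-bit two's complement, 10 fraction bits. */
#include <stdio.h>
#include <stdint.h>

typedef int16_t fx12;        /* only the low 12 bits are meaningful */

#define FX12_MIN (-2048)     /* -2048/1024 = -2.0          */
#define FX12_MAX ( 2047)     /*  2047/1024 =  1.9990234375 */

static fx12 fx12_from_float(float f)
{
    int v = (int)(f * 1024.0f);          /* 10 fraction bits */
    if (v < FX12_MIN) v = FX12_MIN;      /* clamp to [-2, 2) */
    if (v > FX12_MAX) v = FX12_MAX;
    return (fx12)v;
}

static float fx12_to_float(fx12 x)
{
    return (float)x / 1024.0f;
}

int main(void)
{
    printf("max = %.10f\n", fx12_to_float(FX12_MAX));   /* prints  1.9990234375 */
    printf("min = %.10f\n", fx12_to_float(FX12_MIN));   /* prints -2.0000000000 */
    return 0;
}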
Anyway, you raise an interesting point: until now in graphics, 255 played the role of 1.0, so 255*a = a. With the new shaders you actually get "holes" in your numeric range when you mix 0-255 ranges with fixed point arithmetic (what I call "off by one" problems):
Imagine you have a one-byte-per-component texture and you read a texel value of 255 in your fixed point shader. The usual thing is to convert texels with a byte value of 255 into 1.0 (fixed point), so you will never be able to get a texel value of 0.FF in your fixed point pipeline.
The problem is that if you read two texel values, one with 254 (0.FE in fixed point) and the other with 1 (0.01 in fixed point), and add them, you get 0.FF and not 1.0. This means that a 255 read from a texture is not the same as 254 + 1 read from textures. You might think this is a minor problem, but if the app is alpha-testing against that value (index shadowmaps, for example), or if you accumulate enough times, you are going to see very weird artifacts.
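Here is a minimal sketch of that hole in C, assuming the texel-to-fixed-point conversion maps byte 255 to 1.0 and every other byte b to b/256 (the 0.FE and 0.01 values above). The helper name texel_to_fx12 is hypothetical, not a real API:

/* Demonstrates the "off by one" hole: 254 + 1 from texels != 255 read directly. */
#include <stdio.h>

static int texel_to_fx12(unsigned char b)     /* result has 10 fraction bits */
{
    return (b == 255) ? 1024 : (b * 1024) / 256;   /* 255 -> 1.0, else b/256 */
}

int main(void)
{
    int direct = texel_to_fx12(255);                      /* 1.0         */
    int summed = texel_to_fx12(254) + texel_to_fx12(1);   /* 0.FE + 0.01 */

    printf("255 read directly  : %d/1024 = %f\n", direct, direct / 1024.0);
    printf("254 + 1 from texels: %d/1024 = %f\n", summed, summed / 1024.0);
    return 0;
}

Running it prints 1.000000 for the direct read and 0.996094 for the accumulated one, which is exactly the mismatch an alpha test against 1.0 (or a long accumulation) would trip over.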