noko said:
Let's see if I can sum up the bit requirements for filtering:
A bilinear sample takes 4 texels from the source texture, each with 8 bits/channel. 2^8 x 4 = 2^10 values of information, or 10 bits.
A trilinear sample is two bilinears, so it would be 2 x 2^10 = 2^11 values of information, or 11 bits.
16x anisotropic on the Radeon 9700 using trilinear filtering would be
16 x 2^11 = 2^4 x 2^11 = 2^15, or 15 bits of information.
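The rule behind those numbers is just that summing N samples of B bits each needs B + log2(N) bits to hold exactly. A minimal Python sketch (the function name `sum_bits` is my own, for illustration):

```python
import math

def sum_bits(sample_bits, num_samples):
    """Bits needed to hold the exact sum of num_samples values of sample_bits each."""
    return sample_bits + math.ceil(math.log2(num_samples))

print(sum_bits(8, 4))    # bilinear: 4 taps of 8 bits -> 10
print(sum_bits(8, 8))    # trilinear: 8 taps -> 11
print(sum_bits(8, 128))  # 16x aniso over trilinear: 16 x 8 taps -> 15
```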
No, because you can do the filtering in stages.
Quick example:
Mathematically, the two methods are the same.
First method:
T1 = A + B + C + D
Avg = T1 / 4
This requires T1 to have an additional two bits so that the final divide doesn't lose anything.
Second method:
T1 = A + B
T2 = C + D
Avg1 = T1 / 2
Avg2 = T2 / 2
T3 = Avg1 + Avg2
Avg = T3 / 2
Here, because the most we ever divide by is two, no more than 9 bits of precision is necessary (assuming the inputs A, B, C, and D, as well as the outputs Avg1, Avg2, and Avg, are all 8-bit).
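The equivalence of the two methods, and the smaller intermediates of the staged one, can be checked with exact arithmetic. A sketch using Python's `Fraction` to avoid rounding questions (the function names are mine):

```python
from fractions import Fraction

def avg_one_stage(a, b, c, d):
    # Single stage: the sum a+b+c+d can reach 4 * 255 = 1020, a 10-bit value.
    return Fraction(a + b + c + d, 4)

def avg_staged(a, b, c, d):
    # Pairwise stages: no intermediate sum exceeds 2 * 255 = 510, a 9-bit value.
    avg1 = Fraction(a + b, 2)
    avg2 = Fraction(c + d, 2)
    return (avg1 + avg2) / 2

# With exact arithmetic the two methods agree for any inputs.
print(avg_one_stage(17, 200, 99, 3) == avg_staged(17, 200, 99, 3))  # True
```

In real hardware the divides are truncating, so where the fractional bits get dropped is exactly the precision question being debated.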
However, one additional thing needs to be considered here: modern hardware does not always do straight averages, but usually works with weighted averages. This means that plain bilinear filtering probably needs quite a lot more than 10-bit accuracy in the calculation to be done properly in one stage. Trilinear filtering also depends on a weighted average, and so may need more than the suggested 9-bit accuracy.
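To see why weights push the bit count up, here is a sketch of a fixed-point weighted bilinear lookup. This is my own illustration, assuming 8 fractional bits for the weights, not a description of any particular chip:

```python
def bilinear_weighted(t00, t01, t10, t11, fx, fy, frac_bits=8):
    # fx, fy are fixed-point fractions in [0, 2^frac_bits].
    one = 1 << frac_bits
    # Each product is a texel (8 bits) times two weights (frac_bits each),
    # so the accumulator needs up to 8 + 2*frac_bits = 24 bits before the shift.
    acc = (t00 * (one - fx) * (one - fy) +
           t01 * fx * (one - fy) +
           t10 * (one - fx) * fy +
           t11 * fx * fy)
    return acc >> (2 * frac_bits)

# A straight average is just the special case fx = fy = one/2.
print(bilinear_weighted(100, 100, 100, 100, 128, 128))  # 100
```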
Anisotropic filtering, however, doesn't require more than about 9-10 bits of accuracy, as long as the filtering is done in no more than 2-4 bilinear samples at a time, since anisotropic will just do a straight average on the bilinear samples.
Now no one here has really answered what precision the real hardware actually uses. If precision is lost, that would also mean colors are lost, as well as dynamic range.
No, dynamic range is not lost. Dynamic range is the difference between the darkest and brightest color. Lower precision calculations will not diminish dynamic range. They just lose color data, causing dithering/banding.
The floating-point formats allow for a higher dynamic range simply because they are floating-point.
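The distinction is easy to demonstrate: requantizing to fewer bits keeps the darkest and brightest values (the dynamic range) but collapses the shades in between, which is what shows up as banding. A small sketch (the `quantize` helper is my own):

```python
def quantize(value, bits):
    """Reduce an 8-bit value (0-255) to 'bits' of precision, then expand back."""
    levels = (1 << bits) - 1
    return round(value / 255 * levels) * 255 // levels

print(quantize(0, 4))                                  # 0   -> black survives
print(quantize(255, 4))                                # 255 -> white survives
print(len({quantize(v, 4) for v in range(256)}))       # 16  -> but only 16 shades remain
```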
As for the Radeon being "higher-precision," it does have one mode that offers higher precision, and that is when using overbright lights with PS 1.4. No game that I am aware of today uses these lights, though DOOM3 may (JC has support in the engine, but has said within the past couple of months that game developers have not yet taken advantage of it).
I don't believe for a moment that the GeForce-series has lower-precision filtering/rendering in general, however, as there is no effective loss in color depth from enabling trilinear, anisotropic, or FSAA. If there was a loss under these situations, then I might agree.
As for the gamma issues, I don't know. I would like to see some objective comparisons, but I doubt that those will show any conclusive results.