Gamma Corrected FSAA (9700) vs. No Gamma Correction (Ti4600)

psurge said:
So the reason for not pre-baking the gamma into the actual texture is because 8 bits per component is too imprecise to hold color values in linear space without banding artifacts?
Basically, yes. The eye's response is non-linear (and, conveniently, so is a CRT's, in a compatible way), so to get the best bang for the bit, non-linear encoding is the best approach. It's very similar to the A-law/μ-law companding used for audio; the ear is also non-linear.
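A quick numeric sketch of that point, assuming a gamma of 2.2 (close to the CRT/sRGB response; the encode/decode functions here are illustrative, not any particular hardware's). Compare the smallest step above black that 8-bit linear storage can express with the smallest step that gamma-encoded storage can:

```python
# Sketch: why 8-bit *linear* storage bands in the darks while
# gamma-encoded storage does not. Gamma 2.2 is an assumption.

GAMMA = 2.2

def encode(linear):
    """Linear [0,1] -> 8-bit gamma code."""
    return round((linear ** (1.0 / GAMMA)) * 255)

def decode(code):
    """8-bit gamma code -> linear [0,1]."""
    return (code / 255.0) ** GAMMA

# Smallest representable step above black in each scheme:
linear_step = 1.0 / 255.0   # ~0.0039 in linear light
gamma_step = decode(1)      # ~5e-6: far finer steps near black

print(linear_step, gamma_step)
```

Near black, where the eye is most sensitive, the gamma-encoded representation has steps hundreds of times finer; that headroom is exactly what prevents the banding.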

Given no internal HW for transforming from non-linear to linear and back again, it makes sense to keep the input textures in the same mode as the framebuffer. Of course, assuming the lighting is linear when the textures and framebuffer aren't means the result is incorrect, though not horrendously so: it just makes pixels a bit darker than they should be (except at the extremes, of course).
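The "a bit darker" effect is easy to put in numbers. A sketch (gamma 2.2 assumed, values illustrative): halving a gamma-encoded white directly, which is what naive shading math does, versus halving in linear light and re-encoding:

```python
# Shading in gamma space vs. linear space. Gamma 2.2 is an
# assumption standing in for the display response.

GAMMA = 2.2

def to_linear(c):
    return (c / 255.0) ** GAMMA

def to_gamma(l):
    return round((l ** (1.0 / GAMMA)) * 255)

white = 255
naive = white // 2                            # shade in gamma space: 127
correct = to_gamma(to_linear(white) * 0.5)    # shade in linear space: 186

print(naive, correct)
```

A 50%-lit white surface should display as code 186, but shading the gamma codes directly gives 127, noticeably too dark, and the error is zero only at the extremes (0 and 255), as noted above.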

Question - why is hardware support for this so expensive (even if the gamma value on read/write is programmable)? For instance, when a texel is read, don't you just need one access into a 256-entry (maybe 1024-entry) lookup table per colour component?
I don't know about "expensive", but it's not exactly cheap:
For a start, if you are doing a single bilinear filter per clock, that's 4x3 (RGB) conversions, i.e. 12 copies of the lookup hardware. Expand that to, say, 4 texture pipelines and it gets worse. On output you also need about 12 bits per component.
Then you should also do all the multiplies (e.g. shading and blending) at higher precision, and the "intermediate" image framebuffer would also have to be expanded from 8:8:8:8 to at least 8:12:12:12 before doing the framebuffer blending. Finally, you convert back to non-linear when storing the result in the framebuffer.
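The pipeline described above could be sketched like this; the 256-entry LUT, gamma 2.2, and the 12-bit intermediate width are all assumptions for illustration, not any shipping design:

```python
# Linear-space bilinear filtering sketch: decode four 8-bit gamma
# texels through a 256-entry LUT into 12-bit linear values, filter
# there, then encode the result back to 8-bit gamma.

GAMMA = 2.2

# 8-bit gamma code -> 12-bit linear value (one copy per component
# per texel in hardware, hence the 12 copies mentioned above).
LUT = [round(((c / 255.0) ** GAMMA) * 4095) for c in range(256)]

def encode12(l12):
    """12-bit linear -> 8-bit gamma code."""
    return round(((l12 / 4095.0) ** (1.0 / GAMMA)) * 255)

def bilinear(t00, t10, t01, t11, fx, fy):
    """Filter four gamma-encoded texels in linear space."""
    a = LUT[t00] * (1 - fx) + LUT[t10] * fx
    b = LUT[t01] * (1 - fx) + LUT[t11] * fx
    return encode12(a * (1 - fy) + b * fy)

# Sampling halfway between a black and a white texel:
print(bilinear(0, 255, 0, 255, 0.5, 0.0))   # 186, not 127
```

Even this toy version makes the cost visible: every filtered sample needs four decodes per component, wider arithmetic throughout, and a final encode, which is the gate count being discussed.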

Perhaps it is not a massive number of gates, but I think consumer IHVs have had other priorities. Workstations, OTOH, have had wider framebuffers for some time (e.g. SGI), so they could do calculations in linear space.

Galilee said:
Just a quick question (probably very dumb, but I'm no expert in this area):
If a line or something is supposed to be white, bright white, won't gamma correction in many cases change that colour to grey? Is that then the way the designer wanted it to be?
(In those stars, the gamma-corrected version is always less bright and more grey than the non-corrected one.)
Do you mean with antialiasing or without? Without AA, a bright white dot would be exactly the same.
With 2x2 AA, a 1/4 pixel dot would appear brighter with gamma correction than without.
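To put numbers on that claim: with 2x2 AA, a dot covering one of four subsamples gives 25% coverage. A sketch (gamma 2.2 assumed) of how the two approaches resolve that pixel:

```python
# A white dot covering 1/4 of a pixel. Averaging the 8-bit codes
# directly vs. averaging coverage in linear light and then
# gamma-encoding. Gamma 2.2 is an assumption.

GAMMA = 2.2
coverage = 0.25   # 1 of 4 subsamples hit by the white dot

without_correction = round(coverage * 255)                  # 64
with_correction = round((coverage ** (1.0 / GAMMA)) * 255)  # ~136

print(without_correction, with_correction)
```

The gamma-corrected pixel (~136) is much brighter than the uncorrected one (64), which matches what's visible in the star screenshots: the corrected edges keep small bright features from vanishing into darkness.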
 
SvP said:
Am I the only one who sees something weird in 9700's 6x picture? :-?

http://www.thoroughbred-data.com/nudies/chtest/chalnothtestcompare.jpg

No, I can see that too.

I think it's because of the sample pattern used.

[Attached image: aa_imp_6x.gif, the 6x sample pattern]


The sample pattern shown there would give the least anti-aliasing along the direction of the line that shows it in the tests.
 
I doubt it, Dave. That sample pattern would perfectly explain the lower image quality on the approximately 135-degree line. (Note that it's not exact, because I didn't square the windows... figured it'd be a little bit better that way.)
 
I wonder why ATi does not use/support a 5-subpixel mask. IMHO this would give a nearly perfect RGAA (rotated-grid AA):

[Attached image: Bild8.gif, the proposed 5-subpixel mask]


The picture is from aths (German 3DCenter forum).
 