Gamma Corrected FSAA (9700) vs. No Gamma Correction (Ti4600)

Yes, it's an OpenGL app. No, line AA is not enabled.

Btw, GeForces don't support line AA... the only cards from nVidia that do are the Quadro line.
 
Chalnoth said:
I think that conversely, the AA lines on the Radeon shot look too narrow.
I don't know what your app is doing, but the Radeon is showing you a one pixel wide line; the GeForce is showing a line 2 pixels wide in most places.

I'll trust the Radeon, thanks.
 
OpenGL guy said:
Chalnoth said:
I think that conversely, the AA lines on the Radeon shot look too narrow.
I don't know what your app is doing, but the Radeon is showing you a one pixel wide line; the GeForce is showing a line 2 pixels wide in most places.

I'll trust the Radeon, thanks.

It's just drawing lines, nothing more, nothing less.

Anyway, the reason I made the comment that the gamma setting obviously wasn't correct for my monitor (which means it's going to be incorrect for somebody's monitor with a 9700) was simply that if you look at the black background lines, they definitely look wider than the ones with the white background. Logic dictates that when properly calibrated, they should look just as wide in both situations.

This is why I feel it is necessary to have user calibration of this particular feature, in order for it to reach its full potential.
 
Chalnoth said:
It's just drawing lines, nothing more, nothing less.
And as I said, the lines are too wide on the GeForce. Look at the 45 degree line for a good example. Two adjacent pixels should not both be black as you can't get full coverage for both pixels with a 1 pixel wide line. Maybe it has something to do with the 4xS mode you said was used... I think this shows it's not that good.

The horizontal and vertical lines are one pixel wide, so why is the GeForce inconsistent?

I think that conversely, the AA lines on the Radeon shot look too narrow.
Just because it's not appealing to your eye doesn't mean it is incorrect. The Radeon is quite consistently drawing a one pixel wide line, the GeForce is not. Why? It might be a limitation of 4xS, so maybe we should be looking at the normal 4x mode.
 
OpenGL guy said:
And as I said, the lines are too wide on the GeForce. Look at the 45 degree line for a good example. Two adjacent pixels should not both be black as you can't get full coverage for both pixels with a 1 pixel wide line. Maybe it has something to do with the 4xS mode you said was used... I think this shows it's not that good.

Two things:
1. I didn't take the shots with the white background.
2. 4xS mode is not available in OpenGL (sorry for the confusion). Normal 4x was used.

The horizontal and vertical lines are one pixel wide, so why is the GeForce inconsistent?

The only time those lines might not be one pixel wide would be when using one of the downfilter modes (looking at the 4x9 shot, this seems to be the case: notice the "shadow" on the right side of the vertical lines, and above the horizontal lines). Personally, I don't like these downfilter modes.

Just because it's not appealing to your eye doesn't mean it is incorrect. The Radeon is quite consistently drawing a one pixel wide line, the GeForce is not. Why? It might be a limitation of 4xS, so maybe we should be looking at the normal 4x mode.

Compare the lines with the black background and the ones with the white background. The lines with the black background are a fair bit wider on my monitor (if they're not on your display, then perhaps the 9700 is properly calibrated for your monitor). Note that I have adjusted no gamma settings, just brightness and contrast.

What this tells me is that if I got a Radeon 9700 today, the first thing I would do is see if I could find a setting to adjust the gamma of the gamma-corrected FSAA, and play around with the settings until the lines in both situations were about the same width.

One of the other things to pay attention to is the overall brightness of the lines, or, in the case of the white background, how dark the lines are. Notice how much blacker the vertical/horizontal lines look than the angled ones? Conversely, the GeForce4's angled lines seem a bit darker than its horizontal/vertical ones, compared to what you'd expect.

Update: For completely optimal gamma adjustment, it might be good to adjust the gamma-corrected FSAA for different color channels, with similar wheels used for each channel. Speaking of which, do you know if the 9700 supports per-channel gamma correction? It's not really that big of a deal if it doesn't, as I'm willing to bet most monitors have similar color response for each channel, but it would be nice.
 
Some comments on AA with gamma correction:
  • Assuming a typical monitor setup, it is always better with than without. In my previous job I had to produce down-filtered thumbnails of video images (effectively the same as the last step in SSAA). If you didn't take gamma into account, the images would be dull and murky. (There's a small sketch of this just after the list.)
  • It doesn't really matter if the gamma power you are using isn't identical to the monitor/display system - it just has to be near enough. Heck, almost anything > unity (i.e. no gamma) is an improvement!
  • If you really want to compare the differences, don't use black and white in your test program - use Blue and Yellow, or Cyan and Red.
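
As a rough sketch of that down-filtering point (my own toy example, assuming a display gamma of 2.2 rather than anything measured), here is the naive average of two framebuffer values next to the gamma-aware one:

#include <math.h>
#include <stdio.h>

#define GAMMA 2.2  /* assumed display gamma; 2.2 is a common approximation */

/* Naive average of two 8-bit framebuffer codes (what a non-gamma-aware
   downfilter does). */
static unsigned char avg_naive(unsigned char a, unsigned char b)
{
    return (unsigned char)((a + b) / 2);
}

/* Gamma-aware average: convert codes to linear light, average there,
   then convert back to the non-linear framebuffer format. */
static unsigned char avg_gamma(unsigned char a, unsigned char b)
{
    double la = pow(a / 255.0, GAMMA);
    double lb = pow(b / 255.0, GAMMA);
    double avg = (la + lb) / 2.0;
    return (unsigned char)(pow(avg, 1.0 / GAMMA) * 255.0 + 0.5);
}

int main(void)
{
    /* A half-covered white-on-black edge pixel: prints 127 and 186. */
    printf("naive: %d  gamma-aware: %d\n",
           (int)avg_naive(0, 255), (int)avg_gamma(0, 255));
    return 0;
}

The naive 127 displays at only about a fifth of full intensity on a typical monitor (hence the dull, murky look); the gamma-aware 186 displays at roughly half, which is what 50% coverage calls for.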

BTW, if anyone wants a really good reference on gamma, try Charles Poynton's Colour and Gamma FAQs. IIRC these also explain how to properly set up a monitor.
 
I can't disagree with you at all, Simon, but it still doesn't change the fact that such calibration should be available.

Your argument is like saying, "Some is still better than none."

But the hardware is almost certainly capable of different gamma correction values (I'd be amazed if it wasn't), so the drivers should certainly have a panel that allows the user to adjust it.
 
Chalnoth said:
I can't disagree with you at all, Simon, but it still doesn't change the fact that such calibration should be available.
Well it would be nicer, but I don't think it would be essential. To have it variable would require additional hardware and that might be better spent elsewhere.
Your argument is like saying, "Some is still better than none."
Indeed, and I think that is a valid argument. Even having 'hardwired' settings should reduce artefacts due to gamma problems significantly - the likelihood is that these errors are going to be far less significant than errors due to insufficient supersampling. (FWIW I've tried varying the assumed gamma and it really doesn't matter much.)

But the hardware is almost certainly capable of different gamma correction values (I'd be amazed if it wasn't), so the drivers should certainly have a panel that allows the user to adjust it.
As I said, fully adjustable hardware would imply more gates, so it might not be the case.

I think those FAQs I listed imply that a lot of improvement can be achieved by simply setting the monitor's "brightness" and "contrast" controls correctly. (BTW, according to the FAQ these are actually badly named.)
 
Simon F said:
• If you really want to compare the differences, don't use black and white in your test program - use Blue and Yellow, or Cyan and Red.

As a quick test which I expect people have access to - the OpenGL FSAA test app that was doing the rounds a little while ago has a green/red border at the top and right-hand sides (it's a polygon intersection border, but that will be fine for MSAA, if not Matrox's method).

Check this border out with the 9700 and GeForce - I think you'll find that the intermediate colour chosen without gamma correction actually looks darker to the eye than either the red or green areas, creating an obvious border. With gamma correction the colour appears to be correctly blended between the two areas.

- Andy.
 
andypski said:
As a quick test which I expect people have access to - the OpenGL FSAA test app that was doing the rounds a little while ago has a green/red border at the top and right-hand sides (it's a polygon intersection border, but that will be fine for MSAA, if not Matrox's method).

That sounds like Basic's app you're talking about, and it's available here.
 
Ok, after thinking about this some more, I've now managed to confuse myself. I understand why hardware needs to undo gamma correction in sRGB source art, and why the HW needs to gamma correct on output.

Also, basic mathematics suggests why it might be needed. In short, the function x^g is not an isomorphism under addition.

Let f(x) = x^g. Under multiplication, f(x)*f(y) = f(xy). But under addition, f(x) + f(y) != f(x+y). Therefore, a standard linear interpolation (x+y)/2 won't be correct.
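
To put numbers on that (my own example, taking g = 2.2 and the endpoints 0 and 1): averaging first gives f((0+1)/2) = 0.5^2.2 ≈ 0.218, while transforming first gives (f(0) + f(1))/2 = 0.5 - very different results, and that's exactly the half-covered edge pixel case.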

But if I accept this rationale, then every time I do any ADD instruction in a vertex or pixel shader that has to do with additive color, I need to insert gamma correction! It is simply never correct to add any two linear color values together!

Am I insane, or is this correct? If so, doesn't this suggest the addition of a new instruction: add_gamma, which performs addition in nonlinear space? Or moreover, how about new addressing modes that allow you to specify this? e.g.

add r0_invgamma, r1_gamma, r2_gamma


As a counter argument to confuse myself, just what is wrong with doing all math in a purely linear world? Why should I care about the CRT or the human eye's nonlinear response when I am calculating colors in my virtual world? In my virtual world, I can make 2 twice as bright as 1 if I want. 1 + 1 = a luminance that is 2x brighter. I should only have to worry about the CRT or the human observer when converting between my virtual world and the real world. Anyone care to explain this discrepancy?

If doing the downsampling in non-linear space is more "correct", I am only forced to conclude that any time the 3D hardware adds two color values together anywhere in the pipeline, it would have to do the same thing in order to be correct.

But given that every software renderer I've ever seen, and every one I've ever written does everything in linear space, I tend to believe that is more correct. Then why is it more correct to do the downsampling in nonlinear space? It's just a blend, and blending between two pixels to me seems no different than blending between two colors, such as when I add diffuse to specular.

Thoroughly confused.

p.s. for the purposes of this discussion, I am assuming that prior to downsampling, the framebuffer is in linear space already (as it would be, if all inputs were linear and the pixel was written to the fb without gamma correction).
 
DemoCoder:
You're right, there's a lot more places that should be gamma corrected.
If textures aren't in an intensity-linear format, you need to do gamma correction at texture read (before filtering). Then you have it in linear format throughout the pixel shader, and gamma convert it before writing it to the frame buffer (if the frame buffer isn't in linear format).

I checked in Morrowind, and found that the in-game gamma setting was set to slightly above center (I don't remember having touched it, but can't guarantee it). This setting did actually put the frame buffer pretty close to linear format. So FSAA looked as it should.

I haven't tested any other game yet, but I do know that my gamma in UT is rather high (I see a clear difference between color 0x010101 and color 0x000000), and I don't remember changing any settings there either.

So I'm not sure that all games need gamma correction for FSAA. But I do agree that it is a good thing to have. Especially when it becomes mainstream, since then you know for sure that games use art with the correct gamma.

The reason why it's good to store colors in a format that is non-linear in intensity is that it is instead (approximately) linear in lightness. Lightness is proportional to how bright something is perceived to be by a human. Or in other words, with color codes linear in lightness, banding from limited FB precision is equally large/small at all intensities. But with color codes that are linear in intensity, you get a lot of banding in dark colors, and unnecessarily high precision in bright colors.
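
A quick way to see it (my own numbers, assuming a plain 8-bit buffer, nothing about any particular hardware): with codes linear in intensity, the step from code 1 to code 2 doubles the light output, while the step from 254 to 255 changes it by less than half a percent - so the dark end bands visibly while the bright end wastes precision. A lightness-linear (gamma-style) encoding spreads the codes so each step looks roughly the same size to the eye.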
 
DemoCoder said:
Ok, after thinking about this some more, I've now managed to confuse myself. I understand why hardware needs to undo gamma correction in sRGB source art, and why the HW needs to gamma correct on output.

you seem to be pretty consistent for a confused chap.

Also, basic mathematics suggests why it might be needed. In short, the function x^g is not an isomorphism under addition.

Let f(x) = x^g. Under multiplication, f(x)*f(y) = f(xy). But under addition, f(x) + f(y) != f(x+y). Therefore, a standard linear interpolation (x+y)/2 won't be correct.

no, it won't.

But if I accept this rationale, then every time I do any ADD instruction in a vertex or pixel shader that has to do with additive color, I need to insert gamma correction! It is simply never correct to add any two linear color values together!

that's why once upon a time people used to 'pre-bake' the gamma into those textures which would be op_add-ed later.

Am I insane, or is this correct? If so, doesn't this suggest the addition of a new instruction: add_gamma, which performs addition in nonlinear space? Or moreover, how about new addressing modes that allow you to specify this? e.g.

add r0_invgamma, r1_gamma, r2_gamma

the 'pre-baking' solution is a good 'prior art' to the above.

As a counter argument to confuse myself, just what is wrong with doing all math in a purely linear world? Why should I care about the CRT or the human eye's nonlinear response when I am calculating colors in my virtual world? In my virtual world, I can make 2 twice as bright as 1 if I want. 1 + 1 = a luminance that is 2x brighter. I should only have to worry about the CRT or the human observer when converting between my virtual world and the real world. Anyone care to explain this discrepancy?

pretty simple - you have a two-component system - linear image synthesis + human vision. the output of the former goes to the input of the latter, which, though, happens to be non-linear, or actually exponential, to be precise.

so in your virtual world luminance of 1 + luminance of 1 is 2. problem is, you don't see it as 2.

If doing the downsampling in non-linear space is more "correct", I am only forced to conclude that any time the 3D hardware adds two color values together anywhere in the pipeline, it would have to do the same thing in order to be correct.

yes, you are perfectly correct.

But given that every software renderer I've ever seen, and every one I've ever written does everything in linear space, I tend to believe that is more correct. Then why is it more correct to do the downsampling in nonlinear space? It's just a blend, and blending between two pixels to me seems no different than blending between two colors, such as when I add diffuse to specular.

because at high-contrast transitions (such as where aliasing would occur as well) the linear-to-exponential 'naive' mapping hurts most.

Thoroughly confused.

no need to be. such is the nature of the beast.
 
I have to agree with most of what Chalnoth said in this thread.
As I see it on my screen, white lines on black look much better on the R9700. Especially the 45° lines on GF look "dotted". Interestingly, in the 4x shot this only becomes really apparent when moving/scrolling the picture.

Looking at the black lines I have mixed feelings. Radeon lines seem a bit too thin, GeForce lines too fat. However, this makes the lines on the GF really look black, while the Radeon's angled lines appear too bright. Also, the 45° lines look a bit more dotted than on the GF.

Antialiasing is one of the reasons why I have set gamma to 0.8. Font antialiasing, that is. ClearType.
 
Democoder:
Don't worry, you're probably not alone, but the FAQs I listed can help a lot!

Because the eye is non-linear in its response, to get the best use of bits (i.e. so every increment in colour looks to be equally spaced to the eye) images will typically be stored in a non-linear format. This is fine because that's effectively what the CRT (and, to a reasonable extent, LCDs) delivers (if you don't go messing about with the DAC settings!).

Of course, many applications (some of mine included) just make the assumption that the framebuffer/CRT delivers a linear response, and so, in theory, the shading is possibly incorrect.

As others have suggested, probably the ideal approach would be to have a "Linearise" function when an image texel is read (expanding to, say, >= 12bits/channel), do the lighting linearly (as is currently done), but then convert back to non-linear when writing back to the (final) framebuffer.
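
If it helps, a bare-bones sketch of that flow (my own illustration, with a plain 2.2 power law standing in for whatever curve real hardware would use, and 16-bit intermediates in place of the ">= 12 bits/channel"):

#include <math.h>

#define GAMMA 2.2  /* assumed storage/display gamma */

/* "Linearise": expand an 8-bit non-linear texel channel to a wider
   linear-light value at texture read. */
static unsigned short linearise(unsigned char texel)
{
    return (unsigned short)(pow(texel / 255.0, GAMMA) * 65535.0 + 0.5);
}

/* Convert a wide linear result back to the 8-bit non-linear framebuffer
   format at write-out. */
static unsigned char delinearise(unsigned short linear)
{
    return (unsigned char)(pow(linear / 65535.0, 1.0 / GAMMA) * 255.0 + 0.5);
}

/* Lighting, filtering and blending would all happen on the wide linear
   values in between, e.g. a 2x2 box downfilter would be:
   out = delinearise((linearise(a) + linearise(b) +
                      linearise(c) + linearise(d)) / 4);
*/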
 
So the reason for not pre-baking the gamma into the actual texture is that 8 bits per component is too imprecise to hold color values in linear space without banding artifacts?

Question - why is hardware support for this so expensive (even if the gamma value on read/write is programmable)? For instance, when a texel is read, don't you just need one access into a 256- (maybe 1024-) entry lookup table per color component?
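
In software terms, at least, the table really is that trivial - a purely hypothetical sketch (my own, nothing to do with how any actual chip does it):

#include <math.h>

/* Hypothetical 256-entry decode table: 8-bit gamma-encoded channel in,
   wider linear value out. Built once for a given assumed gamma. */
static unsigned short decode_lut[256];

static void build_decode_lut(double gamma)
{
    int i;
    for (i = 0; i < 256; i++)
        decode_lut[i] = (unsigned short)(pow(i / 255.0, gamma) * 65535.0 + 0.5);
}

/* At texel read, before filtering: one lookup per colour component, e.g.
   linear_r = decode_lut[texel_r];  linear_g = decode_lut[texel_g];  ... */

Which is why I'm wondering where the hardware cost actually comes from.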

Serge
 
Just a quick question: (probably very dumb, but I'm no expert in this area).
If a line or something is supposed to be white - bright white - won't gamma correction in many cases change that color to grey? Is it then the way the designer wanted it to be?
(In those stars, the gamma-corrected version always looks less bright and more gray than the non-corrected one.)
 
Galilee said:
Just a quick question: (probably very dumb, but I'm no expert in this area).
If a line or something is supposed to be white - bright white - won't gamma correction in many cases change that color to grey? Is it then the way the designer wanted it to be?
(In those stars, the gamma-corrected version always looks less bright and more gray than the non-corrected one.)

Depends if the point at the top of the gamma correction function is also white - generally I think you would find that absolute white would remain white, and black would remain black. Points between these values remap from a point on the linear graph to the equivalent point on the gamma curve.
 