Chalnoth said: I think that conversely, the AA lines on the Radeon shot look too narrow.

OpenGL guy said: I don't know what your app is doing, but the Radeon is showing you a one pixel wide line; the GeForce is showing 2 pixels wide in most places. I'll trust the Radeon, thanks.

Chalnoth said: It's just drawing lines, nothing more, nothing less.

OpenGL guy said: And as I said, the lines are too wide on the GeForce. Look at the 45 degree line for a good example. Two adjacent pixels should not both be black, as you can't get full coverage of both pixels with a 1 pixel wide line. Maybe it has something to do with the 4xS mode you said was used... I think this shows it's not that good. The horizontal and vertical lines are one pixel wide, so why is the GeForce inconsistent? Just because it's not appealing to your eye doesn't mean it is incorrect. The Radeon is quite consistently drawing a one pixel wide line; the GeForce is not. Why? It might be a limitation of 4xS, so maybe we should be looking at the normal 4x mode.
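As a quick sanity check of that coverage claim (my own back-of-envelope Monte Carlo estimate, not output from either card or from the test app being discussed):

```python
import random

def coverage(cx, cy, half_width=0.5, samples=100_000, seed=1):
    """Estimate how much of the unit pixel centred at (cx, cy) is covered by
    a line along y = x whose total width is 1 (half_width = 0.5, measured
    perpendicular to the line)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = cx - 0.5 + rng.random()
        y = cy - 0.5 + rng.random()
        # Perpendicular distance from (x, y) to the line y = x.
        dist = abs(y - x) / 2 ** 0.5
        if dist <= half_width:
            hits += 1
    return hits / samples

# A pixel sitting right on the diagonal vs. its horizontal neighbour.
print(coverage(0.0, 0.0))   # ~0.91 -- even the best-covered pixel isn't fully covered
print(coverage(1.0, 0.0))   # ~0.25 -- the neighbour only gets partial coverage
```

So for a one pixel wide 45 degree line, no pixel reaches full coverage, let alone two adjacent ones.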
Chalnoth said: I can't disagree with you at all, Simon, but it still doesn't change the fact that such calibration should be available.

Simon F said: Well, it would be nicer, but I don't think it would be essential. To have it variable would require additional hardware, and that might be better spent elsewhere.

Chalnoth said: Your argument is like saying, "Some is still better than none."

Simon F said: Indeed, and I think that is a valid argument. Even having 'hardwired' settings should reduce artefacts due to gamma problems significantly - the likelihood is that these errors are going to be far less significant than errors due to insufficient supersampling. (FWIW I've tried varying the assumed gamma and it really doesn't matter much.)

Chalnoth said: But the hardware is almost certainly capable of different gamma correction values (I'd be amazed if it wasn't), so the drivers should certainly have a panel that allows the user to adjust it.

Simon F said: As I said, fully adjustable hardware would imply more gates, so it might not be the case.
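Simon F's FWIW remark is easy to check with a few lines of arithmetic. The sketch below is my own illustration, not anything from the thread's test apps: it downfilters a half-covered black/white edge pixel assuming display gammas of 2.2 and 2.5.

```python
def downsample_pair(a, b, gamma):
    """Average two gamma-encoded samples for an assumed display gamma:
    decode to linear light, average, re-encode."""
    linear = (a ** gamma + b ** gamma) / 2.0
    return linear ** (1.0 / gamma)

black, white = 0.0, 1.0

naive = (black + white) / 2.0          # plain framebuffer average, no correction
g22   = downsample_pair(black, white, 2.2)
g25   = downsample_pair(black, white, 2.5)

print(f"no correction     : {naive:.3f}")   # 0.500
print(f"assumed gamma 2.2 : {g22:.3f}")     # ~0.730
print(f"assumed gamma 2.5 : {g25:.3f}")     # ~0.758
```

The two assumed-gamma results differ by only a few percent, while the uncorrected value is much further off - which is Simon F's point about hardwired settings capturing most of the benefit.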
Simon F said: If you really want to compare the differences, don't use black and white in your test program - use Blue and Yellow, or Cyan and Red.
andypski said: As a quick test which I expect people have access to: the OpenGL FSAA test app that was doing the rounds a little while ago has a green/red border at the top and right-hand sides (it's a polygon intersection border, but that will be fine for MSAA, if not Matrox's method).
DemoCoder said: Ok, after thinking about this some more, I've now managed to confuse myself. I understand why hardware needs to undo gamma correction in sRGB source art, and why the HW needs to gamma correct on output.
Also, basic mathematics suggests why it might be needed. In short, the function x^g preserves multiplication but not addition.
Let f(x) = x^g. Under multiplication, f(x) * f(y) = f(x*y). But under addition, f(x) + f(y) != f(x+y). Therefore, a standard linear interpolation (x+y)/2 won't be correct.
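A quick numerical check of that, with g = 2.2 chosen purely as an example:

```python
g = 2.2                       # example exponent; the argument holds for any g != 1
f = lambda x: x ** g

x, y = 0.3, 0.4
print(f(x) * f(y), f(x * y))  # ~0.00943 vs ~0.00943 -- multiplication is preserved
print(f(x) + f(y), f(x + y))  # ~0.204  vs ~0.456   -- addition is not
```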
But if I accept this rationale, then every time I do any ADD instruction in a vertex or pixel shader that has to do with additive color, I need to insert gamma correction! It is simply never correct to add any two linear color values together!
Am I insane, or is this correct? If so, doesn't this suggest the addition of a new instruction: add_gamma, which performs addition in nonlinear space? Or moreover, how about new addressing modes that allow you to specify this? e.g.
add r0_invgamma, r1_gamma, r2_gamma
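No shader model I'm aware of has such an instruction or such modifiers; but if DemoCoder's hypothetical syntax is read as "linearise the _gamma sources, add in linear light, re-encode into the _invgamma destination", the operation would amount to something like this sketch (gamma 2.2 assumed, names purely illustrative):

```python
GAMMA = 2.2  # assumed display gamma for this sketch

def add_gamma(src1, src2):
    """Hypothetical 'add in gamma space': decode both gamma-encoded
    operands to linear light, add, then re-encode the result."""
    linear_sum = src1 ** GAMMA + src2 ** GAMMA
    return min(linear_sum, 1.0) ** (1.0 / GAMMA)   # clamp like a colour register

print(add_gamma(0.5, 0.5))   # ~0.69, not the 1.0 a plain add would give
```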
As a counter-argument to confuse myself, just what is wrong with doing all math in a purely linear world? Why should I care about the CRT or the human eye's nonlinear response when I am calculating colors in my virtual world? In my virtual world, I can make 2 twice as bright as 1 if I want. 1 + 1 = a luminance that is 2x brighter. I should only have to worry about the CRT or the human observer when converting between my virtual world and the real world. Anyone care to explain this discrepancy?
If doing the downsampling in non-linear space is more "correct", I am only forced to conclude that any time the 3D hardware adds two color values together anywhere in the pipeline, it would have to do the same thing in order to be correct.
But given that every software renderer I've ever seen, and every one I've ever written does everything in linear space, I tend to believe that is more correct. Then why is it more correct to do the downsampling in nonlinear space? It's just a blend, and blending between two pixels to me seems no different than blending between two colors, such as when I add diffuse to specular.
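For what it's worth, the downsampling case can be written out explicitly. The sketch below is my own, and it assumes what the earlier posts imply: the framebuffer holds gamma-encoded values, so a "gamma-correct" downfilter decodes the subsamples to linear light, averages there (exactly the linear world described above), and re-encodes the result, whereas the plain average skips the decode/encode and lands too dark on screen.

```python
GAMMA = 2.2  # assumed display response for this sketch

def decode(v):             # framebuffer (gamma-encoded) -> linear light
    return v ** GAMMA

def encode(v):             # linear light -> framebuffer (gamma-encoded)
    return v ** (1.0 / GAMMA)

def downsample(samples, gamma_correct=True):
    """Box-filter a pixel's subsamples down to one framebuffer value."""
    if gamma_correct:
        return encode(sum(decode(s) for s in samples) / len(samples))
    return sum(samples) / len(samples)

# An edge pixel: half of its 4 subsamples hit a white triangle, half the black background.
subsamples = [1.0, 1.0, 0.0, 0.0]
print(downsample(subsamples, gamma_correct=False))  # 0.500 -> displays at ~22% luminance
print(downsample(subsamples, gamma_correct=True))   # ~0.730 -> displays at ~50% luminance
```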
Thoroughly confused.
Galilee said: Just a quick question (probably very dumb, but I'm no expert in this area):
If a line or something is supposed to be white - bright white - won't gamma correction in many cases change that color to grey? Is it then the way the designer wanted it to be?
(In those stars, the gamma-corrected version is always less bright and more gray than the non-corrected one.)