gamma correction

zhugel_007

Newcomer
Hi,

I am confused about how the gamma correction is done in game rendering.

Assume the frame buffer uses the format R8G8B8A8, so it is in sRGB space. The sRGB gamma is 2.2, which is gamma decoding. The CRT gamma is 2.2, which is also gamma decoding. And we expect the final output from the CRT to have a gamma of 1.0. So at which stage do we actually do the gamma encoding that cancels out the gamma of (2.2 x 2.2)?

From the info on Wiki, I assume we need to apply gamma encoding before sending to the CRT, and (1/2.2) is not enough because sRGB also has a gamma of 2.2, so we need to apply another (1/2.2).

Then in the Xbox 360 white paper:
Earlier we defined viewing gamma and the ideal viewing gamma of about 1.125. This means that gamma curves throughout the pipeline should cancel out by the time we get to light leaving the screen. If the pipeline is linear, it means the frame buffer contains linear data. Remember that the gamma curve for the frame buffer’s data gives us camera gamma. Since viewing gamma = camera gamma x display gamma, and the display gamma (display gamma = CRT gamma x LUT gamma ) is generally on the order of 2.2, this results in a viewing gamma of 2.2. This is incorrect.

So I assume the camera gamma should be (1/2.2). But again, the frame buffer is in sRGB space, which has a gamma of 2.2, so we need to apply (1/(2.2 x 2.2)) to the frame buffer. It also says:

Another way of thinking about this is to assume that the display driver is expecting sRGB data. If you’re feeding it linear data, then what is displayed on the screen is incorrect. In fact, most color values show up darker than they should.

So I guess it is the display driver that applies the gamma encoding of (1/(2.2 x 2.2))?

Also, why should the ideal viewing gamma be around 1.0 (a linear curve)? If the curve is already linear at this point, then since the human visual system has a gamma of about 0.5, wouldn't the final curve be non-linear again?

Or am I missing something?

Thanks!
 
Hi,
I am confused about how the gamma correction is done in game rendering.

Colors with 8 or fewer bits per channel should always be sRGB, and should always have been sRGB: linear in 8 bits has too many whites and not enough blacks, the frame buffer has always been sRGB, and converting from low-bit linear to low-bit sRGB for the frame buffer is extremely lossy.
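
To put rough numbers on that: mid-grey in sRGB (code 128 out of 255) decodes to only about 21% linear luminance, so roughly half of the 8-bit sRGB codes are spent on the bottom ~21% of the linear range, where the eye is most sensitive; an 8-bit linear encoding would spend only about a fifth of its codes down there and waste the rest on bright values that are hard to tell apart.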

For decades we've lived with the compromise of pretending that low-bit colors were linear because 32-bit floating point math was for supercomputers, and without 32-bit floating point math, it's tricky to do linear color with enough range and precision.

This ancient compromise still exists in Photoshop, which even in 2010 can't scale images correctly, because the scaling code still pretends that 8-bit channels are linear when they are actually sRGB:

http://www.4p8.com/eric.brasseur/gamma.html

Now that we all own supercomputers (GPUs) we should do all our color computations in 32 bit floating point linear, and we should consider all low-bit colors to be sRGB de facto.

As for "how games do it," good game engines store compressed textures as sRGB, compute linear color in 32 bit floating point, and store frame buffer color in at least 16 bits of linear luminance or equivalent. Only at the last possible moment is this tone-mapped down into 8 bit sRGB.
 
Thanks guys for the reply. :)

My understanding is:

What the Xbox 360 white paper suggests is to do all the calculations in linear space, then convert the color into sRGB space by adding a line in the shader:
vColorOut = pow(vColor, 0.45);
And this actually sets the color's gamma to 1/2.2. (Not sure why they call this sRGB space in the document; it looks more like an inverse sRGB space. ;P)
Then this cancels out the gamma of the CRT to make the final viewing gamma 1.

Another option would be to output in linear space to the display buffer, then encode the 1/2.2 gamma in the LUT, which is applied in the graphics card's DAC.
http://developer.amd.com/gpu/radeon/archives/RadeonSampleCode/GammaCorrection/Pages/default.aspx


Am I correct?
 
DX10+ GPUs support both texturing and blending in gamma-corrected sRGB space, i.e. you can store your 8-bit sRGB texture, and when you sample it the hardware will convert each texture tap into linear space, filter them, and return the linear result to the shader (for further linear math).

Similarly with an sRGB render target the linear shader output will be blended with the converted sRGB value in the framebuffer and the final value will be converted to sRGB and stored. As bmcnett notes though, it's common to just store and accumulate linear HDR framebuffers and do a final tone-mapping/sRGB pass at the end.

Thus in DX10+ all you need to do is declare your surface formats correctly and the hardware "does the right thing". In previous APIs you have to be careful. For instance, doing the gamma correction at the end of the shader or after sampling a texture is less correct since blending and texture filtering respectively will be slightly wrong.
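
To illustrate the older, manual approach, the shader-side version looks roughly like this in DX9-style HLSL (the names and the lighting factor are made up, and the 2.2 is the usual approximation). Note, as said above, that the texture taps are filtered and the frame-buffer blend happens in gamma space, so both are slightly wrong compared to letting the hardware's sRGB formats do the conversion:

// Manual gamma handling when sRGB texture reads / render-target writes aren't used.
sampler2D diffuseTex;   // 8-bit sRGB texture, sampled as raw (gamma-space) values

float4 ManualGammaPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 albedo = pow(tex2D(diffuseTex, uv).rgb, 2.2); // sRGB -> linear (after filtering, so slightly wrong)
    float3 lit    = albedo * 0.8;                        // placeholder lighting, done in linear space
    return float4(pow(lit, 1.0 / 2.2), 1.0);             // linear -> sRGB (before blending, so slightly wrong)
}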
 
Am I correct?

Those sound like things people do. If your engine renders only opaque objects and does no post-processing, rasterizing directly to an 8-bit sRGB frame buffer is probably fine, regardless of whether you do the pow() in the shader or let the hardware do it for you.

If you have translucent objects or post-processing, then you should not use an 8-bit frame buffer because 8 bits isn't enough to store the temporary results between translucent layers or passes in the general case.

The current standard is to rasterize to a 16 bit linear frame buffer, and not worry about the gamma of the output at all, until after all your passes are done and it's almost time to flip the frame.
 
I am still a bit confused:
As I understand it, sRGB space should have a gamma of 2.2. But all the documents I could find suggest (for DX9-class hardware) doing the calculations in linear space, then converting to sRGB space in the final output stage. But shouldn't we output to an "inverse sRGB space", which has a gamma of 1/2.2, instead, to cancel out the CRT's gamma?
 
Poynton has the answer in simple terms:

http://www.poynton.com/PDFs/GammaFAQ.pdf

Question 6 shows the power law to use is 0.45, and if you want to be super-precise about it, you implement the linear section.

The correct inverse is 2.222222, for what it's worth. Again he provides the precise conversion for gamma space to linear space.
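
If you want the precise curve rather than the plain pow(1/2.2)/pow(2.2) approximation, the published sRGB piecewise functions (with the linear section near zero) are easy to drop into a shader. A sketch using the standard constants; the function names are mine:

float3 SRGBEncode(float3 c)   // linear -> sRGB (?: selects per component in HLSL)
{
    return (c <= 0.0031308) ? (12.92 * c)
                            : (1.055 * pow(c, 1.0 / 2.4) - 0.055);
}

float3 SRGBDecode(float3 s)   // sRGB -> linear
{
    return (s <= 0.04045) ? (s / 12.92)
                          : (pow((s + 0.055) / 1.055, 2.4));
}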

Rec.709 uses the same colour primaries as sRGB, and its transfer curve is close to sRGB's, though the exact constants differ slightly.
 
The driver/display expects colors in sRGB space. If you present linear colors, the resulting image will appear darker than it should. This presentation is pretty straightforward, if you're interested.
 
Thanks for the two links. But that is actually where my confusion comes from. ;P
The GammaFAQ says the frame buffer should be in linear space, but the Microsoft presentation, as I understand it, says that drivers expect the frame buffer in sRGB space, which has a gamma of 2.2. Which one is correct then (for DX9-class hardware)?
 
You want to do all texturing, lighting, blending and antialiasing in linear space (in a higher precision framebuffer) and, at the very final step, convert to an 888 sRGB framebuffer.

Does that help?
 
Yeah, that is what I thought. But then, going back to my first question: sRGB has a gamma of 2.2, and the CRT has a gamma of 2.2. Who exactly cancels out these two gammas?
 
sRGB isn't exactly 2.2; that's an approximation, AFAIR.

What happens is that screens are now sRGB, so they expect sRGB data in, and will treat the data as being in that format to compute how to light their own pixels.
So to get the correct result on screen, the application must send data in sRGB format.

Note that since most screens are poorly calibrated, you will still have HUGE differences in colour reproduction from one screen to another, though. :(
 
The problem with just using a power function such as y = x^2.2 is that the inverse function, x = y^(1/2.2), has an infinite slope at 0.0.

sRGB addresses this by, basically, blending into a linear section around 0 so that the inverse function is well behaved.

(Mind you, the 'standard' sRGB is not perfect, as the constants that are used were rounded to such low accuracy that there is, IIRC, a knee in the curve.)
 
Yeah, that is what I thought. But then, going back to my first question: sRGB has a gamma of 2.2, and the CRT has a gamma of 2.2. Who exactly cancels out these two gammas?
They cancel each other out. To display sRGB colour data on a perfect sRGB screen, you simply feed it the data unchanged.
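
(Put roughly: the frame buffer holds something like linear^(1/2.2), the display raises whatever it is given to the power 2.2, and (linear^(1/2.2))^2.2 = linear. The encode baked into the sRGB data and the decode in the display are the two gammas cancelling; neither 2.2 gets applied twice.)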
 