zhugel_007
Newcomer
Hi,
I am confused about how gamma correction is done in game rendering.
Assume the frame buffer has the format R8G8B8A8, so it is in sRGB space. The sRGB gamma is 2.2, which is gamma decoding. The CRT gamma is 2.2, which is also gamma decoding. And we expect the final output from the CRT to have a gamma of 1.0. So at which stage do we actually do the gamma encoding that cancels out the gamma of (2.2 x 2.2)?
From the info on the Wiki, I assume we need to apply gamma encoding before sending the data to the CRT, and (1/2.2) is not enough because sRGB also has a gamma of 2.2, so we need to apply another (1/2.2).
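To keep the exponents straight for myself, here is the little numeric check I did, using a plain pow() approximation of the sRGB curve rather than the exact piecewise formula (this is just my sketch of the two options, not anything from the white paper):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double linear = 0.5;   // a mid-grey value in linear light
    const double gamma  = 2.2;

    // Option A: one 1/2.2 encode in the frame buffer, then the CRT's 2.2 decode.
    double once  = std::pow(std::pow(linear, 1.0 / gamma), gamma);

    // Option B: two stacked 1/2.2 encodes (i.e. 1/(2.2 x 2.2)), then the CRT's 2.2 decode.
    double twice = std::pow(std::pow(linear, 1.0 / (gamma * gamma)), gamma);

    std::printf("one 1/2.2 encode  -> light out = %f\n", once);   // 0.500000
    std::printf("two 1/2.2 encodes -> light out = %f\n", twice);  // ~0.730
    return 0;
}
```

Which of those the pipeline is actually supposed to produce on screen is exactly what I can't work out.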
Then in the Xbox 360 white paper:
Earlier we defined viewing gamma and the ideal viewing gamma of about 1.125. This means that gamma curves throughout the pipeline should cancel out by the time we get to light leaving the screen. If the pipeline is linear, it means the frame buffer contains linear data. Remember that the gamma curve for the frame buffer’s data gives us camera gamma. Since viewing gamma = camera gamma x display gamma, and the display gamma (display gamma = CRT gamma x LUT gamma ) is generally on the order of 2.2, this results in a viewing gamma of 2.2. This is incorrect.
So I assume the camera gamma should be (1/2.2). But again, the frame buffer is in sRGB space, which has a gamma of 2.2, so we would need to apply (1/(2.2 x 2.2)) to the frame buffer. The paper also says:
Another way of thinking about this is to assume that the display driver is expecting sRGB data. If you’re feeding it linear data, then what is displayed on the screen is incorrect. In fact, most color values show up darker than they should.
So I guess it is the display driver that applies the gamma encoding of (1/(2.2 x 2.2))?
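For reference, this is the kind of darkening I take the white paper to mean when linear data goes to a display that decodes it as if it were sRGB (again just the 2.2 power-law approximation, my own toy numbers):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double displayGamma = 2.2;
    const double linearValues[] = {0.25, 0.5, 0.75};

    // Linear data handed straight to a display that decodes it as if it were sRGB:
    // every value below 1.0 comes out darker than intended.
    for (double v : linearValues) {
        std::printf("linear %.2f -> displayed %.3f\n", v, std::pow(v, displayGamma));
    }
    return 0;
}
```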
Also, why should the ideal viewing gamma be around 1.0 (a linear curve)? If the curve is already linear at this point, then since the human visual system has a gamma of 0.5, wouldn't the final curve be non-linear again?
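To show what I mean with that last question: if the light leaving the screen really is linear and I then stack the 0.5 curve I have seen quoted for the human visual system on top of it, the end result is non-linear again (toy numbers, assuming that 0.5 figure is even the right way to model it):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double hvsGamma = 0.5;   // the "gamma of 0.5" I have seen quoted for human vision
    const double screenLight[] = {0.25, 0.5, 0.75};

    // Linear light off the screen, followed by a 0.5 power curve for perception:
    // the perceived response is no longer a straight line.
    for (double light : screenLight) {
        std::printf("screen light %.2f -> perceived %.3f\n", light, std::pow(light, hvsGamma));
    }
    return 0;
}
```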
Or am I missing something?
Thanks!