Precisely. It's with this kind of stuff in mind that I'm having a hard time with vember's posts. But there are two other things to consider:
- First, a typical display device doesn't have a linear response curve, i.e. if you double the signal value (the voltage, for analog transmission), you get more than double the photons. Usually, that relation can be approximated as
Luminance ~ signal^gamma
with signal being in the [0,1] range and gamma typically being about 2.
- And second, our perception isn't linear but approximately logarithmic: the ratio between a just noticeable difference and the luminance it sits on doesn't change much. This has a big impact on the required precision, in that we need many more code values representing the darker colors.
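Both points can be illustrated with a small sketch (plain Python; a gamma of 2.2 and the function names are my own assumptions, and real displays only approximate a pure power law):

```python
GAMMA = 2.2  # assumed display exponent; real monitors only approximate this

def display_luminance(signal):
    """Relative luminance the display emits for a signal in [0, 1]."""
    return signal ** GAMMA

def gamma_correct(linear):
    """Pre-distort a linear intensity so the display's power law undoes it."""
    return linear ** (1.0 / GAMMA)

# First point: the response is far from linear. A signal of 0.5 yields only
# about 22% of full luminance, not 50%...
half_signal = display_luminance(0.5)               # ~0.218
# ...unless it is gamma-corrected first:
corrected = display_luminance(gamma_correct(0.5))  # ~0.5

# Second point: precision. Count the 8-bit code values that decode to less
# than 1% luminance under each encoding:
def codes_below(cutoff, decode):
    return sum(1 for c in range(256) if decode(c / 255.0) < cutoff)

dark_linear = codes_below(0.01, lambda v: v)       # linear encoding: 3 codes
dark_gamma  = codes_below(0.01, display_luminance) # gamma encoding: 32 codes
```

Gamma encoding spends roughly ten times as many code values on the darkest shades, which is why 8 bits per channel is workable for gamma-encoded images but visibly bands in linear ones.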
Anyway, I'm not sure I can agree that the answer is to do the gamma correction at each and every step. It makes more sense to me to do all operations in linear space and leave the correction to the very end (less hardware is required that way). The only potential problem is that this depends on game developers making the switch.
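A minimal sketch of why that pipeline matters (gamma of 2.2 and the helper names are assumptions of mine): blending two pixels directly on their gamma-encoded values gives a different result from decoding to linear, blending there, and encoding once at the end.

```python
GAMMA = 2.2  # assumed display exponent

def to_linear(v):
    """Decode a gamma-encoded value in [0, 1] to linear light."""
    return v ** GAMMA

def to_gamma(v):
    """Encode a linear-light value in [0, 1] for the display."""
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0  # gamma-encoded pixel values

# Naive: average the encoded values directly.
naive = (black + white) / 2                                         # 0.5

# Linear pipeline: decode, do the work in linear space, encode at the end.
linear_blend = to_gamma((to_linear(black) + to_linear(white)) / 2)  # ~0.73
```

On screen, the naive 0.5 displays at roughly 22% luminance instead of the intended 50%, while the linear-pipeline value displays correctly; antialiased edges are exactly where this difference is most visible.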
Now, lastly, the thing that really bothers me about this whole issue: if I simply take the case of a white line on a black border, why should a gamma setting of 2.2 be the proper setting to make that line look correct on any monitor, as long as the "proper" gamma-correction setting is used?