Partly inspired by the "Future of MSAA" thread, I'd like to start a discussion on how best to solve the problem of correct color calculation and display, considering the possible non-linearity of content and display devices.
Linear color space means that the intensity values are proportional to the number of photons they represent. It seems quite obvious that any lighting calculations should take place in this space: if you add a second identical light source, you get double the number of photons; a transparent surface lets a certain percentage of photons pass; etc. It's all simple, linear math (although it's not quite that simple once you consider spectral distribution).
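To make the "simple, linear math" point concrete, here's a minimal sketch (my own example, assuming the simple ~2.2 power-law gamma discussed below): adding a second identical light doubles the linear intensity, but naively adding the stored, gamma-encoded values gives a visibly too bright result.

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double gamma = 2.2;
    double encoded = 0.5;                  /* stored (gamma-encoded) value */
    double linear  = pow(encoded, gamma);  /* decode to linear intensity   */

    /* Correct: add photons in linear space, then re-encode for storage. */
    double correct = pow(2.0 * linear, 1.0 / gamma);

    /* Wrong: treat the encoded values as if they were linear and add them. */
    double naive = 2.0 * encoded;          /* clamps to 1.0 on an 8bit target */

    printf("correct encoded result: %.3f, naive result: %.3f\n", correct, naive);
    return 0;
}
[/code]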
But there are two other things to consider.
- First, a typical display device doesn't have a linear response curve, i.e. if you double the intensity value (voltage for analog transmission), you get more than double the photons. Usually, that relation can be approximated as
Luminance ~ signal^gamma
with signal being in the [0,1] range and gamma typically being about 2.
- And second, our perception isn't linear, but approximately logarithmic: the ratio between a just noticeable difference and the luminance it occurs at doesn't change much. This has a big impact on required precision, in that we need many more values representing the darker colors (see the small example below).
(One nice property of those two issues combined is that the 8bit precision DVI uses is acceptable in most cases. 16bit per channel is specified, but I know of no devices supporting this.)
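A small sketch of the precision point (again my own illustration, using the simple gamma = 2.2 power law): near 1% of maximum luminance, a single 8bit code step in a linear encoding changes luminance by roughly 50%, while in a gamma-encoded 8bit format the step is only around 7%, much closer to the ~1% relative differences we can still notice.

[code]
#include <math.h>
#include <stdio.h>

/* Relative luminance change caused by one 8bit code step near a given
   target luminance, for a storage encoding with the given gamma
   (gamma = 1.0 means plain linear storage). */
static double step_at(double target_lum, double gamma)
{
    double enc  = pow(target_lum, 1.0 / gamma);   /* encoded value           */
    double code = floor(enc * 255.0);             /* nearest 8bit code below */
    double l0   = pow(code / 255.0, gamma);       /* luminance of this code  */
    double l1   = pow((code + 1.0) / 255.0, gamma); /* ...and the next one   */
    return (l1 - l0) / l0;
}

int main(void)
{
    double dark = 0.01; /* 1% of maximum luminance */
    printf("linear 8bit step: %.1f%% relative\n", 100.0 * step_at(dark, 1.0));
    printf("gamma  8bit step: %.1f%% relative\n", 100.0 * step_at(dark, 2.2));
    return 0;
}
[/code]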
Basically, it means that an 8bit linear representation per channel isn't enough for storing linear color space content. Floating point values, OTOH, are perfectly suited for this, due to their roughly logarithmic precision. But FP16 values (the lowest supported FP precision) need more bandwidth, so it might be more practical to map those 8bit values in a non-linear way during every read and write operation. Current hardware already supports such a mapping (sRGB reads and writes), computing roughly ^2.2 when reading and ^0.45 when writing. It is very important that this conversion is applied during every read and write operation on an sRGB buffer. Also, any operation in linear color space must take place at higher precision.
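For reference, this is roughly what the hardware sRGB conversion computes (a sketch; note that the actual sRGB definition is piecewise, with a small linear segment near black, rather than a pure ^2.2 / ^0.45 power function, but it stays very close to that curve):

[code]
#include <math.h>

/* sRGB -> linear, applied on texture/framebuffer reads. */
static double srgb_to_linear(double s)
{
    return (s <= 0.04045) ? s / 12.92
                          : pow((s + 0.055) / 1.055, 2.4);
}

/* linear -> sRGB, applied on framebuffer writes. */
static double linear_to_srgb(double l)
{
    return (l <= 0.0031308) ? 12.92 * l
                            : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}
[/code]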
If we look at the 3d pipeline, we have textures, vertex colors and PS/VS constants as possible color input values. The latter two are FP precision anyway, so linear color space is no problem there.
Overall, what we have to consider is
texture read -> PS -> [framebuffer read -> blend ->] framebuffer write -> framebuffer read -> AA downsampling -> [framebuffer write -> framebuffer read ->] gamma LUT -> output to screen
Optional parts in []
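To sketch what the optional blend stage in that chain has to do when the framebuffer is an 8bit sRGB surface (using the srgb_to_linear / linear_to_srgb helpers from above; the blend itself is just an ordinary alpha blend, running in linear space at float precision):

[code]
/* Blend one linear-space source value over an 8bit sRGB destination.
   Both the framebuffer read and the framebuffer write go through the
   sRGB conversion; only the blend math itself happens in linear space. */
static unsigned char blend_over_srgb8(unsigned char dst8, double src_linear, double alpha)
{
    double dst_linear = srgb_to_linear(dst8 / 255.0);   /* framebuffer read      */
    double out_linear = src_linear * alpha
                      + dst_linear * (1.0 - alpha);     /* blend in linear space */
    double out_srgb   = linear_to_srgb(out_linear);     /* framebuffer write     */
    return (unsigned char)(out_srgb * 255.0 + 0.5);     /* back to 8bit storage  */
}
[/code]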
Until very recently, games basically didn't care about gamma conversion and treated everything as being in linear color space, even though it wasn't. I'm not even sure there are games that take advantage of the sRGB capabilities of DX9 hardware yet.
But for an overall pleasing rendered and displayed image, I think it is important that hardware and software treat colors correctly.
Every (color) texture read, framebuffer read and framebuffer write should be either sRGB mapped or FP16, and PS, blending, AA downsampling and gamma LUT need to provide enough precision.
AFAIK, ATI currently has neither FP16 nor sRGB framebuffer reads for blending, and no high precision blending. NVidia is missing the sRGB framebuffer reads for blending, as well as sRGB AA downsampling.
I hope WGF will require sRGB and FP16 in all those places.