Hi All!!!
My name is Edu. I'm currently doing a small personal project in XNA and I'm a little stuck in the HDR pipeline.
I read Francesco Carucci's ShaderX6 article about computing a histogram on the CPU to obtain the max, min and average luminance values for Reinhard's tonemapping operator, and it's driving me a bit crazy.
As the article describes, I render the scene to an FP16 render target (I'm not using LogLuv encoding for now, to keep things simple), create a downsampled version at 1/4 of the original size, and read that version back for CPU analysis.
The first step is the histogram creation:
Carucci suggests using 1024 luminance slots for the histogram, so the luminance range covered by each slot is absolute_maximum_luminance / 1024. I'm still trying to figure out what he means by 'absolute_max_luminance'.
When working with an A8R8G8B8 render target we have fixed-point values in the range [0...1]. Any channel value greater than 1 gets clamped to 1, so the maximum representable luminance ratio is 1:255.
I understand that HDR overcomes this limit by using an FP16 format, with a 16-bit floating-point value per channel representing a much wider range than before. A 16-bit float covers roughly [0.000061, 65504] for normalized values, which should be sufficient.
Does this mean the absolute maximum luminance to choose would be 65504? Or must we guess a smaller value, one we know the scene will never exceed?
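(As a side note, here's a tiny helper I wrote, not from the article, that decodes a normalized IEEE 754 half-float bit pattern just to double-check the format's limits.)

```cpp
#include <cmath>
#include <cstdint>

// Decodes a normalized IEEE 754 half-float bit pattern.
// Value = (1 + mantissa/1024) * 2^(exponent - 15).
double halfToDouble(uint16_t bits) {
    int exponent = (bits >> 10) & 0x1F; // 5 exponent bits (biased by 15)
    int mantissa = bits & 0x3FF;        // 10 mantissa bits
    return (1.0 + mantissa / 1024.0) * std::ldexp(1.0, exponent - 15);
}
// halfToDouble(0x7BFF) -> 65504, the largest finite half
// halfToDouble(0x0400) -> 2^-14, the smallest normalized half (~0.000061)
```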
Is the intention that an artist sets this parameter manually per scene? Or should it be a fixed value?
The problem is that the value chosen for this parameter affects the final image: if I choose a large value and my scene's actual maximum luminance is relatively small compared to the chosen absolute_max_luminance, the vast majority of pixels fall into the same histogram slot. The larger absolute_max_lum is, the larger the luminance range covered by each slot, so the luminance differences between most pixels won't be big enough to put them into different slots, producing a degenerate histogram (almost all pixels in one to three slots). As you can imagine, this gives me incorrect results.
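To make the binning problem concrete, here's a small C++ sketch of the histogram step as I understand it (the function names are mine, not Carucci's): each slot covers absMaxLum / 1024 units of luminance, so when absMaxLum dwarfs the scene's real maximum, every pixel lands in the first slot.

```cpp
#include <vector>

// Normalized histogram: each pixel contributes 1/N to its slot,
// so the slot values sum to 1.
std::vector<float> buildNormalizedHistogram(const std::vector<float>& luminances,
                                            float absMaxLum, int slots = 1024) {
    std::vector<float> hist(slots, 0.0f);
    float weight = 1.0f / luminances.size();
    for (float L : luminances) {
        int slot = (int)(L / absMaxLum * slots); // slot width = absMaxLum / slots
        if (slot >= slots) slot = slots - 1;     // clamp overflow into the last slot
        hist[slot] += weight;
    }
    return hist;
}

int countNonEmptySlots(const std::vector<float>& hist) {
    int n = 0;
    for (float h : hist) if (h > 0.0f) ++n;
    return n;
}
```

With scene luminances of {0.1, 0.5, 1.0, 2.0} and absMaxLum = 65504, all four pixels fall into slot 0; with absMaxLum = 2.5 the same pixels spread across four distinct slots.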
I also don't fully understand how histogram equalization is applied in this process. As far as I know, equalization tries to maximize image contrast by spreading the image's original histogram across the entire luminance range (the range between 0 and the abs_max_luminance we chose) while preserving the ordering of luminances.
Reading up on this, I found that equalization is performed by constructing a remapping function from the cumulative distribution function of the histogram, which is as simple as, for each slot, summing all the normalized histogram values up to and including that slot.
What I suppose we do with this is apply the remapping function to each pixel: take the pixel's luminance, run it through the remapping function, and the result is the pixel's new luminance.
I suppose this has to happen in the pixel shader before the tonemapping step (we want to equalize the image before applying tonemapping to bring it into the traditional 0-1 range). And if the image is going to be equalized, I suppose the maximum, minimum and average luminance passed to the Reinhard operator must be computed in 'equalized space', since that's the space the image will end up in.
What I've tried to do is:
1.- For now I let the user set the abs_max_luminance for the scene, and I pick a value that gives 'more or less' visually correct results.
2.- For each pixel in the image, compute its luminance and find its slot, then add 1/Total_Num_Pixels to that slot. This gives us the normalized histogram.
3.- Compute the cumulative distribution function of the histogram, which will be used to equalize the image. As described in the Digital Image Processing book, this function performs the luminance mapping.
I store this function in a num_slots x 1 texture.
4.- Since the image will be equalized to cover the whole luminance range, I assume I can compute the max, min and average luminance this way:
MinLuminance = Absolute_Max_Luminance * Min_Percentage
MaxLuminance = Absolute_Max_Luminance * Max_Percentage
AvgLuminance = Absolute_Max_Luminance * Avg_Percentage
(I'm not very sure of this step)
5.- To perform tonemapping I use the Reinhard operator with the values computed above. Before applying the operator I map each pixel's luminance to its equalized value:
// Calculate the luminance of the current pixel
float Lw = dot(LUM_CONVERT, vColor);
// Remap to equalized space via the CDF texture. Note the lookup
// coordinate must be normalized to [0,1], so I divide by the same
// absolute max luminance used to build the histogram
// (g_fAbsMaxLuminance is a uniform I added for that).
float LwEq = tex1D(RemapSampler, Lw / g_fAbsMaxLuminance);
float Ls = LwEq - g_fMinLuminance;
float Ld = 0.0f;
if(Ls > 0.0f)
{
    // Scale by the key of the scene, then Reinhard with white point
    float L = (g_fMiddleGrey / g_fAvgLuminance) * Ls;
    Ld = L * (1.0f + L / (g_fMaxLuminance * g_fMaxLuminance)) / (1.0f + L);
}
// Guard against division by zero for black pixels
vColor *= (LwEq > 0.0f) ? (Ld / LwEq) : 0.0f;
return vColor;
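To sanity-check the constants outside the GPU, I also wrote a CPU-side sketch of the same Reinhard step (my own helper, with the shader uniforms passed as plain parameters):

```cpp
// Reinhard operator with a white point, mirroring the shader code:
// shift by minLum, scale by the key (middleGrey / avgLum), then
// compress with maxLum acting as the white point.
float reinhard(float Lw, float middleGrey, float minLum,
               float avgLum, float maxLum) {
    float Ls = Lw - minLum;                // shift so minLum maps to black
    if (Ls <= 0.0f) return 0.0f;
    float L = (middleGrey / avgLum) * Ls;  // key-scaled luminance
    return L * (1.0f + L / (maxLum * maxLum)) / (1.0f + L);
}
```

For example, a pixel at the average luminance (Lw = avgLum = 1, minLum = 0, middleGrey = 0.18, maxLum = 4) comes out around 0.154, which looks plausible for middle grey.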
But this isn't giving me good results. I'm attaching a capture showing the final rendered image, the downsampled version of the original FP16 render target, and the lighting configuration panel with the user-configurable values for the algorithm.
Surely I'm not understanding something correctly. I've been thinking about this for two weeks and my mind is blocked now.
Thanks a lot in advance, really.