My fake HDR imitation shader (1MB total images)

K.I.L.E.R

Standard shot:
NOexposure.PNG


With my shader:
exposure.PNG


If anyone's interested:
http://members.optusnet.com.au/ksaho/dumb/gamma.zip

Setting up the kernel in the vertex shader was an idea I got from ShadX in Pete's OGL forum, from ShadX's HQ2X filter. It really does save performance having it declared in the vertex shader.

What my shader does:

Gets the current pixel.

Extracts intensity from each pixel.

Gets average intensity.

Blends the average intensity with the original pixel's intensity.

Puts the blended intensity through an e^(x) function, then subtracts one from the result (since e^0 = 1, this keeps the image from washing out to white).
It scales the intensity before it subtracts 1 from it.

Converts final colour + new intensity from YIQ back into RGB.
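For concreteness, the steps above can be sketched in NumPy like this. The NTSC YIQ matrices are standard; the blend weight and scale are illustrative stand-ins, not the shader's actual constants:

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix; the inverse converts back.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])
YIQ_TO_RGB = np.linalg.inv(RGB_TO_YIQ)

def fake_hdr(rgb, neighbourhood, blend=0.75, scale=1.0):
    """One pixel of the pipeline described above.

    rgb           -- the current pixel, channels in [0, 1]
    neighbourhood -- iterable of nearby RGB pixels (the kernel taps)
    blend         -- weight kept for the pixel's own intensity (a guess)
    scale         -- intensity scale applied before exp() (a guess)
    """
    yiq = RGB_TO_YIQ @ np.asarray(rgb, dtype=float)
    # Average intensity (the Y channel) over the kernel taps.
    mean_y = np.mean([(RGB_TO_YIQ @ np.asarray(p, dtype=float))[0]
                      for p in neighbourhood])
    # Blend the kernel average with the pixel's own intensity.
    y = blend * yiq[0] + (1.0 - blend) * mean_y
    # Push through e^x and subtract 1, so zero intensity stays zero.
    yiq[0] = np.exp(scale * y) - 1.0
    # Convert the new intensity plus the original chroma back to RGB.
    return YIQ_TO_RGB @ yiq
```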

I'm particularly proud of this as I've done it purely from what little knowledge of image processing I have. You guys may have your PhDs, but I don't even have an undergraduate degree as of yet.
My implementation may be weak and a poor representation of HDR, but please do keep in mind that I'm still learning.

I would like some technical feedback as well as all other feedback you guys can give.
Thanks.
 
Arrgghh!
What you've done is some sort of intensity normalisation. For HDR you need more information, like how overexposed each pixel is. Then you can post-process.

Oh, and for the record, I'm a learned electrician :p (but studying maths at the moment)
 
I can't really get much information on HDR.
I've seen a few HDR techniques.

Tone mapping is impossible to find info on.
Does anyone have any good image processing text books they can recommend for this stuff?
 
K.I.L.E.R said:
I can't really get much information on HDR.
I've seen a few HDR techniques.

Tone mapping is impossible to find info on.

Of course you usually need tone mapping to display an HDR image, but there's nothing special in HDR rendering except the color representation used. All the "tricky stuff" happens during tone mapping, simulating on an LDR image how the human eye would react to the HDR one.

Googled "tone mapping". The first link (Wikipedia) gives a short explanation, and a bunch of links at the bottom of the page:
Photographic Tone Reproduction for Digital Images
Lightness Perception in Tone Reproduction for High Dynamic Range Images
Contrast Processing of High Dynamic Range Images
Fast Bilateral Filtering for the Display of High-Dynamic-Range Images
Gradient Domain High Dynamic Range Compression
Implementation of state-of-the-art tone mapping operators
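As a concrete example of what those papers do: the global operator from "Photographic Tone Reproduction for Digital Images" compresses luminance with L/(1+L). A minimal sketch of its white-point variant (the `white` default here is illustrative):

```python
import numpy as np

def reinhard_tonemap(luminance, white=4.0):
    """Reinhard's global operator, white-point variant.

    Compresses HDR luminance with L/(1+L), extended so that
    L == white maps exactly to 1.0; luminance below that
    lands in [0, 1).
    """
    L = np.asarray(luminance, dtype=float)
    return L * (1.0 + L / (white * white)) / (1.0 + L)
```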
 
Kruno what is this for, some sort of post-processing filter on an old school console emulator?

Also, what kind of filter kernel are you using here? I can see that your centre tap is at (gl_TexCoord[1].x, gl_TexCoord[0].w), and in storePixels(), 3 samples have the same x-coordinate as the centre tap and 3 samples have the same y-coordinate. Is this some sort of asymmetric cross? Anyway, a few big suggestions for "gpuPeteOGL2.slf":

Optimization 1:

Do the matrix multiplies after the averaging. Note that M*v1 + M*v2 = M*(v1 + v2). Get rid of YIQ * in all 7 places that you have it, and then replace the 3rd line of main() with finalPixel = YIQ * meanFilter(finalPixel);. This knocks off 18 math instructions without altering the output.
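The identity behind this is just linearity of matrix multiplication. A quick numeric sanity check, with random stand-ins for the YIQ matrix and the 7 taps:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((3, 3))      # stands in for the YIQ matrix
taps = rng.random((7, 3))   # stands in for the 7 sampled pixels

# Converting each tap and then summing...
per_tap = sum(M @ v for v in taps)
# ...equals summing first and converting once.
once = M @ taps.sum(axis=0)
```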


Optimization 2:

Your exponential function isn't doing much, and in the [0,1] range it's almost linear. You can pretty much collapse everything to a single matrix multiply and offset. This code should give very similar results (I got about 3% RMS error with some random test values in Excel):
Code:
mat3 MAGIC = mat3(
	1.134357,	0.264423,	0.05122,
	0.134921,	1.264059,	0.05102,
	0.133707,	0.263525,	1.052768
);

// GLSL requires functions to be declared before use:
void storePixels();
vec3 meanFilter(inout vec3 finalPixel);

void main(void)
{
	vec3 finalPixel = texture2D(OGL2Texture, vec2(gl_TexCoord[1].x, gl_TexCoord[0].w)).xyz;
	
	storePixels();
	finalPixel = meanFilter(finalPixel);
	
	gl_FragColor = vec4(MAGIC * finalPixel - vec3(0.28, 0.28, 0.28), 0.0);
}

vec3 meanFilter(inout vec3 finalPixel){
	vec3 avg = vec3(0.0, finalPixel.yz);

	for(int i=0; i < KERNEL_AREA; i++){
		avg.x += pixels[i].x;
	}
	
	avg.x /= float(KERNEL_AREA);
	
	float a = 0.75;
	finalPixel.x = avg.x * (1.0 - a) + finalPixel.x * a;
	return finalPixel;
}

void storePixels(){
	pixels[0] = texture2D(OGL2Texture, gl_TexCoord[0].xw).xyz;
	pixels[1] = texture2D(OGL2Texture, gl_TexCoord[0].yw).xyz;
	pixels[2] = texture2D(OGL2Texture, gl_TexCoord[0].zw).xyz;
	
	pixels[3] = texture2D(OGL2Texture, gl_TexCoord[1].xy).xyz;
	pixels[4] = texture2D(OGL2Texture, gl_TexCoord[1].xz).xyz;
	pixels[5] = texture2D(OGL2Texture, gl_TexCoord[1].xw).xyz;
}
Let me know how it goes.
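For anyone curious how close "almost linear" is: here's a quick least-squares check of e^x - 1 on [0, 1], in Python rather than GLSL purely to verify the claim:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
y = np.exp(x) - 1.0

# Best-fit line a*x + b over the interval (slope first from polyfit).
a, b = np.polyfit(x, y, 1)
rms = np.sqrt(np.mean((a * x + b - y) ** 2))

# RMS residual relative to the output range e - 1 (~1.718);
# it comes out at a few percent, consistent with the ~3% figure above.
rel = rms / (np.e - 1.0)
```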

EDIT: Whoops, looks like you already replied. I pressed submit by accident. Anyway, can you take a peek at the second optimization?
 
Last edited by a moderator:
Thanks, I didn't think of that.
It's for a PSX console.
http://www.pbernert.com/

Mintmaster said:
Kruno what is this for, some sort of post-processing filter on an old school console emulator? Anyway, a few suggestions for "gpuPeteOGL2.slf":

Optimization 1:

Do the matrix multiplies after the averaging. M*v1 + M*v2 = M*(v1 + v2). Get rid of YIQ * in all 7 places that you have it, and then replace the 3rd line of main() with finalPixel = YIQ * meanFilter(finalPixel);


Thanks.
I will definitely do more reading on the topic.
Tone mapping looks to me like nothing more than using a function to preserve the contrast ratios between pixels.

Mate Kovacs said:
Of course you usually need tone mapping to display an HDR image, but there's nothing special in HDR rendering except the color representation used. All the "tricky stuff" happens during tone mapping, simulating on an LDR image how the human eye would react to the HDR one.

Googled "tone mapping". The first link (Wikipedia) gives a short explanation, and a bunch of links at the bottom of the page:
Photographic Tone Reproduction for Digital Images
Lightness Perception in Tone Reproduction for High Dynamic Range Images
Contrast Processing of High Dynamic Range Images
Fast Bilateral Filtering for the Display of High-Dynamic-Range Images
Gradient Domain High Dynamic Range Compression
Implementation of state-of-the-art tone mapping operators
 
K.I.L.E.R said:
I will definitely do more reading on the topic.
Tone mapping to me looks nothing more than using a function to preserve the ratios of contrast between pixels.
Yeah, I'd say that's roughly right.

Anyway, please don't try to do fake HDR on a hand-drawn piece of art, as it's probably artistic rather than realistic.

Anyway, if you wanted to do this on a rendered LDR image (say, other PSX games) and you really want fake HDR, I could suggest something like this.

Basically, to create the HDR you interpolate the pixels with a component clamped at 1.0, i.e. pixels that were probably brighter than 1.0.

Now I'll talk in 1D because it's easy.
killa.PNG


Talking about the left-hand side: the black curve represents the values held in the LDR buffer; notice the straight line, which is where it is clamped at 1.0. One of the most obvious choices to interpolate this would be a Gaussian curve; I've tried to roughly draw one. In your pseudo-HDR buffer you would then replace the 1.0 values with the blue values and do any further post-processing.

The right-hand side shows a case where the Gaussian curve would produce a very high peak (in cyan). Since you don't actually have any information about what it's really supposed to be, you're probably just going to lose the interesting parts of your image by shoving everything else into darkness. That is why you would attenuate the Gaussian curve somehow (say, sqrt(value)), as shown in red. This exercise gets harder in 2D, and you'll probably run into skeletons and stuff.
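A 1D sketch of that idea in Python. The clip threshold of 1.0 is from the description above, but the Gaussian width and the sqrt-based peak attenuation are illustrative guesses, not the exact scheme:

```python
import numpy as np

def expand_clipped_1d(signal, sigma=2.0):
    """Guess pseudo-HDR values for samples clamped at 1.0.

    For each run of clipped samples, raise a Gaussian bump across
    the run, with the peak attenuated via sqrt() so a very long
    run doesn't dominate the rest of the image.
    """
    s = np.asarray(signal, dtype=float).copy()
    clipped = s >= 1.0
    i = 0
    while i < len(s):
        if clipped[i]:
            j = i
            while j < len(s) and clipped[j]:
                j += 1
            run = j - i                        # length of the clipped run
            peak = 1.0 + np.sqrt(run / sigma)  # attenuated guess at the true peak
            x = np.linspace(-1.0, 1.0, run)
            # Gaussian bump sitting on top of the clip level.
            s[i:j] = 1.0 + (peak - 1.0) * np.exp(-(x / 0.5) ** 2 / 2.0)
            i = j
        else:
            i += 1
    return s
```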
 