Will you explain how HDR is rendered?

BenQ

Newcomer
I've been "Googling" like crazy but I can't seem to find enough specific info about HDR. Most sites just give a very general idea of what HDR is. HDR simulates the way your eyes percieve light more effectively yad yada yada....

But at this point I can't even tell you exactly what kind of hardware is used to generate it, or how HDR is rendered.

Do any of you have a link to a fairly detailed description of HDR?

Much appreciated.
 
HDR is really simple.

LDR means values are stored and calculated in the range 0.0-1.0 at low precision, usually 8-bit integers. 0.0-1.0 isn't a big range, and an 8-bit int isn't the best storage.

HDR means 0.0-BIG (possibly -BIG to BIG) with something that has good accuracy, generally using floating point all the way through the pipeline.

So it's exactly the same as normal rendering, but with more accuracy and a larger range.

Now this has to come to your display. Since we don't have HDR displays, we cheat here: we compress the interesting parts of the image into what the screen can display. One technique for this is called tone mapping. As you can imagine, if you're trying to display something 10x brighter than what your screen can handle, that's useful.
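
To make that concrete, here's a minimal tone-mapping sketch in Python. The Reinhard-style x/(1+x) curve is just one illustrative choice; nothing above commits to any particular operator:

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    """Compress HDR values (0..BIG) into a displayable 0..1 range.

    Reinhard's x/(1+x) is just one choice of curve; any monotonic
    compression of the range would do here.
    """
    x = hdr * exposure                      # choose which part of the range is 'interesting'
    ldr = x / (1.0 + x)                     # maps [0, inf) smoothly into [0, 1)
    return (ldr * 255.0).astype(np.uint8)   # quantise to 8 bits for the display

# A pixel 10x brighter than the display's maximum still lands inside 0..255:
print(reinhard_tonemap(np.array([0.5, 1.0, 10.0])))
```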
 
bloodbob said:
HDR is really simple.

LDR means values are stored and calculated in the range 0.0-1.0 at low precision, usually 8-bit integers. 0.0-1.0 isn't a big range, and an 8-bit int isn't the best storage.

HDR means 0.0-BIG (possibly -BIG to BIG) with something that has good accuracy, generally using floating point all the way through the pipeline.

So it's exactly the same as normal rendering, but with more accuracy and a larger range.

Now this has to come to your display. Since we don't have HDR displays, we cheat here: we compress the interesting parts of the image into what the screen can display. One technique for this is called tone mapping. As you can imagine, if you're trying to display something 10x brighter than what your screen can handle, that's useful.

Alright.....

So HDR uses something called tone mapping because the "real" range of light is far more than any PC monitor can display. So they use tone mapping to ..... bah I'm still confused....

Also, can you explain the differences between fp8 (if it exists), fp10, the "special" fp10 mode in Xenos, and fp16?

..... just keep talking :)
 
HDR rendering doesn't work this way though, does it? I've heard talk of HDR working by compositing separately rendered elements, as opposed to using traditional rendering, just with high-dynamic-range values, and then scaling the output.
 
Basically what happens is that when something is rendered with a higher range than the monitor can display, the final "interpolated" (not the right term) image will be clearer than if the rendering was done "close to spec" for the monitor.


Look at it this way: when you watch a DVD, your TV and the DVD itself both have limitations; they just will not replicate the image so well that you think you're actually looking through a window.

However, it looks miles better than any realtime computer graphics, because the source material (reality) has a very high range of colour, contrast and brightness, all of which are way beyond what your TV can display, but are needed to "get it right" in the final image.

Not sure that was very clear.
 
BenQ said:
So HDR uses something called tone mapping because the "real" range of light is far more than any PC monitor can display. So they use tone mapping to ..... bah I'm still confused....
If you know anything about photography, this is easy to understand. The idea is to supply brightness information that is as diverse as real-world illumination, and then scale the output to show the area of detail you want on the display. So you can scale the low-level values in shadows from black to white, and anything above that brightness will be whited out (overexposed). Or you can scale the image so that only the very brightest areas, like looking directly at the sun, are white, and everything else is much darker. It gives responses like real cinematography.
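
As a toy illustration of that exposure scaling (the function name and the sample values here are made up for the example):

```python
import numpy as np

def expose(hdr, exposure):
    """Scale the scene by an 'exposure' factor, then clip to the display range.

    High exposure: shadow detail becomes visible, bright areas blow out
    to white. Low exposure: the sun stays in range, shadows go black.
    """
    return np.clip(hdr * exposure, 0.0, 1.0)

scene = np.array([0.01, 0.1, 1.0, 50.0])   # deep shadow ... direct sunlight
print(expose(scene, 20.0))    # shadows readable, everything >= 0.05 whited out
print(expose(scene, 0.02))    # sun exactly at white, everything else much darker
```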

Also, can you explain the differences between fp8 (if it exists), fp10, the "special" fp10 mode in Xenos, and fp16?
There's no such thing as fp8. In an fpxx number, the xx is the number of bits used to represent a number. In the case of fp10, 10 bits are used. These are divided into mantissa and exponent values, which in plain English means a decimal like 3.421 and a multiplier that's 2 to the nth power.

In the case of fp10, there are, I think, 7 mantissa bits and 3 exponent bits, but I'm fuzzy on that. Basically, the bigger the fp number, the more accurate you are, but the more memory you need to store the information, and the slower your calculations (though maybe only above fp16?). Accuracy is needed when blending etc. to eliminate artefacts.
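
If you want to see how an exponent/mantissa split turns bits into a value, here's a sketch assuming the 3-exponent / 7-mantissa layout I guessed at above; real small-float formats differ in bias and special-case handling:

```python
def decode_minifloat(bits, exp_bits=3, man_bits=7, bias=3):
    """Decode an unsigned minifloat: value = 1.mantissa * 2^(exponent - bias).

    The 3/7 split and the bias are assumptions for illustration only;
    real small-float formats differ in layout, bias and special cases.
    """
    exponent = (bits >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = bits & ((1 << man_bits) - 1)
    if exponent == 0:                        # denormal: no implicit leading 1
        return (mantissa / (1 << man_bits)) * 2.0 ** (1 - bias)
    return (1 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

# 10-bit pattern 011 0000000: exponent = 3, mantissa = 0 -> 1.0 * 2^(3-3) = 1.0
print(decode_minifloat(0b0110000000))
```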
 
Shifty Geezer said:
BenQ said:
So HDR uses something called tone mapping because the "real" range of light is far more than any PC monitor can display. So they use tone mapping to ..... bah I'm still confused....
If you know anything about photography, this is easy to understand. The idea is to supply brightness information that is as diverse as real-world illumination, and then scale the output to show the area of detail you want on the display. So you can scale the low-level values in shadows from black to white, and anything above that brightness will be whited out (overexposed). Or you can scale the image so that only the very brightest areas, like looking directly at the sun, are white, and everything else is much darker. It gives responses like real cinematography.

Also, can you explain the differences between fp8 (if it exists), fp10, the "special" fp10 mode in Xenos, and fp16?
There's no such thing as fp8. In an fpxx number, the xx is the number of bits used to represent a number. In the case of fp10, 10 bits are used. These are divided into mantissa and exponent values, which in plain English means a decimal like 3.421 and a multiplier that's 2 to the nth power.

In the case of fp10, there are, I think, 7 mantissa bits and 3 exponent bits, but I'm fuzzy on that. Basically, the bigger the fp number, the more accurate you are, but the more memory you need to store the information, and the slower your calculations (though maybe only above fp16?). Accuracy is needed when blending etc. to eliminate artefacts.

Your explanation of what tone mapping does was very clear and helpful, thank you.

But are you sure about fp10 being 10 bits? I have read on these forums that fp10 is (10x10x10x2) - 32 bit, and that fp16 is (16x16x16x16) - 64 bit. However, I admit I don't really understand what any of that means. :oops: I only know that higher numbers = higher quality in this case (and bigger performance hits).

I have A LOT to learn about HDR.
 
There's really not much more to it than "a lot more precision than before". That's all, really.
What's different is that chips might have to use different rendering methods in order to keep performance at acceptable levels when using HDR. Or that's the way I understand it.
Like tiling: it's just a way to keep performance at good levels while having many features turned on that would be too much if tiling wasn't used. It doesn't have anything to do with HDR or some supposed HDR rendering method.
 
This guy from gamedev.net http://www.gamedev.net/reference/articles/article2108.asp explains it like this:

[Image: diagram of the HDR rendering process from the article]


1. Render the scene with HDR values into a floating-point buffer.
2. Downsample this buffer to 1/4 size (1/2 width and 1/2 height) and suppress LDR values.
3. Run a bloom filter over the downsampled image, blurring it along the x and y axes.
4. Tone map the blurred image after compositing it with the original image.

We need to suppress LDR values so that we don't blur those parts of the image. A bloom filter simply bleeds colour from one pixel to its neighbouring pixels. We use a Gaussian filter in this case, but you can use any bloom filter.
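
Here's a rough Python sketch of those four steps. The function name, threshold and blur sigma are illustrative, SciPy's gaussian_filter stands in for the article's bloom filter, and even image dimensions are assumed for the simple 2x down/upsample:

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # a separable Gaussian blurs along x and y

def hdr_post_process(hdr, threshold=1.0, blur_sigma=4.0):
    """Sketch of the article's four steps; assumes even image dimensions.

    hdr: float array of shape (H, W, 3), i.e. step 1's output straight
    from the floating-point render target.
    """
    # 2. Downsample to 1/4 area (1/2 width, 1/2 height), suppress LDR values
    small = hdr[::2, ::2]
    bright = np.maximum(small - threshold, 0.0)   # keep only the over-bright parts

    # 3. Bloom: bleed each bright pixel into its neighbours (Gaussian here)
    bloom = gaussian_filter(bright, sigma=(blur_sigma, blur_sigma, 0))

    # 4. Upsample the bloom, composite with the original, then tone map
    bloom_full = np.repeat(np.repeat(bloom, 2, axis=0), 2, axis=1)
    combined = hdr + bloom_full
    return combined / (1.0 + combined)            # simple Reinhard-style tone map
```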
 
10 bits per colour: that's 10 bits each for red, green and blue. To make up a computer-friendly 32-bit word, a generous 2 bits are provided for the alpha value.

In fp16, 16 bits are used for each channel: 16 bits each for red, green, blue and alpha, for 64 bits per pixel.

If you don't know what bits are, it'd be best to go google up some 'computer jargon 101'.
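
A quick sketch of how 10+10+10+2 bits pack into one 32-bit word (the field order here is just one possible convention):

```python
def pack_rgb10a2(r, g, b, a):
    """Pack three 10-bit colour channels and a 2-bit alpha into a 32-bit word.

    Putting alpha in the top bits is just one possible convention.
    """
    assert r < 1024 and g < 1024 and b < 1024 and a < 4
    return (a << 30) | (b << 20) | (g << 10) | r

print(hex(pack_rgb10a2(1023, 512, 0, 3)))   # all four channels in one 32-bit value
```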
 
I have read one of Jawed's posts where he said that HDR was accomplished by "floating point blending in the frame buffer"... unfortunately I'm not smart enough for that to offer me any real insight :oops:
 
BenQ said:
I have read one of Jawed's posts where he said that HDR was accomplished by "floating point blending in the frame buffer"... unfortunately I'm not smart enough for that to offer me any real insight :oops:

I think my post clears that process up a bit...
 
Having had a brief glance at that article, they seem to be saying the blur is part of the HDR process, but it seems to me it's only there to add light bloom. All the HDR images we see have this optical 'fuzz', but it's not necessary for dynamic range.
 
Looks nice, but that shows bloom around bright areas. The same can be achieved on existing non-HDR hardware easily enough (though maybe not as quickly, since you need to post-process the final image). Bloom isn't intrinsic to HDR; it's added by the blurring that article referred to. But this doesn't explain the compositing of different elements, which seems to be intrinsic to rendering HDR scenes but which I know nothing about :?
 
Shifty Geezer said:
Looks nice, but that shows bloom around bright areas. The same can be achieved on existing non-HDR hardware easily enough (though maybe not as quickly, since you need to post-process the final image). Bloom isn't intrinsic to HDR; it's added by the blurring that article referred to. But this doesn't explain the compositing of different elements, which seems to be intrinsic to rendering HDR scenes but which I know nothing about :?

No, you're stopping at the bloom effects. HDR is why those images look much more realistic: the high contrast, the way they're overbright. The bloom is just an after-effect. HDR is about how bright it all looks, like you'd expect from a real picture.

Non-HDR hardware can't do that. It can do bloom effects, but it can't replicate the over-brightness and other optical effects.
 
Yes they can - I've done it! You use additive texture blending, and add a highlights image to a normal image.

Remember, most HDR images are constructed from two 24-bit colour sources at different exposures. You can simulate the same with two normal photos taken at different exposures, adding the brighter highlights with a 'compensatory' scaling. I'll also boast that I was doing this with raytracers before anyone had ever mentioned HDR ;)

HDR provides a better-quality solution, but the overbright isn't anything that couldn't be done on any hardware with two-texture capability.

And most importantly, as regards how HDR images are constructed, that article seemed to be talking about bloom post-processing and not the actual rendering.
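
For what it's worth, the additive trick looks something like this in Python; the names and the gain value are purely illustrative, not from any real API:

```python
import numpy as np

def fake_overbright(normal, highlights, highlight_gain=1.0):
    """Additively blend a 'highlights' exposure onto a normal 8-bit image.

    'normal' and 'highlights' are uint8 images of the same scene; the
    highlights image is exposed so only the brightest features survive.
    highlight_gain is the 'compensatory' scaling mentioned above.
    """
    out = normal.astype(np.float32) + highlight_gain * highlights.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)   # saturating add, like old blend hardware
```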
 
Shifty Geezer said:
Yes they can - I've done it! You use additive texture blending, and add a highlights image to a normal image.

Remember, most HDR images are constructed from two 24-bit colour sources at different exposures. You can simulate the same with two normal photos taken at different exposures, adding the brighter highlights with a 'compensatory' scaling. I'll also boast that I was doing this with raytracers before anyone had ever mentioned HDR ;)

HDR provides a better-quality solution, but the overbright isn't anything that couldn't be done on any hardware with two-texture capability.

And most importantly, as regards how HDR images are constructed, that article seemed to be talking about bloom post-processing and not the actual rendering.

Well, I'd love to see images without HDR that look like images with HDR.
 
Shifty Geezer said:
Yes they can - I've done it! You use additive texture blending, and add a highlights image to a normal image.

Remember, most HDR images are constructed from two 24-bit colour sources at different exposures. You can simulate the same with two normal photos taken at different exposures, adding the brighter highlights with a 'compensatory' scaling. I'll also boast that I was doing this with raytracers before anyone had ever mentioned HDR ;)

HDR provides a better-quality solution, but the overbright isn't anything that couldn't be done on any hardware with two-texture capability.

And most importantly, as regards how HDR images are constructed, that article seemed to be talking about bloom post-processing and not the actual rendering.

Care to elaborate? I don't see how an overbright pass would solve the issue of representing correct higher ranges, even if we assume one can somehow 'carry over' the overbright info to a second pass.

Actually, I don't see how that 'carrying over' can be done in the first place. The only thing I know of that helps simulate HDR on non-HDR hardware is scaling + renormalisation of the colour ranges, during which you (severely) lose precision.
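
A tiny numeric example of that precision loss: renormalise a scene into 0-1, store it in 8 bits, and nearby shadow values collapse into the same code (the sample values are made up):

```python
import numpy as np

scene = np.array([0.02, 0.04, 8.0])       # two distinct shadow values + one bright value

scaled = scene / scene.max()              # renormalise the whole range into 0..1
stored = np.round(scaled * 255) / 255     # what an 8-bit buffer actually keeps
restored = stored * scene.max()           # scale back up

print(restored)   # both shadow values quantise to the same code: the detail is gone
```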
 
bloodbob said:
HDR means 0.0-BIG (possibly -BIG to BIG) with something that has good accuracy, generally using floating point all the way through the pipeline.

A negative range for colour? What would that be useful for?
 