HDR overused?

I'm sure we've had this discussion before...

HDR is absolutely necessary to get more realism into rendering - with realism being something that a lot of people strive for and a lot of consumers seem to want.

It's not a matter of special effects like bloom - it's simply a case of having lighting calculated with enough precision and range to produce a result that's closer to the real world (or at least a more accurate simulation of it than is possible with a linear 8-bit approximation).

The problem comes when you've got this more accurate rendered image - how do you actually display it? We don't yet have HDR displays in consumer hands, and even when we do they don't have the same capabilities as the real world (personally I wouldn't want a TV capable of actually blinding me).

We need to take the HDR image, and compress the range of values down to something that is displayable, while at the same time using rendering tricks to convince the eye/brain that it's as bright/dark as it's supposed to be.

The first thing to do is apply "tone mapping". This is a way of mapping the range of luminance in the image to the range that is displayable, in such a way that as much detail as possible (in both dark and bright areas) is preserved.

That's already a fairly hard problem, and one with continuing research. Doing it on a static image is hard enough; doing it in realtime at 60fps is trickier still. There are workable solutions, but they're not perfect, so expect to see improvements in that area as we continue to work with HDR and find better ways of displaying it on an LDR display in a pleasing fashion.
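For illustration, a minimal global tone mapping sketch in C++ (the Reinhard curve and the exposure constant here are just stand-ins - a real engine picks its own curve and drives the exposure from measured scene luminance):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Minimal global tone mapping sketch: scale by an exposure factor, compress
// with the Reinhard curve L/(1+L), then gamma-encode to 8 bits.
uint8_t tonemapChannel(float hdrValue, float exposure = 0.18f)
{
    float scaled  = hdrValue * exposure;            // exposure adjustment
    float mapped  = scaled / (1.0f + scaled);       // [0, inf) -> [0, 1)
    float display = std::pow(mapped, 1.0f / 2.2f);  // gamma encode for the display
    return static_cast<uint8_t>(std::min(255.0f, display * 255.0f + 0.5f));
}
```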

The other thing people do is apply things like bloom - this is simply designed to be another hint to the optical system that something is "really bright".
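To be concrete, the usual starting point is a bright-pass whose output gets blurred and added back over the tonemapped image - a sketch, with an arbitrary threshold:

```cpp
#include <algorithm>

// Bright-pass sketch: keep only the energy above a threshold; that residue is
// then blurred and composited back on top. Cranking the threshold down or the
// blur strength up is exactly the "unsubtle" bloom being complained about.
float brightPass(float luminance, float threshold = 1.0f)
{
    return std::max(0.0f, luminance - threshold);
}
```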

Unfortunately right now people are being really unsubtle with these tricks. Perhaps they have to make them obvious so that people believe them when they have "HDR" as a bullet point on the box. Or maybe they're just getting carried away with their fancy shader and don't like it when they can't see it in every image.

IMO HDR is an absolutely necessary development, and bloom, while being horribly mishandled, is actually quite useful too, at least until we get more advanced display technologies. However right now people are just getting to grips with HDR and how it works, and there is a lot of "dynamic range abuse" being committed...
 
MrWibble said:
The problem comes when you've got this more accurate rendered image - how do you actually display it? We don't yet have HDR displays in consumer hands, and even when we do they don't have the same capabilities as the real world (personally I wouldn't want a TV capable of actually blinding me).
Heh, now I'm imagining the legal blabla involved with selling a consumer set that can potentially display images capable of blinding a person (something like real sun brightness perhaps :p).
I think it's pretty safe to say that HDR displays will have a very fixed upper limit for the brightness range in consumer space even when the technology eventually becomes viable to manufacture cheaply.

Unfortunately right now people are being really unsubtle with these tricks. Perhaps they have to make them obvious so that people believe them when they have "HDR" as a bullet point on the box. Or maybe they're just getting carried away with their fancy shader and don't like it when they can't see it in every image.
I like to call it the disco effect, cuz every time I see gaudy, overdone CG I remember the first days of hw-accelerated colored lighting. It's interesting to observe the same phenomenon just keep happening over and over again though.
I'm far more annoyed with the current trend of uber-glitchy shadowing though - much as I'm no fan of volume shadows' performance characteristics, the image stability/quality of titles that use them is head and shoulders superior to all the 'nextgen' shadowmapping stuff we've been seeing over the course of the last 12 months or so.
 
MrWibble said:
The problem comes when you've got this more accurate rendered image - how do you actually display it? We don't yet have HDR displays in consumer hands, and even when we do they don't have the same capabilities as the real world (personally I wouldn't want a TV capable of actually blinding me).

We need to take the HDR image, and compress the range of values down to something that is displayable, while at the same time using rendering tricks to convince the eye/brain that it's as bright/dark as it's supposed to be.

The first thing to do is apply "tone mapping". This is a way of mapping the range of luminance in the image, to the range this is displayable, but in such a way that as much detail (in both dark and bright areas) is preserved.

That's already a fairly hard problem, and one with continuing research. Doing it on a static image is hard enough, doing it in realtime at 60fps is more tricky. There are workable solutions, but they're not perfect, so expect to see improvements in that area as we continue to work with HDR and find better ways of displaying it on a LDR display in a pleasing fashion.
Before we start getting all fancy with this stuff, let's aim for photorealism first.

Just apply a simple tone mapping curve the way film does, and then you've covered this hard problem well enough to match what we see on TV or in a photo. However, our graphics aren't near TV quality. We need to spend a lot more time on the first sentence of yours - rendering the accurate image.
 
I'd be happy with that. Rather than trying to condense a range of 100,000:1 into a 256:1 display, do like film and just cover a range from within the source intensities. Select the intensity you are exposing for and have anything 2^6 intensities above that all white, and anything 2^6 intensities below that all black, with the rest spanning the 256 intensities of the RGB displays. That'll probably help realism a lot more than artificially mapped intensities, as it'll reproduce the results from photos and film and TV which everyone is comfortably used to. After all, being in a dark cave and looking out into the daylight, if you tone map perfectly you end up losing the overexposure of the outside, and with it the illusion of much higher intensities that comes from the contrast.
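A sketch of that "expose like film" window, assuming a logarithmic spread across it (real film response is an S-curve rather than a straight line, so this is only an approximation):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Pick the intensity you're exposing for, treat anything more than 2^6 times
// brighter as white and anything more than 2^6 times darker as black, and
// spread the window in between over the 256 output levels.
uint8_t exposeChannel(float intensity, float exposurePoint)
{
    const float stops = 6.0f;                      // 2^6 either side of the midpoint
    float rel = std::log2(std::max(intensity, 1e-6f) / exposurePoint);
    float t   = (rel + stops) / (2.0f * stops);    // -6..+6 stops -> 0..1
    t = std::clamp(t, 0.0f, 1.0f);                 // over/under-exposed -> white/black
    return static_cast<uint8_t>(t * 255.0f + 0.5f);
}
```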
 
zeckensack said:
Yup.
The common theme seems to be that the average luminance is generated from a downscaled framebuffer readback -- or rather the more efficient equivalent involving mipmap filters, where available -- and if you don't allow the values in your framebuffer to exceed 1.0 (i.e. if you don't use an "HDR" data format, whether it be INT16 per component or floating point), many of the values in the fb will be clamped and hence your average luminance reading will be skewed towards the darker range.
That's true, but I think that's a good thing. It makes the image act more like your eye or a camera. If two scenes have the same mathematical average luminance, but one of them has lots of darker areas with one ultra-bright area, you'll get better detail by assuming it has a lower average brightness.

So while your analytic ideas are interesting, I don't think they're necessary. Let them saturate if they want to, as they're already around 100 times greater than the average luminance.

I'm not so sure about that reasoning.
I know my eyes don't appreciate scene contrast ratios of 10000:1. I know I don't have a display that could ever hope to resolve that accurately, and I even think it's fine as it is. A pure grey gradient from black to white looks pretty smooth already to my eyes at just 8 bits in sRGB. In real life only masochists or well-protected people ever look at the sun for more than a fraction of a second. In games you frequently do. And it's great to clamp the sun's intensity to some "large but not crazy" value IMO.
I think you slightly misinterpreted what I was saying (understandably, as I wasn't too clear in hindsight).

Photographs or TVs output 8 bits, but the input that generates the extreme values you see is bigger. I guess I'm talking about compression at each end of the S-curve, as seen here. Think of a mapping like 1 -> 0.0, 10 -> 0.1, 100 -> 0.5, 1000 -> 0.9, 10000 -> 1.0. It's non-linear, with the extremes reserved for only very bright or dark regions compared to the average. However, you need to do scaling and blending etc. in linear space.
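For what it's worth, a smooth S-curve over four decades of log luminance lands pretty close to those example numbers - a sketch, with the "smootherstep" polynomial standing in for a measured film curve:

```cpp
#include <algorithm>
#include <cmath>

// Maps roughly 1 -> 0.0, 10 -> ~0.1, 100 -> 0.5, 1000 -> ~0.9, 10000 -> 1.0.
float filmishCurve(float luminance)
{
    float t = std::log10(std::max(luminance, 1e-6f)) / 4.0f; // 4 decades of range
    t = std::clamp(t, 0.0f, 1.0f);
    return t * t * t * (t * (6.0f * t - 15.0f) + 10.0f);     // flattens at both ends
}
```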

I'm not suggesting we have superbright monitors. I'm just speculating that a contrast ratio of ~10,000:1 is needed to capture everything that a camera does, since that's the operating range of the x-axis in those graphs. Saturation of values outside this range is acceptable, since the S-curve pretty much flattens out anyway.

Not sure.
There are limits to how relaxed or contracted the iris will get, and the one effect where this shows, which is also pretty low-hanging fruit for game engine class renderers, is near darkness. Loss of color below certain thresholds and noisiness are phenomena I certainly experience myself in low-light conditions, and I assume that's normal for humans. Right? :D
Okay, you've got a point there. I didn't mean completely meaningless. I was just pointing out that just because in one scene you have an average luminance of 1/1000 and in another it's 1000, you don't need a format with a dynamic range of 1,000,000:1. At any one time, you only need 10,000:1 to reach the "ends" of your tone map, if even that much.

In summary, I'm saying if you want photorealism, look at how a camera and its film works. Iris + shutter time = global scale, film = tonemap. Apply a scale before writing the value to the framebuffer, and you don't need a stupid-high format range.
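As a sketch of that camera analogy (the exposure value would come from whatever metering scheme the game uses, and the curve is just a simple saturating stand-in for "film"):

```cpp
#include <algorithm>

// "Iris + shutter time = global scale, film = tonemap": a per-frame exposure
// scale is applied before the value ever reaches the framebuffer, so only a
// modest output range needs to be stored.
float shadeAndWrite(float linearRadiance, float exposure)
{
    float exposed = linearRadiance * exposure;   // iris + shutter time
    float filmed  = exposed / (1.0f + exposed);  // "film" response stand-in
    return std::clamp(filmed, 0.0f, 1.0f);       // fits an 8-bit render target
}
```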
 
Shifty Geezer said:
I'd be happy with that. Rather than trying to condense a range of 100,000:1 into a 256:1 display, do like film and just cover a range from within the source intensities. Select the intensity you are exposing for and have anything 2^6 intensities above that all white, and anything 2^6 intensities below that all black, with the rest spanning the 256 intensities of the RGB displays. That'll probably help realism a lot more than artificially mapped intensities, as it'll reproduce the results from photos and film and TV which everyone is comfortably used to. After all, being in a dark cave and looking out into the daylight, if you tone map perfectly you end up losing the overexposure of the outside, and with it the illusion of much higher intensities that comes from the contrast.
Well, I'm not quite saying that. See the above post of mine to zeckensack. You need a little more than a window spanning a contrast ratio of 256 so that you can do a gradual clamp instead. But yeah, "do like film" pretty much sums up my view of how HDR should be done, at least for now. There are bigger fish to fry.
 
zeckensack said:
A pure grey gradient from black to white looks pretty smooth already to my eyes at just 8 bits in sRGB.
If your setup is properly calibrated and your viewing conditions are good then you should have no trouble discerning the posterisation in a pure grey gradient.

Sane people will adjust their displays' white levels to levels they are comfortable with. A game IMO should not assume that realism is more important than that level of comfort. E.g. my iiyama CRT has an "OPQ" mode, supposedly for watching movies, from greater view distances, where my eyes actually hurt for the split second I tried it out (being a curious cat). I will never go there again.
Ironically this kind of brightness (I have OPQ on my Iiyama Pro 514, "A" variant tube with increased OPQ brightness) is typical of LCDs. It's still a fair way short of a decent CRT TV set, which is obviously designed for distance-viewing.

Jawed
 
I think where many games falter with regard to HDR is that they use it for certain effects (mainly lighting) but still rely on lossy compressed images (sometimes of low resolution) to texture the majority of the scene. It's a waste of bandwidth IMHO to forgo texture quality for HDR (as it's used today), because your lighting looks swank but the environment you're lighting looks "last gen" in some cases.
 
I can't say I disagree with anything you said. This part of your post however reminded me that I totally forgot to add the final punch to my proposal.
Mintmaster said:
Apply a scale before writing the value to the framebuffer, and you don't need a stupid-high format range.
That kind of is the idea of the analytical approach to average scene luminance. You avoid reading back anything, so not only can you forget about the HDR format, you actually can figure out the average luminance before you even start the color pass, and thus immediately drop "pre-tone mapped" colors into the framebuffer.
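A rough sketch of what that analytical approach could look like - the visibility weighting and the names here are assumptions, not a recipe from this thread:

```cpp
#include <vector>

// Estimate average scene luminance from the handful of significant light
// sources (each weighted by how much of it is estimated to be in view) before
// the colour pass starts, so the exposure is known up front and pre-tone
// mapped values can be written straight into an ordinary 8-bit framebuffer.
struct LightEstimate {
    float intensity;        // source intensity in the engine's linear units
    float visibleFraction;  // 0..1, e.g. from an occlusion query or bounds test
};

float estimateSceneLuminance(const std::vector<LightEstimate>& lights,
                             float ambientFloor)
{
    float sum = ambientFloor;
    for (const LightEstimate& l : lights)
        sum += l.intensity * l.visibleFraction;
    return sum;
}

// The colour pass exposure could then be something like
// exposure = key / estimateSceneLuminance(lights, ambient), with "key" an
// artist-chosen middle-grey target.
```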
 
Mintmaster said:
Well, I'm not quite saying that. See the above post of mine to zeckensack. You need a little more than a window spanning a contrast ratio of 256 so that you can do a gradual clamp instead. But yeah, "do like film" pretty much sums up my view of how HDR should be done, at least for now. There are bigger fish to fry.
Those 256 shades are your pre-scaling...
Apply a scale before writing the value to the framebuffer, and you don't need a stupid-high format range.
The backbuffer would be rendered in as high a range as you like, in a YUV/HSL-type space with each pixel having colour and intensity. To produce the output frontbuffer, choose the midpoint on the logarithmic illumination scale and clamp to a range of +/-10^3. E.g. on an absolute scale where 0 = black and 1,000,000,000 = direct sunlight, pick 10,000, an indirectly lit face in a room with a window. In the frontbuffer the grey shades would span a range of 100 to 1,000,000; everything below 100 intensity would be black, and everything above 1,000,000 would be white. In between, you scale that range from 100 to 1,000,000 to the 256 intensities of RGB output and combine that transformed range with the colour data like HSL, so saturation decreases with brightnesses over the centre point.

The absolute range would be a game-wide constant, as it depends how high you rate your highest brightness. The centre point will be based on whatever you're exposing for. Just like a camera, devs would be free to select a method, but the most obvious would be a point sample from the main point of interest or key entity. In an FPS that might be whatever you're aiming at, increasing exposure if looking into shadows. In a racer it would probably be the road, as that's about medium grey intensity. Once you have the 256 intensity for the front buffer, gamma and contrast are quick and easy to apply to adjust 'film response' and set the artistic mood.
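As a sketch of the intensity half of that scheme (the desaturation above the centre point and the YUV/HSL packing are left out; the window width is a parameter, set here to match the 100 to 1,000,000 window around a midpoint of 10,000 in the example above):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Pick a midpoint on the logarithmic scale (the thing being exposed for),
// clamp to a fixed number of decades either side of it, and map that window
// onto the 256 output levels. Everything below the window goes black,
// everything above goes white.
uint8_t mapIntensity(float intensity, float midpoint,
                     float decadesEitherSide = 2.0f)  // 100..1,000,000 around 10,000
{
    float rel = std::log10(std::max(intensity, 1e-6f) / midpoint);
    float t   = (rel + decadesEitherSide) / (2.0f * decadesEitherSide);
    t = std::clamp(t, 0.0f, 1.0f);
    return static_cast<uint8_t>(t * 255.0f + 0.5f);
}
```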
 
zeckensack said:
I can't say I disagree with anything you said. This part of your post however reminded me that I totally forgot to add the final punch to my proposal. That kind of is the idea of the analytical approach to average scene luminance. You avoid reading back anything, so not only can you forget about the HDR format, you actually can figure out the average luminance before you even start the color pass, and thus immediately drop "pre-tone mapped" colors into the framebuffer.
I'm very much in favour of this, too.

Tonemapping also mucks up AA (actually, any kind of filtering) - another reason to "tonemap first".

Jawed
 
Jawed said:
If your setup is properly calibrated and your viewing conditions are good then you should have no trouble discerning the posterisation in a pure grey gradient.
Sure. It's not totally smooth but pretty smooth. And then it's a contrived situation. It hasn't been much of a problem in games for me so far. A little dithering here and there should fix it up nicely. Of course I wouldn't mind broader usage of 10-10-10-2 buffers or even INT16 or something. It's just not that urgent for presentation IMO.

If you really think it's disturbing, well, HDR rendering will not help against this issue. If you're going to tone map the result down to an 8 bit-per-component framebuffer that's the kind of gradient you'll get. FP10 should be slightly worse in the upper half of the range and better in the lower quarter, if I remembered the bit distribution correctly.

I don't actually remember why I even mentioned this ... perhaps I can turn it into a point now :D
When looking at movies or still images on an 8 bit display, I don't think many people will complain about color resolution. It's pretty good. You want higher precisions of course if you do many passes over a frame to minimize the error creep, but not necessarily for final presentation.

If we could find ways to render things approximately as nicely as with "true HDR rendering" but in less color passes, we wouldn't have as much of a need for high precision frame buffers. I could think of a way to skip the mandatory tone mapping pass ;)
 
zeckensack said:
Sure. It's not totally smooth but pretty smooth. And then it's a contrived situation. It hasn't been much of a problem in games for me so far. A little dithering here and there should fix it up nicely.
I was just being nit-picky :smile:

Your original point was about sRGB which actually has an advantage in terms of posterisation. Any gamma space (non-linear, sRGB is gamma 2.2) is generally trading extreme shadow posterisation for increased smoothness in the midtones and highlights - it's an approximation for the in-built gamma curve that CRTs have - which is also a good match for the way our eyes work.

sRGB adds roughly a bit of precision in the midtones and highlights, by throwing away precision in the shadows.

http://www.poynton.com/GammaFAQ.html

A linear coding effectively reduces rendering to 7-bits per channel.
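For reference, the standard sRGB transfer pair looks like this (a C++ sketch; the constants come from the sRGB specification, not from the FAQ above):

```cpp
#include <cmath>

// The non-linear encoding redistributes the 8-bit code space relative to a
// straight linear quantisation - that's the precision trade-off being
// discussed here.
float linearToSrgb(float c)   // c in [0,1], linear light
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

float srgbToLinear(float v)   // v in [0,1], sRGB-encoded
{
    return (v <= 0.04045f) ? v / 12.92f
                           : std::pow((v + 0.055f) / 1.055f, 2.4f);
}
```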

Jawed
 
Mintmaster said:
Well, I'm not quite saying that. See the above post of mine to zeckensack. You need a little more than a window spanning a contrast ratio of 256 so that you can do a gradual clamp instead. But yeah, "do like film" pretty much sums up my view of how HDR should be done, at least for now. There are bigger fish to fry.

I would have to say you're missing a fairly big part of the problem here.

With film, you have someone stood behind the camera, choosing where to point it and how to expose the image. Then you have post-production and editing where more people look at the picture and fiddle with it to make it look right.

With a graphical image, you don't have that - you need to find an analytical way of approximating how a photographer would treat the scene, no matter if the player decides to look under a rock or stare at the sun.

Look at pretty much any existing system for dealing with the problem of reproducing real-world lighting on a more limited medium, and you'll generally find there's a human eye and brain doing some of the work.

Applying scale factors before going to the frame-buffer is only good if you have a perfect rendering scheme that never involves any compositing (i.e. blending)... sadly, if we want to have AA or translucent surfaces, we're going to need to represent more range than you would strictly have to deal with on film. Bright light shining through a material on film would just be a matter of adjusting exposure for the brightness of the visible surface. Doing it in graphics using any currently feasible scheme would generally involve rendering the bright light and then composing the translucent material over the top - you're only going to approximate the correct result if you have enough range to store the original bright backlight as well as the darker pixels which are actually all that are visible.
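A tiny numeric illustration of that blending point (made-up numbers):

```cpp
// Standard "over" blend: src over dst with source alpha.
float blendOver(float srcColor, float srcAlpha, float dstColor)
{
    return srcColor * srcAlpha + dstColor * (1.0f - srcAlpha);
}

// With an HDR destination:  blendOver(0.02f, 0.9f, 50.0f) -> 5.018 (backlight still reads as bright)
// With dst clamped to 1.0:  blendOver(0.02f, 0.9f, 1.0f)  -> 0.118 (backlight lost before the blend)
```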

In short, I think you're oversimplifying the problem and not really taking into account how images are actually rendered - it's not just a matter of magically arriving at a final image in a nice handy range and then throwing it at the display.

You're right we're nowhere near TV quality images in rendering yet, but HDR is just one of the tools we need to better accomplish photorealism in rendering. Even if we don't strictly need all that range in the final image - we sure as hell need it while we calculate the image in the first place.
 
MrWibble said:
I would have to say you're missing a fairly big part of the problem here.

With film, you have someone stood behind the camera, choosing where to point it and how to expose the image. Then you have post-production and editing where more people look at the picture and fiddle with it to make it look right.

So how do you feel about what Sony showed at E3, TGS, and GDC with The Getaway? They were showing off HDR lighting as if someone was walking around the city of London videotaping the area. In general do you think they "got it right"?

What are your thoughts about it?
 
mckmas8808 said:
So how do you feel about what Sony showed at E3, TGS, and GDC with The Getaway? They were showing off HDR lighting as if someone was walking around the London city videotaping the area. In general do you think they "got it right"?

What are your thoughts about it?

From what little I saw, I think it looked very nice. It's tricky to tell exactly from the footage I saw (which looked like it was itself captured from a hand-held camera) but it looked like it was doing a decent job of showing sunlight and shadow, and without too much bloom in evidence.

It's still all a bit empty clean looking for London though ;)

The PS2 Getaway incarnations were, IMO, seriously let down by the engine and lighting*. It was an impressive feat getting a close approximation of London streaming in on a PS2, but while weather in London frequently is dull and overcast, that's not the most interesting look they could've gone for. A little bit more work on the more incidental parts of the engine would've worked wonders for the presentation.

I'll be fairly impressed if they manage to get the level of detail shown in the demo, across the area Getaway has, and retain that quality of rendering, but I'd love to see the game "done right".


*quite aside from the appalling (lack of) gameplay that is.
 
MrWibble said:
From what little I saw, I think it looked very nice. It's tricky to tell exactly from the footage I saw (which looked like it was itself captured from a hand-held camera) but it looked like it was doing a decent job of showing sunlight and shadow, and without too much bloom in evidence.

It's still all a bit empty clean looking for London though ;)

The PS2 Getaway incarnations were, IMO, seriously let down by the engine and lighting*. It was an impressive feat getting a close approximation of London streaming in on a PS2, but while weather in London frequently is dull and overcast, that's not the most interesting look they could've gone for. A little bit more work on the more incidental parts of the engine would've worked wonders for the presentation.

I'll be fairly impressed if they manage to get the level of detail shown in the demo, across the area Getaway has, and retain that quality of rendering, but I'd love to see the game "done right".


*quite aside from the appalling (lack of) gameplay that is.

Okay let me give you some direct feed to adjust your opinion if needed.

Click Here

And here's a quick picture to go along. ;)

getaway18dj.png



So after watching the direct feed how do you feel about it now? Just curious with you being a developer and probable developer of PS3 games in the future.
 
mckmas8808 said:
Okay let me give you some direct feed to adjust your opinion if needed.

Click Here

And here's a quick picture to go along. ;)


So after watching the direct feed how do you feel about it now? Just curious with you being a developer and probable developer of PS3 games in the future.

Hmm. So there's a little bit too much bloom in evidence here and there - there's one part where there's a sliver of sky popping in and out of view and you can see the bloom filter flicking in and out all over the place, probably... however it's not as bad as some I've seen, so maybe we're moving in the right direction.

I think my opinion generally stands - we're in the early days of dealing with HDR and what we can do with it, and things will be improving as we continue.
 
MrWibble said:
With film, you have someone stood behind the camera, choosing where to point it and how to expose the image. Then you have post-production and editing where more people look at the picture and fiddle with it to make it look right.
For your camera there are really 3 types of exposure selection. First you've got your overall light reading, often with some analysis going on. For example my Canon SLR has 36 regions that it measures the light for and then decides what to set the exposure to - a more complex process to apply in realtime. The 2nd approach is a region reading, using the average intensity of that region. And the 3rd is a spot reading, measuring light from a particular point on a surface. The first method, analytical and scene-based, produces results dependent on the method employed. Reviews of cameras tell you when they have got it right or under- or overexposed. My Canon does a great job and I'm free to generally just take photos without having to worry much.

The third method is also very effective and straightforward, and I think it could be easily incorporated into realtime, as I think I mentioned earlier. The game bases the exposure point, halfway between min and max output intensities, on a point in the scene relative to the game style. So for an FPS, this could be where the player is targeting, whereas for an RPG it could use the centre spot or a scripted point on an NPC's face. There could be issues with certain objects, such as a black character's skin, where exposing for that at the midpoint would leave the overall image overexposed. You could provide objects with an exposure correction setting to counter that, so for a dark-skinned character you'd aim the exposure at minus 3 stops, say. Of course, between exposure transitions you'd need a delay and a gradual change, so when moving an FPS target from a dark cave across the sky to a dark tower, it doesn't instantly change settings, but adjusts itself appropriately depending on where you're looking.

I can't see any reason why this wouldn't work effectively and it should be very simple to implement.
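A sketch of how that spot-metered exposure with a gradual transition might look (the names, the middle-grey key and the adaptation model are all assumptions):

```cpp
#include <algorithm>
#include <cmath>

// Meter the luminance at a game-chosen point of interest, apply an optional
// per-object exposure compensation (in stops), and ease the exposure toward
// the new target over time instead of snapping.
float targetExposure(float meteredLuminance, float compensationStops)
{
    const float key = 0.18f;  // expose the metered point to roughly middle grey
    return (key / std::max(meteredLuminance, 1e-6f)) * std::exp2(compensationStops);
}

float adaptExposure(float currentExposure, float target, float dt, float rate = 1.5f)
{
    // Exponential ease so the exposure never jumps instantly when the aim
    // point moves from a cave mouth up to the sky.
    float a = 1.0f - std::exp(-rate * dt);
    return currentExposure + (target - currentExposure) * a;
}
```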
 
Jawed said:
Tonemapping also mucks-up AA (actually, any kind of filtering) - another reason to "tonemap first".
If the mapping is linear, it doesn't break AA. If it's non-linear, "tonemap first" breaks blending.

Jawed said:
sRGB adds roughly a bit of precision in the midtones and highlights, by throwing away precision in the shadows.
sRGB encoded data is used to increase precision over linear format in darker areas, because these need most precision.


zeckensack said:
But I'm actually a proponent of figuring out average scene luminance by other means. There are usually only very few significant light sources in a scene and I consider it to be a worthwhile optimization to take the analytical approach there. If the sun's in view, and you have an occlusion query pending that will tell you with a healthy accuracy how much of its radius will end up in view, you'll have a very good first approximation of light intensity.

If there's no sun, just pick, say, the top 3 of the artificial light sources and run from there.
If there are very large highly reflecting surfaces, you have to do some boiler-plate work to take these into account, but really, the math for doing so is still simple.

The problem with the approach is rendering a scene involving a low sun over an ocean, because the sun's reflection will be "smeared out" over a very large area and it makes it pretty difficult to compute the average scene luminance accurately enough.
I highly, highly doubt that this is the main problem. I think there is much more involved than just a few highly reflective surfaces to get an acceptably accurate measurement of scene brightness.
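For reference, the occlusion-query arithmetic quoted above would look something like this - a sketch with the query plumbing omitted and an arbitrary intensity; the hard part, as noted, is everything this leaves out:

```cpp
#include <algorithm>

// Compare the samples that passed an occlusion query for the sun's disc
// against the samples the disc would cover if fully visible, and scale the
// sun's contribution to the luminance estimate accordingly.
float sunLuminanceContribution(unsigned samplesPassed,
                               unsigned samplesIfUnoccluded,
                               float sunIntensity)
{
    if (samplesIfUnoccluded == 0)
        return 0.0f;                      // sun isn't on screen at all
    float visible = std::min(1.0f, static_cast<float>(samplesPassed) /
                                   static_cast<float>(samplesIfUnoccluded));
    return sunIntensity * visible;
}
```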
 