Some new interesting information about Valve's HDR implementation

ChrisRay said:
I think he is referring to the drop from normal rendering to HDR on the 7800GTX.

Ah! Not what I meant. I came to praise GTX, not bury it. :smile:
 
Re: ubiquity of lightprobes
Humus said:
What's that supposed to mean?

If future game engines include real-world lighting (i.e. image-based lighting, e.g. HDR light probes à la Debevec), you will see a lot of FP16 source artwork, since to make effective use of probes, you must take a lot of them.
 
HL-2 HDR looks as arse as FC HDR, to me. When will devs realise that if you do tone mapping in RGB space you alter the saturation of colours? It looks fugly and ridiculous.

Not to mention that the overbright blow-out is exaggerated. Just because digital cameras are shit at capturing dynamic range, it doesn't mean that games have to simulate the same technical problem.

Sigh, lens flare, all over again.

Jawed
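Jawed's saturation point can be shown with a tiny sketch. Reinhard's simple x/(1+x) curve is used purely for illustration (the games in question may use a different operator, and the colour value is made up): applied per channel, the R:G:B ratios collapse; applied to luminance with a uniform rescale, they survive.

```python
# Why per-channel tone mapping desaturates: each channel is compressed
# by a different amount, so the R:G:B ratios change.

def reinhard(x):
    """Reinhard's simple tone curve, used here purely as an example."""
    return x / (1.0 + x)

hdr = (4.0, 1.0, 0.5)                    # a saturated HDR colour (made up)

# Per-channel mapping: ratios collapse.
per_channel = tuple(reinhard(c) for c in hdr)

# Luminance mapping: tone map the luma, then scale all channels by the
# same factor, so hue and saturation survive.
luma = 0.2126 * hdr[0] + 0.7152 * hdr[1] + 0.0722 * hdr[2]
scale = reinhard(luma) / luma
luma_based = tuple(c * scale for c in hdr)

ratio_in = hdr[0] / hdr[2]               # 8.0 before tone mapping
ratio_pc = per_channel[0] / per_channel[2]
ratio_lb = luma_based[0] / luma_based[2]
print(ratio_in, ratio_pc, ratio_lb)      # luminance version keeps 8.0
```

The per-channel ratio drops from 8:1 to 2.4:1 here, which is exactly the washed-out look being complained about.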
 
Jawed said:
Sigh, lens flare, all over again.

I kind of agree, but I think a lot of people would complain if, after several months of waiting, they got an 'HDR demo' which didn't look much different from the original and the effect took effort to spot. I assume they can't do much more than that with this content and technology (both the engine and the hardware required) anyway, so they might as well make it noticeable. I still think it looks better than FC, though.
 
Jawed said:
Not to mention that the overbright blow-out is exaggerated. Just because digital cameras are shit at capturing dynamic range, it doesn't mean that games have to simulate the same technical problem.

I doubt digital cameras are as shit at capturing dynamic range as monitors are at displaying it.
 
Jawed said:
HL-2 HDR looks as arse as FC HDR, to me. When will devs realise that if you do tone mapping in RGB space you alter the saturation of colours? It looks fugly and ridiculous.

Not to mention that the overbright blow-out is exaggerated. Just because digital cameras are shit at capturing dynamic range, it doesn't mean that games have to simulate the same technical problem.

Sigh, lens flare, all over again.

Jawed

Compared to Sam2 it actually looks great LOL ;)
 
Jawed said:
HL-2 HDR looks as arse as FC HDR, to me. When will devs realise that if you do tone mapping in RGB space you alter the saturation of colours? It looks fugly and ridiculous.

Not to mention that the overbright blow-out is exaggerated. Just because digital cameras are shit at capturing dynamic range, it doesn't mean that games have to simulate the same technical problem.

Sigh, lens flare, all over again.

Jawed

Well put. Call me crazy, but I think the non-HDR screenshots look better/more realistic. In very few instances does the human eye perceive an HDR-type effect, except on reflective surfaces like a tin wall, but not a clay wall! I don't understand the emphasis on this feature lately.
 
Pozer said:
Well put. Call me crazy, but I think the non-HDR screenshots look better/more realistic. In very few instances does the human eye perceive an HDR-type effect, except on reflective surfaces like a tin wall, but not a clay wall! I don't understand the emphasis on this feature lately.

It has a lot of good things, but none of those are being shown right now. It's more of a thrown-in feature at the moment rather than a standard one. Once devs realise that they can really add things to a game with this and enhance it, it will really show itself off. There is a lot of room for improvement here. Personally, the biggest improvements I have seen are outdoor ones, where it looks "clearer" to me. But overall HDR has made games look worse to me.
 
Both modern digicams and film cams capture about 7 stops (a 128:1 range). The issue is, their modeling of the eye's reaction is way over-exaggerated. I am certainly NOT blinded by momentarily walking through a dark alley and then into a bright street.
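The stops figures being thrown around convert directly to contrast ratios, since a stop is just a doubling of light. A quick reference sketch:

```python
import math

def stops_to_ratio(stops):
    """One photographic stop is a doubling of light: n stops = 2**n : 1."""
    return 2.0 ** stops

def ratio_to_stops(ratio):
    """Inverse: how many doublings fit in a given contrast ratio."""
    return math.log2(ratio)

print(stops_to_ratio(7))     # 128.0, the 7-stop figure above
print(stops_to_ratio(14))    # 16384.0
print(ratio_to_stops(1000))  # roughly 10 stops for a 1000:1 display
```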
 
DemoCoder said:
Both modern digicams and film cams capture about 7 stops (a 128:1 range). The issue is, their modeling of the eye's reaction is way over-exaggerated. I am certainly NOT blinded by momentarily walking through a dark alley and then into a bright street.
Negative film captures 11-14 stops, normally.

A print, with a positively feeble intrinsic dynamic range, can show 14 stops of dynamic range quite happily. There is no reason to expect a 1:1 correspondence between real-world dynamic range and a representation of the real world.

That's the correct use of tone-mapping.

http://www.dpreview.com/learn/?/Glossary/Digital_Imaging/tonal_range_01.htm

Jawed
 
DemoCoder said:
Re: ubiquity of lightprobes


If future game engines include real-world lighting (i.e. image-based lighting, e.g. HDR light probes à la Debevec), you will see a lot of FP16 source artwork, since to make effective use of probes, you must take a lot of them.

Why would "real world lighting" require FP16? Using RGBS or RGBE on Debevec's lightprobes works very well. I'd be more concerned about CG art, like creating a skybox with Terragen. In Debevec's rnl_probe.hdr, which I'm using in the HDR sample in the ATI SDK, the range is just 0.01 to 166. That range can almost be handled with plain scaling with I16. The Terragen generated skybox I used in my recent HDR demo on the other hand contains a range from 0.004 to 76800, which still works just fine with RGBE encoded as DXT1-L16, but ironically has a range that is too large for FP16 ...
 
Jawed said:
Negative film captures 11-14 stops, normally.

No, B&W negative film captures "11" stops (also a misnomer. It captures about 7 stops with linear exposure response, the other 4 stops are "crushed" and taper off quickly). *Good* Color negative film (as opposed to slide film) has a density of about 2.8 which is about 8-9 stops.

There is no reason to expect a 1:1 correspondence between real-world dynamic range and a representation of the real world.

But there is reason to object to *unreal* images being shown, lest you trigger an uncanny-valley response.

There is a difference between photographic art and photorealism. Not all zone or tone mapped photographs look real. They look very beautiful. But there's a difference.
 
DemoCoder said:
The issue is, their modeling of the eye's reaction is way over-exaggerated. I am certainly NOT blinded by momentarily walking through a dark alley and then into a bright street.
I agree, but am not surprised. Just think back to initial tech releases of the past: color TV - extreme saturation; stereo - extreme channel separation; DVD - extreme dynamic range.

Perhaps geo is talking about MF 120/220 stocks.
 
You see it today. Walk into any TV store and look at the PDP, LCD, and DLP TVs being sold. All of them are *mega oversaturated*. Like high volume levels in audio, vendors have learned one surefire way of getting eyeballs and attention in store demos is to crank up saturation through the roof so your display makes those next to it look plain and desaturated.

In fact, my Samsung DLP has a feature called DNIe which tries to dynamically maximize saturation per frame. Have fun convincing people that properly calibrated displays are "correct" and better.
 
DemoCoder said:
You see it today. Walk into any TV store and look at the PDP, LCD, and DLP TVs being sold. All of them are *mega oversaturated*. Like high volume levels in audio, vendors have learned one surefire way of getting eyeballs and attention in store demos is to crank up saturation through the roof so your display makes those next to it look plain and desaturated.

In fact, my Samsung DLP has a feature called DNIe which tries to dynamically maximize saturation per frame. Have fun convincing people that properly calibrated displays are "correct" and better.
...Crank up the bass and treble.
Have fun convincing people that properly calibrated stereos are "correct" and better.
 
DemoCoder said:
No, B&W negative film captures "11" stops (also a misnomer. It captures about 7 stops with linear exposure response, the other 4 stops are "crushed" and taper off quickly). *Good* Color negative film (as opposed to slide film) has a density of about 2.8 which is about 8-9 stops.
Density range has nothing to do with dynamic range. The two are independent. Slide films such as Velvia have a density range of around 4.0 yet represent a dynamic range of around 4-5 stops.

If you've ever scanned negative film with Vuescan, for example, you'd understand this.

That's why a print can show a 14-stop dynamic range even though it only has a 4-stop range, intrinsically.

Here's another excellent page on the subject:

http://www.astropix.com/PFA/SAMPLE1/SAMPLE1.HTM

Jawed
 
I've been making an HDR demo of my own recently, taking Quake 3 maps and doing per-pixel lighting/shadowing in them.
What I've been experimenting with recently is not HDR blooming (although I did that earlier), but what happens in low light.
Currently I'm doing some slight blue tinting, but also blurring the image and adding more and more noise. It's surprising, actually. I wouldn't say it looks human-eye real, but it definitely looks more 'dark'. Unfortunately, a screenshot can't really show it.

In ultra-high-contrast scenes the effects add up quite nicely. You get very dark, noisy images with a massively bright spot somewhere; add the typical bloom (with noise) and you get an image that can actually look pretty good. It looks less rendered, I guess.

Personally, I don't actually want to go for a human-vision-like image, because that's simply not possible with such limited contrast output on a monitor. So we need to simulate the things we see on a TV/monitor, i.e. things shot with a camera.

I've put a picture up of what I've done so far (only about a week or so of spare time):
image

As can be seen, I haven't got the blue-shifting quite right, but frankly I think it looks better anyway.

I'm currently writing up code to do reflective water, which HDR will probably have a huge effect on.

My overall feeling is that it's most definitely the future, but the hardware (at least my laptop!) really can't manage it at the moment.

[edit]

OK, here is another image. It still looks much better in real time. image (Just remember it's just an experiment! :smile: )
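The tint-and-noise part of the low-light treatment described above (blur omitted) could look something like this per-pixel sketch. Every constant here is invented for illustration, not taken from the demo:

```python
import random

def low_light_pixel(r, g, b, exposure, rng):
    """Darken, shift towards blue, and add noise: a rough sketch of a
    low-light post-process. All constants are made up for illustration."""
    r, g, b = r * exposure, g * exposure, b * exposure
    # How deep into the dark regime we are (0 = bright, 1 = black).
    darkness = max(0.0, 1.0 - (r + g + b) / 3.0)
    # Blend towards a bluish grey of similar luminance (scotopic shift).
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    tint = (0.7 * luma, 0.8 * luma, 1.1 * luma)
    r = r * (1.0 - darkness) + tint[0] * darkness
    g = g * (1.0 - darkness) + tint[1] * darkness
    b = b * (1.0 - darkness) + tint[2] * darkness
    # Sensor-style noise that grows as the scene gets darker.
    n = rng.gauss(0.0, 0.05 * darkness)
    return tuple(min(1.0, max(0.0, c + n)) for c in (r, g, b))

rng = random.Random(42)
print(low_light_pixel(0.5, 0.5, 0.5, 0.2, rng))
```

A grey input comes out both darker and blue-leaning, which matches the "more dark" impression described, if not the exact look.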
 
Graham said:
What I've been experimenting with recently is not HDR blooming (although I did that earlier), but what happens in low light.
Currently I'm doing some slight blue tinting, but also blurring the image and adding more and more noise. It's surprising, actually. I wouldn't say it looks human-eye real, but it definitely looks more 'dark'. Unfortunately, a screenshot can't really show it.

That's excellent. I really hope we will see more of this in upcoming titles (not to belittle your demo). Too much focus is on blooming, which really has nothing to do with HDR, instead of on these real benefits, like image fidelity at the extremes of lighting conditions.

I just have to mention that this "playing with dark" is what constantly nags me when I play something like Doom 3; it would really help bring it out. The sudden drop-offs are just not right, and the terrible banding and other aliasing that happens in dark areas is just an eyesore.

You mention that it looks more "dark" and I think I know what you mean. Would "organic" be a good word to use?

PS. What are the odds of the rest of us getting our greedy little hands on your demo?
 
Jawed said:
Density range has nothing to do with dynamic range. The two are independent. Slide films such as Velvia have a density range of around 4.0 yet represent a dynamic range of around 4-5 stops.

In theory, maybe; in reality, they are not truly independent. Perhaps "usable" density is the operative term. A film's usable dynamic range exists in the linear portion of its characteristic curve, and the reality is, a narrow Dmax/Dmin range produces a more horizontal slope, which on average produces a worse usable dynamic range. Slide film is special, because its characteristic curve is linear for only a small portion of exposure.

Yes, in theory one could have a 0.3D film with a completely linear response between 0 and 4 log exposure. Such a film would require exquisite manipulation to get any sort of usable contrast variation out of it when printing, but it is possible.

That's why I dispute your 11-stop claim. Every photographer knows (especially those who use the Zone System) that although most scenes will be covered by 11 zones, only 7 of them are really usable; Zones 0/1 and 9/10 normally aren't. Even if you're excellent in the darkroom, you still won't normally be able to beat 7 stops. With super-good B&W film and print paper, and a little dodging and burning, you can achieve >10 stops, but that is well beyond the capabilities of most people. Why? Because you're operating in the flat parts of the characteristic curve.

So for all intents and purposes, film = 7 stops. Likewise, an expert ISF guy will tell you that you can get 10 stops out of the top videocams, but normally video = 5 stops.

I've been doing photography for years. I've scanned over a thousand 35mm and medium-format negatives and slides on a Minolta DiMAGE Pro, and I've done hundreds of hours in the darkroom futzing around with developing to tweak my ranges, making test-strip exposures, dodging and burning. After you've seen hundreds of test strips, you recognize the all-too-familiar pattern showing you how much range was on your developed negative.

All of photography revolves around a simple mathematical projection of one range into another. Film attempts to map the interval [1, 1000000] into the interval [1, 1000]. Through development and printing, you try to further map that negative onto the print, with an even smaller latitude. Yes, with an analog medium (modulo noise/fog/grain), one can fit one interval into another. However, going back to the first interval, which maps real-world lux values onto the negative, most films wash out detail in the bottom 2 and top 2 stops.
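The simplest version of that projection is a plain power law; this sketch (my own illustration, not a claim about how film actually responds) maps [1, 1000000] onto [1, 1000]:

```python
import math

def compress(x, in_max=1.0e6, out_max=1.0e3):
    """Map [1, in_max] onto [1, out_max] with a power law, the simplest
    form of the range projection described above."""
    gamma = math.log(out_max) / math.log(in_max)   # 0.5 for these bounds
    return x ** gamma

print(compress(1.0))     # 1.0
print(compress(1.0e6))   # 1000.0 (up to rounding)
print(compress(100.0))   # 10.0: every two stops of scene become one
```

Real film replaces the clean power law with an S-shaped characteristic curve, which is exactly where the crushed top and bottom stops come from.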
 