Mintmaster said:
Iris contraction is not the problem, as that's handled by a simple scale factor. Use the previous frame's average luminance to determine how much you want to scale the data being written in the current frame, and keep adjusting this way. You can jump an order of magnitude each frame, so it doesn't really limit realism since the eye is much slower.
(BTW, I'm not necessarily explaining this to you, as you probably feel the same way as me.)
Yup.
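In code, that adaptation loop boils down to something like this (plain C++ sketch; the 0.18 target and the 10x-per-frame limit are just placeholder numbers, not anything from Mintmaster's post):

```cpp
#include <algorithm>

// Exposure state carried from frame to frame.
struct Exposure {
    float scale = 1.0f;  // factor applied to values written in the current frame
};

// avgLuminance: average scene luminance measured from the previous frame.
// targetLuminance: the mid-grey level we want that average mapped to
// (0.18 is a placeholder "key" value, not something from the post).
// The scale may jump up to an order of magnitude per frame; the eye adapts
// far more slowly, so this doesn't visibly limit realism.
void adaptExposure(Exposure& e, float avgLuminance, float targetLuminance = 0.18f)
{
    if (avgLuminance <= 0.0f)
        return;  // nothing measured yet, keep the old scale

    float ideal = targetLuminance / avgLuminance;

    // Allow at most a 10x change in either direction per frame.
    e.scale = std::clamp(ideal, e.scale * 0.1f, e.scale * 10.0f);
}
```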
The common theme seems to be that the average luminance is generated from a downscaled framebuffer readback (or rather its more efficient equivalent, a mipmap reduction, where available). If you don't allow the values in your framebuffer to exceed 1.0 (i.e. if you don't use an "HDR" data format, be it INT16 per component or floating point), many of the values in the framebuffer will be clamped, and hence your average luminance reading will be skewed towards the darker range.
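For the record, the CPU-side equivalent of that readback looks roughly like this; on the GPU you would let the mip chain do the summing, but the arithmetic is the same. The log-average and the Rec. 709 luminance weights are my own choice of convention here, not something mandated by the technique:

```cpp
#include <cmath>
#include <vector>

struct Pixel { float r, g, b; };

// Log-average luminance over a linear-space framebuffer -- the quantity the
// mipmap reduction approximates. If the buffer format clamps at 1.0, every
// bright pixel contributes at most 1.0 here, and the result is biased towards
// the dark end: the skew described above.
float averageLuminance(const std::vector<Pixel>& fb)
{
    if (fb.empty())
        return 0.0f;

    const float eps = 1e-4f;  // avoid log(0) on pure black pixels
    double sum = 0.0;
    for (const Pixel& p : fb) {
        float lum = 0.2126f * p.r + 0.7152f * p.g + 0.0722f * p.b;
        sum += std::log(eps + lum);
    }
    return std::exp(float(sum / fb.size()));
}
```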
But I'm actually a proponent of figuring out average scene luminance by other means. There are usually only a few significant light sources in a scene, and I consider it a worthwhile optimization to take the analytical approach there. If the sun is in view, and you have an occlusion query pending that will tell you with reasonable accuracy how much of it ends up visible, you have a very good first approximation of light intensity.
If there's no sun, just pick, say, the top three artificial light sources and work from there.
If there are very large, highly reflective surfaces, you have to do some boilerplate work to take these into account, but really, the math for doing so is still simple.
The problem with this approach shows up when rendering a scene with a low sun over an ocean: the sun's reflection is "smeared out" over a very large area, which makes it pretty difficult to compute the average scene luminance accurately enough.
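Roughly what I have in mind, as a sketch; the additive weighting of the visible sun area against the top artificial lights is purely illustrative, and visibleFraction would come from the occlusion query result:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct LightSource {
    float intensity;        // emitted intensity, engine units
    float visibleFraction;  // 0..1, e.g. occlusion-query pixels / expected pixels
};

// Analytical first approximation of average scene luminance: the sun (if any)
// weighted by how much of it survived the occlusion query, plus the top three
// artificial lights. The weighting scheme is illustrative only.
float estimateSceneLuminance(const LightSource* sun,
                             std::vector<LightSource> artificial)
{
    float estimate = 0.0f;

    if (sun)
        estimate += sun->intensity * sun->visibleFraction;

    std::sort(artificial.begin(), artificial.end(),
              [](const LightSource& a, const LightSource& b) {
                  return a.intensity * a.visibleFraction
                       > b.intensity * b.visibleFraction;
              });

    for (std::size_t i = 0; i < artificial.size() && i < 3; ++i)
        estimate += artificial[i].intensity * artificial[i].visibleFraction;

    // Large reflective surfaces (the low-sun-over-ocean case) would need an
    // extra term here, and that's exactly where this approach gets shaky.
    return estimate;
}
```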
Mintmaster said:
The reason HDR rendering is needed is that the eye can see a simultaneous dynamic range of 10,000:1. Print film, what's used in movie theatres, has a similar contrast ratio (DemoCoder informed me of this). 35mm film can capture a range of 3-4 orders of magnitude in intensity (data). IMO this is the range of data needed for realistic rendering.
I'm not so sure about that reasoning.
I know my eyes don't appreciate scene contrast ratios of 10,000:1. I know I don't have a display that could ever hope to resolve that accurately, and I even think it's fine as it is. A pure grey gradient from black to white already looks pretty smooth to my eyes at just 8 bits in sRGB. In real life only masochists or well-protected people ever look at the sun for more than a fraction of a second. In games you frequently do. And it's great to clamp the sun's intensity to some "large but not crazy" value IMO.
Sane people will adjust their display's white level to something they are comfortable with. A game, IMO, should not assume that realism is more important than that comfort. E.g. my iiyama CRT has an "OPQ" mode, supposedly for watching movies from greater viewing distances, where my eyes actually hurt for the split second I tried it out (being a curious cat). I will never go there again.
We just have to realize that we must stop way before achieving realism, simply for health and safety reasons. It's pretty much a given that I don't want to risk my eyesight in exchange for having a realistic game.
Mintmaster said:
The FP10 format is just about enough for this (32 / (1/256) = 8192), so it should be adequate. I think some effects can make use of higher dynamic range, but for photorealism it's enough in terms of range in a linear color space. Figuring out what to write in terms of lighting is a much bigger problem in photorealism than storing it accurately.
Agreed.
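Just to spell out the quoted arithmetic (the 32 maximum and the 1/256 step are the figures given above for FP10; I'm not claiming anything further about the format's encoding):

```cpp
#include <cstdio>

int main()
{
    const float maxValue     = 32.0f;          // FP10 upper bound, per the quote
    const float smallestStep = 1.0f / 256.0f;  // finest step, per the quote
    std::printf("dynamic range %.0f:1\n", maxValue / smallestStep);  // 8192:1
}
```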
Mintmaster said:
Basically, what I'm saying is that absolute luminance of what you render is mostly meaningless, since only relative luminance should affect your final image.
Not sure.
There are limits to how far the iris will relax or contract, and the one place where this shows, which is also pretty low-hanging fruit for game-engine-class renderers, is near darkness. Loss of color below certain thresholds and increased noisiness are phenomena I certainly experience myself in low-light conditions, and I assume that's normal for humans. Right?
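A sketch of how a tone mapper might fake those two effects; the threshold and the noise amplitude are invented for illustration, not measured values:

```cpp
#include <algorithm>
#include <cstdlib>

struct Color { float r, g, b; };

// Scotopic-style post-process: below colorLossStart the pixel is blended
// towards its own luminance (loss of color), and a little random noise is
// mixed in to mimic the graininess of near-dark vision.
Color lowLightResponse(Color c, float avgLuminance)
{
    const float colorLossStart = 0.05f;  // hypothetical threshold
    const float noiseAmount    = 0.02f;  // hypothetical noise amplitude

    if (avgLuminance >= colorLossStart)
        return c;

    float t   = avgLuminance / colorLossStart;  // 0 = pitch dark, 1 = threshold
    float lum = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;

    auto desat = [&](float ch) {
        float grey  = lum + (ch - lum) * t;  // fade color out as it gets darker
        float noise = (std::rand() / float(RAND_MAX) - 0.5f)
                      * noiseAmount * (1.0f - t);  // more noise when darker
        return std::clamp(grey + noise, 0.0f, 1.0f);
    };
    return { desat(c.r), desat(c.g), desat(c.b) };
}
```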