fp16 render target affects MSAA quality?

Well, how is luminance calculated using a FP16 RT? I am thinking one would do it the same way, but the HDR pixel luminance would be recorded just before the tone mapping is done as the last step in the shader, followed by a write to the integer buffer.

ERK

PS. I guess I was thinking that luminance could be "integrated" as each pixel is rendered, to a single global value, not to another RT.
 
Well, how is luminance calculated using a FP16 RT?
Compute luminance of every screen pixel (or some representative subset of them), then perform a parallel reduction/average over the screen (this can sometimes be done with mipmap generation, although that will really just do the same thing).
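For concreteness, a minimal CPU-side sketch of that reduction, computing the classic log-average luminance (in a real renderer this runs on the GPU via successive downsample passes or mipmap generation; the function name, epsilon and Rec. 709 weights are illustrative choices, not from any particular engine):

```cpp
#include <cmath>
#include <cstddef>

// Serial sketch of the log-average luminance reduction. On the GPU this
// would be a chain of 2x2 downsample passes (or automatic mip generation)
// ending in a 1x1 target.
float LogAverageLuminance(const float* rgb, std::size_t pixelCount)
{
    const float eps = 1e-4f;            // avoids log(0) on black pixels
    double sumLogLum = 0.0;
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        // Rec. 709 luminance weights
        float lum = 0.2126f * rgb[3 * i + 0]
                  + 0.7152f * rgb[3 * i + 1]
                  + 0.0722f * rgb[3 * i + 2];
        sumLogLum += std::log(eps + lum);
    }
    return std::exp(static_cast<float>(sumLogLum / pixelCount));
}
```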

[Edit] See this page for more detail, although I don't necessarily agree with how they do bloom ;)

PS. I guess I was thinking that luminance could be "integrated" as each pixel is rendered, to a single global value, not to another RT.
You can't write to global memory in pixel shaders for a very good reason: all of those writes would have to be serialized, destroying the efficiency of massively parallel GPUs.
 
On second thought, one more dumb question...

Could the luminance information be rendered to a vertex buffer, then analyzed and passed on to the next frame?

Thanks again,

ERK
 
Yeah, but over time the accumulated error will grow and that's it, end of the game :)

Well, if I channeled the "It" of SMM I should say "well, you are a math guy, figure out exactly how long it takes for the error to build to an intolerable degree and tell that to the game designers; fit the gameplay around that!".
 
It seems video games don't use reasonable tone mapping.
Quite possible, but even something simple like a linear operator or any of the Reinhard-like operators should be sufficient. As long as the operator is continuous and "reasonably smooth", it shouldn't introduce discontinuities into the texture filtering scheme, by my understanding (although I'm still bouncing a few ideas around in my brain).

And nAo the more you talk about how your game does stuff properly the more I really want to play it and see the prettiness in real-time :) Just spend a few hours one day after work and port it to PC for me ;)
 
If you wanna use it in the following frames to do tone mapping within your color pass then how do you (properly) compute the luminance in the first place?
Does it matter? Start with near-zero exposure and after a few frames it will adjust, just like a fade-in.
Yeah, but over time the accumulated error will grow and that's it, end of the game :)
Why? It's a negative feedback system. You render frame N using the calculated exposure of frame N-1. If that gives you an underexposed image for frame N, you increase the exposure for frame N+1, and vice versa. You'd only get accumulated error for pathological cases like alternating bright and dark frames.

The place where tonemapping in the pixel shader breaks down is with alpha blending, e.g. particles or windows. Exposure isn't really the problem with this suggestion.
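As a rough sketch of that feedback loop (names and constants are illustrative; the step limits are one way to keep pathological frames from making the exposure run away):

```cpp
#include <algorithm>

// Hypothetical exposure controller: frame N is rendered with the exposure
// computed from frame N-1, then the exposure is nudged toward a target
// average luminance. Clamping the per-frame correction keeps alternating
// bright/dark frames from causing large oscillations.
float UpdateExposure(float currentExposure,
                     float measuredAvgLuminance,  // from the previous frame
                     float targetAvgLuminance)    // e.g. middle grey, ~0.18
{
    const float maxStepUp   = 2.0f;   // illustrative per-frame limits
    const float maxStepDown = 0.5f;
    float ratio = targetAvgLuminance / std::max(measuredAvgLuminance, 1e-4f);
    ratio = std::clamp(ratio, maxStepDown, maxStepUp);
    return currentExposure * ratio;
}
```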
 
Exposure isn't really the problem with this suggestion.
Only if you use a dumb tone mapping operator (as HL2 does). Use something more decent (a sigmoid..), then try to reconstruct the original luminance and tell me what you get on screen :)
You will end up with wide-range regions compressed to a bit or two, and all that data will be lost. That's why, imho, HL2 uses a linear operator: tone_mapped_pixel = hdr_pixel / ( max_range - a_small_offset_that_allows_to_burn_part_of_the_image )
While I think it all comes down to personal taste I just don't like what you get on screen with this stuff.
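For concreteness, here is a sketch of the two operator families being contrasted: the first mirrors the linear pseudocode above, the second (a Reinhard-style curve) stands in for the "more decent" sigmoid. Names and parameters are illustrative, not HL2's actual values:

```cpp
#include <algorithm>

// Linear operator in the spirit of the pseudocode above: everything beyond
// (maxRange - offset) burns to white, the rest is scaled linearly.
float ToneMapLinear(float hdr, float maxRange, float offset)
{
    return std::min(hdr / (maxRange - offset), 1.0f);
}

// Reinhard-style operator: a smooth, sigmoid-like curve that compresses
// the high end instead of clipping it. Wide high-luminance ranges land in
// the flat part of the curve, which is the precision loss described above.
float ToneMapReinhard(float hdr)
{
    return hdr / (1.0f + hdr);
}
```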
 
Only if you use a dumb tone mapping operator (as HL2 does). Use something more decent (a sigmoid..), then try to reconstruct the original luminance and tell me what you get on screen :)
You will end up with wide-range regions compressed to a bit or two, and all that data will be lost. That's why, imho, HL2 uses a linear operator: tone_mapped_pixel = hdr_pixel / ( max_range - a_small_offset_that_allows_to_burn_part_of_the_image )
While I think it all comes down to personal taste I just don't like what you get on screen with this stuff.
You don't need to precisely reconstruct the original luminance. You just need to know which way to adjust the exposure. There's no need to get the exposure correct in 1/60th of a second. Neither the human eye nor a camera adjusts that fast.

If large parts of the image are overexposed, for example, then you can easily get a lower bound on the luminance despite precision problems. Adjust the exposure accordingly for the next frame and repeat. Within a few frames, you'll have the majority of the scene in the central, non-flattened part of the sigmoid.

I don't like linear tone-mapping operators either. However, you're jumping to conclusions in deeming them necessary to adjust exposure with this technique.
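As a sketch of what "a lower bound despite precision problems" can look like, assuming a Reinhard-style curve was used for the previous frame (the cutoff value is an illustrative guess):

```cpp
#include <algorithm>

// Rough inversion of a Reinhard-style curve: given a tone-mapped value in
// [0,1] and the exposure it was rendered with, recover an approximate
// scene luminance. Near the flat end of the curve (ldr -> 1) the result is
// only a lower bound, which is still enough to know which way to push the
// exposure for the next frame.
float EstimateSceneLuminance(float ldr, float exposure)
{
    const float maxLdr = 0.995f;                   // treat anything above as burned out
    float clamped = std::min(ldr, maxLdr);
    float exposedLum = clamped / (1.0f - clamped); // invert l / (1 + l)
    return exposedLum / exposure;
}
```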
 
I don't like linear tone-mapping operators either. However, you're jumping to conclusions in deeming them necessary to adjust exposure with this technique.
Well, I see games using this method and they don't get exposure right, they just don't.
Play HL2 and have fun; things get overexposed and underexposed all the time.
I'm not saying that we should implement things as if we were writing academic papers rather than games, but dumbing things down will break at some point.

edit: and yes, you don't need to compute a new exposure value every frame, but it's not a big deal as it's very easy to distribute that computation over time.
There's really no excuse not to compute exposure properly, unless you have to support older hw.
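One illustrative way to distribute that computation over time (not taken from any particular game): process a single strip of the image each frame and fold it into a running average, so no frame pays for the full reduction:

```cpp
#include <cstddef>

// Hypothetical amortized luminance average. Each call processes one
// horizontal strip of a luminance buffer and blends it into a running
// average; after stripCount calls every pixel has contributed once.
// Assumes height is divisible by stripCount.
struct AmortizedLuminance
{
    double runningAverage = 0.18;   // start near middle grey
    std::size_t nextStrip = 0;

    void AccumulateStrip(const float* lum, std::size_t width,
                         std::size_t height, std::size_t stripCount)
    {
        std::size_t rowsPerStrip = height / stripCount;
        std::size_t y0 = nextStrip * rowsPerStrip;
        double sum = 0.0;
        for (std::size_t y = y0; y < y0 + rowsPerStrip; ++y)
            for (std::size_t x = 0; x < width; ++x)
                sum += lum[y * width + x];
        double stripAverage = sum / (rowsPerStrip * width);
        // Exponential blend so older strips gradually fade out.
        runningAverage = 0.9 * runningAverage + 0.1 * stripAverage;
        nextStrip = (nextStrip + 1) % stripCount;
    }
};
```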
 
Well, if I channeled the "It" of SMM I should say "well, you are a math guy, figure out exactly how long it takes for the error to build to an intolerable degree and tell that to the game designers; fit the gameplay around that!".
You used an intolerable word -- "gameplay"... The creature has never heard of it, and refuses to hear of it. There have been conversations that included the sentence "What's this whole gameplay thing you talk about? I don't get it, I mean I don't see why we should have it."

Why? It's a negative feedback system. You render frame N using the calculated exposure of frame N-1. If that gives you an underexposed image for frame N, you increase the exposure for frame N+1, and vice versa. You'd only get accumulated error for pathological cases like alternating bright and dark frames.
Then the problem becomes one of coming up with a good metric of "underexposed" vs. "overexposed." I suppose the most naive one is just looking at the mean luminance of the LDR image and saying you don't want it to be too low or too high, but I'd think it partly needs to depend on a user's definition of "just right." Moreover, some scenes can still have issues when you try something like that (e.g. scenes where one object ends up throwing you off). You're almost just as well off marking scene divisions with ideal exposure settings (possibly relative to a user-set default) for each scene block and having the designers lay them in. Yeah, I know... ugly...

There are niceties to not checking every frame... you can get away with only sampling every second or so and just claim that those overexposed or underexposed time periods where you blend between exposures are just "simulating the effect of your eyes adjusting to a new scene." ;)
 
Then the problem becomes one of coming up with a good metric of "underexposed" vs. "overexposed." I suppose the most naive one is just looking at the mean luminance of the LDR image and saying you don't want it to be too low or too high, but I'd think it partly needs to depend on a user's definition of "just right." Moreover, some scenes can still have issues when you try something like that (e.g. scenes where one object ends up throwing you off).
You determine "underexposed" vs. "overexposed" in the same way you do with a FP16 buffer. The problem is almost identical in both situations, except you have temporary precision issues for the flattest parts of the tone mapping curve. Why temporary? Because even with the precision issues, you can still get a rough idea of what their luminance is, and after adjusting in the right direction, the next frame will lessen the precision issues and so on.

There are niceties to not checking every frame... you can get away with only sampling every second or so and just claim that those overexposed or underexposed time periods where you blend between exposures are just "simulating the effect of your eyes adjusting to a new scene." ;)
I think you and nAo misunderstood me. I'm saying that there's no need for instantaneous exposure adjustment, not that we shouldn't check every frame (as we can make that pretty cheap through various techniques). My point is that if a drastic scene change or a poor initial guess of exposure makes for an underexposed/overexposed image, the feedback will adjust very quickly anyway (say a factor of 5 per frame).

Do either of you know of an open source FP16 HDR demo with really nice exposure and tone mapping? I can modify it to use a LDR buffer with in-shader tone mapping and you'll see what I'm getting at.
 
I found a site that better explains what I'm talking about:
http://www.imageval.com/public/Products/ISET/ISET_Introduction/AutoExposure.htm

B-pre is what I'm saying to take from the previous frame, and B-opt is your target value. How exactly you calculate B is a bit of an art (several methods are mentioned there, as well as the log-average techniques mentioned by a few in this thread, which would of course have to take into account that the input is tone-mapped and not linear). You might want to tweak that last equation, since it's based on linearity, but it still illustrates the negative feedback I was talking about. Note that good cameras only have 8-9 stops of dynamic range. Discounting the need for more range for alpha blending (as per my argument above), 32-bpp formats are fine for in-shader tone mapping.

If I may reword a famous phrase: If you are in the pursuit of photorealism, do as the cameras do. ;)
 
You determine "underexposed" vs. "overexposed" in the same way you do with a FP16 buffer. The problem is almost identical in both situations, except you have temporary precision issues for the flattest parts of the tone mapping curve. Why temporary? Because even with the precision issues, you can still get a rough idea of what their luminance is, and after adjusting in the right direction, the next frame will lessen the precision issues and so on.
Okay, perhaps I wasn't too clear, but the thing I was getting at was that there are several ways people try to BS this which aren't necessarily good. The problems happen when people try to determine an appropriate exposure level based on faulty luminance estimates: possibly because the estimation is too naive, because it ignores certain important details (like the fact that the image is not linear), or because they just don't sample sufficiently in the name of the almighty microsecond.

Sure, starting from the post-tone mapping image is all well and good, but trying to do something that is both acceptably close to correct and cheap enough for your particular application is where things tend to fall into hell. But then, everybody makes concessions because it's not always worth the trouble.
 
SMM, did you read my last post? How do you think cameras determine exposure? The sensors don't have anywhere near the dynamic range necessary to instantly determine the exposure level given the wide range of lighting conditions on this planet.

Have you ever used the histogram feature on a digital camera? If it's weighted too much on one side or the other, you need to adjust the exposure (though the camera usually does this for you). However, you don't need 5 orders of magnitude on the x-axis of your histogram to determine this. The post-tonemapped image along with the exposure level used to generate it is all you need to figure out which way it must be adjusted.

Let me ask you this: Have you ever come across a photograph which didn't have the correct exposure, yet you could not determine which way it needed to be adjusted?
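A sketch of that histogram check on the tone-mapped image; the bucket ranges and the "weighted too much to one side" threshold are illustrative guesses:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

enum class ExposureHint { TooDark, TooBright, Ok };

// Bucket 8-bit tone-mapped luminance values and see how much of the image
// piles up at either end of the histogram, camera-style.
ExposureHint CheckExposure(const std::uint8_t* lum, std::size_t pixelCount)
{
    std::array<std::size_t, 256> histogram{};
    for (std::size_t i = 0; i < pixelCount; ++i)
        ++histogram[lum[i]];

    std::size_t darkPixels = 0, brightPixels = 0;
    for (int v = 0; v < 16; ++v)    darkPixels   += histogram[v];
    for (int v = 240; v < 256; ++v) brightPixels += histogram[v];

    const std::size_t limit = pixelCount / 2;  // "weighted too much to one side"
    if (darkPixels   > limit) return ExposureHint::TooDark;
    if (brightPixels > limit) return ExposureHint::TooBright;
    return ExposureHint::Ok;
}
```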
 