fp16 render target affects MSAA quality?

That's not really what I was getting at (can't speak for nAo) -- I wasn't talking about the necessity of input dynamic range or the ability to recover it, but about the necessity of accounting for what your image actually contains, where it will be seen, and how adjustments go wrong when those things aren't considered. For instance, not accounting for the nonlinearity of the image, not using a sufficiently representative set of samples, or not accounting for the fact that everybody's TV will post-process on top of your adjusted image in a completely different way. It wouldn't really matter how you determine your adjustment if the data you construct isn't correct in the first place.
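A toy illustration of the nonlinearity pitfall: averaging gamma-encoded pixel values is not the same as averaging linear light, so a measurement taken in gamma space gives you the wrong answer before you've even chosen an adjustment. (A plain 2.2 power curve stands in for the real transfer function here; the values are made up.)

```python
def to_linear(v, gamma=2.2):
    """Decode a gamma-encoded value back to linear light."""
    return v ** gamma

dark, bright = 0.25, 0.75          # two gamma-encoded pixel values
naive_avg = (dark + bright) / 2.0  # averaged in gamma space: 0.5
linear_avg = (to_linear(dark) + to_linear(bright)) / 2.0

# Compare both in linear space: the naive average underestimates
# the true average luminance of the pair.
print(to_linear(naive_avg), linear_avg)
```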

My point was that in games, people don't bother as often as you'd think. Partly because they don't really believe it's worth it, partly because there's a real lack of people who have a sense for it, and partly because they're paranoid about what they'd consider to be unnecessary costs.

Even so, something like camera auto exposure is a different problem because it's working toward different goals. In a game, a problem like tone mapping is so often approached with the question of "what can I afford not to do?" because it's treated as so superficial that the demand is that the complexity of the task be equally superficial. I almost wonder if we'd be better off with specialized hardware for the job (I know many here would personally hate that, but if it's the lesser of several evils... well...).
 
Okay, but how does this affect the in-shader tone-mapping argument? I don't see how it's so much easier to use a FP16 backbuffer to determine exposure. If anything, I think it's easier to determine if an image is over/under exposed than predicting an exposure value that will make an image properly exposed.

There's nothing overly complex about what a camera does, either, so I don't agree that they're working towards different goals.
 
> Okay, but how does this affect the in-shader tone-mapping argument? I don't see how it's so much easier to use a FP16 backbuffer to determine exposure. If anything, I think it's easier to determine if an image is over/under exposed than predicting an exposure value that will make an image properly exposed.

Well, with an FP16 target, you're also opening yourself up to local/spatially varying tone mapping operators, which can often give you better results as you can localize contrast enhancement and saturation and so on. Granted you could do a separate pass recovering luminance and get spatially varying operators that way even with an LDR image, but that is a different thing from tone mapping at the end of shader output using an already-calculated exposure constant.

About the only thing FP16 gets you for a global operator is just more luminance precision "right now."
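To make the "already-calculated exposure constant" part concrete, here's a sketch of a global operator applied per channel at the end of the shader -- Reinhard's simple x/(1+x) curve as a stand-in, with the exposure value assumed to have been measured earlier (e.g. from the previous frame):

```python
def tone_map(channel, exposure):
    """Scale by a precomputed exposure, then compress [0, inf) into [0, 1)."""
    x = channel * exposure
    return x / (1.0 + x)

# A bright HDR pixel as it might come out of an FP16 target;
# the exposure constant of 0.5 is illustrative.
hdr_pixel = (4.0, 2.0, 0.5)
ldr_pixel = tuple(tone_map(c, exposure=0.5) for c in hdr_pixel)
print(ldr_pixel)  # every channel now fits an 8-bit backbuffer range
```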

> There's nothing overly complex about what a camera does, either, so I don't agree that they're working towards different goals.

In general, with a game, what people are paranoid about isn't the complexity of the individual calculations, but the quantity... i.e. they don't want to have to do this simple thing for a million pixels on what is considered a superficial operation that pretty much comes right at the tail end of the frame. I agree that it shouldn't be that way, but I don't think it's that simple to get people to listen, especially when you are aware that there ARE bigger fish to fry.
 
> Well, with an FP16 target, you're also opening yourself up to local/spatially varying tone mapping operators, which can often give you better results as you can localize contrast enhancement and saturation and so on. Granted you could do a separate pass recovering luminance and get spatially varying operators that way even with an LDR image, but that is a different thing from tone mapping at the end of shader output using an already-calculated exposure constant.

You are now talking about better-than-photorealism. There is really no pressing need for this in games right now. We don't even see it used in Hollywood movies.

> In general, with a game, what people are paranoid about isn't the complexity of the individual calculations, but the quantity... i.e. they don't want to have to do this simple thing for a million pixels on what is considered a superficial operation that pretty much comes right at the tail end of the frame. I agree that it shouldn't be that way, but I don't think it's that simple to get people to listen, especially when you are aware that there ARE bigger fish to fry.

I still don't see how this affects the FP16 vs. in-shader tonemapping argument. FP16 doesn't have easier individual calculations, nor does it have a lower quantity of pixels that need to be calculated. Both are just as easy to perform on downsampled images.
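For what it's worth, the measurement step really is the same either way. A sketch of the usual log-average ("key") exposure estimate, run over a downsampled set of luminance samples -- the function names and the 0.18 middle-grey target are illustrative, not anyone's actual implementation:

```python
import math

def log_average(luminances, eps=1e-4):
    """Geometric mean of luminance; eps guards against log(0) on black pixels."""
    return math.exp(sum(math.log(eps + y) for y in luminances) / len(luminances))

def auto_exposure(luminances, middle_grey=0.18):
    """Exposure that maps the scene's average luminance onto middle grey."""
    return middle_grey / log_average(luminances)

# A handful of downsampled luminance samples -- whether they came
# from an FP16 target or an LDR one, the math is identical:
samples = [0.02, 0.5, 1.2, 8.0, 0.1, 0.3]
print(auto_exposure(samples))
```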
 