Why would we want to average samples after gamma correction?
(btw, I found that swapping the order of the tonemapping and averaging passes often produces extremely similar results, at least with the classic Reinhard tone mapping operator.)
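A quick numeric sketch of that comparison, using the classic Reinhard operator x / (1 + x). The sample values are made up for illustration; how close the two orderings land depends entirely on how spread out the samples are:

```python
# Compare average-then-tonemap against tonemap-then-average
# with the classic Reinhard operator x / (1 + x).

def reinhard(x):
    return x / (1.0 + x)

samples = [0.2, 0.9, 1.5, 3.0]  # made-up linear HDR sample values

avg_then_tonemap = reinhard(sum(samples) / len(samples))
tonemap_then_avg = sum(reinhard(s) for s in samples) / len(samples)

print(avg_then_tonemap, tonemap_then_avg)
```

For tightly clustered samples (the usual case along an antialiased edge) the two results converge, which would explain why the swap is often hard to spot.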
Because we want things to be linear in "screen light emittance" space. So any operation that changes the curve from a linear ramp, including the sRGB response of the monitor, has to be done before you average the samples.
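A minimal sketch of why the averaging order matters, assuming a simple pow-2.2 monitor model (the function names and the two-sample edge pixel are my own illustration, not from the thread):

```python
# Averaging antialiasing samples in linear light and then encoding,
# vs. naively averaging the already gamma-encoded values.

GAMMA = 2.2

def encode(linear):   # linear light -> gamma-encoded framebuffer value
    return linear ** (1.0 / GAMMA)

def decode(encoded):  # gamma-encoded value -> linear light
    return encoded ** GAMMA

# A black/white edge pixel covered 50/50 by two samples.
samples_linear = [0.0, 1.0]

correct = encode(sum(samples_linear) / len(samples_linear))
naive = sum(encode(s) for s in samples_linear) / len(samples_linear)

print(correct, naive)
```

The linear-light average encodes to roughly 0.73, while the naive average gives 0.5, which the monitor then displays as far darker than half the edge's light; this is the classic "ropey" gamma-incorrect edge.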
I guess you could call correct coverage the most correct representation, but I don't think it matters much. I certainly wouldn't call it the best representation in terms of perceived quality.
Agreed. Graphics is all subjective anyway. That's why I reject the notion that it's all "science and math" and "no opinions". Ultimately it comes down to opinions.
Come to think of it, I realize my analog camera analogy sucked too. While the film surface reacts pretty much like I said (except that it's non-linear, which further breaks my analogy), it won't be a box filter there either, but probably something closer to a Gaussian, since the aperture brings in light from the surrounding area as well. With a pinhole camera it would work, though (if we ignore the fact that a zero-size aperture would need infinite exposure time). Thinking more about it, I think the "most correct representation" must be something that mimics the eye's behavior, and that's basically the same principle as in a camera. Perhaps that means a Gaussian (or something close to it) would be the answer after all.
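For what Gaussian sample weighting would look like in practice, here is a sketch computing normalized weights for a 3x3 supersample grid; the sigma value and the grid offsets are my own assumptions for illustration:

```python
# Normalized 2D Gaussian weights for a 3x3 grid of samples around
# a pixel center. Each sample contributes exp(-r^2 / (2 * sigma^2)).
import math

SIGMA = 0.5                      # filter width in pixels (assumed)
offsets = [-1.0, 0.0, 1.0]       # sample offsets from the pixel center

weights = [[math.exp(-(x * x + y * y) / (2.0 * SIGMA * SIGMA))
            for x in offsets] for y in offsets]
total = sum(sum(row) for row in weights)
weights = [[w / total for w in row] for row in weights]  # sum to 1
```

Unlike a box filter, the center sample dominates and neighboring samples taper off smoothly, which is closer to how an aperture spreads light across the film.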
Again, highly subjective. This will be an interesting field to explore in D3D10.
I disagree with that operation order, but I think one source of disagreement is that we might interpret "linear color space" differently.
Yeah, that's what I figure. I think we're really agreeing.
Non-linear tone mapping is a special case that tries to introduce perception characteristics into this model. That's bound to raise problems, and I doubt there's a "correct" way of doing it. I'm not truly convinced tone-mapping should be non-linear at all.
Well, you can achieve much better quality with a non-linear mapping. It all comes down to what goals you have, of course. If you try to mimic a camera, the tonemapping operator should be something like this:
1 - 2^(-exposure * value)
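That operator in code, as a direct transcription of the formula above (the default exposure value is just for illustration):

```python
# Exposure-based tonemapping: out = 1 - 2^(-exposure * value).
# Maps linear HDR values in [0, inf) smoothly into [0, 1).

def tonemap(value, exposure=1.0):
    return 1.0 - 2.0 ** (-exposure * value)

print(tonemap(0.0))  # 0.0: black stays black
print(tonemap(1.0))  # 0.5: a value of 1 maps to mid-range at exposure 1
```

Like film response, it never clips: arbitrarily bright inputs asymptotically approach white instead of hitting a hard ceiling.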
When rendering or processing an image, you really can't take the monitor response curve into account. There might not even be a monitor attached.
Well, in that case it doesn't really matter what you do. However, if we're to take linear light into account, as you talked about, then clearly we must take the monitor response into account if we want to be 100% correct; otherwise our photon linearity breaks. Of course, in practice we can just assume it's gamma 2.2 and be happy with that. It'll look good enough in the vast majority of cases.
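For reference, a sketch comparing the plain 2.2 power curve with the piecewise sRGB encoding function (these are the standard sRGB constants, not something from this discussion):

```python
# The "just assume gamma 2.2" shortcut vs. the piecewise sRGB curve,
# which has a linear segment near black and a 2.4 power segment above.

def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def gamma22_encode(linear):
    return linear ** (1.0 / 2.2)

for v in (0.001, 0.18, 0.5, 1.0):
    print(v, srgb_encode(v), gamma22_encode(v))
```

Through the midtones the two curves stay within about a percent of each other, which is why the plain 2.2 assumption looks good enough in practice; they diverge mostly near black, where the sRGB curve is linear.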