MfA said:
In the sense that it is hard to know when to apply them, and when they will cause artifacts, without shrinking the artist down to fit on the chip too so he can direct the shot in realtime.
Well, I generally agree with you, but there are a couple of easy ways you could use filters in realtime (see the sketch after this list):
- perform it on the whole screen, like a sharpen + glow combination
- render elements separately with a matte in the alpha channel and individually manipulate the layers, then comp everything together for display
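Here's roughly what the first option boils down to, as a CPU-side sketch. Purely illustrative: in a game this would run as a pixel shader over the rendered frame, and the kernel sizes and parameter values are made-up assumptions, not anyone's actual pipeline.

    // Rough CPU sketch of a fullscreen sharpen + glow pass.
    // In practice this runs as a pixel shader; kernel sizes and
    // parameter values below are illustrative assumptions.
    #include <algorithm>
    #include <vector>

    struct Image {
        int w, h;
        std::vector<float> px; // RGB, row-major, w*h*3 floats

        float& at(int x, int y, int c) {
            x = std::clamp(x, 0, w - 1); // clamp-to-edge addressing,
            y = std::clamp(y, 0, h - 1); // like texture border clamping
            return px[(y * w + x) * 3 + c];
        }
    };

    // Unsharp-mask sharpen: result = original + amount * (original - blurred).
    Image sharpen(Image src, float amount) {
        Image out = src;
        for (int y = 0; y < src.h; ++y)
            for (int x = 0; x < src.w; ++x)
                for (int c = 0; c < 3; ++c) {
                    float blur = (src.at(x - 1, y, c) + src.at(x + 1, y, c) +
                                  src.at(x, y - 1, c) + src.at(x, y + 1, c) +
                                  src.at(x, y, c)) / 5.0f;
                    out.at(x, y, c) += amount * (src.at(x, y, c) - blur);
                }
        return out;
    }

    // Glow: keep only the bright parts, blur them, add them back on top.
    // A single 5x5 box blur stands in for the wide Gaussian you'd really use.
    Image glow(Image src, float threshold, float strength) {
        Image bright = src;
        for (float& v : bright.px) v = std::max(v - threshold, 0.0f);
        Image out = src;
        for (int y = 0; y < src.h; ++y)
            for (int x = 0; x < src.w; ++x)
                for (int c = 0; c < 3; ++c) {
                    float sum = 0.0f;
                    for (int dy = -2; dy <= 2; ++dy)
                        for (int dx = -2; dx <= 2; ++dx)
                            sum += bright.at(x + dx, y + dy, c);
                    out.at(x, y, c) += strength * sum / 25.0f;
                }
        return out;
    }

Both effects are just a handful of neighbourhood taps and a blend per pixel, which is why they map so well onto a single fullscreen pass.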
And artists can simply set up a few post effects so that they'll look good for a whole level. Also, users are much more forgiving about rendering artifacts in interactive applications. Just look at how long we've gone without antialiasing, or even perspective-correct texture mapping.
And of course you don't have to match the realtime imagery to live action, so it won't require as much or as complex manipulation, just a few effects with a high "wow" factor.
Oh, and I forgot to mention color correction, which is just as important. Its wow factor can be pretty big as well, and it should also reduce the artist time spent on fine-tuning assets to get a consistent look.
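Even a simple lift/gamma/gain control per channel covers a lot of ground. Here's a tiny sketch; the knob names and default values are just assumptions for illustration:

    // Minimal per-channel lift/gamma/gain color correction sketch.
    // Parameter names and defaults are illustrative assumptions.
    #include <cmath>

    struct ColorGrade {
        float lift[3]  = {0.00f, 0.00f, 0.02f}; // raise blacks slightly toward blue
        float gamma[3] = {1.00f, 1.00f, 1.00f}; // midtone curve
        float gain[3]  = {1.05f, 1.00f, 0.95f}; // warm up the highlights

        void apply(float rgb[3]) const {
            for (int c = 0; c < 3; ++c) {
                float v = rgb[c] * gain[c] + lift[c];
                rgb[c] = std::pow(v < 0.0f ? 0.0f : v, 1.0f / gamma[c]);
            }
        }
    };

And since the whole grade is just a per-pixel function of color, it can be baked into a small lookup texture, so at runtime it costs about one texture fetch per pixel no matter how elaborate the grade gets.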
The more physically accurate the models used for rendering are, the less you have to worry about edge cases where your approximations break down. Realtime rendering is more difficult than offline rendering in a way: you control both the view and the scene only in a very limited sense ... and there are no second chances.
I still don't believe in the supremacy of physically correct rendering. It's a lot harder to manipulate reality to get the effects you want; there are too many rules and restrictions, and it needs too much computing capacity.
Let me bring up an example. A few years ago, several renderers were introduced that used Monte Carlo sampling for relatively fast Global Illumination. It created a lot of buzz in the industry, and many people thought the end for lighting artists had come. Everybody started to pump out those dull greyish or blueish skylight + 1 sunlight renders, which looked quite realistic but also incredibly boring.
Things soon went back to normal, though, as it turned out that a GI renderer cannot replace a good artist. The new abilities of the software found their proper place as they got integrated into existing toolsets (see the bit about ambient occlusion above). And the most important requirements for a renderer remain the same: good support for large, highly complex scenes, good displacement mapping, and motion blur and depth of field.
PS. I think in DX9+ shaders the framebuffer will be an input ... so it really is no problem; no specific support necessary IMO.
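For what it's worth, on DX9-class hardware the usual way to get that effect is render-to-texture rather than a literal framebuffer read: draw the scene into an off-screen target, then bind it as a texture for a fullscreen post pass. A rough sketch, with device setup, the quad geometry, and error checking all omitted:

    // Sketch of the "framebuffer as shader input" pattern via
    // render-to-texture on a D3D9 device. Resource creation and
    // error checking omitted; names are placeholders.
    #include <d3d9.h>

    void postProcessFrame(IDirect3DDevice9* device,
                          IDirect3DTexture9* sceneTex, // created with D3DUSAGE_RENDERTARGET
                          IDirect3DPixelShader9* postShader)
    {
        IDirect3DSurface9 *sceneSurf = nullptr, *backBuf = nullptr;
        sceneTex->GetSurfaceLevel(0, &sceneSurf);
        device->GetRenderTarget(0, &backBuf);

        // Pass 1: draw the scene into the off-screen target.
        device->SetRenderTarget(0, sceneSurf);
        // ... normal scene rendering goes here ...

        // Pass 2: the previous "framebuffer" is now just a texture,
        // so the pixel shader can sample it like any other input.
        device->SetRenderTarget(0, backBuf);
        device->SetTexture(0, sceneTex);
        device->SetPixelShader(postShader);
        // ... draw a fullscreen quad to run the post effect ...

        sceneSurf->Release();
        backBuf->Release();
    }

So no specific support needed, just a texture bind between passes.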
Sounds cool, but then again I'm no programmer.