Xmas said:
I really trust some people to recognize 2x or 4x AA when they see it, even without having to zoom into screenshots.
I wouldn't. My GF3->GF4 example illustrates why.
For the longest time it was title after title of seeing the effect in motion and being completely baffled... then going around the internet and finding an almost unanimous opinion of a massive improvement, all illustrated with screenshots of QC/xS modes (admittedly, the screenshots were pretty stellar). Then the opposite: websites doing side-by-side comparisons of 2xMS screenshots and condemning it as ineffective or low quality, when the result in motion was actually quite the opposite.
It's as if these people were totally blind to the actual image quality; instead they relied on taking a slew of screenshots, assuming those were representative, then panning around zoomed regions to make a decision. As it turns out, their methods were wrong: the best representation would have been simply admiring the output, in motion, in games, the way it should have been evaluated... but wasn't.
----------
On a different note, I don't see NVIDIA ever disclosing the answers we seek concerning post-filtering/post-processing on their hardware. It just goes against their style and ethos to divulge such things.
I halfway can't blame them. Some amount of "magic" is lost once the true processes are unveiled. It also quickly exposes any flaws or shortcomings and opens up a lot of discussion and debate.
The way I look at it, it's a lot like when a Vegas magic act takes a camera backstage and shows how they only appeared to cut a woman in half, or how a group of eight people in a cage becomes a pair of white tigers with a quick swap of a white sheet... from that point forward, you start to notice the flaws and the effect becomes "cheap" or "cheesy".
But in related news, I think there is more to post-filtering/post-processing on NVIDIA hardware than most are aware of, especially when it comes to things like Digital Vibrance.
A quick experiment for those interested in such things: Digital Vibrance is certainly a post-filter effect, as DV does NOT show up in screenshots or in the framebuffer. I have noticed a great deal of artifacting and visual problems when DV and anisotropic filtering are used in tandem... which leads me to believe there *may* be some post-filter blending used with anisotropic filtering as well, although I haven't quite figured out how.
The tests I've used are textured regions with high-contrast fonts/text, which give the best portrayal. With DV alone, there is no change. With AF alone, there is no change. With the two together, suddenly there are extra pixels and the coloration shifts (whites become purples/blues, reds shift brighter toward cyan/pink, etc.). It's the extra pixels that interest me; they might someday yield insight into some of the effects used on this hardware. It has an effect on text similar to supersampling, but it doesn't show through in screenshots... only on-screen.
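For anyone who wants to poke at this themselves, here's a minimal sketch of the kind of harness I mean. It's not my exact setup: it assumes plain C with OpenGL/GLUT, uses a simple red/white bar pattern as a stand-in for the textured text regions, and the "framebuffer_dump.ppm" filename and window title are just placeholders. The idea is to render a high-contrast pattern, then dump the framebuffer with glReadPixels; anything the post-filter adds (like DV) should be visible on the monitor but absent from the dumped file.

[code]
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

static int win_w = 512, win_h = 512;

/* Read the front buffer (what is currently displayed, before any
 * post-filtering) and write it out as a PPM for comparison. */
static void dump_framebuffer(const char *path)
{
    unsigned char *px = malloc((size_t)win_w * win_h * 3);
    if (!px) return;
    glReadBuffer(GL_FRONT);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, win_w, win_h, GL_RGB, GL_UNSIGNED_BYTE, px);
    FILE *f = fopen(path, "wb");
    if (f) {
        fprintf(f, "P6\n%d %d\n255\n", win_w, win_h);
        for (int y = win_h - 1; y >= 0; --y)   /* GL rows are bottom-up */
            fwrite(px + (size_t)y * win_w * 3, 1, (size_t)win_w * 3, f);
        fclose(f);
    }
    free(px);
}

/* Draw alternating saturated red and white bars: a crude high-contrast
 * pattern against which the hue shifts described above (whites toward
 * blue/purple, reds toward pink) are easy to spot on-screen. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    for (int i = 0; i < 16; ++i) {
        if (i & 1) glColor3f(1.0f, 1.0f, 1.0f);
        else       glColor3f(1.0f, 0.0f, 0.0f);
        float x0 = -1.0f + i * (2.0f / 16.0f);
        float x1 = x0 + 2.0f / 16.0f;
        glBegin(GL_QUADS);
        glVertex2f(x0, -1.0f); glVertex2f(x1, -1.0f);
        glVertex2f(x1,  1.0f); glVertex2f(x0,  1.0f);
        glEnd();
    }
    glutSwapBuffers();
}

static void key(unsigned char k, int x, int y)
{
    (void)x; (void)y;
    if (k == 'd') {
        dump_framebuffer("framebuffer_dump.ppm");  /* filename is arbitrary */
        printf("wrote framebuffer_dump.ppm\n");
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(win_w, win_h);
    glutCreateWindow("post-filter probe");
    glutDisplayFunc(display);
    glutKeyboardFunc(key);
    glutMainLoop();
    return 0;
}
[/code]

Toggle DV and AF in the driver control panel between runs, press 'd', and compare the dumped file against what the monitor actually shows. If the color shifts are visible on-screen but absent from the dump, the effect is being applied after the framebuffer, which is exactly what makes it invisible to screenshots.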
All interesting stuff and interesting discussion. I think with enough findings and research, we might get a better feel for how the hardware works and which processes affect which pieces of the output.