Mintmaster
Veteran
Okay, maybe I'm cherry picking, but I did find at least a couple of games with this kind of disparity between computerbase.de and other sites.

It wasn't nearly half the framerate for the games I played back then; I had even written a minor write-up about it, and the difference was slightly above the OC gain.
Good point.

There are way too many applications out there where some developer had the funky idea to use an idiotic negative LOD for texture X. You get shimmering there even without optimisations; enable optimisations and all hell can break loose.
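To illustrate why a negative LOD bias causes that shimmering, here's a minimal toy model (not any particular driver's logic; the function names and the bias values are illustrative):

```python
import math

def ideal_lod(texels_per_pixel):
    # ideal mip level: log2 of the texture footprint per screen pixel
    return math.log2(max(texels_per_pixel, 1.0))

def texels_sampled_per_pixel(texels_per_pixel, lod_bias):
    # footprint remaining after the (possibly biased) mip is chosen;
    # anything above 1.0 means undersampling, which shimmers in motion
    lod = ideal_lod(texels_per_pixel) + lod_bias
    return texels_per_pixel / (2.0 ** lod)

# unbiased: the mip chain keeps things at ~1 texel per pixel
# a -1.0 bias selects a mip twice as detailed: 2 texels per pixel -> aliasing
```

With a bias of 0 the ratio stays at 1.0; a bias of -1.0 doubles the texels fighting over each pixel, and under minification that undersampling flickers as the camera moves, with or without filtering optimisations on top.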
In terms of hardware, sure, but I was talking more about driver settings. There was terrible loss of detail on NV3x cards (e.g. a lower mipmap often used), visible in comparison screenshots, and brilinear usage was really over the top, with only a small transition region.

I don't think that much has changed in terms of AF-related optimisations from NV3x to G7x. IMHLO they merely saved transistors on NV4x/G7x with the far stronger angle-dependency, something that bounced back to NV3x levels again with G80.
Your AA analogy actually supports my viewpoint more than yours. 2xRGMS is usually equal in quality to 4xOGMS, but is much faster. You sacrifice a bit of quality for a lot of speed.

I'd rather have 4x good AF samples than 16x crappy AF samples, much as folks made extensive use of 2xRGMS instead of 4xOGMS around the GF3 timeframe.
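The 2xRGMS vs 4xOGMS point can be made concrete by counting distinct sample offsets per axis, which is what determines how many intensity steps a near-vertical or near-horizontal edge can show (sample positions below are illustrative, not any specific chip's pattern):

```python
def axis_coverage(pattern):
    # distinct offsets along each axis = number of gradient steps
    # available on near-vertical / near-horizontal edges
    xs = {x for x, y in pattern}
    ys = {y for x, y in pattern}
    return len(xs), len(ys)

RGMS_2X = [(0.25, 0.75), (0.75, 0.25)]      # rotated grid, 2 samples
OGMS_4X = [(0.25, 0.25), (0.25, 0.75),
           (0.75, 0.25), (0.75, 0.75)]      # ordered grid, 4 samples
```

Both patterns resolve two distinct offsets per axis, so near-axis edges look about the same, yet the ordered grid spends twice the samples getting there.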
There's not much perf drop from 4xAF to 16xAF, and I think it would be much less than going from 4x crappy AF to 4x good AF. I doubt you'd have an equal performance comparison there.
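The "not much perf drop from 4xAF to 16xAF" observation follows from how AF adapts per pixel; a rough sketch (a simplification, not real hardware behaviour):

```python
import math

def af_probe_count(anisotropy_ratio, max_af):
    # rough model: hardware takes about ceil(ratio) bilinear probes
    # along the line of anisotropy, clamped at the user-set AF cap
    return min(math.ceil(anisotropy_ratio), max_af)

# most screen pixels are viewed near-perpendicular (ratio ~1-2), so
# raising the cap from 4x to 16x only adds probes on the few highly
# oblique pixels -- whereas improving per-sample quality costs on every pixel
```

That asymmetry is why raising the cap is nearly free while fixing "crappy" samples is not: the cap only bites where the surface is steeply angled.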
If severe, I agree (which is why AF is far more important than resolution IMO), but I really think more reviewers would mention the problem if it was ubiquitous. Disabling all optimizations for all games in a review isn't justified to me.

In a case where severe underfiltering is present, saying that I'm "using AA" builds a nice oxymoron, since MSAA is limited to polygon edges/intersections. Given how much of the scene polygon interiors cover, proper texture AA is far more important to me than any MSAA.
My decision in your scenario obviously depends on what the perf change is and what the artifact density is (along with the artifact behaviour in motion). Going back to your earlier GF3 example, 4xOGMS entailed a huge perf drop compared to 2xRGMS, but didn't improve the worst case and only marginally improved a few edges.

Imagine you could pick in driver panel X between a lossless MSAA algorithm and a quite lossy one; the latter ends up quite a bit faster. Now which one would you choose, given that the latter would fail to antialias a healthy percentage of poly edges, for instance?
Now would the same apply for reviewers/users? Or else why wasn't stuff like Quincunx or ATI's new custom filters received generally with enthusiasm?
Quincunx and CFAA reduce the quality of everything, IMO.
The reasoning for the latter is the same as for everything else: higher performance with an IQ drop that is generally unnoticeable. If I was developing a game, I'd enable trilinear filtering to eliminate the visibility of mip-map boundaries. Brilinear achieves the same thing most of the time at a lower cost.

Well, you could have skipped the obvious exaggeration. Shader aliasing and/or crappy content (such as negative LOD on textures) could be catered for by developers themselves; the first needs way more resources, and I could find a good excuse for its absence. For the latter, though, I cannot find a single viable excuse.
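The brilinear saving discussed above can be sketched as a narrowed blend band in the LOD fraction (a toy model; real drivers pick the band width heuristically, and the parameter names here are made up):

```python
def brilinear_weight(lod_frac, band=0.5):
    # blend between mips only within the central `band` of the LOD
    # fraction; outside it, snap to the nearer mip, which needs one
    # bilinear fetch instead of two. band=1.0 is full trilinear.
    lo = (1.0 - band) / 2.0
    hi = 1.0 - lo
    if lod_frac <= lo:
        return 0.0
    if lod_frac >= hi:
        return 1.0
    return (lod_frac - lo) / (hi - lo)
```

With `band=0.5`, half of all texture samples fall outside the blend region and cost a single bilinear fetch; the mip boundary is still hidden as long as the transition band is wide enough, which is exactly where the "small transition region" complaints came from when drivers pushed it too far.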