DaveBaumann said:
A couple of things that strike me: [H] suggests that these reductions in IQ could not be noticed based on the evidence of their screenshots. While it may be the case that there most certainly are combinations of textures and scene that make this difficult to notice, likewise there are combinations that are more easily noticeable - personally I thought we'd already shown this from the images indicated in our other thread (which I assume were the ones that DoomTrooper tried to post over at the [H] forum - if so, Kyle, he's free to "leech" them off our servers). In the instance we showed, you are actually more likely to notice them in movement.
This is really the crux of the issue. AFAICS, you only posted
one set of regular screenshots in that thread. The mipmap transitions are clearly visible, although perhaps not what I'd call egregious in the stills (in motion I'd imagine they'd stand out quite a bit). [H] has posted a decent number of screens, and I can only see mipmap transitions on one of them, and even then it's very subtle. Several of them are obviously poorly chosen to illustrate the issue, but some of them seem like they should be adequate.
Both Kyle and Brett state that the transitions aren't visible even in motion. You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly? I certainly don't buy the pablum that image quality doesn't matter in fast-paced shooters (if that's the case, why don't you save your $499 and play Quakeworld??). But I'd also like to know whether this is something one needs to actively search for to see it, particularly if one isn't an eagle-eyed reviewer and hardware expert.
Second: about the most intriguing thing to come out of all this was a throwaway comment made, IIRC, by Kyle in the [H] forums, to the effect that he'd enabled Performance (i.e. bilinear) on the R3x0, and that the mipmap transitions were immediately obvious and annoying, in a way that is obviously not the case with whatever the crap Nvidia is doing.
Do you concur with this assessment? If so, at the least this suggests that the proper response is not to benchmark Nvidia's Quality against ATI's Performance, as some have suggested. It also suggests that Nvidia is doing something other than straight bilinear (which, in truth, was already suggested by the colored mipmap pics), and even perhaps something a bit more complicated than just mostly bilinear with a brief bit of mipmap blending just as the transition approaches.
What are your thoughts on this? Despite all the condemnation Kyle and crew are receiving for this latest article, IMO it's actually reasonably convincing. Not, of course, that disabling trilinear secretly on an application basis is at all kosher, or that this is a fair comparison to R3x0 with filtering set to application preference. But at least that this may be a reasonably smart approach to minimize the trilinear hit at a seemingly negligible IQ cost, and even one that we should be encouraging.
Now, there have been lots of comments suggesting this is "OK, because there is such a divergence in the boards these days anyway".... NO! For god's sake, this is Trilinear filtering we're talking about here - a fundamental filtering process.
Agreed that those comments on the [H] forums to the effect that "GPUs are so complicated these days" are ignorant drivel. Not agreed that IHVs shouldn't be looking for ways to get most of the benefit at less of the (significant) cost. I mean, supersampling is a fundamental antialiasing process, and it still confers some quality benefits over MSAA + AF in certain situations. That doesn't mean ditching SSAA for MSAA wasn't one of the most significant improvements in the last several years.
Trilinear has been with us since the days of multitexturing; GeForce 256 even did the thing for free (fill-rate wise, at least).
And NV1 did quadratic primitives for free.
Isn't it established (or at least insanely likely) that GF1's free trilinear was more a case of a buggy 2nd TMU/pipe? Even if not, free trilinear is an extremely bad design decision, considering how many textures aren't meant to receive trilinear even when it's properly enabled. There is a reason we haven't seen it
since GF1.
Why, in the era of $500 boards, are we finding it acceptable to reduce the quality of such a basic element of image generation? This is fundamental - and we've shown there certainly is no actual issue with running this game with full Trilinear enabled (via Antidetect).
Because trilinear reduces performance significantly. If Nvidia comes up with a method to get most of the IQ benefits of trilinear at a fraction of the performance hit, then by all means we should encourage it. As the purpose of trilinear is generally given as "removing the mipmap transitions from bilinear", it does seem a little silly to do trilinear over the entire mipmap range if it only matters for a small portion near the transitions.

Now, I'm guessing there may be other benefits to doing "FSTF" (full-screen trilinear), perhaps in the realm of preventing texture aliasing. But I'm mostly guessing this because otherwise full trilinear would seem a somewhat stupid thing to do - in the vein of doing full-screen anisotropic filtering instead of only oversampling those pixels at large anisotropic angles, and then only doing enough to stay under the Nyquist limit. So if trilinear actually prevents texture aliasing as well, I'd like to see some discussion of that over here, for god's sake. And if it doesn't, then this optimization looks long overdue more than anything.
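To make the distinction concrete, here's a toy sketch of what I'm guessing the scheme looks like - and this is purely my speculation about the mechanism, not anything Nvidia has documented. Full trilinear blends the two nearest mip levels everywhere, in proportion to the fractional LOD; the "mostly bilinear" variant would sample a single mip level except inside a narrow band around each transition. The function names and the `band` parameter are hypothetical, just for illustration:

```python
import math

def trilinear_weight(lod):
    """Full trilinear: blend between the two nearest mip levels
    everywhere, in proportion to the fractional part of the LOD."""
    return lod - math.floor(lod)

def partial_trilinear_weight(lod, band=0.25):
    """Hypothetical 'mostly bilinear' scheme: use a single mip level
    (weight 0.0 or 1.0, i.e. plain bilinear) except within a narrow
    band around the point where bilinear would switch mips, where the
    two levels are blended as in trilinear."""
    frac = lod - math.floor(lod)
    lo = 0.5 - band / 2  # band straddles the bilinear switch point
    hi = 0.5 + band / 2
    if frac < lo:
        return 0.0                 # pure bilinear from the nearer mip
    if frac > hi:
        return 1.0                 # pure bilinear from the farther mip
    return (frac - lo) / band      # blend only near the transition
```

The appeal is obvious from the sketch: with `band=0.25`, only about a quarter of the screen ever pays the two-mip-level sampling cost, yet the hard transition line that makes bilinear ugly still gets smoothed over. Whether that band is wide enough to be invisible in motion is exactly the empirical question at hand.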
Of course our evaluation of this hinges on the claim that what they're doing really does look much more like trilinear than bilinear even though the mipmaps show that its workload is a lot closer to bilinear than trilinear. Kyle and Brett both say this is the case, and have put up some reasonable evidence. I don't have access to an NV3x card, but many people on this forum do. I'd certainly appreciate more objective testing and some subjective second opinions on the issue instead of just more moans and flames.
None of this excuses Kyle's ridiculous editorials, double standards, or forum fascism. In particular, banning you was about the most ridiculous move I can conceive of. But on the flipside, none of that stuff should excuse us from getting to the bottom of what seems to be a very interesting issue.
And, I realize that you may be looking at this issue wearing your reviewer's hat, in which case it is surely absolutely unethical of Nvidia to sneak in an optimization that overrides requested settings (although only to the degree that "quality" implicitly requests "full trilinear filtering"), only in certain highly benchmarked games, and without telling anyone about it. But if you leave that aside for a moment and put on your hardware enthusiast's hat, perhaps there's something worthwhile here?
/turns to crowd
Or am I just totally off base on this one???