Dave H said:
This is really the crux of the issue. AFAICS, you only posted one set of regular screenshots in that thread. The mipmap transitions are clearly visible, although perhaps not what I'd call egregious in the stills (in motion I'd imagine they'd stand out quite a bit). [H] has posted a decent number of screens, and I can only see mipmap transitions on one of them, and even then it's very subtle. Several of them are obviously poorly chosen to illustrate the issue, but some of them seem like they should be adequate.
Both Kyle and Brett state that the transitions aren't visible even in motion. You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly? I certainly don't buy the pablum that image quality doesn't matter in fast-paced shooters (if that's the case, why don't you save your $499 and play Quakeworld??). But I'd also like to know whether this is something one needs to actively search for to see it, particularly if one isn't an eagle-eyed reviewer and hardware expert.
I can't speak for Dave B., but...
Dave H, I think you might be getting ahead of yourself here. First you say, "The mipmap transitions are clearly visible..." and then, "in motion I'd imagine they'd stand out quite a bit."
Then you say, "Both Kyle and Brett state that the transitions aren't visible even in motion" and "You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly?"
In the first paragraph you correctly realize that if you can see a mipmap band in a screenshot, you can bet it's visible when the game is in motion. But in your second paragraph you say that Dave B. merely "implies" the visible mipmap boundaries--the same boundaries which, in your first paragraph, you state "... are clearly visible..." and which "in motion I'd imagine they'd stand out quite a bit." Didn't you more or less answer your own question about what Dave B. "implied" with the observations in your first paragraph? I.e., Dave B.'s screenshots didn't just "imply" it, they proved it. Right?...
Also, what I got out of [H]'s text was not so much that they "couldn't see any mipmap boundaries" when playing the game, but rather that "what mipmap boundaries you can see while playing the game don't matter since you are running around fragging people and don't have time to judge the IQ." I.e., what I got from [H] was simply that "it didn't matter" if mipmap boundaries were occasionally visible since in all likelihood "you wouldn't notice them" when playing the game because your attention would be elsewhere.
Second: about the most intriguing thing to come out of all this was a throwaway comment made by (IIRC) Kyle in the [H] forums, to the effect that he'd enabled Performance (i.e. bilinear) on the R3x0, and that the mipmap transitions were immediately obvious and annoying, in a way that is obviously not the case with whatever the crap Nvidia is doing.
Do you concur with this assessment? If so, at the least this suggests that the proper response is not to benchmark Nvidia's Quality against ATI's Performance, as some have suggested. It also suggests that Nvidia is doing something other than straight bilinear (which, in truth, was already suggested by the colored mipmap pics), and even perhaps something a bit more complicated than just mostly bilinear with a brief bit of mipmap blending just as the transition approaches.
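To make that hypothesis concrete, here's a rough sketch of the difference between full trilinear and the "mostly bilinear with a narrow blend band near the transition" behaviour the colored-mipmap shots seem to hint at. This is purely illustrative--the function names, the per-level debug tints, and especially the 20% blend-band width are my own assumptions, not anything nVidia has documented:

```c
#include <stdio.h>

typedef struct { float r, g, b; } color;

/* Stand-in for one bilinear fetch: here it just returns a debug tint per
 * mip level, the same trick the colored-mipmap screenshots use. */
static color sample_bilinear(int mip)
{
    static const color tint[] = {
        {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}, {1, 0, 1}
    };
    return tint[mip % 5];
}

static color lerp(color a, color b, float t)
{
    color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

/* Full trilinear: always blend the two nearest mip levels. */
static color trilinear(float lod)
{
    int   base = (int)lod;
    float frac = lod - (float)base;
    return lerp(sample_bilinear(base), sample_bilinear(base + 1), frac);
}

/* Hypothesized "faux trilinear": plain bilinear over most of each level,
 * blending only in a narrow band (width is my guess) near the transition. */
static color faux_trilinear(float lod)
{
    const float band = 0.2f;              /* assumed blend-band width */
    int   base = (int)lod;
    float frac = lod - (float)base;
    if (frac < 1.0f - band)
        return sample_bilinear(base);     /* 4 texel fetches, no blend */
    float t = (frac - (1.0f - band)) / band;
    return lerp(sample_bilinear(base), sample_bilinear(base + 1), t);
}

int main(void)
{
    for (float lod = 0.0f; lod < 2.0f; lod += 0.25f) {
        color a = trilinear(lod), b = faux_trilinear(lod);
        printf("lod %.2f  trilinear (%.2f %.2f %.2f)  faux (%.2f %.2f %.2f)\n",
               lod, a.r, a.g, a.b, b.r, b.g, b.b);
    }
    return 0;
}
```

With the tints standing in for real texels, full trilinear prints a smooth gradient across the whole LOD range, while the hypothesized scheme prints flat color for most of each level and only blends in the final stretch--which is essentially what the colored-mipmap shots appear to show.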
A couple of problems with this approach...
Yes, it's accurate to state that a direct comparison of nVidia's faux-trilinear with ATi's bilinear would be incorrect from an IQ standpoint. It is also, therefore, equally incorrect to compare nVidia's faux-trilinear with ATi's full trilinear, for the same reason. [H] incorrectly does this.
Second, it should not be forgotten that nVidia has not stopped doing full trilinear in other games--and that this situation only applies to UT2K3. As someone else stated, if nVidia does full trilinear in Unreal 2, and pretty much everything else, why is it only in UT2K3 that nVidia feels it is necessary or beneficial to eliminate the option of full trilinear filtering (regardless of whether faux-trilinear is made available as an option or not)? The best answer so far is that nVidia has singled out UT2K3 in this regard because its associated canned timedemos (Antalus Fly-By, etc.) are so often used for benchmarking by hardware review sites.
Clearly, nVidia feels there is a difference between its faux-trilinear and full trilinear filtering; otherwise, the only thing the nVidia drivers would provide for every application would be its faux variety of trilinear filtering. Right?
What are your thoughts on this? Despite all the condemnation Kyle and crew are receiving for this latest article, IMO it's actually reasonably convincing. Not, of course, that disabling trilinear secretly on a per-application basis is at all kosher, or that this is a fair comparison to R3x0 with filtering set to application preference. But it does at least suggest that this may be a reasonably smart approach to minimize the trilinear hit at a seemingly negligible IQ cost, and even one that we should be encouraging.
Which would be fine and dandy provided nVidia did not displace the option of full trilinear filtering in the game with this faux-trilinear, performance-oriented compromise. In fact, it would be even more fine and dandy if nVidia exposed this mode directly in its control panel so that an end user could choose it for all 3D games--without removing full trilinear capability for end users who prefer that instead.
Agreed that those comments on the [H] forums to the effect that "GPUs are so complicated these days" are ignorant drivel. Not agreed that IHVs shouldn't be looking for ways to get most of the benefit at less of the (significant) cost. I mean, supersampling is a fundamental antialiasing process, and it still confers some quality benefits over MSAA + AF in certain situations. That doesn't mean ditching SSAA for MSAA wasn't one of the most significant improvements in the last several years.
SSAA and MSAA are simply different methods of doing the same thing--FSAA. Within the SSAA and MSAA camps, individual IHVs implement each technique in greatly different ways. The rough equivalent here would be an IHV claiming to do SSAA while in reality doing MSAA, and insisting it was legitimate to call it "SSAA" because "most of the time" it looked "almost as good." The problem is that regardless of how good it looks, there would be no justification for calling MSAA SSAA, as the two aren't the same. Likewise, whatever nVidia's doing in UT2K3, it's not the same as full trilinear, and "looks almost as good" simply doesn't count. Whatever is being done is being done at the expense of full trilinear support in the game, and that's the problem. The fact that this situation seems unique to UT2K3 merely complicates the matter even further.
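For anyone who hasn't followed the SSAA/MSAA distinction that makes the analogy work: the two differ in how much shading work is done per pixel, not just in how the result looks. A deliberately oversimplified sketch--my own toy model, not any IHV's implementation:

```c
#include <stdio.h>

#define SUBSAMPLES 4

/* Toy stand-in for one run of the pixel shader / texture stage; its result
 * varies at sub-pixel granularity so the two schemes produce different output. */
static double shade(int x, int y, int sub)
{
    return (double)((x + y + sub) % 2);
}

/* Supersampling: shade EVERY sub-sample position, then average. */
static double ssaa_pixel(int x, int y)
{
    double sum = 0.0;
    for (int s = 0; s < SUBSAMPLES; ++s)
        sum += shade(x, y, s);            /* 4 shader evaluations per pixel */
    return sum / SUBSAMPLES;
}

/* Multisampling: shade ONCE per pixel and replicate the result to the
 * covered sub-samples; only geometry coverage is sampled at the high rate. */
static double msaa_pixel(int x, int y, int covered_subsamples)
{
    double c = shade(x, y, 0);            /* 1 shader evaluation per pixel  */
    return c * covered_subsamples / SUBSAMPLES;
}

int main(void)
{
    /* A pixel on a triangle edge, with 3 of 4 sub-samples covered. */
    printf("SSAA result: %.2f\n", ssaa_pixel(3, 5));
    printf("MSAA result: %.2f\n", msaa_pixel(3, 5, 3));
    return 0;
}
```

The point is that however similar the resolved images may sometimes look, one technique is doing four times the shading work of the other, so calling one by the other's name misrepresents what the hardware is actually doing--which is exactly the objection to labeling the UT2K3 filtering "trilinear."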
And NV1 did quadratic primitives for free.
Isn't it established (or at least insanely likely) that GF1's free trilinear was more a case of a buggy 2nd TMU/pipe? Even if not, free trilinear is an extremely bad design decision, considering how many textures aren't meant to receive trilinear even when it's properly enabled. There is a reason we haven't seen it since GF1.
Nothing is free in 3D (to quote Kristoff). Any misapprehension you may have along those lines is, well...a misapprehension...
BTW, like the NV30, the NV1 was a commercial failure.
Because trilinear reduces performance significantly. If Nvidia comes up with a method to get most of the IQ benefits of trilinear at a fraction of the performance hit, then by all means we should encourage it. As the purpose of trilinear is generally given as "removing the mipmap transitions from bilinear," it does seem a little silly to do trilinear over the entire mipmap range if it only matters for a small portion. Now, I'm guessing there may be other benefits to doing "FSTF" (full-screen trilinear filtering), perhaps in the realm of preventing texture aliasing--but I'm mostly guessing this because otherwise full trilinear would seem a somewhat stupid thing to do, in the same vein as doing full-screen anisotropic filtering instead of only oversampling those pixels at large anisotropic angles, and then only doing enough to keep under the Nyquist limit. So if trilinear actually prevents texture aliasing as well, I'd like to see some discussion of that over here, for god's sake. And if it doesn't, then this optimization looks long overdue more than anything.
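To put rough numbers on the "only matters for a small portion" point: a bilinear sample reads 4 texels, a trilinear sample reads 8 (4 from each of two mip levels), so full-screen trilinear roughly doubles per-sample texture traffic. A back-of-envelope estimate--the 20% band fraction below is an illustrative guess, not a measurement--shows where a narrow-band blend would get its speed back:

```c
#include <stdio.h>

int main(void)
{
    const double bilinear_fetches  = 4.0;  /* texels per sample              */
    const double trilinear_fetches = 8.0;  /* two mip levels, 4 texels each  */

    /* Assumed fraction of screen pixels that sit inside the narrow blend
     * band near a mip transition -- an illustrative guess, not a measurement. */
    const double band_fraction = 0.20;

    double full_trilinear = trilinear_fetches;
    double faux_trilinear = band_fraction * trilinear_fetches
                          + (1.0 - band_fraction) * bilinear_fetches;

    printf("full trilinear : %.1f fetches/sample\n", full_trilinear);
    printf("faux trilinear : %.1f fetches/sample (%.0f%% of full)\n",
           faux_trilinear, 100.0 * faux_trilinear / full_trilinear);
    return 0;
}
```

Under those made-up numbers the blended scheme averages 4.8 fetches per sample versus 8 for full trilinear--most of the bilinear-level performance with the transition-hiding blend still in place, which is presumably the whole attraction.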
Of course our evaluation of this hinges on the claim that what they're doing really does look much more like trilinear than bilinear even though the mipmaps show that its workload is a lot closer to bilinear than trilinear. Kyle and Brett both say this is the case, and have put up some reasonable evidence. I don't have access to an NV3x card, but many people on this forum do. I'd certainly appreciate more objective testing and some subjective second opinions on the issue instead of just more moans and flames.
The problem, again, is that it is only for UT2K3 that nVidia has tried to eliminate full trilinear filtering. In most everything else, if not everything else, nVidia still does full trilinear. As such, nVidia's driver behavior in UT2K3 in this regard is very much the exception, not the rule.
The simple answer as to why nVidia does not universally discard full trilinear filtering support in favor of the faux-trilinear employed for UT2K3 should be obvious--full trilinear produces better IQ than nVidia's performance-oriented compromise, and this is not lost on nVidia. The central question here is not whether nVidia's compromise is "almost as good" as full trilinear; the central question is why nVidia has coded its drivers to deliver a performance trilinear even when the application itself requests that the drivers provide full trilinear. And of course there's the question of why nVidia thinks this is needful for UT2K3 but apparently nothing else.
None of this excuses Kyle's ridiculous editorials, double standards, or forum fascism. In particular, banning you was about the most ridiculous move I can conceive of. But on the flip side, none of that stuff should excuse us from getting to the bottom of what seems to be a very interesting issue.
It's interesting only because nVidia has removed the option of full trilinear support from its drivers with respect to UT2K3, IMO.
And I realize that you may be looking at this issue wearing your reviewer's hat, in which case it is surely unethical of Nvidia to sneak in an optimization that overrides requested settings (although only to the degree that "quality" implicitly requests "full trilinear filtering"), only in certain highly benchmarked games, and without telling anyone about it. But if you leave that aside for a moment and put on your hardware enthusiast's hat, perhaps there's something worthwhile here?
/turns to crowd
Or am I just totally off base on this one???
I think you are reading way too much into it. nVidia is obviously not proposing "an alternative to full trilinear support" or anything like that. If that were the case, we'd see an option in the Detonator control panel allowing this technique for all 3D games. Instead, the truth seems much more mundane and, sadly, predictable: it's just a hack nVidia has put into its drivers for UT2K3 to help out its scores relative to R3xx when trilinear filtering is enabled in the application.