OMG HARDOCP REVIEW OF UT2003 AND FILTERING

Dave H said:
Both Kyle and Brett state that the transitions aren't visible even in motion. You seem to be implying the opposite.

That depends on the types of textures in use and the locations. Those on the Antalus map (the grass uses detail textures) aren't actually visible, others are – so it's neither true to say this is never an issue nor that it's always an issue in terms of visible IQ.

But I'd also like to know whether this is something one needs to actively search for to see it.

It's probably one of those things – you don't notice it until you do, then you always notice it!! ;)

In terms of how noticeable it is in gameplay, I don't know, as I've not really gone through every map to check.

Do you concur with this assessment? If so, at the least this suggests that the proper response is not to benchmark Nvidia's Quality against ATI's Performance, as some have suggested.

I've not explicitly tested this, but it's probably correct since there is a difference between full Bi and the mixed Bi/Trilinear modes.

And, I realize that you may be looking at this issue wearing your reviewer's hat, in which case it is surely absolutely unethical of Nvidia to sneak in an optimization that overrides requested settings (although only to the degree that "quality" implicitly requests "full trilinear filtering"), only in certain highly benchmarked games, and without telling anyone about it. But if you leave that aside for a moment and put on your hardware enthusiast's hat, perhaps there's something worthwhile here?

Dave, have a search in this forum – I've actually said this a number of times: the method is actually an exceptionally good one for removing most of the obvious issues with Bilinear, whilst still reducing the performance hit from Trilinear; you can get close to Trilinear quality in many situations, but at a lower cost. The issue I have with the entire concept is that it is open to abuse.

The entire point of NVIDIA's texture slider is to put these types of controls in the hands of the user – they should be the ones taking control of whether they want full Trilinear or reduced Trilinear at higher performance; the IHVs shouldn't take that away from the user entirely (especially if it can be noticed in game). The point of starting that other texture thread is that ATI actually appear to have exactly the same types of filtering methods in 9600, but they have given it to the users to control – the default "Highest Quality" gives full trilinear while the other modes reduce the level of Trilinear.
 
Doomtrooper said:
The problem is Nvidia is disabling what you set in your control panel, on your $500 video card. I'm sorry, changing what you select for quality without your permission is wrong.

Well, technically what you set in your control panel is "quality image quality" or something like that. The control panel doesn't mention nor guarantee trilinear. Nor does it offer an application preference setting.

Incidentally, the R3x0 doesn't do full trilinear on all textures when you set it for "quality image quality" (or is that "quality filtering"?), either. No, it doesn't affect most games, but it happens to affect UT2003. Of course, the ATI driver does have an application preference setting.

With the Anti-detector script the 44.03 driver does apply what you set in the control panel... so obviously Nvidia detects and lowers quality for framerate... like a 30% increase!!

Obviously. That's not in doubt. (Although the 30% performance hit may be attributable to more than just forcing full trilinear. In fact it almost certainly is: trilinear will incur a pretty big hit, particularly in a texture-intensive game like UT2003; but 30% seems unreasonably large.)
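
For what it's worth, here's a quick back-of-the-envelope on why the hit can land anywhere in that range. The texture-bound fractions below are invented purely for illustration (they aren't measured from UT2003 or any particular card); the only real assumption is that trilinear samples two mip levels (8 texels) where bilinear samples one (4), so texture-fetch cost roughly doubles on hardware without single-cycle trilinear.

Code:
# Back-of-envelope only: the texture-bound fractions are made up for
# illustration, not measured from UT2003 or any particular card.
bilinear_cost = 1.0      # normalised texture-fetch time per pixel (4 texels/sample)
trilinear_cost = 2.0     # two mip levels sampled (8 texels/sample)

for texture_bound in (0.3, 0.5, 0.7):
    other = 1.0 - texture_bound                      # everything that isn't texture fetch
    fps_bi = 1.0 / (other + texture_bound * bilinear_cost)
    fps_tri = 1.0 / (other + texture_bound * trilinear_cost)
    drop = (fps_bi - fps_tri) / fps_bi * 100
    print(f"{texture_bound:.0%} texture-bound -> roughly {drop:.0f}% slower with full trilinear")

That prints drops of roughly 23%, 33% and 41%, so a 30% gap is plausible if the scene is heavily texture-bound, but anything much beyond that starts to suggest the detection is doing more than just dropping full trilinear.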
 
Dave, DT, FYI from the [H] forums.

"As for leeching images, you have hit the nail on the head. The work is NOT yours to use. We have been sued over exactly what you are doing. Should beyond3D wish to give you the right to use any and all of their content and also allow you to leech their bandwidth, they will need to send a written letter stating so. I gave Dave Bauman my phone number last week, he can call and discuss the fine points should he wish."

Can you please let him know :) it's hard to prove him wrong when he keeps removing the proof :p
 
Dave H said:
This is really the crux of the issue. AFAICS, you only posted one set of regular screenshots in that thread. The mipmap transitions are clearly visible, although perhaps not what I'd call egregious in the stills (in motion I'd imagine they'd stand out quite a bit). [H] has posted a decent number of screens, and I can only see mipmap transitions on one of them, and even then it's very subtle. Several of them are obviously poorly chosen to illustrate the issue, but some of them seem like they should be adequate.

Both Kyle and Brett state that the transitions aren't visible even in motion. You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly? I certainly don't buy the pablum that image quality doesn't matter in fast-paced shooters (if that's the case, why don't you save your $499 and play Quakeworld??). But I'd also like to know whether this is something one needs to actively search for to see it. Particularly if one isn't an eagle-eyed reviewer and hardware expert.

I can't speak for Dave B., but...

Dave H, I think you might be getting ahead of yourself here. First you say, "The mipmap transitions are clearly visible..." and then, "in motion I'd imagine they'd stand out quite a bit."

Then you say, "Both Kyle and Brett state that the transitions aren't visible even in motion" and "You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly?"

In the first paragraph you correctly realize that if you can see a mipmap band in a screen shot you can bet it's visible when the game is in motion. But in your second paragraph you state that Dave B. only "implies" the visible mipmap boundaries which, in your first paragraph, you state "... are clearly visible..." and of which you say, "in motion I'd imagine they'd stand out quite a bit." Didn't you more or less answer your own question as to what Dave B. "implied" with the observations you stated in your first paragraph? IE, the screenshots by Dave B. didn't just "imply" it, they proved it. Right?...;)

Also, what I got out of [H]'s text was not so much that they "couldn't see any mipmap boundaries" when playing the game, but rather that "what mipmap boundaries you can see while playing the game don't matter since you are running around fragging people and don't have time to judge the IQ." IE, what I got from [H] was simply that "it didn't matter" if mipmap boundaries were occasionally visible since in all likelihood "you wouldn't notice them" when playing the game because your attention would be elsewhere.

Second: about the most intriguing thing to come out of all this was a throwaway comment made, IIRC by Kyle, in the [H] forums, to the effect that he'd enabled Performance (i.e. bilinear) on the R3x0, and that the mipmap transitions were immediately obvious and annoying, in a way that is obviously not the case with whatever the crap Nvidia is doing.

Do you concur with this assessment? If so, at the least this suggests that the proper response is not to benchmark Nvidia's Quality against ATI's Performance, as some have suggested. It also suggests that Nvidia is doing something other than straight bilinear (which, in truth, was already suggested by the colored mipmap pics), and even perhaps something a bit more complicated than just mostly bilinear with a brief bit of mipmap blending just as the transition approaches.

A couple of problems with this approach...

Yes, it's accurate to state that a direct comparison of nVidia's faux-trilinear with ATi's bilinear would be incorrect from an IQ standpoint. It is also, therefore, equally incorrect to compare nVidia's faux-trilinear with ATi's full trilinear, for the same reason. [H] incorrectly does this.

Second, it should not be forgotten that nVidia has not stopped doing full trilinear in other games--and that this situation only applies to UT2K3. As someone else stated, if nVidia does full trilinear in Unreal 2, and pretty much everything else, why is it only in UT2K3 that nVidia feels it is necessary or beneficial to eliminate the option of full trilinear filtering (regardless of whether faux-trilinear is made available as an option or not)? Best answer so far is that nVidia has singled out UT2K3 in this regard because its associated canned timedemos (Antalus Fly-By, etc.) are so often used for benchmarking by hardware review sites.

Clearly, nVidia feels there is a difference between its faux-trilinear and full trilinear filtering, else the only thing the nVidia drivers would provide for every application would be its faux-variety of trilinear filtering. Right?

What are your thoughts on this? Despite all the condemnation Kyle and crew are receiving for this latest article, IMO it's actually reasonably convincing. Not, of course, that disabling trilinear secretly on an application basis is at all kosher, or that this is a fair comparison to R3x0 with filtering set to application preference. But at least that this may be a reasonably smart approach to minimize the trilinear hit at a seemingly negligible IQ cost, and even one that we should be encouraging.

Which would be fine and dandy provided nVidia did not displace the option of full trilinear filtering in the game with this faux-trilinear performance-oriented compromise. In fact, it would be even more fine and dandy if nVidia integrated the control of this mode into its control panel directly so that an end user could choose it, without omitting full trilinear capability if the end user desires that, instead, in all 3D games.

Agreed that those comments on the [H] forums to the effect that "GPUs are so complicated these days" are ignorant drivel. Not agreed that IHVs shouldn't be looking for ways to get most of the benefit at less of the (significant) cost. I mean, supersampling is a fundamental antialiasing process, and it still confers some quality benefits over MSAA + AF in certain situations. That doesn't mean ditching SSAA for MSAA wasn't one of the most significant improvements in the last several years.

SSAA and MSAA are simply different methods of doing the same thing--FSAA. Within the SSAA and MSAA camps there are greatly different implementations of either technique among IHVs. The difference here would be the rough equivalent of an IHV claiming to do SSAA while in reality doing MSAA, and claiming it was legitimate to call it "SSAA" because "most of the time" it looked "almost as good." Problem is that regardless of how good it looks there would be no justification for calling MSAA SSAA, as the two aren't the same. Likewise, whatever nVidia's doing in UT2K3 it's not the same as full trilinear, and "looks almost as good" simply doesn't count. Whatever is being done is being done at the expense of full trilinear support in the game, and that's the problem. The fact that this situation seems unique to UT2K3 merely complicates the matter even further.

And NV1 did quadratic primitives for free. :) Isn't it established (or at least insanely likely) that GF1's free trilinear was more a case of a buggy 2nd TMU/pipe? Even if not, free trilinear is an extremely bad design decision, considering how many textures aren't meant to receive trilinear even when it's properly enabled. There is a reason we haven't seen it since GF1.

Nothing is free in 3D (to quote Kristoff). Any misapprehension you may have along those lines is, well...a misapprehension...;) BTW, like nv30, nv1 was a failure commercially.

Because trilinear reduces performance significantly. If Nvidia comes up with a method to get most of the IQ benefits of trilinear at a fraction of the performance hit then by all means we should encourage it. As the purpose of trilinear is generally given as "removing the mipmap transitions from bilinear", it does seem a little silly to do trilinear over the entire mipmap if it only matters for a small portion. Now, I'm guessing there may be other benefits to doing "FSTF" (full-screen trilinear), perhaps in the realm of preventing texture aliasing. But I'm mostly just guessing this because otherwise full trilinear would seem a somewhat stupid thing to do. In the vein of doing full screen anisotropic filtering, instead of only oversampling those pixels at large anisotropic angles, and then only doing enough to keep under the Nyquist limit. So if trilinear actually prevents texture aliasing as well, I'd like to see some discussion of that over here for god's sake. And if it doesn't, then this optimization looks long overdue more than anything.

Of course our evaluation of this hinges on the claim that what they're doing really does look much more like trilinear than bilinear even though the mipmaps show that its workload is a lot closer to bilinear than trilinear. Kyle and Brett both say this is the case, and have put up some reasonable evidence. I don't have access to an NV3x card, but many people on this forum do. I'd certainly appreciate more objective testing and some subjective second opinions on the issue instead of just more moans and flames.

The problem, again, is that it is only for UT2K3 that nVidia has tried to eliminate full trilinear filtering. In most everything else, if not everything else, nVidia still does full trilinear. As such, nVidia's driver behavior in UT2K3 in this regard is very much the exception, not the rule.

The simple answer as to why nVidia does not universally discard full trilinear filtering support in favor of the faux-trilinear employed for UT2K3 should be obvious--full trilinear support produces better IQ than nVidia's performance-oriented compromise, and this is not lost on nVidia. The central question here is not whether nVidia's compromise is "almost as good" as full trilinear, the central question is why has nVidia coded its drivers to deliver a performance trilinear, even when the application itself requests the drivers provide full trilinear support? And of course there's the question of why nVidia thinks this is needful for UT2K3 but apparently nothing else?

None of this excuses Kyle's ridiculous editorials, double-standards, or forum fascism. In particular banning you was about the most ridiculous move I can conceive of. But on the flipside none of that stuff should excuse us from getting to the bottom of what seems to be a very interesting issue.

It's interesting only because nVidia has removed the option of full trilinear support from its drivers with respect to UT2K3, IMO.

And, I realize that you may be looking at this issue wearing your reviewer's hat, in which case it is surely absolutely unethical of Nvidia to sneak in an optimization that overrides requested settings (although only to the degree that "quality" implicitly requests "full trilinear filtering"), only in certain highly benchmarked games, and without telling anyone about it. But if you leave that aside for a moment and put on your hardware enthusiast's hat, perhaps there's something worthwhile here?

/turns to crowd

Or am I just totally off base on this one???

I think you are reading way too much into it. nVidia is obviously not proposing "an alternative to full trilinear support" or anything like that. If that were the case we'd see an option in the Detonator control panel allowing this technique for all 3D games. Instead, the truth seems much more mundane and, sadly, predictable: it's just a hack nVidia's put into its drivers for UT2K3 to help out its scores relative to R3xx in UT2K3 when trilinear filtering support is enabled in the application.
 
Just to clarify:

I'm in no way happy with or defending Nvidia's behavior in sneaking this "adaptive trilinear" into the drivers without informing anyone or offering the ability to disable it for full trilinear. But frankly it's not like Nvidia doing untoward things with their drivers in order to get a leg up in benchmarks is something new or surprising.

What I'm interested in is the notion that, in this case, the "optimization" really might be a pretty legitimate optimization; one with IQ tradeoffs in certain situations, yes, but ones that are well worth the added performance the majority of the time. I'm interested in examining more fully the space of those tradeoffs. And I'm wondering why we haven't seen more of this "trilinear only near mipmap boundaries" before. It seems like a clever idea--perhaps even the Right Thing to do--to me. (Yes, it was part of the original "Aggressive" and "Balanced" settings in the first GFfx drivers, but there it was tacked onto a horrid simplification of the LOD calculation, such that the resulting mode wasn't much worth using. This does look worth using...although the 8xAF results, from [H] and AMDMB, do give one pause.)
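
To make concrete what I mean by "trilinear only near mipmap boundaries", here's a toy sketch of how such a scheme could work. This is not a claim about what the 44.03 driver actually does (NVIDIA hasn't documented it); the band width and the blend_fraction helper below are invented purely for illustration.

Code:
import math

def mip_lambda(texels_per_pixel, lod_bias=0.0):
    """Continuous mip level (lambda) for a given texel-to-pixel ratio."""
    return math.log2(max(texels_per_pixel, 1e-6)) + lod_bias

def blend_fraction(lam, band=0.25):
    """Weight given to the coarser of the two mip levels.
    Full trilinear would just use the fractional part of lam; the 'adaptive'
    idea is to confine blending to +/- band around the point where plain
    bilinear would switch mips (frac = 0.5) and snap to a single mip elsewhere."""
    frac = lam - math.floor(lam)
    if frac < 0.5 - band:
        return 0.0                                 # pure bilinear, detailed mip
    if frac > 0.5 + band:
        return 1.0                                 # pure bilinear, coarser mip
    return (frac - (0.5 - band)) / (2 * band)      # short ramp across the seam

for tpp in (1.0, 1.3, 1.5, 1.8, 2.0):
    lam = mip_lambda(tpp)
    print(f"texels/pixel {tpp:.1f}: full-tri weight {lam - math.floor(lam):.2f}, "
          f"adaptive weight {blend_fraction(lam):.2f}")

The band width is the obvious knob: wide enough to hide the transition in motion, narrow enough that most of the screen stays at single-mip bilinear cost. Which is presumably where both the IQ-versus-performance tradeoff and the potential for abuse live.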

Finally, I just can't get myself terribly worked up at Kyle and Brett over this issue. They seem to have done a reasonable job investigating the issue, and come to reasonable conclusions. They appear to agree with all of us that Nvidia should offer the option for full trilinear in their drivers. It certainly doesn't seem like they'll withdraw the UT2003 results in their recent review, but it does appear that they'll at least offer some comments on the issue from now on, which seems like a proper response assuming one comes to the conclusion they have after this investigation.

The thing is that Wavey doesn't appear to agree that they've done a good job or reached the right conclusions. And as I trust him enormously and as he's seen the issue in action and I haven't (beyond a couple screenshots), I'd really like to see him (or any other forum members with NV3x cards) offer a more detailed examination of the IQ loss at stake here.
 
Personally, I do see what NVIDIA is doing as some sort of "optimization", but they are cheating while doing it. Don't make sense? Well, they're cheating because they used UT2003 and SamX's app and said to reviewers (i.e. they explained how to benchmark their cards) that "Quality" means proper trilinear. While this may be true in other games, it certainly isn't the case with UT2003... and they said that it should apply to UT2003. They lied, and hence cheated reviewers, the end result of which is the public being misled. This wouldn't have created the furore it seems to have if NVIDIA hadn't told the reviewers what they told them. If NVIDIA hadn't told reviewers this, and reviewers had then noticed what was happening with the 44.03 drivers in UT2003 and brought it to light, it would simply have been information about what's happening.

I am not particularly concerned if Brent or Kyle or whoever said that "We think that what NVIDIA is doing doesn't really affect IQ significantly to the point where it jumps out at you". I am concerned with the fact that NVIDIA lied to reviewers (or didn't update them, as the case may be).
 
Dave H said:
Well, technically what you set in your control panel is "quality image quality" or something like that. The control panel doesn't mention nor guarantee trilinear. Nor does it offer an application preference setting.

Well, all tests performed using the filtering test, and other games, show full trilinear.


Obviously. That's not in doubt. (Although the 30% performance hit may be attributable to more than just forcing full trilinear. In fact it almost certainly is: trilinear will incur a pretty big hit, particularly in a texture-intensive game like UT2003; but 30% seems unreasonably large.)

UT 2003 takes a serious performance hit with Trilinear AF, as shown here:

http://www.beyond3d.com/previews/nvidia/gffxu/index.php?p=8

44%
 
That's ok, the proof is there for people to see. It was only a matter of time... people need to wise up now and realize how much covering up is going on.

They banned Dave for exposing the truth, then banned me for the same... the actions of a dictator.
 
Reverend said:
Personally, I do see what NVIDIA is doing as some sort of "optimization", but they are cheating while doing it. Don't make sense? Well, they're cheating because they used UT2003 and SamX's app and said to reviewers (i.e. they explained how to benchmark their cards) that "Quality" means proper trilinear. While this may be true in other games, it certainly isn't the case with UT2003... and they said that it should apply to UT2003. They lied, and hence cheated reviewers, the end result of which is the public being misled. This wouldn't have created the furore it seems to have if NVIDIA hadn't told the reviewers what they told them. If NVIDIA hadn't told reviewers this, and reviewers had then noticed what was happening with the 44.03 drivers in UT2003 and brought it to light, it would simply have been information about what's happening.

I am not particularly concerned if Brent or Kyle or whoever said that "We think that what NVIDIA is doing doesn't really affect IQ significantly to the point where it jumps out at you". I am concerned with the fact that NVIDIA lied to reviewers (or didn't update them, as the case may be).

Makes perfect sense to me, and I agree. Where I disagree with [H] is in the characterization of this as some sort of glorified "alternative" to full trilinear filtering support, which it certainly isn't. Already this brash, apologist sentiment has affected the otherwise sound judgement of folks like Dave H., who is driven accordingly to grant it the inflated label of "adaptive trilinear"--Heh--which of course it isn't--except maybe in the sense that when the drivers detect that UT2K3 is being run they "adapt" and substitute a faux-trilinear for full trilinear support in the game when the game calls for full trilinear....;)

[H] has, as usual, managed to miss the point: the crime isn't in offering a performance alternative to full trilinear filtering--of course not. The crime is in substituting it for full trilinear support in the game--that's the issue that [H] has completely missed, IMO.
 
Doomtrooper said:
Yep I'm banned too :LOL:
<The Dig runs up and gives Doomtrooper a BIG fuzzy hug and a BIG wet kiss, in a very hetero-sexual and masculine he-man kind of way.>

Good gods did you last a lot longer there than I thought you would!

THANK YOU DOOMTROOPER, THANK YOU FOR YOUR POSTS THERE!!!!!

Sorry, I just had to get that out. Every time I was ready to punch my monitor 'cause of something extreme that K or B wrote, you were putting up just what I wanted to. ('Cept better, since you know more. ;) )

Thanks DT, thank you very much. :)
 
Doomtrooper said:
Yep I'm banned too :LOL:

OK....Hmmmm, let's see....a Purple Heart for you, one for Dave B., and...everybody else felled in the line of fire....! (I've only got about 20 of these...sorry...)

:D
 
When you move the determination of equivalency to descriptions purely dependent on the personal evaluation of the reviewer, you replace user information as a factor with whatever the reviewer can represent, which is by nature incomplete. Incomplete facts are a common tool for outright lying, and from what I've seen, Kyle seems to be trying to ensure that the facts are most assuredly and definitely left very incomplete indeed.

Separately, trust is an issue with regards to the people making the observation and the objectivity of how they are conducting that, and as discussed above, it does not seem to be something that Kyle is earning.

For example, the default for ATI is a boosted LOD and the default for UT 2k3 for high end configuration options is boosted LOD as well. Even aside from the other issues discussed above, did Kyle's comparison between ATI Performance mode and nVidia's tri/bi Quality mode for UT2k3 take this into account at all? The representation provided here doesn't make it seem so.

Given what the AMDMB article is saying, for example, LOD might be turned down as part of the application detection for nVidia...wouldn't that serve to soften mip map transitions? Even aside from that, I thought it was well established that bilinear mip map transitions were worsened for ATI with LOD defaults for both the ATI cp settings and the UT2k3 ini file settings?
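
To be concrete about the LOD point, here is a toy model; the distances, texels-per-pixel ratios and bias values below are all invented just to show the direction of the effect, not measured from any card or from UT2k3. The bias simply shifts the lambda used for mip selection, so it shifts where bilinear's hard switch between mip levels lands: a negative ("boosted") bias keeps the sharper mip longer, so the eventual switch is a bigger visual jump, while a positive bias blurs sooner and softens it.

Code:
import math

def mip_level(texels_per_pixel, lod_bias):
    lam = math.log2(max(texels_per_pixel, 1e-6)) + lod_bias
    return max(0, round(lam))      # nearest-mip selection, as with bilinear

for bias in (-0.5, 0.0, 0.5):
    prev, switches = None, []
    for d in range(1, 64):                  # distance steps along a flat floor
        lvl = mip_level(d / 8.0, bias)      # texels/pixel grows ~linearly with distance
        if lvl != prev:
            switches.append((d, lvl))
            prev = lvl
    print(f"bias {bias:+.1f}: mip switches at distances {switches[:4]}")

So which LOD setting each card/game combination was actually running at isn't a nitpick; it moves the transitions around and changes how abrupt they look, which is exactly the thing being "eyeballed".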

At the moment, I'm still wondering: how does bilinear really compare when competent evaluation of LOD settings is a factor...and whether such an evaluation was part of Kyle's representation of this issue?

This is related to whether the image quality observations have validity at all, even before the issue of trust in the personal evaluation of "eyeballing" them comes into play.

As for such trust, please note that Wavey's discussion of AF comparisons, as one example, doesn't depend on "trust", but on discussing these factors with specifics and encouraging users to establish and share their own determinations in these forums.

Given the [H] forum environment, I don't think [H] conducts itself the same way, and the viewpoint they are proposing seems to depend on selling the idea of "trust me" in place of that (and any dissenting viewpoint as well, AFAICS).

BTW, the questions above aren't rhetorical...please share your answers if you've formed an opinion.
 
demalion said:
When you move the determination of equivalency to descriptions purely dependent on the personal evaluation of the reviewer, you replace user information as a factor with whatever the reviewer can represent, which is by nature incomplete. Incomplete facts are a common tool for outright lying, and from what I've seen, Kyle seems to be trying to ensure that the facts are most assuredly and definitely left very incomplete indeed.

Separately, trust is an issue with regards to the people making the observation and the objectivity of how they are conducting that, and as discussed above, it does not seem to be something that Kyle is earning.

For example, the default for ATI is a boosted LOD and the default for UT 2k3 for high end configuration options is boosted LOD as well. Even aside from the other issues discussed above, did Kyle's comparison between ATI Performance mode and nVidia's tri/bi Quality mode for UT2k3 take this into account at all? The representation provided here doesn't make it seem so.

Given what the AMDMB article is saying, for example, LOD might be turned down as part of the application detection for nVidia...wouldn't that serve to soften mip map transitions? Even aside from that, I thought it was well established that bilinear mip map transitions were worsened for ATI with LOD defaults for both the ATI cp settings and the UT2k3 ini file settings?

At the moment, I'm still wondering: how does bilinear really compare when competent evaluation of LOD settings is a factor...and whether such an evaluation was part of Kyle's representation of this issue?

This is related to whether the image quality observations have validity at all, even before the issue of trust in the personal evaluation of "eyeballing" them comes into play.

As for such trust, please note that Wavey's discussion of AF comparisons, as one example, doesn't depend on "trust", but on discussing these factors with specifics and encouraging users to establish and share their own determinations in these forums.

Given the [H] forum environment, I don't think [H] conducts itself the same way, and the viewpoint they are proposing seems to depend on selling the idea of "trust me" in place of that (and any dissenting viewpoint as well, AFAICS).

BTW, the questions above aren't rhetorical...please share your answers if you've formed an opinion.
Yes. :)
 
Good point about LOD, D...I think it also goes to the broader question of why nVidia might seek to encrypt a driver set to defeat anti-detection scripts. If the driver is overflowing with such detections designed to subtly decrease IQ (and sometimes obviously decrease it) in many games in order to prop up general performance, such a defense would add up. I certainly think it's worth checking out. I fear though that nVidia's foray into such application-specific "optimizations" for nv3x driver sets might well be so extensive that we'll simply never catch them all.

As to your comments characterizing [H]'s "trust me" positioning, I agree, and can only say I'd love to be a fly on the wall at [H] because that's probably the only way I think I might eventually understand the position...;) IE, I think behind-the-scenes observation might be essential to understanding it.
 
Dave H said:
Finally, I just can't get myself terribly worked up at Kyle and Brett over this issue. They seem to have done a reasonable job investigating the issue, and come to reasonable conclusions.

People are annoyed because they completely dodge the most important question: Nvidia was intentionally screwing reviewers, them included. They make no mention of the fact that this is really a question of illegitimate application detection and that Nvidia made false claims of improved performance with UT 2003. There were no performance gains.

Had they mentioned in the article that they were screwed and the issue is about application detection and not about a "new" filtering method at all I would have been content. The article is misleading because it gives the reader the impression that Nvidia has come up with a "new" filtering method when in reality they just made a quality slider dysfunctional.
 
Hrm, I was banned too for discussing this article. Kyle's only response was "We have said what we have to say on it, and our reasoning is not changing. Thanks."

Close-minded, anybody?

I think I see a pattern with these bannings: everyone banned was speaking the truth and Kyle just won't accept it, so he bans you....
 