OMG HARDOCP REVIEW OF UT2003 AND FILTERING

Oblivious said:
I'm not familiar with UT2K3, but wouldn't it be possible to force bilinear (on purpose) in the driver or app, benchmark it, do it again with the same settings but with Unwinder's script running, and then compare the results to see if there's a difference in fps? That way you could isolate any adverse effects the script may have on legitimate optimizations from the performance difference of it forcing trilinear. Would this work?

Theoretically, it might. Except that Nvidia's drivers are not forcing bilinear filtering. They're forcing some sort of quasi-trilinear that only blends mipmaps close to the transition point. The overall surface isn't blended, but the mipmap lines are masked. With a good aniso algorithm this could end up looking just fine for most folks, but Nvidia needs to make it an -option-, not a requirement.
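
To make "only blends mipmaps close to the transition point" concrete, here is a minimal sketch of the two blend curves in Python. The band width, the transition sitting at a fractional LOD of 0.5, and the function names are all assumptions for illustration; the real heuristic in the Detonators is unknown.

```python
# Minimal sketch (my own construction, NOT NVIDIA's actual heuristic) of
# full trilinear vs. a "quasi-trilinear" that only blends near the mip
# transition. Assumes nearest-mip selection would switch levels at a
# fractional LOD of 0.5.
import math

def trilinear_weight(lod):
    """Full trilinear: the blend fraction between mip floor(lod) and
    floor(lod)+1 ramps smoothly across the whole mip band."""
    return lod - math.floor(lod)

def quasi_trilinear_weight(lod, band=0.2):
    """Hypothetical masked trilinear: pure bilinear (weight 0 or 1)
    except in a narrow band straddling the transition at f = 0.5."""
    f = lod - math.floor(lod)
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if f <= lo:
        return 0.0                   # only mip N is sampled
    if f >= hi:
        return 1.0                   # only mip N+1 is sampled
    return (f - lo) / (hi - lo)      # compressed blend over the band
```

Everywhere outside the band the hardware is effectively doing plain bilinear from a single mip level, which is where the fill-rate saving comes from; the band just masks the seam that pure bilinear would show.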
 
That's what happens when you use the Quality image setting, but what I'm wondering is what the Performance and High Performance settings do. Do these force bilinear or is it a lower form of hacked up tri?

I'm wondering if there's a way to force the card and the game to do bilinear on everything (none of that quasi-trilinear stuff). It would seem obvious to me that the game would allow this, since there are cards out there that only do bilinear (8500), but I want to make sure.

If it is possible to do just bilinear filtering, then someone with a 5900 could test Unwinder's script to see if it disables something important to the performance of the game (see my previous post). Then we would know whether the hit in performance Dave B. noticed earlier is due to the 5900 being forced to do trilinear, whether it's from disabling something important and legitimate in the drivers, or whether it's a combination of both.

Of course, I could be way off base here. Anyone care to comment?
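
For what it's worth, here is that isolation logic spelled out as a sketch. The four-run structure and `run_benchmark` are hypothetical stand-ins for whatever timedemo harness you would actually use:

```python
# Hypothetical four-run experiment separating the cost of real trilinear
# from any side effects of the anti-detection script itself.
# run_benchmark is a stand-in for an actual UT2003 timedemo harness.

def isolate_costs(run_benchmark):
    fps_bi         = run_benchmark(filtering="bilinear",  script=False)
    fps_bi_script  = run_benchmark(filtering="bilinear",  script=True)
    fps_tri        = run_benchmark(filtering="trilinear", script=False)
    fps_tri_script = run_benchmark(filtering="trilinear", script=True)

    # If the script disables legitimate optimizations, that should show
    # up even with bilinear forced, where no filtering is being swapped.
    script_side_effect = fps_bi - fps_bi_script

    # Whatever drop remains under trilinear is then attributable to the
    # card actually having to do the full trilinear workload.
    true_trilinear_cost = (fps_tri - fps_tri_script) - script_side_effect
    return script_side_effect, true_trilinear_cost
```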
 
This question was asked on the Hardforums:

I'm just wondering...
Why did you guys ban Dave Baumann's IP from the [H]ardForums? Isn't it a bit childish to ban the owner of one of the most respected hardware sites out there over an argument?

This was Kyle's response:

Dave is not part of the community here as he does not spend time here or post here. He only comes here to push his own agenda. He has his own forums for that.

I have given him my phone number and invited him to call should he wish to discuss it. He has not done that.

At least he finally admitted to banning him... :rolleyes:
 
I have given him my phone number and invited him to call should he wish to discuss it. He has not done that.

Can I have Kyle's phone number? :LOL: ;)
I promise I won't make smutty comments on the line. ;)

On a more serious note, Dave is only trying to educate people; I hope he does keep pushing that agenda.
 
As I see it, the degree of the performance differences revealed by using anti-detection scripts is irrelevant. I don't think anybody is saying that the whole performance drop is the result of disabling optimizations, whether valid or invalid. The whole point of testing with anti-detector is to ascertain which applications are being detected.

Clearly, some or all of the performance discrepancy is from the dialing down of IQ in UT2003's case. Some of the performance drop may be a side effect of anti-detector. Unwinder (I think) has claimed otherwise, but he may be wrong, and DV may be right. But again, it is also clear that IQ is being affected by NVIDIA, and performance is also going up. This is very sensible logic, as it should be obvious that NVIDIA would not spend time hacking away at IQ in one game that is heavily benchmarked in reviews if there were no apparent and immediate gain.
 
Scorched said:
This was Kyle's response:

Dave is not part of the community here as he does not spend time here or post here. He only comes here to push his own agenda. He has his own forums for that.

I have given him my phone number and invited him to call should he wish to discuss it. He has not done that.

At least he finally admitted to banning him... :rolleyes:

Yes, I know. I'd like to know what agenda this is - the same one as AMDMB?

As for phoning him - he's in the US and I'm in the UK; I'm not really bothered enough about not posting on their forums to waste money on it. Email will suffice.
 
StealthHawk said:
As I see it, the degree of the performance differences revealed by using anti-detection scripts is irrelevant. I don't think anybody is saying that the whole performance drop is the result of disabling optimizations, whether valid or invalid. The whole point of testing with anti-detector is to ascertain which applications are being detected.

Exactly. I can understand the reluctance to use AntiDetector to measure performance differences, but if it's used carefully I see no harm in using it to check for image quality differences, especially when the difference is as obvious as it is in the case of Unreal Tournament 2003.

Regardless of how you feel about AntiDetector, the fact remains that it seems to be the only way to get full trilinear in UT2003 on GeForceFX cards at present. If you are examining the image quality in that particular game in the way that [H] have, it seems prudent to me to use AntiDetector to show the game 'the way it's meant to be played' for comparison purposes.
 
I thought you lived in the USA. Anyway, I can call you if you wish? :D

DaveBaumann said:
Scorched said:
This was Kyle's response:

Dave is not part of the community here as he does not spend time here or post here. He only comes here to push his own agenda. He has his own forums for that.

I have given him my phone number and invited him to call should he wish to discuss it. He has not done that.

At least he finally admitted to banning him... :rolleyes:

Yes, I know. I'd like to know what agenda this is - the same one as AMDMB?

As for phoning him - he's in the US and I'm in the UK; I'm not really bothered enough about not posting on their forums to waste money on it. Email will suffice.
 
Heh, it's kinda amusing that Kyle closed a 21-page thread (the UT2K3 filtering article) saying that it was too long and should be continued in another thread. He then closed that second thread when it had barely hit its second page.

Kyle is really working overtime on damage control, the way he bans people, replies to posts with mostly semantics, closes threads, and deletes countless posts.

I don't feel sorry for the poor bastard, from what I've heard of him and read in a few of his replies in those threads. He really had it coming. Kinda ironic that his site is getting so many hits for his FUD.
 
HardOCP have come a long way since their early NV30 vs. R300 review, where they clearly demonstrated, using UT2003 images, that the NV30 didn't perform AA on the horizontal axis and its offsets.
I would imagine that if they had found that today it would have been ignored or classified as unimportant, as it doesn't affect IQ either!
 
The question has been raised before, but now that we have Mr Vogel's attention I'd like to take the opportunity to ask if this is the way UT2K3 is meant to be played, or if Epic has issues with NV's trilinear filtering. Has Epic been in contact with NV regarding this issue? What is their stance?

Thanks,
Ollo
 
DaveBaumann said:
Scorched said:
This was Kyle's response:

Dave is not part of the community here as he does not spend time here or post here. He only comes here to push his own agenda. He has his own forums for that.

I have given him my phone number and invited him to call should he wish to discuss it. He has not done that.

At least he finally admitted to banning him... :rolleyes:

Yes, I know. I'd like to know what agenda this is - the same one as AMDMB?

As for phoning him - he's in the US and I'm in the UK; I'm not really bothered enough about not posting on their forums to waste money on it. Email will suffice.
PM me the number, I'll give him a few words for ya... :bleh:
 
Dave is not part of the community here as he does not spend time here or post here. He only comes here to push his own agenda. He has his own forums for that.

I find it funny to see Kyle Bennett talking about someone pushing their agenda. Kyle is pretty quick to question everyone else's motives. I guess stating the facts is now an agenda. You have to ask yourself: what is Kyle's agenda?
 
WaltC said:
Dave H, I think you might be getting ahead of yourself here. First you say, "The mipmap transitions are clearly visible..." and then, "in motion I'd imagine they'd stand out quite a bit."

Yes, referring to one screenshot posted here and clearly chosen to highlight the effect.

Then you say, "Both Kyle and B
rett state that the transitions aren't visible even in motion " and "You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticable is it, exactly?"

Right, across a cross-section of representative environments throughout the game.

In the first paragraph you correctly realize that if you can see a mipmap band in a screen shot you can bet it's visible when the game is in motion. But in your second paragraph you state that Dave B. "implies" the visible mipmap boundaries which in your first paragraph you state "... are clearly visible..." and then, "in motion I'd imagine they'd stand out quite a bit." Didn't you more or less answer your question as to what Dave B. "implied" by your observations as you stated them in your first paragraph? IE, the screenshots by Dave B. didn't just "imply" it, they proved it. Right?...;)

No. The screenshot by Dave B. proves that the problem is visible in certain selected corners of the game. I only have the 2-level demo of UT2003, so I can't speak on this with any authority, but I get the feeling that the environments shown in Kyle's pics are more representative of the game as a whole. Indeed I don't think Wavey would deny that he picked that spot for the screenshot because it showed off the issue to greater effect than any old average spot in the game. What I'm wondering--and what I'd like Dave to comment on--is just how unrepresentative of the game as a whole that spot is.

It strikes me that Dave mentioned that both the IQ degradation and the performance benefits increase in levels where detail textures are more heavily used. (Which makes a lot of sense.) If it turns out that [H] chose areas of the game where few detail textures were in use, then their investigation would lose a lot of credibility. One would hope they at least checked out all the areas shown in those benchmarks that caused the controversy in the first place; if they didn't, then shame on them.

Again: I don't have the game, so I don't know the answers to these questions. Just one of the things I'm hoping Dave or anyone else with an NV3x will be able to quantify and comment on.

Also, what I got out of [H]'s text was not so much that they "couldn't see any mipmap boundaries" when playing the game, but rather that "what mipmap boundaries you can see while playing the game don't matter since you are running around fragging people and don't have time to judge the IQ." IE, what I got from [H] was simply that "it didn't matter" if mipmap boundaries were occasionally visible since in all likelihood "you wouldn't notice them" when playing the game because your attention would be elsewhere.

That's not what I got out of their text. What I got from it was that they couldn't see any mipmap boundaries, except perhaps very faintly, only with no AF, and only when looking explicitly for them. They are quite clear that in their opinion the differences in texture filtering quality (or perhaps LOD?) between the two cards (a factor that is almost never discussed) greatly overshadowed any difference due to visible mipmap boundaries.

A couple of problems with this approach...

Yes, it's accurate to state that a direct comparison of nVidia's faux-trilinear with ATi's bilinear would be incorrect from an IQ standpoint. It is also, therefore, equally incorrect to compare nVidia's faux-trilinear with ATi's full trilinear, for the same reason. [H] incorrectly does this.

Not at all. According to [H], the difference between Nvidia's faux-trilinear and ATI's bilinear was immediately noticeable, distracting, and caused an obvious degradation in output quality. Whereas, according to them, the difference between faux-trilinear and full trilinear was not at all noticeable unless one went looking for it (and usually not even then), and even then was not as noticeable as other IQ differences which are commonly accepted in "apples-to-apples" comparisons.

Should we not compare R3x0 to NV3x with no AF because it is generally agreed that NV3x has slightly better texture quality at those settings? Should we not compare NV3x to R3x0 at 8xAF with UT2003 and 44.03 drivers because both [H] and AMDMB found the R3x0 has significantly better texture quality on certain polygons at those settings? (Possibly...but then what are you going to compare?) Should we not compare R3x0 to NV3x at 8xAF in other games/with other drivers, because previous comparisons have generally found that NV3x has slightly better texture quality at those settings? Should we not compare NV3x to R3x0 at 4xMSAA? (I know your feelings on the subject, but not very many people agree.)

No. We should try to set up as fair a comparison as possible, first of all. But then, in addition to providing fps numbers, we should
  • Provide screenshots and/or other means of verification so that readers can decide for themselves how fair the comparison is and whether they prefer one card to another
  • Discuss in detail the reviewer's subjective opinion on the IQ impact of any output discrepancies
  • Discuss the technological reasons for any performance advantages the various cards may be getting by various known deviations from the competition and/or reference behavior, whether resulting in visible IQ differences or not
Obviously not all reviews are going to live up to this standard, particularly the last part, which is something I would only expect from a technologically competent reviewer like Dave. While [H] has not done much on the technology detail part, with the publication of this article they've done a beautiful job on the first two items. (And frankly, if [H] were to try to go into the details of what's going on they'd get everything wrong, so best that they didn't.)

Just to be clear--"IQ" doesn't refer to a checklist of features. IQ refers to how good the image looks on the screen. If faux-trilinear looks broadly the same as full trilinear, then by definition it has broadly the same IQ. Even though it may be rendering a significantly different workload. If that were the case, comparing faux-trilinear to full trilinear may not be proper in a technology oriented review like one here at B3D, but it would be perfectly appropriate in a game output oriented review like those at [H].

Second, it should not be forgotten that nVidia has not stopped doing full trilinear in other games--and that this situation only applies to UT2K3. As someone else stated, if nVidia does full trilinear in Unreal 2, and pretty much everything else, why is it only in UT2K3 that nVidia feels it is necessary or beneficial to eliminate the option of full trilinear filtering (regardless of whether faux-trilinear is made available as an option or not)? Best answer so far is that nVidia has singled out UT2K3 in this regard because its associated canned timedemos (Antalus Fly-By, etc.) are so often used for benchmarking by hardware review sites.

Another good answer is that UT2003 is heavily texture-filtering bound, and thus this optimization is particularly likely to address a bottleneck. Another good answer is that UT2003 is particularly full of environments where this optimization is not noticeable, which is why Kyle was able to post so many pictures without finding any problems. Or perhaps it is in use in other games, and we just don't know about it. How many games support turning on colored mipmaps to examine texture-filtering properties?
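
As an aside, the colored-mipmap trick is simple enough to sketch: replace each level of a texture's mip chain with a flat, distinct color, and any transition (and the width of any blend band) becomes plainly visible. A minimal version, assuming the Pillow imaging library and arbitrary colors and sizes:

```python
# Minimal colored-mipmap generator; assumes the Pillow library
# (pip install Pillow). Substituting these levels for a real mip
# chain makes mip transitions, and how wide any blend band is,
# immediately obvious in a renderer that accepts custom mip levels.
from PIL import Image

COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
          (255, 255, 0), (255, 0, 255), (0, 255, 255)]

def colored_mip_chain(base_size=256):
    levels, size = [], base_size
    for color in COLORS:
        levels.append(Image.new("RGB", (size, size), color))
        size = max(1, size // 2)   # each mip level is half the previous
    return levels
```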

Clearly, nVidia feels there is a difference between its faux-trilinear and full trilinear filtering, else the only thing the nVidia drivers would provide for every application would be its faux variety of trilinear filtering. Right?

Not necessarily. See above.

SSAA and MSAA are simply different methods of doing the same thing--FSAA.

Yeah, and flat shading and photon mapping are simply different methods of doing lighting. Are you really trying to argue that the difference between SSAA and MSAA causes less of a subjective change in the output than this faux-trilinear does vs. full trilinear in UT2003? :?

The difference here would be the rough equivalent of an IHV claiming to do SSAA while in reality doing MSAA, while claiming it was legitimate to call it "SSAA" because "most of the time" it looked "almost as good." Problem is that regardless of how good it looks, there would be no justification for calling MSAA SSAA, as the two aren't the same. Likewise, whatever nVidia's doing in UT2K3, it's not the same as full trilinear, and "looks almost as good" simply doesn't count. Whatever is being done is being done at the expense of full trilinear support in the game, and that's the problem.

Where did Nvidia claim they were doing full trilinear? All they promised was "quality image quality"!

Now, Rev has suggested that Nvidia did claim in their materials for reviewers that Quality == trilinear, in which case they are reneging on a promise made to reviewers. I don't want to underplay the fact that that's a really bad thing. Bad Nvidia!

But obviously it's nothing new in the context of Nvidia's behavior over the past year or so. Of course it is a problem in that it has lead to a bunch of reviews incorrectly assuming that NV3x Quality was performing the same workload as R3x0 Quality...although the latter turned out not to be doing trilinear on UT2003 detail textures either, let's not forget.

All of this highlights the need for reviewers to be vigilant that their benchmarks are really testing what they expect. In the case of a technically oriented review like those at B3D, that means practicing due diligence to make sure that benchmarked configurations are undergoing broadly the same workload (up to API-conformance invariance). In the case of a game experience oriented review like those at [H], that means practicing due diligence to make sure that benchmarked configurations are producing output of broadly the same subjective quality. Perhaps they should search for and comment on game-specific optimizations so that their audience realizes that results may not be applicable to all games, but that is generally true even without drivers using game-specific settings, and tends to undercut the fallacy that game experience oriented reviews are really all that useful.

But I digress. The point is not that Nvidia was misleading. Everyone knows that. Everyone agrees on that. I'm not defending Nvidia.

The point is that, having discovered that, [H] was obliged only to determine the IQ impact of this optimization, and thus determine whether the assumption underlying their benchmarks--that NV35 and R350 offer comparable image output when running UT2003 at the settings benchmarked--was actually false. They did an apparently quite thorough investigation, and came to the apparently correct determination that the IQ actually is comparable at those settings. And you and others are condemning them for their determination, despite offering absolutely no evidence or reasoning why it was incorrect for [H]'s purposes as a game experience oriented review site.

And NV1 did quadratic primitives for free. :) Isn't it established (or at least insanely likely) that GF1's free trilinear was more a case of a buggy 2nd TMU/pipe? Even if not, free trilinear is an extremely bad design decision, considering how many textures aren't meant to receive trilinear even when it's properly enabled. There is a reason we haven't seen it since GF1.

Nothing is free in 3D (to quote Kristof). Any misapprehension you may have along those lines is, well...a misapprehension...;) BTW, like NV30, NV1 was a failure commercially.

Heh. I know that; that was essentially the subtext of my admonition to Wavey. Of course using quadratics as the primitives for NV1 wasn't free; it gutted the performance of the entire part. (And believe me, compared to NV1, NV30 was the success of the decade.)

Neither was trilinear really "free" on the GF1. Given that the refresh part, GF2, mysteriously made a 4x2xbilinear part out of the 4x1xtrilinear of the GF1, it seems extremely likely that GF1 was designed to be 4x2xbi, but for some reason--whether a bug or just some constraint that prevented them from putting in or validating that last little bit of logic functionality--the second TMU in each pipe was unable to read from a different texture than the first, hence its usefulness was restricted to sampling and filtering a second mipmap of the same texture dealt with by the first TMU.

Having said that, and in no way meaning to contradict Kristof's words of wisdom, the notion that nothing comes for free in 3D is an emergent property that just happens to almost always be true, rather than a fundamental law. And it only holds true when the people who have come before you have done their jobs right and not left any pure performance wins on the table. That is, the only reason there are no new techniques to be had that are all advantage and no disadvantage vs. existing methods is that the people who came up with the existing methods were not idiots.

But in the case of full trilinear, I'm wondering why not. I'm sure someone on this board can fill in my flimsy knowledge: why should we be sampling from two mipmaps even when the calculated LOD is close to the mipmap LOD (i.e. around the middle of a mipmap band)? If the point of trilinear is merely to avoid visible mipmap transitions, what on earth is wrong with just doing trilinear in a narrow range around the transitions and leaving the rest of the image bilinear? Is there some texture aliasing problem that trilinear helps? Or what?

Surely this faux-trilinear is not a new idea. But what exactly is wrong with it, beyond not conforming to the definition in Fundamentals of Computer Graphics as per Tim Sweeney's email?

The problem, again, is that it is only for UT2K3 that nVidia has tried to eliminate full trilinear filtering. In most everything else, if not everything else, nVidia still does full trilinear. As such, nVidia's driver behavior in UT2K3 in this regard is very much the exception, not the rule.

The simple answer as to why nVidia does not universally discard full trilinear filtering support in favor of the faux-trilinear employed for UT2K3 should be obvious--full trilinear support produces better IQ than nVidia's performance-oriented compromise, and this is not lost on nVidia. The central question here is not whether nVidia's compromise is "almost as good" as full trilinear, the central question is why has nVidia coded its drivers to deliver a performance trilinear, even when the application itself requests the drivers provide full trilinear support? And of course there's the question of why nVidia thinks this is needful for UT2K3 but apparently nothing else?

Well, we should hardly forget that this shortcut is being applied only in the case of UT2003's detail textures. Presumably there is something about extremely high-resolution textures that makes faux-trilinear less noticeable. It is also obvious why cutting down on trilinear work on them yields a much higher speedup.

But on the flipside none of that stuff should excuse us from getting to the bottom of what seems to be a very interesting issue.

It's interesting only because nVidia has removed the option of full trilinear support from its drivers with respect to UT2K3, IMO.

Guess this is an agree-to-disagree kind of thing. But honestly, aren't you the least bit intrigued that they managed to find such a significant performance increase on the table without noticeably affecting output quality? If not, I'd suggest that you're more interested in the graphics market as morality play than for its technological content. (Not that there's anything wrong with that.)

I think you are reading way too much into it. nVidia is obviously not proposing "an alternative to full trilinear support" or anything like that. If that were the case we'd see an option in the Detonator control panel allowing this technique for all 3D games. Instead, the truth seems much more mundane and, sadly, predictable: it's just a hack nVidia's put into its drivers to help out its UT2K3 scores relative to R3xx when trilinear filtering is enabled in the application.

These sorts of hacks have always existed, are used by Nvidia, ATI and every other IHV [Q: then why doesn't Unwinder's anti-detect impact ATI performance on more games? A: we have no proof it is detecting all game-specific optimizations in either ATI's or Nvidia's drivers], and are 100% commonly accepted practice so long as output conformance is not broken. Here we have a situation where the output is not conformant, but the difference is apparently not subjectively noticeable to the end-user. But this is far from the only example of that sort of optimization either.

All those other optimizations are uncontroversial, or at least unremarked upon. I'm not saying this sort of thing shouldn't be looked into. In fact, they should be looked into--and that's exactly what [H] has done. They've found that in this case the optimization doesn't impact subjective output quality, which, by the standards of their reviews, is all that matters.

They're also calling on Nvidia to make this optimization selectable in the drivers, and have indicated that Nvidia will be doing exactly that. The fact that it wasn't selectable is bad on Nvidia's part, and [H] has criticized Nvidia for that.

The only thing they haven't done is the only thing that would satisfy you, namely disqualify all Nvidia products from consideration because Nvidia has engaged in slimy behavior.

That's the problem with viewing the realtime graphics ASIC industry as a morality play. It may be fun for a while. But the actors would rather view it as reality, and thus they're inevitably going to disappoint you.
 
I only have the 2-level demo of UT2003, so I can't speak on this with any authority, but I get the feeling that the environments shown in Kyle's pics are more representative of the game as a whole.

The points used were only at the spawn points, so it will depend on the number of detail textures, and the types of textures, at that point only. The spot I used was well away from a spawn point because there weren't any detail textures close to the spawn point.

Where did Nvidia claim they were doing full trilinear? All they promised was "quality image quality"!

Dave, I went over to Santa Clara for a day's worth of meetings on the day of the 5900 launch - I sat through at least an hour of presentation going over the IQ changes of 44.03 and how it matches ATI, and how the "Quality" mode gives Trilinear but without the "Debugging" - I think I even have this recorded on my PDA still! (Also, given that they are doing these types of things in the Quality mode, this does in fact bring into question whether there really was any debug stuff going on.) NVIDIA also guided reviewers to use particular tools to highlight the IQ output under the various modes--and then that very output is removed, in UT2003 at least.

But in the case of full trilinear, I'm wondering why not. I'm sure someone on this board can fill in my flimsy knowledge: why should we be sampling from two mipmaps even when the calculated LOD is close to the mipmap LOD (i.e. around the middle of a mipmap band)? If the point of trilinear is merely to avoid visible mipmap transitions, what on earth is wrong with just doing trilinear in a narrow range around the transitions and leaving the rest of the image bilinear? Is there some texture aliasing problem that trilinear helps? Or what?

Think about it for a second, Dave - what does Trilinear do over Bilinear? Take more samples; by reducing the level of Trilinear you are reducing the number of samples taken per pixel, and what does undersampling result in? Aliasing.

I mentioned before that the Antalus level that was used (this is the grass-covered level that's used in the standard UT2003 benchmark, along with another level) does feature a detail map across the entire grass surface. The detail map is there to generate the grass detail - now, in this level it is actually more difficult to see the mipmap transitions introduced by the lowering of the filtering; however, because of the nature of the textures, you are more likely to notice increased texture aliasing when in motion.
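
To put the undersampling point in toy form: each mip level is a progressively low-passed version of the texture, and the trilinear blend crossfades detail smoothly as the LOD drifts in motion. A 1-D sketch (the sine "texture" and the attenuation model are my own simplifications, not how real mip chains are built):

```python
# Toy 1-D illustration of why cutting the trilinear blend reintroduces
# aliasing: detail that full trilinear fades in gradually instead
# arrives (and pops) all at once at the mip switch point.
import math

def mip_value(level, x):
    """Stand-in texture: high-frequency detail that each successive
    mip level attenuates, as box-filtered downsampling would."""
    freq = 32.0 / (2 ** level)   # representable detail halves per level
    return math.sin(freq * x)

def trilinear(lod, x):
    n, f = math.floor(lod), lod - math.floor(lod)
    return (1 - f) * mip_value(n, x) + f * mip_value(n + 1, x)

def nearest_mip(lod, x):
    """Bilinear-only fallback: snaps to whichever mip is closer."""
    return mip_value(round(lod), x)

# As lod drifts from 1.4 to 1.6 in motion, trilinear() changes smoothly,
# while nearest_mip() keeps all of mip 1's high-frequency content (more
# than the pixel footprint can represent, i.e. undersampling/shimmer)
# right up to lod = 1.5, then pops to the blurrier mip 2.
```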
 
Dave, who gives a rat's ass about all that.
The point is, when running other games, QUALITY DOES MEAN FULL TRILINEAR. It also means full trilinear when you run popular aniso tests (the tunnel thing).

So to change that in one specific game is wrong. The user expects one thing (and rightly so!) and receives an inferior result behind his back.
 
Brent's views also contradict this thread, where he states QUITE clearly:

http://www.beyond3d.com/forum/viewtopic.php?t=4772&highlight=application

Yep, I have looked at it in motion, and in motion is where you can notice the differences the most, rather than in just a static screenshot

Clearly in SS2, Application is superior to Balanced and matches ATI's default Tri with no AF

Yes, we are talking about AF being on here, but he now states that he doesn't notice IQ changes even though it is definitely defaulting back to the old 'Balanced' mode.
 
bloodbob said:
Sorry WaltC, but you keep on saying this; I think if there were buffer overruns it would be more likely to crash the video card or CPU. The artifacts are from a failure to clear the buffers, not from writing past the end of the buffer.

I use that phrase to describe not clearing the buffers properly so that visual artifacts result. I've tried to call it "buffer overrun artifacts" simply as a kind of shorthand to describe it. Probably I should say "buffer clearing artifacts" instead, no doubt. The main thing, I think, is that if nVidia did anything like that in a game it would definitely be a serious bug, and I imagine it would eventually cause a crash... Of course, that's moot, because nVidia's not going to program its drivers for a game like that.
 