OMG HARDOCP REVIEW OF UT2003 AND FILTERING

Kalbaz said:
hehe... ATI bought Tseng-Labs (tech and engineers) many years ago...

really? didn't know that... so maybe they are renaming the ET4000000000 to the R420? :LOL:

Yes, they did. Long time ago, in 1997:

Press Release - Archives 1997

ATI Technologies Inc. Acquires Graphics Design Assets of Tseng Labs

40 Person 3D Team to Join ATI

December 15, 1997

--------------------------------------------------------------------------------
Toronto, Ontario - ATI Technologies Inc. (TSE:ATY) a world leader in 3D graphics/video acceleration and multimedia solutions and Tseng Labs, Inc. (Nasdaq National Market:TSNG), today announced that approximately 40 members of Tseng's development team have joined ATI. In addition, ATI has acquired substantially all of the graphic design assets of Tseng for approximately US$3 million.


Source: http://www.ati.com/companyinfo/press/1997/4076.html
 
The Baron said:
They acknowledge the problem then say, "Meh, so what?". :(
Didn't mention it in my 5200 review because I wasn't comparing it to anything. Plus, on slow budget cards, it's *not* a bad thing. Should it still be user-configurable? Yeah, of course. But, probably 99% of people who have 5200s would use the UT2003 cheat/optimization/thingy.

And the people who are looking for more speed should be using Performance or High Performance mode instead of Quality mode. Or turning down the resolution, turning down game settings, etc.

The fact of the matter is that people throw hissy fits when they can't run things at full quality.*

*Whether real or imagined. In the context of this case, it is the perception of full quality.

edit: fixed quote
 
OpenGL guy said:
andypski said:
[Sarcasm]

Turns round to rest of ATI driver guys...

"It's official - it's now open season on image quality guys - [H] says so."

Let's start hacking it up now, and leave no pixel unpolluted.


[/Sarcasm]
[Sarcasm continued]
"LOD-bias?"
"Check."
"Texture compression?"
"Check."
"Clipping planes?"
"Check."
"Trilinear disable?"
"Check."
"Shader replacement?"
"Check."
"Ship it."
[/Sarcasm]

[Sarcasm continued s'more]
Impressive. Most impressive. [H] has taught you well. You have controlled your image quality. Now, ignore buffer clears and disable aniso! Only your highest framerate can destroy me!
[/Sarcasm]
 
OpenGL guy said:
And it wouldn't help... So many people test the cards with the defaults (including OEMs) that you'd gain no ground. Also, people who want to make you look bad still can by just changing your settings to something more stressful.

Of course the default should be the reduced quality settings. The highest quality settings should be packed away so they're not immediately enabled. Otherwise, true, it would probably be a waste of time.

I'd also guess that the ignorant people far outnumber the malicious ones.
 
Rasmoo said:
OpenGL guy said:
And it wouldn't help... So many people test the cards with the defaults (including OEMs) that you'd gain no ground. Also, people who want to make you look bad still can by just changing your settings to something more stressful.

Of course the default should be the reduced quality settings. The highest quality settings should be packed away so they're not immediately enabled. Otherwise, true, it would probably be a waste of time.

I'd also guess that the ignorant people far outnumber the malicious ones.

Or maybe a wizard that runs straight after the driver installation and asks the user to choose one of the following:

1: I'm a gamer and only care about games
2: I'm a competent reviewer and am reviewing the Radeon technology
3: I'm a reviewer and I want to compare this card with a Nvidia card
etc.

or something similar ^_^

and then enable the appropriate IQ settings ^_^
 
from H Forums

"This is easy enough. I have a 9800-256 in my personal box and UT2K3 v2225 installed.

I used all the same settings that we describe in our article except I went and turned off "Trilinear" in the UT2K3 control panel.

YES, it made a big difference.

With no AF enabled it took about a full second when moving to spot the mipmap transition points. Quite frankly the quality is not acceptable by my gaming standards.

With 8XAF enabled, the transitions were not near as noticeable when moving, but you could certainly spot them. But it was not the "in your face" lines you see without AF. When stopped and simply looking at the screen it was easy to pick out.

I would suggest that from that quick experience that ATI's Bilinear Filtering is in no way close to how NVIDIA is handling their filtering in UT2K3. So we will stick with the comments that NVIDIA's technique seems to be somewhere between Bi and Tri in this particular game."

Can someone verify this here please? Does FX5900 Aniso+bi look better than ATI Aniso+bi?

Is the NV system similar to forcing Aniso from ATI control panel rather than the application? (in terms of IQ) and what speed difference is there?
 
I said in the thread on their 5900 review:

The precedent that [H] is setting here is just utterly fucking sad and stupid - all ATI has to do is release a set of drivers that does exactly the same - dial down IQ to increase performance, and that ultimately puts us on a path that screws all consumers.

That represented my shock at Brent's initial response to me over [H]'s apparent stance on benchmarking and IQ; however, he backtracked from that, stating that he wasn't fully aware of the situation with UT2003, so I had some hope that that statement was an overreaction. Having seen the report, though, I stand by it. [H] is setting a very dangerous precedent here.

Herr Baumann has read the [H] forum discussion over this (by masking my IP :rolleyes: ) and it's good to see that there is quite a discussion going on over there and that not everyone has quite agreed with the findings.

A couple of things strike me: [H] suggests that these reductions in IQ cannot be noticed, based on the evidence of their screenshots. While it may be the case that there are certainly combinations of textures and scene that make this difficult to notice, there are likewise combinations that are more easily noticeable - personally I thought we'd already shown this with the images indicated in our other thread (which I assume were the ones that DoomTrooper tried to post over at the [H] forum - if so, Kyle, he's free to "leech" them off our servers). In the instance we showed, you are actually more likely to notice them in movement. Excuses about the type of game in use are a deflection of the issue as well.

Now, there have been lots of comments suggesting this is "OK, because there is such a divergence in the boards these days anyway".... NO! For god's sake, this is trilinear filtering we're talking about here - this is a fundamental filtering process. Trilinear has been with us since the days of multitexturing; GeForce 256 even did the thing for free (fill-rate wise, at least). Why, in the era of $500 boards, are we finding it acceptable to reduce the quality of such a basic element of image generation? This is fundamental - and we've shown there certainly is no actual issue with running this game with full trilinear enabled (via Antidetect).

It's good to see that some people over at the [H] forum are annoyed that they effectively don't have the option to run the game as they want. It's all well and good saying "well, the IQ looks OK and the performance is good", but surely something that renders pretty close with full IQ would perform much faster if the IQ were lowered? Surely that's the point of having the texture sliders - to allow users to dial down the IQ as they see fit, rather than having it forced upon them.

We can either sit up and question whether these types of optimisation are correct, in the hope that the IHVs will listen (and hopefully it's beginning to sound as though NVIDIA is now), or we can roll over and say "Well, that's the way it is" - in which case that will either make comparisons just plain useless, false and misleading if the full details are not investigated (as they weren't in the 5900 Ultra review that sparked this - people were not informed), or it will end up on a slippery slope of spiralling IQ from the vendors in order to increase benchmark scores, which will certainly not benefit the consumer.
 
Dave, ever thought about e-mailing the IEEE to set up a 3D benchmark standard? :)

I wish one would appear before us. Then again, what are the chances that mediocre reviewers will benchmark by the standards?
 
'Cause NVIDIA will either A) cheat by lowering IQ, or B) use precomputed data (i.e. probably 300 MB downloads).

The precedent that [H] is setting here is just utterly fucking sad and stupid - all ATI has to do is release a set of drivers that does exactly the same - dial down IQ to increase performance, and that ultimately puts us on a path that screws all consumers.

Well, in the AF review [H] accused ATI of disabling trilinear filtering with AF. Please keep in mind that on the first page, in the driver setup, they said it used application preference.

In-Game Still Screenshot 8XAF = 9800 Pro SLIGHTLY Sharper Textures with no mipmap transitions that are distracting.

So the question is: do you agree that ATI should be shot for doing the same thing, or should ATI be making a libel case over this fraudulent article?
 
kihon said:
I would suggest that from that quick experience that ATI's Bilinear Filtering is in no way close to how NVIDIA is handling their filtering in UT2K3. So we will stick with the comments that NVIDIA's technique seems to be somewhere between Bi and Tri in this particular game."

Can someone verify this here please? Does FX5900 Aniso+bi look better than ATI Aniso+bi?
I thought the comparison was between bilinear [+ AF] and Nvidia's new technique [+ AF]. Nvidia's technique ought to look better than bilinear because it does smooth the mipmap boundaries; the question is whether it looks as good as trilinear. So this is independent of the ATI vs Nvidia aniso quality argument.
 
Hmmm - cannot register at Hard OCP - not allowing registering?!?

Given what I saw at the AMDMB site, it's blatantly apparent that there is IQ degradation - so the question is, what are AMDMB doing differently from [H]?

Personally, I see no way they can justify the reduction in quality from 44.03.
 
Okay, yeah, I stuffed up - I missed this bit:
Then we set the AF slider to Quality AF when AF was tested.
which means Brent is forcing the drivers into a compatibility mode of AF rather than the actual AF that should be done in the game. How does he think he is going to test filtering in a game if he's overriding the settings manually?
 
DaveBaumann said:
We can either sit up and question whether these types of optimisation are correct, in the hope that the IHVs will listen (and hopefully it's beginning to sound as though NVIDIA is now), or we can roll over and say "Well, that's the way it is" - in which case that will either make comparisons just plain useless, false and misleading if the full details are not investigated (as they weren't in the 5900 Ultra review that sparked this - people were not informed), or it will end up on a slippery slope of spiralling IQ from the vendors in order to increase benchmark scores, which will certainly not benefit the consumer.

I'm doing both. ;)

Why shouldn't we be pessimistic about the situation though? It could very well be that the best plan is to demand a choice rather than to oppose all of "those" optimisations. Wheels are set in motion, and I don't think NVidia wants to change as much as we are currently demanding.

But count one more who's hoping for a B3D front page story.
 
DaveBaumann said:
...
We can either sit up and question whether these types of optimisation are correct, in the hope that the IHVs will listen (and hopefully it's beginning to sound as though NVIDIA is now), or we can roll over and say "Well, that's the way it is" - in which case that will either make comparisons just plain useless, false and misleading if the full details are not investigated (as they weren't in the 5900 Ultra review that sparked this - people were not informed), or it will end up on a slippery slope of spiralling IQ from the vendors in order to increase benchmark scores, which will certainly not benefit the consumer.

Asking the question of "Who benefits?" (at least theoretically) is the aspect from which to properly view these matters, IMO. As you point out, in this case it is not the potential consumer of nVidia's GFFX products. The only potential beneficiary in this situation is hypothetically nVidia, if it's assumed that consumers don't care that nVidia has rigged yet-another-driver-set to recognize a specific game, and in this case substitute an optimized bilinear filtering scheme to replace full trilinear filtering--without informing the GFFX end user of this fact.

If one assumes that the sentiments of [H] represent the norm, that consumers of nVidia's flagship $500US 3D cards only care about frame rates and don't give a hoot about the differences in IQ relative to trilinear and bilinear filtering, because they are too busy fragging people to care about scene rendering IQ, this still does not show how the GFFX consumer benefits by nVidia's decision to eliminate the ability of its GFFX products to do full trilinear filtering in UT2K3. It is easy to see, however, how this might benefit nVidia in terms of better benchmark framerate numbers provided its customers do not know, or better yet do not care, about the loss of full trilinear in this game. There is no benefit to the consumer in losing the ability to choose between full trilinear and nVidia's faux-trilinear while playing UT2K3. In this scenario only nVidia benefits even theoretically.

The flip side is that nVidia suffers if this is revealed and it turns out potential GFFX consumers find it objectionable enough to stay away from GFFX products. Under no circumstances can this be considered a "feature" compelling enough to stimulate sales--its only value for nVidia is in remaining one of many undetected "application optimizations." The irony is that in attempting to portray faux-trilinear in UT2K3 as full trilinear support nVidia has done nothing more than shoot itself in the foot, whereas including the faux-trilinear support in addition to full trilinear support might well have garnered them praise. But then, obviously, to properly support both in the game would have resulted in the nV35 and the R350 being compared while running full trilinear in UT2K3, and the whole purpose of nVidia's action here was to prevent that from happening.

The main thing to me is that even if one accepts [H]'s premise that "the differences don't matter" it still doesn't excuse nVidia deliberately engineering its drivers to be unable to provide full trilinear support for UT2K3 through the application. The potential consumer certainly doesn't benefit from the complete removal of that choice in any fashion whatever that I can see.

In fact, the details of this issue have made me reconsider an earlier opinion of mine that games (as opposed to benchmarks) were "fair game" for IHV driver optimization. When I formed that opinion I did not envision nVidia attempting to substitute a faux-trilinear filtering method for full trilinear support even when the application asks for full trilinear.

I still think of "optimization" in this case as "coding your driver to more efficiently provide full trilinear support on your hardware for the application requesting it" as opposed to "coding your drivers to apply trilinear filtering to 30%-50% of the onscreen textures even when the application asks that 100% of the textures be trilineared." That isn't "optimizing" in any real sense of the word--that's misrepresentation (or, yes, cheating.) So I guess I have to eat my earlier words about it "not being possible" for IHV's to cheat actual games themselves. nVidia's proved me wrong.
 
K.I.L.E.R said:
Dave, ever thought about e-mailing the IEEE to set up a 3D benchmark standard? :)

I wish one would appear before us. Then again, what are the chances that mediocre reviewers will benchmark by the standards?

SPEC is an industry standard group that does benchmarks, including graphics benchmarks with GPC. Believe it or not, one of the first two graphics application benchmarks was Quake II. And there was even a result published, see

http://www.spec.org/gpc/June99/apc.static/index.html .

Click on Quake II Benchmark Results [sic].

Since Quake III was already released, the Quake II benchmark was retired and SPECapc never did another game benchmark.


Also been there done that, on optimization rules, see:

http://www.spec.org/gpc/Jul99/apc.static/ARclean.htm
http://www.spec.org/gpc/Jul99/opc.static/Rulesv11.htm

Specifically, in the SPECopc rules, "General Rules for Optimizations" I.3.h and I.3.i cover optimizations that are not permitted even though not visible to the eye. (They were visible to image diff tools.)
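
The kind of check those rules imply is simple enough - a minimal sketch in C (the packed 8-bit RGB buffer layout and the tolerance value are my own illustrative assumptions, not anything from the actual SPEC harness):

```c
#include <stdlib.h>

/* Minimal per-pixel image diff of the sort the SPECopc rules imply:
 * two captures of the same frame are compared, and any per-channel
 * delta above the tolerance counts as a difference to the tool, even
 * if no human would spot it on screen. */
size_t count_differing_pixels(const unsigned char *a, const unsigned char *b,
                              size_t width, size_t height, int tolerance)
{
    size_t diffs = 0;
    for (size_t i = 0; i < width * height; ++i) {
        int dr = abs((int)a[i * 3 + 0] - (int)b[i * 3 + 0]);
        int dg = abs((int)a[i * 3 + 1] - (int)b[i * 3 + 1]);
        int db = abs((int)a[i * 3 + 2] - (int)b[i * 3 + 2]);
        if (dr > tolerance || dg > tolerance || db > tolerance)
            diffs++;
    }
    return diffs;
}
```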

Also, see IV.2.a (for example, if the driver is asked for trilinear mipmapping, it must do trilinear mipmapping) and IV.2.c.


-mr. bill
 
DaveBaumann said:
A couple of things strike me: [H] suggests that these reductions in IQ cannot be noticed, based on the evidence of their screenshots. While it may be the case that there are certainly combinations of textures and scene that make this difficult to notice, there are likewise combinations that are more easily noticeable - personally I thought we'd already shown this with the images indicated in our other thread (which I assume were the ones that DoomTrooper tried to post over at the [H] forum - if so, Kyle, he's free to "leech" them off our servers). In the instance we showed, you are actually more likely to notice them in movement.

This is really the crux of the issue. AFAICS, you only posted one set of regular screenshots in that thread. The mipmap transitions are clearly visible, although perhaps not what I'd call egregious in the stills (in motion I'd imagine they'd stand out quite a bit). [H] has posted a decent number of screens, and I can only see mipmap transitions on one of them, and even then it's very subtle. Several of them are obviously poorly chosen to illustrate the issue, but some of them seem like they should be adequate.

Both Kyle and Brent state that the transitions aren't visible even in motion. You seem to be implying the opposite. I trust your eyes a good deal more than theirs. But for that reason specifically--how noticeable is it, exactly? I certainly don't buy the pablum that image quality doesn't matter in fast-paced shooters (if that's the case, why don't you save your $499 and play Quakeworld??). But I'd also like to know whether this is something one needs to actively search for to see it. Particularly if one isn't an eagle-eyed reviewer and hardware expert.

Second: about the most intriguing thing to come out of all this was a throwaway comment made by (IIRC) Kyle in the [H] forums, to the effect that he'd enabled Performance (i.e. bilinear) on the R3x0, and that the mipmap transitions were immediately obvious and annoying, in a way that is obviously not the case with whatever the crap Nvidia is doing.

Do you concur with this assessment? If so, at the least this suggests that the proper response is not to benchmark Nvidia's Quality against ATI's Performance, as some have suggested. It also suggests that Nvidia is doing something other than straight bilinear (which, in truth, was already suggested by the colored mipmap pics), and even perhaps something a bit more complicated than just mostly bilinear with a brief bit of mipmap blending just as the transition approaches.

What are your thoughts on this? Despite all the condemnation Kyle and crew are receiving for this latest article, IMO it's actually reasonably convincing. Not, of course, that disabling trilinear secretly on an application basis is at all kosher, or that this is a fair comparison to R3x0 with filtering set to application preference. But at least that this may be a reasonably smart approach to minimize the trilinear hit at a seemingly negligible IQ cost, and even one that we should be encouraging.

Now, there have been lots of comments suggesting this is "OK, because there is such a divergence in the boards these days anyway".... NO! For god's sake, this is trilinear filtering we're talking about here - this is a fundamental filtering process.

Agreed that those comments on the [H] forums to the effect that "GPUs are so complicated these days" are ignorant drivel. Not agreed that IHVs shouldn't be looking for ways to get most of the benefit at less of the (significant) cost. I mean, supersampling is a fundamental antialiasing process, and it still confers some quality benefits over MSAA + AF in certain situations. That doesn't mean ditching SSAA for MSAA wasn't one of the most significant improvements in the last several years.

Trilinear has been with us since the days of multitexturing; GeForce 256 even did the thing for free (fill-rate wise, at least).

And NV1 did quadratic primitives for free. :) Isn't it established (or at least insanely likely) that GF1's free trilinear was more a case of a buggy 2nd TMU/pipe? Even if not, free trilinear is an extremely bad design decision, considering how many textures aren't meant to receive trilinear even when it's properly enabled. There is a reason we haven't seen it since GF1.

Why, in the era of $500 boards, are we finding it acceptable to reduce the quality of such a basic element of image generation? This is fundamental - and we've shown there certainly is no actual issue with running this game with full trilinear enabled (via Antidetect).

Because trilinear reduces performance significantly. If Nvidia comes up with a method to get most of the IQ benefits of trilinear at a fraction of the performance hit, then by all means we should encourage it. As the purpose of trilinear is generally given as "removing the mipmap transitions from bilinear", it does seem a little silly to do trilinear over the entire mipmap if it only matters for a small portion. Now, I'm guessing there may be other benefits to doing "FSTF" (full-screen trilinear), perhaps in the realm of preventing texture aliasing. But I'm mostly just guessing this, because otherwise full trilinear would seem a somewhat stupid thing to do - in the vein of doing full-screen anisotropic filtering instead of only oversampling those pixels at large anisotropic angles, and then only doing enough to keep under the Nyquist limit. So if trilinear actually prevents texture aliasing as well, I'd like to see some discussion of that over here, for god's sake. And if it doesn't, then this optimization looks long overdue more than anything.

Of course our evaluation of this hinges on the claim that what they're doing really does look much more like trilinear than bilinear, even though the mipmaps show that its workload is a lot closer to bilinear than trilinear. Kyle and Brent both say this is the case, and have put up some reasonable evidence. I don't have access to an NV3x card, but many people on this forum do. I'd certainly appreciate more objective testing and some subjective second opinions on the issue instead of just more moans and flames.

None of this excuses Kyle's ridiculous editorials, double standards, or forum fascism. In particular, banning you was about the most ridiculous move I can conceive of. But on the flip side, none of that stuff should excuse us from getting to the bottom of what seems to be a very interesting issue.

And, I realize that you may be looking at this issue wearing your reviewer's hat, in which case it is surely absolutely unethical of Nvidia to sneak in an optimization that overrides requested settings (although only to the degree that "quality" implicitly requests "full trilinear filtering"), only in certain highly benchmarked games, and without telling anyone about it. But if you leave that aside for a moment and put on your hardware enthusiast's hat, perhaps there's something worthwhile here?

/turns to crowd

Or am I just totally off base on this one???
 
I would certainly like to see more discussion of the actual algorithm rather than simply stating that it must be "wrong" to use it.

When you use trilinear filtering, you get a linear blend between mipmap levels. So it's highly unlikely that you'll be accessing only one level; the value for the blend factor will hardly ever be integral.

But what exactly is so wrong about using say a piecewise-linear function instead? The function would be flat most of the time, then instead of jumping instantly to the next mipmap level you have a quick linear blend with a relatively high gradient. The mipmap transitions would not be so noticeable because there is some blending going on, yet the card only needs to pull data from one mipmap level most of the time. The tradeoff could be set by describing the function in terms of the slope of the non-flat part; at its minimum you have trilinear filtering, at infinity bilinear. Perhaps this would indeed be an acceptable tradeoff?
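
In C it would be something like this (just a sketch - the function name and the ramp_width parameter are my own illustration, not anything NVIDIA has documented):

```c
#include <math.h>

/* Sketch of the piecewise-linear mip blend described above.
 * Ordinary trilinear uses w = frac(lod) as the blend weight between
 * mip levels floor(lod) and floor(lod)+1.  Here the weight stays at
 * 0.0 (pure bilinear from the nearer level) for most of the range and
 * only ramps up over a narrow window just before the transition, so
 * the second mip level only has to be fetched for a small fraction of
 * pixels.  ramp_width = 1.0 gives ordinary trilinear; as it approaches
 * 0.0 the result degenerates to plain bilinear. */
float mip_blend_weight(float lod, float ramp_width)
{
    float f = lod - floorf(lod);      /* fractional LOD, in [0, 1)        */
    float start = 1.0f - ramp_width;  /* where the ramp begins            */

    if (f <= start)
        return 0.0f;                  /* flat region: one mip level only  */
    return (f - start) / ramp_width;  /* quick linear ramp to next level  */
}

/* Final colour = lerp(bilinear(level), bilinear(level + 1), weight),
 * exactly as with trilinear - only the weight function differs. */
```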

The problem I see is not with the use of an algorithm like this, but hiding it away and forcing its use even though "full quality" is specified.

Edit: of course you shouldn't compare performance levels of one system using this algorithm and another using bilinear or trilinear; and if a site doing a review is aware that this is in use then they shouldn't claim the settings are "trilinear" - the word is very specific and wouldn't apply to this algorithm.
 
The problem is Nvidia is disabling what you set in your control panel, on your $500 video card. I'm sorry, changing what you select for quality without your permission is wrong.

With the Anti-Detector script the 44.03 driver does apply what you set in the control panel... so obviously Nvidia detects the game and lowers quality for frames - something like a 30% increase!!
 
Dave H, you are absolutely right that this may in fact be a very good way to get trilinear-like filtering without a high performance penalty. The problem is that in the review of the BFG 5900 Ultra that sparked this whole issue, and in the subsequent [H] article, that information was not mentioned. Herein lies the problem. I think it is fair to say that most people expect that if a video card runs one game exceptionally well, that level of capability will be reflected in other games. Maybe in other games, with different types of scenes, this type of filtering would be very noticeable. That's why the omission of this information is wrong. [H] should have included a statement like this in that review: "Using the 44.03 drivers the 5900 does not use full trilinear filtering while playing UT2003. It is our opinion that this does not have a detrimental effect on image quality. Though it remains to be seen if this affects performance and image quality of other games." That's why I have a problem with [H].
 