Can someone sum up the ATI filtering thread please?

Natoma

Veteran
I don't feel like reading all 51 pages. Just a nice, quick, easily digestible summation for those of us who don't give a damn about the technicalities and just want to know the easy-to-read conclusions. :)
 
( Found on page 46 )
cthellis42 said:
I wasn't referring to static benchmarks with my comment, but to general gaming. It has always been the case that IHVs work with developers to help games run better on their hardware; sometimes the "boost" happens via a later developer patch, and sometimes it happens by reconfiguring things in the driver--the IHV "takes care of it."

The concerns around static benchmarks (and things affecting only a game's benchmarking procedure while having no gameplay impact) are entirely different. But I can't think of anyone who disapproves of seeing those "increased performance in UT200X by 5%" messages in new driver notes--unless the performance comes with a cost elsewhere that the player doesn't like. Do we know HOW they're increasing performance? Do we care if it applies to cleaning up one set of mathematics or another? Certainly not that I've seen. Even "manual replacement of shaders" doesn't matter here, as in-game IQ and performance are all that matters to the gamer, and if a change carries across the game, increases performance, and DOESN'T come with downsides...? How would anyone label that a "cheat"? It certainly hasn't been the case--not in games. It's a reflection of developers not knowing exactly which way to structure their shaders for the FX hardware (and if they did, wouldn't they have wanted the more efficient shaders in place to begin with?) and of nVidia being willing to do the legwork to find what could be changed and do it themselves; the game comes across better, the hardware comes across better... How is that anything but win/win for them AND for the players? Sure, the shaders were no longer exactly what "the developer programmed," but would they give a rat's ass? The telling point about the hardware is not that nVidia is "cheating to get ahead," but that developers weren't/aren't able (or willing) to go to the lengths or structure that the FX cards need, so games start off worse and take time for nVidia to analyze and get around to improving. (And you can then go on about how the emphasis on popular vs. not-so-popular games fits into it and whatever else...)
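
To picture what "manual replacement of shaders" amounts to, here's a rough Python sketch of a detect-and-substitute mechanism; the shader strings, the hashing scheme, and the function names are all made up for illustration, not any actual driver's internals.

Code:
import hashlib

# A made-up "original" shader and a made-up hand-tuned replacement; in a real
# driver these would be the developer's shader and the IHV's rewrite of it.
ORIGINAL_SHADER = "mul r0, t0, c0\nadd r0, r0, c1"
OPTIMIZED_SHADER = "mad r0, t0, c0, c1"  # same math, one fewer instruction

REPLACEMENTS = {
    hashlib.md5(ORIGINAL_SHADER.encode()).hexdigest(): OPTIMIZED_SHADER,
}

def compile_shader(source: str) -> str:
    # If the driver recognizes the shader, it silently swaps in the tuned
    # version; otherwise it compiles exactly what the developer wrote.
    return REPLACEMENTS.get(hashlib.md5(source.encode()).hexdigest(), source)

The point of the sketch is simply that the substituted shader computes the same result, so the gamer only sees the performance difference.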

Let's explore a tangent for a sec, going out to AA. A much-lamented trait of earlier nVidia AA versus ATi's was its lack of a rotated sampling grid. The non-rotated (ordered) grid was accepted by almost everyone to be an inferior approach, but was nVidia "cheating" and ATi not, just because one was better than the other? What if rotated-grid sampling came at a bit of a performance penalty--say 5 percent? If an IHV chose the worse-looking method for that performance gain, are they "cheating" because their competitor doesn't? What if they developed an adaptive algorithm that "looked like rotated-grid sampling" to a huge degree (say it could tell where rotation would make the most visual difference and apply it there, and refrain from it where it wasn't needed) and still kept the performance improvement? It would still involve rotated-grid sampling, it would still look exactly (or almost exactly--unnoticeable except through extreme close-up analysis) like the superior sampling method, yet come with extra performance! When does it become "cheating" as opposed to what we've come to expect (and in fact demand) from IHVs over the years?
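
A toy Python sketch of the ordered-grid vs. rotated-grid idea, for anyone who hasn't seen it drawn out; the offsets and rotation angle are illustrative only, not any IHV's actual sample pattern.

Code:
import math

def ordered_grid_4x():
    # A regular 2x2 grid: only two distinct x and two distinct y positions,
    # so near-horizontal and near-vertical edges get coarse coverage.
    return [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

def rotated_grid_4x(angle_deg=26.6):
    # Rotating that same grid yields four distinct x and four distinct y
    # positions, which is why rotated grids resolve shallow edges better.
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a)
            for x, y in ordered_grid_4x()]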

Shift to AF, where this example HAS been in place for years. How many have screamed far and wide that adaptive AF was a "cheat"? Certainly you get complaints about it compared to the competition (just as people complained about worse-looking AA methods; hell, people still sigh wistfully at the Voodoo 5's method over everyone else's. ;) ), and most certainly it is an optimization--a trade-off. Even nVidia's AF method was "somewhat adaptive," but not nearly as angle-dependent as ATi's--though ATi's allowed for higher filtering where they felt it was most needed, and applied less where they felt it wouldn't be noticed to begin with. AF was more hotly debated from an IQ perspective, but still most people preferred nVidia's--and it was certainly closer to a "theoretically pure" filtering method. Yet "cheat" wasn't an applicable word for ATi, and it's not one applied to NV40 either, though it has fallen to using similar methodology. People will lament the choice of one over another--and bitch about not having the option to cycle, say, between NV40's and NV30's AF methods--but they're still not whipping out the "cheat" card. In fact, what if either IHV came up with an "advanced AF algorithm" that changed the levels of adaptation based on which situations it determined needed one level over another? If nVidia could give an almost indiscernible quality equivalent of NV30 with nearly the performance improvement of NV40, is there ANYONE who wouldn't be a fan? Especially since, as a driver-controlled feature, it could be continually tweaked for more quality and more performance in the future?
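
Purely to make "angle-dependent" concrete, here's a toy Python caricature of adaptive AF level selection; it is NOT ATi's or nVidia's actual hardware logic, just the general shape of the trade-off: full anisotropy near axis-aligned surface angles, less near 45 degrees where the optimization assumes the difference is least noticeable.

Code:
import math

def adaptive_max_aniso(requested: int, surface_angle_deg: float) -> int:
    # Distance from the nearest axis-aligned angle (0, 90, 180, ...):
    # 45 when axis-aligned, 0 at a 45-degree surface angle.
    off_axis = abs((surface_angle_deg % 90.0) - 45.0)
    weight = off_axis / 45.0           # 1.0 axis-aligned, 0.0 at 45 degrees
    level = max(2, int(round(requested * weight)))
    # Snap down to a power of two, since AF degrees come in 2x/4x/8x/16x.
    return 2 ** int(math.log2(level))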

Or would they be cheating?

Right, now back to the fun game of bilinear/trilinear/brilinear/trylinear...

The funny thing is, though this is yet another nitpicking of quality features, no one is really doing serious IQ examinations and showing "look! See!" at how the technique is supposedly bad. The difference seems to be so small that it requires bit-subtraction to notice it, or 4x magnification of a few small sections of a screen to notice ANY change at all--and those changes have basically been invisible in a game in operation.
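
For anyone wondering what "bit-subtraction" means in practice, here's a small Python sketch of that kind of screenshot comparison (it assumes Pillow is installed, that both captures are the same size, and the file names are placeholders): difference two captures pixel by pixel and count how many pixels changed at all, since the changes are too subtle to spot by eye.

Code:
from PIL import Image, ImageChops

def diff_screenshots(path_a: str, path_b: str):
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB")
    diff = ImageChops.difference(a, b)          # per-pixel absolute difference
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    total = diff.width * diff.height
    # Amplify or zoom into 'diff' to actually see where the filtering differs.
    return changed, total, diff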

The arguing here isn't from a "lying" perspective either, as almost everyone agrees ATi should have mentioned their technique earlier and been more up front about it, especially since it was present at the launch of a new architecture, so readers would know and reviewers would be able to test it properly in operation. The arguing here also isn't about the "no choice" angle, as almost everyone agrees the drivers should let us apply the algorithm or not to whichever programs we want, as features like this are things advanced users want control of--to be able to personally test the results in action, if nothing else.

No, the accusations of "cheating" seem to come from the "theoretical" or "moral" angle--which is just plain laughable. The bilinear and trilinear methods themselves were developed to cover glaring quality issues for the gamer--at a cost. There is no "theoretically pure" method for them, and I doubt the IHVs have come up with implementations that are 100% bit-exact matches of each other--nor is there a "you must do this" template to aim for; and yet no one is "cheating" on it now. We also know there are situations where applying trilinear (and various levels of AA, and various levels of AF...) is a waste of time and processing cycles--as the work wouldn't be noticed anyway--but to appease a "theoretical" angle we should waste the time anyway?
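
For reference, a rough Python sketch of what the trilinear-versus-"brilinear" trade-off looks like; the bilinear fetch is a stub and the blend-band width is an arbitrary number, so treat it as the shape of the idea rather than any IHV's implementation.

Code:
def bilinear_sample(mip_level: int, u: float, v: float) -> float:
    return 0.0  # stand-in for a real bilinear texture fetch

def trilinear(u, v, lod):
    # Blend two bilinear fetches from adjacent mip levels across the whole
    # transition; always costs two fetches.
    lo, frac = int(lod), lod - int(lod)
    return (1 - frac) * bilinear_sample(lo, u, v) + frac * bilinear_sample(lo + 1, u, v)

def brilinear(u, v, lod, band=0.25):
    # Only blend within +/- band of the mip crossover; elsewhere take a
    # single (cheaper) bilinear sample and skip the second fetch.
    lo, frac = int(lod), lod - int(lod)
    if frac < 0.5 - band:
        return bilinear_sample(lo, u, v)
    if frac > 0.5 + band:
        return bilinear_sample(lo + 1, u, v)
    t = (frac - (0.5 - band)) / (2 * band)      # remap the narrow band to 0..1
    return (1 - t) * bilinear_sample(lo, u, v) + t * bilinear_sample(lo + 1, u, v)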

There has never been any "exact blueprint" of a feature that must be obeyed, nor a moral obligation to waste power where it isn't needed. As with everything in life, trade-offs are what we get and what we expect to get. (At least you damn well should, or you're going to be perpetually sour.) There is no "best" to aim for, as what "looks best" to us now isn't what did years ago, and I certainly don't want any IHVs thinking that what we have NOW is all they'll have to work with in the future. Trilinear isn't the "best quality" we'll ever achieve, and it doesn't have to be the "best option" now; these are computers--there are always going to be more efficient processes. And from an IQ perspective, there will always be better trade-offs to make for the end user.

Just one question: if ATi had announced this technique back when they introduced it with the 9600s (in Catalyst 3.4 or whatever), would anyone have reacted with "HOLY CRAP U CH34TZ0RZ~!" or would they have been going "hey, cool!" and experimenting with it to the hilt--producing lots of quality comparisons and performance comparisons, and plenty of feedback to ATi on what improvements needed to be made, or what sore spots showed up most? (And would or would not their competition have started working up their own methods for the same thing?) Even had it been a mandatory change instead of a toggle, I imagine everyone would have had reservations, but held their full remarks for when a lot of reviews and analyses were out. As with Temporal AA, new enhancements are usually treated with "hey, cool" and a lot of vigorous testing to examine the merits, rather than fire-and-brimstone comments.

We're geeks, what do you expect? :p ;) Is there any one of us who isn't willing to take a barely-noticeable IQ hit once in a long while to get a good performance improvement across the board? (Just how often IS quality affected, and how noticeable is it...? I dunno. But that's the trade-off everyone makes all the time, and the one that--shockingly--is usually made FIRST! Do we really expect this to change?)

People should either brand every IHV that has ever existed or ever will exist a "cheater" now, or learn to analyze these situations properly. Or take umbrage for the right reasons--in this case ATi's handling of the situation (whether you accept their given reasons or not)--not the feature itself, which by all accounts is exactly the kind of thing we ALL want to see the IHVs doing.
 
Because not everyone in here cares about the technicalities or has the knowledge to even understand what the hell is being said half the time (I count myself in the latter category). But that doesn't mean we aren't interested in getting past the ATI vs Nvidia BS (which I'm sure is what makes up the majority of that thread anyway) and just want the low-down summation. :)

p.s.: Thank you AAlchemy. :)
 
Well, I think you can sum it up like this:

Nvidia Fans - "OMG! ATI are cheating!!!!!!!!!!!"

ATI Fans - "No they are not!!"

Nvidia Fans - "Oh yes they are...."

ATI Fans - "Oh no they're not!"

It's basically a 50 page version of Aladdin ;)
 
Natoma said:
I don't feel like reading all 51 pages. Just a nice, quick, easily digestible summation for those of us who don't give a damn about the technicalities and just want to know the easy-to-read conclusions. :)

You're a lazy lieing chicken.


Basically ATI is doing an adaptive filtering deal. It's like brilinear, but ATI claims it won't degrade image quality like brilinear does and at times will actually increase IQ over brilinear.

Some people figure, let's make a big stink out of this and cast ATI in a bad light, since nVidia has been in a bad light for a long time.

So now we are doing a witch hunt over something that has been there without notice on the mid-range cards for over a year now.

Hope that helps, you lazy lieing chicken
 
Ok. So ATI's filtering can be toggled by the end user, unlike Nvidia Brilinear then? If so, then I don't have a problem with it. If not, then I do.

Is there any definitive statement regarding the quality of ATI's filtering yet? Or is it still up in the air?

p.s.: It's lying, not lieing. :)
 
Natoma said:
Is there any definitive statement regarding the quality of ATI's filtering yet? Or is it still up in the air?

I foresee this statement catalyzing another 50 page thread :)

But it really is a witch hunt for the most part. Personally I got bored with it at around page 30 :rolleyes:
 
Natoma said:
Ok. So ATI's filtering can be toggled by the end user, unlike Nvidia Brilinear then? If so, then I don't have a problem with it. If not, then I do.

Is there any definitive statement regarding the quality of ATI's filtering yet? Or is it still up in the air?

p.s.: It's lying, not lieing. :)

I meant IQ over trilinear, not brilinear.

No, ATI does not allow you to change the filtering.

Personally, after using a 9600XT and an X800 Pro (my buddy's), I can't see a difference over my 9700 Pro. So I really don't care.
 
Natoma said:
Ok. So ATI's filtering can be toggled by the end user, unlike Nvidia Brilinear then? If so, then I don't have a problem with it. If not, then I do.

It can't be turned off. The argument is that since it increases performance without reducing image quality, there is no need to turn it off. (The option would only confuse end users.)

The "without reducing image quality" part still needs some research, although no one has been able to find any quality problems yet.
 
Tim said:
The "without reducing image quality" part still needs some research, although no one has been able to find any quality problems yet.

And it certainly isn't for lack of trying.
 
There is one important difference:

ATI claims its optimized filtering detects whether or not it will cause quality degradation in a given situation, and disables itself where it would.

nVidia's filtering is static, independent of any such measurement.

Thus, if ATI's filtering works, it can be a speed boost without any quality loss. nVidia's can only be a balancing act between speed and quality over the whole frame.

Whether or not ATI's claim is true is now a fanboy topic. Fact is, ATI implemented it over a year ago, and nobody until now realised something was wrong. That is quite a bit of proof that it really works.

And the rest was discussing which approaches they could have taken to get this optimisation: whether it's just another take on trilinear filtering, or a non-linear filtering algorithm instead. It _looks_ like a trilinear approach that detects box-filtered mipmaps and optimizes for cache use by not accessing two mipmap levels.
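
A speculative Python sketch of that guess (and it is only the thread's guess, not ATI's documented algorithm): check whether each mip level is a plain 2x2 box-filtered copy of the level above it, and only then take the cheaper path; custom-authored mip levels would fall back to ordinary trilinear.

Code:
def box_downsample(level):
    # 2x2 box filter; assumes power-of-two dimensions, as mip levels have.
    h, w = len(level), len(level[0])
    return [[(level[y][x] + level[y][x + 1] +
              level[y + 1][x] + level[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def mip_chain_is_box_filtered(mips, tolerance=1e-6):
    # 'mips' is a list of 2D luminance arrays, largest level first.
    for upper, lower in zip(mips, mips[1:]):
        expected = box_downsample(upper)
        for row_e, row_l in zip(expected, lower):
            if any(abs(e - l) > tolerance for e, l in zip(row_e, row_l)):
                return False
    return True

def choose_filter(mips):
    return "optimized trilinear" if mip_chain_is_box_filtered(mips) else "full trilinear"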


Hope that's it. The rest is just fanboy stuff: ATI lies, ATI doesn't lie, ATI lies, ATI doesn't lie. Bla bla. No company lies, or they all do.
 
davepermen said:
Whether or not ATI's claim is true is now a <bleep> topic. Fact is, ATI implemented it over a year ago, and nobody until now realised something was wrong. That is quite a bit of proof that it really works.
A. They implemented it on a low-end card. I don't think it was really a big issue to many people. But the R420 is a high-end card, so any "cheats" are much higher-profile.
B. Some people definitely knew about it. There were questions early on after the R420's release about whether or not it was doing "brilinear" like the Radeon 9600.
 
So is the general consensus that this is a bad optimization that affects image quality? Or is it a good optimization and image quality is still top-notch? As far as I'm concerned, that's all that matters.
 
Chalnoth said:
davepermen said:
Whether or not ATI's claim is true is now a <bleep> topic. Fact is, ATI implemented it over a year ago, and nobody until now realised something was wrong. That is quite a bit of proof that it really works.
A. They implemented it on a low-end card. I don't think it was really a big issue to many people. But the R420 is a high-end card, so any "cheats" are much higher-profile.
B. Some people definitely knew about it. There were questions early on after the R420's release about whether or not it was doing "brilinear" like the Radeon 9600.

Where is your proof that it is a cheat? I can't believe that after all that has been discussed you're still calling it a cheat, when no valid proof has been shown to back this THEORY up. Couldn't you put your NVidiotism away for one day and say "possible cheat"? Or is that too hard for you? :rolleyes:
 
Chalnoth said:
davepermen said:
Whether or not ATI's claim is true is now a <bleep> topic. Fact is, ATI implemented it over a year ago, and nobody until now realised something was wrong. That is quite a bit of proof that it really works.
A. They implemented it on a low-end card. I don't think it was really a big issue to many people. But the R420 is a high-end card, so any "cheats" are much higher-profile.
B. Some people definitely knew about it. There were questions early on after the R420's release about whether or not it was doing "brilinear" like the Radeon 9600.

So as of a year ago the ATI 9600 non-Pro, 9600 Pro, and more recently the 9600 XT are all low-end cards? If the 9600 XT is a low-end card, what do you call the FX 5200 -- a piece of shit?
 
BRiT said:
Chalnoth said:
davepermen said:
Whether or not ATI's claim is true is now a <bleep> topic. Fact is, ATI implemented it over a year ago, and nobody until now realised something was wrong. That is quite a bit of proof that it really works.
A. They implemented it on a low-end card. I don't think it was really a big issue to many people. But the R420 is a high-end card, so any "cheats" are much higher-profile.
B. Some people definitely knew about it. There were questions early on after the R420's release about whether or not it was doing "brilinear" like the Radeon 9600.

So as of a year ago the ATI 9600 non-Pro, 9600 Pro, and more recently the 9600 XT are all low-end cards? If the 9600 XT is a low-end card, what do you call the FX 5200 -- a piece of shit?

lol !!!!!!

Man, you just insulted a lot of shit right there :)
 