ATi is ch**t**g in Filtering

davepermen said:
if something only works in one special situation, then it's only placed in to cheat. if it's not to cheat, then to bugfix a certain app. and THAT would be something the developers of the APP should have to solve, a.k.a. a patch.

I still don't agree. What if something only works in a special situation, gives comparable IQ, and is 50% faster in that specific situation, perhaps because the developer has done something that is very problematic for a specific architecture (and that's probably going to be a lot of things for the NV3X :))?
 
Bjorn said:
davepermen said:
if something only works in one special situation, then it's only placed in to cheat. if it's not to cheat, then to bugfix a certain app. and THAT would be something the developers of the APP should have to solve, a.k.a. a patch.

I still don't agree. What if something only works in a special situation, gives comparable IQ, and is 50% faster in that specific situation, perhaps because the developer has done something that is very problematic for a specific architecture (and that's probably going to be a lot of things for the NV3X :))?

Wouldn't it be possible to add that into the compiler, so that it recognises when such situations arise and can then deal with them regardless of the game/app?

But if I understood davepermen correctly, that would fall under the bugfix part of his post.
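
Purely to illustrate the distinction (a hypothetical sketch in Python, not any real driver's code): an app-specific "bugfix" keys off one exact shader, while a general compiler rule matches a pattern wherever it occurs.

Code:
# Illustrative sketch only -- hypothetical, not any real driver's code.
# The distinction: an app-specific hack keys off one particular shader,
# while a general compiler rule matches a *pattern* wherever it occurs.

import hashlib

# App-specific "bugfix": recognise one exact shader and swap it wholesale.
HAND_TUNED = {
    # hash of one known-problematic shader -> hand-optimised replacement
    "3f2a...": "optimised_shader_source_here",   # hypothetical entry
}

def app_specific_fix(shader_src: str) -> str:
    key = hashlib.md5(shader_src.encode()).hexdigest()
    return HAND_TUNED.get(key, shader_src)       # only fires for that one app

# General compiler rule: rewrite a costly pattern wherever it appears.
def general_peephole(shader_src: str) -> str:
    # e.g. a construct that is slow on some architecture, rewritten everywhere
    return shader_src.replace("pow(x, 2.0)", "(x * x)")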
 
I wasn't referring to static benchmarks with my comment, but general gaming. It has always been the case that IHVs work with developers to help games operate better on their hardware; sometimes the "boost" happens via a later developer patch, and sometimes it happens by reconfiguring things in the driver--the IHV "takes care of it."

The concerns of static benchmarks (and things only having to do with a benchmarking procedure in a game while having no gameplay impact) are entirely different. But I can't think of anyone who disapproves of seeing those "increased performance in UT200X by 5%" messages in new driver notes--unless such performance comes with a cost elsewhere that the player doesn't like. Do we know HOW they're increasing performance? Do we care if it applies to cleaning up one set of mathematics or another? Certainly not that I've seen. Even "manual replacement of shaders" doesn't matter here, as in-game IQ and performance are all that matter to the gamer, and if a change carries across the game, increases performance, and DOESN'T come with downsides...? How would anyone label that a "cheat"? It's certainly not been the case--not in games.

It's a representation of developers not knowing exactly which way to structure their shaders for the FX hardware (and if they did, wouldn't they have wanted more efficient shaders in place to begin with?) and nVidia being willing to do the legwork to find what could be changed and do it themselves; the game comes across better, the hardware comes across better... How is that anything but win/win for them AND for the players? Sure, the shaders were no longer being produced exactly as "the developer programmed," but would they give a rat's ass? The telling point about the hardware is not that nVidia is "cheating to get ahead," but that developers weren't/aren't able (or willing) to go to the lengths or structures that the FX cards need, so games start off worse and it takes time for nVidia to analyze them and get the performance up. (And you can then go on about how the emphasis on popular vs. not-so-popular games fits into it, and whatever else...)

Let's explore a tangent for a sec, going out to AA. A much-lamented trait of earlier nVidia AA versus ATi's was the lack of a rotated sampling grid. An ordered (non-rotated) grid was accepted by almost everyone to be the inferior process, but was nVidia "cheating" and ATi not, because one was better than the other? What if rotated sampling came at a bit of a performance penalty--say 5 percent? If an IHV chose the worse-looking method for that performance improvement, are they "cheating" since their competitor doesn't? What if they developed an adaptive algorithm that "looked like rotated sampling" to a huge degree (say, perhaps, it could tell where rotation would make the most visual difference and apply it there, and refrain from it where it wasn't needed) and still kept the performance improvement? It still involves rotated sampling, it still looks exactly (or almost exactly--unnoticed except through extreme close-up analysis) like the superior sampling method, yet comes with extra performance! When does it become "cheating" as opposed to what we've come to expect (and in fact demand) from IHVs over the years?
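
(If the ordered-versus-rotated distinction is abstract, here's a toy sketch--invented sample positions in Python, not any IHV's actual pattern--showing why a rotated grid grades a near-horizontal edge more finely:)

Code:
# Toy sketch: why a rotated sampling grid grades near-horizontal edges
# more finely than an ordered grid. Invented positions, not any IHV's pattern.

ordered = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]
rotated = [(-0.375, -0.125), (0.125, -0.375), (-0.125, 0.375), (0.375, 0.125)]

def coverage_levels(pattern, steps=100):
    # Sweep a horizontal edge (everything below y=e is "covered") through
    # the pixel and count how many distinct coverage fractions can occur.
    levels = set()
    for i in range(steps + 1):
        e = -0.5 + i / steps
        levels.add(sum(1 for (_, y) in pattern if y < e) / len(pattern))
    return sorted(levels)

print("ordered:", coverage_levels(ordered))  # 3 levels: 0, .5, 1
print("rotated:", coverage_levels(rotated))  # 5 levels: 0, .25, .5, .75, 1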

Shift to AF, where this example HAS been in place for years. How many have screamed far and wide that adaptive AF was a "cheat"? Certainly you get complaints about it compared to the competition (just as people complained about worse-looking AA methods; hell, people still sigh wistfully at the Voodoo 5's method over everyone else's. ;) ), and most certainly it is an optimization--a trade-off. Even nVidia's AF method was "somewhat adaptive," but not nearly as angle-dependent as ATi's--though ATi's allowed for higher filtering where they felt it was most needed, and applied less where they felt it wouldn't be noticed to begin with. AF was more hotly debated from an IQ perspective, but still most people preferred nVidia's--and it was certainly closer to a "theoretically pure" filtering method. Yet "cheat" wasn't an applicable word for ATi, and it's not one applied to NV40 either, though it has fallen to using similar methodology. People will lament the choice of one over another--and bitch about not having the option to cycle, say, between NV40's and NV30's AF methods--but they're still not whipping out the "cheat" card. In fact, what if either IHV came up with an "advanced AF algorithm" that changed the levels of adaptation based on situations it determined needed one level over another? If nVidia could give an almost indiscernible quality equivalent of NV30's with nearly the performance improvement of NV40's, is there ANYONE who wouldn't be a fan? Especially since, as a driver-controlled feature, it could be continually tweaked for more quality and more performance in the future?
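
(And for "angle-dependent," a crude mock-up of the idea--the thresholds are completely made up, and this is NOT ATi's or nVidia's actual algorithm:)

Code:
# Crude mock-up of angle-dependent ("adaptive") AF -- invented thresholds,
# NOT any IHV's real algorithm. The idea: spend high AF degrees only on
# surfaces near the angles where filtering pays off most, and quietly
# drop the degree elsewhere.

def effective_af(surface_angle_deg: float, requested: int) -> int:
    # Distance (in degrees) to the nearest multiple of 45
    d = min(surface_angle_deg % 45, 45 - surface_angle_deg % 45)
    if d < 5:        # close to 0/45/90/...: full requested degree
        return requested
    elif d < 15:     # somewhat off-angle: cut the degree in half
        return max(2, requested // 2)
    else:            # far off-angle: minimal anisotropy
        return 2

for angle in (0, 10, 22.5, 30, 45, 60, 90):
    print(angle, "->", effective_af(angle, 16))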

Or would they be cheating?

Right, now back to the fun game of bilinear/trilinear/brilinear/trylinear...

The funny thing is, though this is yet another nitpicking of quality features, no one is really doing serious IQ examinations and showing "look! See!" at how they feel the technique is bad. The difference seems to be so small that it requires bit-subtraction to notice it, or 4x magnification of a few small sections of a screen to notice ANY change at all--and those changes have been basically invisible in actual gameplay.
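
(For anyone who wants to run the bit-subtraction themselves, a minimal sketch using the Pillow imaging library--it assumes two same-sized screenshots on disk, and the filenames are placeholders:)

Code:
# Minimal sketch of the "bit-subtraction" IQ test: per-pixel difference of
# two screenshots, amplified 4x so near-invisible deltas become visible.
# Assumes two same-sized captures on disk; filenames are placeholders.
from PIL import Image, ImageChops

a = Image.open("full_trilinear.png").convert("RGB")
b = Image.open("adaptive_filter.png").convert("RGB")

diff = ImageChops.difference(a, b)                 # absolute per-channel delta
boosted = diff.point(lambda v: min(255, v * 4))    # 4x amplification
boosted.save("difference_x4.png")

# A quick summary number: how far apart are the images at their worst?
print("max per-channel delta:", max(diff.getextrema(), key=lambda e: e[1])[1])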

The arguing here isn't from a "lying" perspective either, as almost everyone agrees ATi should have mentioned their technique earlier and been more up front about it--especially since it was present at the launch of a new architecture--so readers would know and reviewers would be able to properly test it in operation. Nor is it about the "no choice" end, as almost everyone agrees the drivers should let us apply the algorithm or not to whichever programs we want; features like this are things advanced users want control of--to be able to personally test the results in action, if nothing else.

No, the accusations of "cheating" seem to come from the "theoretical" or "moral" angle--which is just plain laughable. The bilinear and trilinear methods themselves were developed to--at a cost--cover glaring quality issues for the gamer. There is no "theoretically pure" method for them, and I doubt the IHVs have come up with methods that are 100% bit-for-bit matches of each other--nor is there a "you must do this" template to aim for; and yet no one is "cheating" on it now. We also know that there are situations where applying trilinear (and various levels of AA, and various levels of AF...) is a waste of time and processing cycles--as the work wouldn't be noticed anyway--but to appease a "theoretical" angle we should waste the time anyway?
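
(For reference: textbook trilinear is just a linear blend between the two nearest mip levels, and the "bri/try" variants narrow the band over which that blend happens. A sketch--illustrative numbers, not any vendor's actual implementation:)

Code:
# Illustrative sketch (not any vendor's actual implementation): textbook
# trilinear blends the two nearest mip levels across the whole LOD range,
# while a "brilinear"-style filter narrows the blend band -- pure bilinear
# near integer LODs, blending only close to the level transition.

def tri_weight(lod: float, band: float = 1.0) -> float:
    """Blend weight toward the next-smaller mip. band=1.0 -> full trilinear;
    smaller bands -> bilinear for most of the range, blend near transitions."""
    f = lod % 1.0                       # fractional position between mip levels
    lo = 0.5 - band / 2
    hi = 0.5 + band / 2
    if f <= lo:
        return 0.0                      # use the higher-detail level only
    if f >= hi:
        return 1.0                      # use the lower-detail level only
    return (f - lo) / band              # linear blend inside the band

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f, "full:", round(tri_weight(f), 2), "narrow:", round(tri_weight(f, 0.4), 2))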

There has never been an "exact blueprint" of any feature that must be obeyed, nor a moral obligation to waste power where it isn't needed. As with everything in life, trade-offs are what we get and what we expect to get. (At least you damn well should, or you're going to be perpetually sour.) There is no "best" to aim for, as what "looks best" to us now isn't what did years ago, and I certainly don't want any IHVs thinking that what we have NOW is all they'll have to work with in the future. Trilinear isn't the "best quality" we'll ever achieve, and it doesn't have to be the "best option" now; these are computers... there are always going to be more efficient processes. And from an IQ perspective, there will always be better trade-offs to make for the end user.

Just one question: if ATi had announced this technique back when they introduced it with the 9600s (with Catalyst 3.4 or whatever), would anyone have reacted to it with "HOLY CRAP U CH34TZ0RZ~!", or would they have been going "hey, cool!" and experimenting with it to the hilt--producing lots of quality comparisons and performance comparisons, and plenty of feedback to ATi on what improvements needed to be made, or what sore spots showed up most? (And would or would not their competition have started working up their own methods for the same thing?) Even had it been a mandatory change instead of a toggle, I imagine everyone would have had reservations, but held their full remarks until a lot of reviews and analyses were out. As with Temporal AA, new enhancements are usually treated with "hey, cool" and a lot of vigorous testing to examine the merits, rather than fire-and-brimstone comments.

We're geeks, what do you expect? :p ;) Is there any one of us who isn't willing to take a barely-noticeable IQ hit once in a long while to get a good performance improvement across the board? (Just how often IS quality affected, and how noticeable is it...? I dunno. That's the trade-off everyone makes all the time, and the one that--shockingly--is usually made FIRST! We really expect this to change?)

People should either brand every IHV that has ever been or is to come a "cheater" now, or learn to analyze the situations properly. Or take umbrage for the right reasons--in this case ATi's handling of the situation (whether you accept their given reasons or not)--not the feature itself, which by all accounts is the kind of thing we ALL want to see the IHVs doing.
 
bloodbob said:
I really doubt this actually happens. ATI haven't provided us with any proof they are doing this. Remember, it's *proprietary* (although if they really did file a patent, we will find out after a while).
Actually, it's not up to them to show you "proof they are doing this"--that's not what any IHV does. They have made comments, and you either accept them at face value, or you find examples where it is NOT the case. Showing contrary evidence (that can be replicated by others and doesn't have its own explanation) is pretty much the way things go.

Since when has ANY IHV sat down and told people the in-and-out specifics of their techniques, rather than giving general explanations, at which point people test what they want with the tools available (sometimes provided by IHVs, but usually all 3rd-party programs) and make the best analyses they can?

Finding the glaring flaws and discussing quality/performance trade-offs is what has ALWAYS been done, whether we're talking about various AA methods, AF methods, or bi/tri/bri/try/WTF... If you're not a part of the process doing the testing, and not examining what has come out and offering your own analyses of the presented information, then what the heck does it matter what you doubt or don't?

The proof of the pudding is in the eating. You don't get much by smelling the aroma of what's cooking next door.
 
Very true; too much chatting and not enough down-to-the-bone testing. So far it looks like ATI's filtering does not negatively affect IQ, but then very few tests have been done across a broad range of games looking for anything significant. If it takes a magnifying glass to see the difference, then it isn't significant as far as I am concerned. Test test test. . ., evaluate evaluate, conclusion. I wish I had a card that could explore these areas :(. No, that isn't a good reason to buy a card either :).
 
cthellis42 said:
Actually, it's not up to them to show you "proof they are doing this"--that's not what any IHV does. They have made comments, and you either accept them at face value, or you find examples where it is NOT the case. Showing contrary evidence (that can be replicated by others and doesn't have its own explanation) is pretty much the way things go.

Since when has ANY IHV sat down and told people the in-and-out specifics of their techniques, rather than giving general explanations, at which point people test what they want with the tools available (sometimes provided by IHVs, but usually all 3rd-party programs) and make the best analyses they can?

Finding the glaring flaws and discussing quality/performance trade-offs is what has ALWAYS been done, whether we're talking about various AA methods, AF methods, or bi/tri/bri/try/WTF... If you're not a part of the process doing the testing, and not examining what has come out and offering your own analyses of the presented information, then what the heck does it matter what you doubt or don't?

The proof of the pudding is in the eating. You don't get much by smelling the aroma of what's cooking next door.

Touché... you get it!

Not sure about anyone else, but I have a trilinear headache from all of this... shouldn't we be playing games or doing something constructive with our time?
 
cthellis42 said:
Someone buy me an X800 Pro... I promise to do a LOT of testing with it! Not play games! Really! ;)

Well, I would have offered if you were going to test it and play games on it, but since you're only going to test it, you can't have one.

Sorry... ;)
 
cthellis42 said:
Finding the glaring flaws and discussing quality/performance trade-offs is what has ALWAYS been done, whether we're talking about various AA methods, AF methods, or bi/tri/bri/try/WTF... If you're not a part of the process doing the testing, and not examining what has come out and offering your own analyses of the presented information, then what the heck does it matter what you doubt or don't?

It's a bit hard when you don't have a frigging card with the cheats.

And yes, I have already been doing my own testing. If you had read the other threads in the 3D Technology and Hardware forum, you would have found one of my threads asking people to test stuff, along with my conclusions.
 
I think we need to drop this for a week or two. Let more people get the cards, and then revisit it when we all cool off.
 
jvd said:
I think we need to drop this for a week or two. Let more people get the cards, and then revisit it when we all cool off.

Well, I don't even need an X800; even a damn 9600 would help me, but all my mates have either NV cards or R3XXs :/
 
bloodbob said:
jvd said:
I think we need to drop this for a week or two. Let more people get the cards, and then revisit it when we all cool off.

Well, I don't even need an X800; even a damn 9600 would help me, but all my mates have either NV cards or R3XXs :/
having an x800pro and a 9600xt, i don't see any differences in the image compared to my 9700pro, but i'm not an expert
 
Sounds like ATI should name their new filtering technique something other than Bilinear or Trilinear. Then anyone can just compare its IQ with whatever the other card/chip offers. Name it SMARTFILTER or something like that, and don't claim anything about it being Trilinear, Brilinear or Bilinear. It sounds like a new filtering algorithm anyway, different from plain trilinear. While it may indeed do trilinear at times, that is really a subset of this new filtering technique from ATI.

Now where are all the tests that show the weaknesses of this new filtering method? Hmmmm, a week, and yet no one has found a weakness? This is starting to make ATI look rather good as time goes on.

Now if one wants apples-to-apples benchmarks, I guess one could turn on colored mipmaps in a number of applications, confirm the filtering, and then benchmark it ;).
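
For those without tools handy, generating a colored mip chain is simple enough. A sketch using Pillow--each level gets a distinct solid color, so any blending between levels shows up as a visible tint gradient; a real test would then upload each level to the card (e.g. one glTexImage2D call per level in OpenGL):

Code:
# Sketch: build a colored mip chain -- each level a distinct solid color,
# so any blend between levels shows up as a visible tint gradient in-game.
# Uses Pillow just to write the levels out; a real test would upload each
# level to the texture (one glTexImage2D call per level in OpenGL).
from PIL import Image

COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
          (255, 255, 0), (255, 0, 255), (0, 255, 255)]

size = 256
level = 0
while size >= 1:
    tint = COLORS[level % len(COLORS)]
    Image.new("RGB", (size, size), tint).save(f"mip_level_{level}.png")
    size //= 2
    level += 1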
 
JVD, do you own the game Mafia? It is a great game for seeing mipmap boundaries when you force AF with the performance setting on an ATI card.

The game normally only allows trilinear filtering, but the Control Panel will force bilinear. Then you will see a huge and ugly difference if you keep the AF at something like 2x. To get full Trilinear/SMARTFILTER with AF (hmmmm), use rTool. The mipmap lines will crawl everywhere in this game when in motion, blatantly.
 
noko said:
Sounds like ATI should name their new filtering technique something other than Bilinear or Trilinear. Then anyone can just compare its IQ with whatever the other card/chip offers. Name it SMARTFILTER or something like that, and don't claim anything about it being Trilinear, Brilinear or Bilinear. It sounds like a new filtering algorithm anyway, different from plain trilinear. While it may indeed do trilinear at times, that is really a subset of this new filtering technique from ATI.

Now where are all the tests that show the weaknesses of this new filtering method? Hmmmm, a week, and yet no one has found a weakness? This is starting to make ATI look rather good as time goes on.

Now if one wants apples-to-apples benchmarks, I guess one could turn on colored mipmaps in a number of applications, confirm the filtering, and then benchmark it ;).
The main problem is that to demonstrate the lack of effectiveness, you generally use colour mip-maps. Now, ATI have stated that they disable it for custom mip-maps--I guess their adaptive algorithm isn't adaptive enough. Anyway, back to the point: the alternate way to test it is with a movie, but uncompressed movies are rather huge, and that causes all sorts of problems trying to distribute them.
 
A 120 by 80 pixel movie may be the ticket for keeping the file size reasonable (just a small area of the screen), with 20-60 frames; even 10 frames can show aliasing or mipmap lines clearly. I don't think this would be too hard to do.
 
noko said:
A 120 by 80 pixel movie may be the ticket for keeping the file size reasonable (just a small area of the screen), with 20-60 frames; even 10 frames can show aliasing or mipmap lines clearly. I don't think this would be too hard to do.
The challenging part would be picking the right situation to stress the video card, making sure that the video was full-resolution, and ensuring compression artifacts don't obscure the rendering.
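
Something like this could sidestep the compression worry: crop the same 120x80 window out of every full-resolution capture and keep the crops lossless. A sketch using Pillow; the directory and filenames are placeholder assumptions:

Code:
# Sketch: crop the same 120x80 window out of a sequence of full-resolution
# screenshots and save the crops losslessly (PNG), sidestepping compression
# artifacts entirely. Directory and filenames are placeholder assumptions.
import glob
import os
from PIL import Image

LEFT, TOP, W, H = 400, 300, 120, 80   # pick a window crossing a mip boundary
os.makedirs("movie", exist_ok=True)

for i, path in enumerate(sorted(glob.glob("captures/frame_*.png"))):
    crop = Image.open(path).crop((LEFT, TOP, LEFT + W, TOP + H))
    crop.save(f"movie/crop_{i:03d}.png")  # lossless; 10-60 of these stay small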
 
noko said:
JVD, do you own the game Mafia? It is a great game for seeing mipmap boundaries when you force AF with the performance setting on an ATI card.

The game normally only allows trilinear filtering, but the Control Panel will force bilinear. Then you will see a huge and ugly difference if you keep the AF at something like 2x. To get full Trilinear/SMARTFILTER with AF (hmmmm), use rTool. The mipmap lines will crawl everywhere in this game when in motion, blatantly.
no, i'm sorry, i do not own it. i'm an rpg fan
 
Best comment I've ever read @ cthellis42

The problem on these discussion boards is the typical fanboyism, which every time destroys the constructive discussion of the feature, its implementation...

How it works, and how useful it is vs. true full AF.
 
MrGaribaldi said:
Wouldn't it be possible to add that into the compiler, so that it recognises when such situations arise and can then deal with them regardless of the game/app?

I'm guessing that it would be very difficult to make a perfect compiler that covers all the corner cases.

But if I understood davepermen correctly, that would fall under the bugfix part of his post.

If the situation only occurs under certain circumstances and the performance is still good when it happens, then I doubt that the developers would call it a bug. And what if the performance is, say, 20% below what it could be? That's definitely not enough to immediately call it a bug, but it's the difference between being 10% faster than your competitor or 10% slower. That's a very big difference for an IHV, unfortunately.
 