Ways to address the complaints about ATI's filtering?

Ostsol

Veteran
Easiest way I can think of is for ATI's texture quality slider to force full trilinear on everything when it is set to full. One notch down, and the optimizations are put in place. Further notches down, do whatever else they've already been doing. Default the drivers to the second notch from maximum. That way the casual gamer will be able to get the best performance with arguably just-as-good quality, while the hardcore users can tweak all they want to get what they want. Also, full documentation of each setting and what it does would be helpful.
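In pseudo-Python, the mapping I have in mind would look something like this (the notch numbers and mode names are just my own labels, not anything from ATI's drivers):

```python
# Sketch of the proposed slider mapping. The notch numbers and mode
# names are my own invention, not ATI's actual driver logic.
SLIDER_MODES = {
    4: "full trilinear",         # maximum: force full trilinear on everything
    3: "adaptive trilinear",     # the optimizations they already ship
    2: "further optimizations",  # whatever else they've been doing
    1: "bilinear only",
}
DEFAULT_NOTCH = 3  # ship at one notch below maximum

def filtering_mode(notch: int) -> str:
    """Clamp to a valid notch and return the filtering mode it selects."""
    notch = min(max(notch, min(SLIDER_MODES)), max(SLIDER_MODES))
    return SLIDER_MODES[notch]
```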

Would anyone complain about that?
 
Not at all; I'd actually like it. I think they would put it in the next Cats if enough people mentioned it; they can make the claim that with the new X800s, the optimizations aren't necessary in most games and can be disabled without the card running poorly.
 
I'd be an awful lot more comfortable if they had an option to turn it on/off, if that's feasible. I don't really care how it's done.
 
Ideally, I would like to see the algorithm they are using. That would alleviate any concern about it. It still seems possible to me that their optimization just uses some clever arithmetic or bit manipulation that takes advantage of the relationship between the two mip levels, producing results that differ from the conventional calculation only by an acceptable tolerance. Alternatively, it may show that the degree of error is too large. But we just can't know without having the algorithm to look at...
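For reference, here's what I mean by the conventional calculation, plus one guess at the kind of shortcut they might be taking. The narrowed blend band below is pure speculation on my part, not ATI's patented algorithm:

```python
def trilinear(sample_bilinear, lod):
    """Conventional trilinear: blend bilinear samples from the two
    nearest mip levels, weighted by the fractional part of the LOD."""
    base = int(lod)    # finer of the two mip levels
    frac = lod - base  # blend weight toward the coarser level
    return (1 - frac) * sample_bilinear(base) + frac * sample_bilinear(base + 1)

def adaptive_trilinear(sample_bilinear, lod, band=0.5):
    """Speculative 'brilinear'-style shortcut: only blend within a
    narrow band around the mip transition; outside it, take a single
    bilinear sample. band=1.0 degenerates to full trilinear."""
    base = int(lod)
    frac = lod - base
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if frac <= lo:
        return sample_bilinear(base)      # pure bilinear, finer level
    if frac >= hi:
        return sample_bilinear(base + 1)  # pure bilinear, coarser level
    w = (frac - lo) / (hi - lo)           # steeper ramp across the band
    return (1 - w) * sample_bilinear(base) + w * sample_bilinear(base + 1)
```

With something like that, the deviation from full trilinear is bounded by how far the ramped weight strays from the true fractional LOD, which is exactly the tolerance I'd want to see quantified.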

Of course, since it's patent pending, I don't really expect to see that algorithm that soon...
 
Why the heck would you need to see the algorithm itself? That is highly confidential, lol. Are you an expert in 3D graphics? Is it some sort of competition and you are the judge? Sorry, but I think that is just a really odd thing to ask for. Can you point to the "conventional algorithm"? Since ATI explained (and surely Nvidia does as well) that they improve it with each hardware release and various driver releases, how can anyone decide which algorithm is the best one to use? Since it is all subjective, others may have a different opinion, thus solving nothing. Unless you then want all possible algorithms listed in a list box so you can choose which one to use? :)

Sorry for forgetting to turn off my sarcasm before writing the above post. :) You can't mathematically determine whether "the degree of error" is too big or too small; it is all subjective...
 
Simple: ATI needs to offer a full trilinear option in the drivers so we can retest and get a real idea of the performance of the part. Brilinear/adaptive trilinear are approximations to real trilinear and should be benched on different terms.

At the end of the day, it appears ATI just kept quiet about this to look faster, because really the NV40 and X800 are pretty much on par until you bring AF into the picture. To get a true picture of how good the AF is, we need to bench the card with full trilinear enabled for an honest apples-to-apples comparison.
 
mozmo said:
At the end of the day, it appears ATI just kept quiet about this to look faster, because really the NV40 and X800 are pretty much on par until you bring AF into the picture. To get a true picture of how good the AF is, we need to bench the card with full trilinear enabled for an honest apples-to-apples comparison.

Without the faster AF speed, the x800PRO/XT are harder sells IMO. I mean, if the x800PRO/XT were not significantly faster than the 6800 in any area, wouldn't most who don't prefer one brand or another go for the 6800 for the Shader Model 3.0 and hardware video encode/decode features?

People are questioning why ATI would do this on a high end part - I think if they didn't it would make their job of selling x800s a lot more difficult. And if they enable full trilinear now and the 6800 is just as fast, it could create even more problems for them IMO.

Edit: a problem in sales, that is; it would fix the problem of people complaining :)
 
I'd solve the complaining by stapling people's mouths shut ;)

But I guess we could settle for an option in the drivers to turn off any optimizations :)
 
Sandman said:
I'd solve the complaining by stapling people's mouths shut ;)

But I guess we could settle for an option in the drivers to turn off any optimizations :)
No, after looking at screenshots and playing games on a 9600 & 9700 for a while I think I want to explore the first option more. :devilish:

It's really looking like a big deal about nothing; the image quality is so damned close that I'm really having a hard time noticing any difference.

No harm, no foul. :p
 
They don't have to do anything in the CP, but it would be nice if a reg setting kinda slipped out... And to really stop the complaining, ask DCS (DSC?) what will make him stop trolling from the nVidia network.
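Something as simple as this would do. To be clear, the key and value names below are completely made up, since nothing of the sort is documented:

```python
import winreg  # Windows-only standard library module

# Entirely hypothetical key and value names -- ATI has documented no such
# setting; this only illustrates the kind of hidden toggle being wished for.
KEY_PATH = r"SOFTWARE\ATI Technologies\HypotheticalFilteringTweaks"
VALUE = "DisableTrilinearOptimization"

def set_full_trilinear(enabled: bool) -> None:
    """Write a DWORD under HKLM (needs admin rights); the driver would
    then have to read it when it initializes."""
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        winreg.SetValueEx(key, VALUE, 0, winreg.REG_DWORD, int(enabled))
```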
 
Hmm... I tend to think that hidden registry settings are best for untested features that aren't necessarily well optimized in the drivers yet.
 
Ruined said:
mozmo said:
At the end of the day, it appears ATI just kept quiet about this to look faster, because really the NV40 and X800 are pretty much on par until you bring AF into the picture. To get a true picture of how good the AF is, we need to bench the card with full trilinear enabled for an honest apples-to-apples comparison.

Without the faster AF speed, the x800PRO/XT are harder sells IMO. I mean, if the x800PRO/XT were not significantly faster than the 6800 in any area, wouldn't most who don't prefer one brand or another go for the 6800 for the Shader Model 3.0 and hardware video encode/decode features?

People are questioning why ATI would do this on a high end part - I think if they didn't it would make their job of selling x800s a lot more difficult. And if they enable full trilinear now and the 6800 is just as fast, it could create even more problems for them IMO.

Edit: a problem in sales, that is; it would fix the problem of people complaining :)

How would it make selling it harder? It appears there is no IQ difference, and if you can get faster AF with the same IQ, who would not want it? As for the 6800 and SM3.0? It is not proven it can even run it yet. Then we still have the IQ issues with FarCry... I would not even think of getting a current NV40 card until both of those issues are straightened out. This issue with ATI and this filtering, if it is proven the IQ is the same, is a plus, not a minus for me. Why enable full tri filtering at a cost in performance if you're going to get the same IQ? :rolleyes:

If people want it to be disabled in the CP, though, I am not against that. But I think it would cause unbalanced reviews: only if you compare IQ between NV's full tri and ATI's new filtering and see whether they are different would it be a valid comparison, IMHO. Unfortunately, there are too many fanboy sites out there that will take advantage of this.
 
Keep the CP simple; the vast majority of users have no clue. But for the few who need to test and try out different theoretical situations, a reg setting would be fine. Then again, I'm one of the few...
 
Bry said:
How would it make selling it harder? It appears there is no IQ difference, and if you can get faster AF with the same IQ, who would not want it?

What I'm saying is that if they take out the optimization, it will for the most part have little to no significant performance advantage over Nvidia, and it will have fewer features, too. That would make it a harder sell IMO, and is a reason why ATI might have opted to keep the optimization in for the x800.

As for the 6800 and SM3.0? It is not proven it can even run it yet.

There are threads at nvnews and EB where numerous people played the SM3.0 FarCry expansion on the 6800 at E3. A video was even posted from E3, so yeah, it has been proven that it can run it.

Then we still have the IQ issues with FarCry... I would not even think of getting a current NV40 card until both of those issues are straightened out. This issue with ATI and this filtering, if it is proven the IQ is the same, is a plus, not a minus for me. Why enable full tri filtering at a cost in performance if you're going to get the same IQ? :rolleyes:

Check out the FarCry SM3.0 E3 video jakup posted at nvnews. The IQ is amazing, much better than anything we've seen with SM2.0 FarCry to date.

If people want it to be disabled in the CP, though, I am not against that. But I think it would cause unbalanced reviews: only if you compare IQ between NV's full tri and ATI's new filtering and see whether they are different would it be a valid comparison, IMHO. Unfortunately, there are too many <bleep> sites out there that will take advantage of this.

Sites should have the opportunity to use full trilinear, as should users, IMO. That being said, I don't think this optimization is a big deal.
 
Afraid of the truth? If ATI really has any merit, they should allow full trilinear so that sites can retest and rebench. But since you'll never ever get full trilinear again on the newer R4xx and RV4xx cards... enjoy your new "trylinear" filtering then.
 
DSC said:
Afraid of the truth? If ATI really has any merit, they should allow full trilinear so that sites can retest and rebench. But since you'll never ever get full trilinear again... enjoy your new "trylinear" filtering then.
I'm not talking about truth or lies. I'm talking about the value of your posts in this discussion. That value is zero, which equates those posts to nothing but trolling. In any case, this thread is about how ATI can provide quality options in their control panel. Discuss whether the optimization is a cheat or not in another thread.
 
DSC, this is a technical forum. Try to keep at least the majority of your posts technical in nature, or do us a favor and post elsewhere. You can come up with (not so) clever names in the 3DGC&I forum, if you must.

Ruined said:
Check out the FarCry SM3.0 E3 video jakup posted at nvnews. The IQ is amazing, much better than anything we've seen with SM2.0 FarCry to date.
That Far Cry mod Jakup saw was probably the same one debuted at nV's 6800U launch, and Crytek's rep repeatedly said it was SM2.0 *and* 3.0. I don't think you'll see new effects with a 6800U in the FC Mod, but you may see better performance due to reduced passes or shading (with a SM3.0 compile of SM2.0-level shaders). This isn't to dismiss the merits or importance of SM3.0, just to clarify.
 
DSC, I think it's time for you to stop your trolling. It serves no purpose other than to show that you're totally biased and very unknowledgeable. You've given absolutely no reason to believe anything you say. If you can't add to a discussion, then keep your mouth shut...
 