radar1200gs
Regular
beyondhelp said:
Christ! That story was the final straw. I have to vent... not ONE peep out of them when Nvidia was doing all their driver hacks and filtering optimizations, and now all of a sudden they pick up the story because ATI has been accused of cheating (which I think is a rather harsh judgement of what they're actually doing). ATI has been about saving bandwidth when they can for some time: Hyper-Z, Hierarchical Z, adaptive anisotropic... So why should adaptive trilinear surprise anybody? It fits right in with all the other bandwidth-saving features ATI has implemented over the last couple of years. So why should it be considered a cheat? Because they didn't tell you what they were doing? Why broadcast a new filtering technique to your competitors? I would have kept my mouth shut about it too.
As for all the bit comparisons and screenshots making the rounds: I couldn't care less about numerical precision and differences when screenshots are blown up 1000%. When full trilinear is needed (or, seemingly, if there's any doubt) it is used; when it isn't needed, they don't waste bandwidth on it. Seems pretty smart on ATI's part to me. Subtle, neat, a rather elegant optimization on their part imo.
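Roughly, the idea is something like the sketch below. This is my own illustration, not ATI's actual algorithm: the real decision happens per pixel in hardware, the SampleBilinear stub and the Color struct are stand-ins, and the 0.05 threshold is invented.

#include <cmath>
#include <cstdio>

struct Color { float r, g, b; };

// Stand-in for a real bilinear fetch from one mip level: just returns a
// grey that darkens with the mip level so the example actually runs.
Color SampleBilinear(int mip, float u, float v)
{
    (void)u; (void)v;                 // unused in this stub
    float g = 1.0f / (1 << mip);
    return { g, g, g };
}

// Hypothetical adaptive trilinear: only blend two mip levels when the
// sample actually falls between them; otherwise one bilinear fetch is
// visually identical and costs half the texture bandwidth.
Color SampleAdaptiveTrilinear(float lod, float u, float v)
{
    const float threshold = 0.05f;    // invented for illustration
    int   base = (int)std::floor(lod);
    float frac = lod - base;          // position between mip base and base+1

    if (frac < threshold)        return SampleBilinear(base, u, v);
    if (frac > 1.0f - threshold) return SampleBilinear(base + 1, u, v);

    // Transition band: do the full trilinear blend of two mips.
    Color a = SampleBilinear(base, u, v);
    Color b = SampleBilinear(base + 1, u, v);
    return { a.r + frac * (b.r - a.r),
             a.g + frac * (b.g - a.g),
             a.b + frac * (b.b - a.b) };
}

int main()
{
    // lod = 2.01: close enough to mip 2 that a single fetch suffices.
    Color c = SampleAdaptiveTrilinear(2.01f, 0.5f, 0.5f);
    std::printf("%.3f %.3f %.3f\n", c.r, c.g, c.b);
    return 0;
}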
I'd kindly like to invite all the anal-retentive yammerheads who have been going on and on about this to take a flying leap. (Oh, and I'd like to ask when Nvidia is actually going to let you turn on full trilinear instead of just paying lip service to the option. And WHY, if the 6800 is so fine, is Nvidia still using _pp hints with it? I thought it was this all-powerful full-32-bit-precision powerhouse -lol-)
The only thing ATI SHOULD have done was let all the anal-retentive yammerheads turn it off if they wanted, just to shut them up.
On topic: the article is pretty good when it comes to explaining filtering in general, but its overall tone implies that ATI is cheating, and I don't see it that way. Sorry. Equating what ATI is doing now with what Nvidia has been doing for nearly two years is a joke (imho, of course). Go ahead... flame away!
Well, to paraphrase you:
nVidia has been about saving bandwidth when they can for some time
Partial precision is part of the DX9 specification, and it's a bandwidth-saving feature too. If ATi is so into bandwidth conservation, why don't they use it as well?
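For anyone wondering what _pp actually costs: it asks the hardware to run at FP16 (10 mantissa bits) instead of FP32 (23). Here's a crude C++ sketch of the precision drop, purely my own illustration; it just masks mantissa bits and ignores FP16's narrower exponent range, rounding mode, and denormals.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Approximate an FP32 -> FP16 -> FP32 round trip by keeping only the
// top 10 of FP32's 23 mantissa bits. This only shows the precision
// loss, not the range limits of a real half float.
float TruncateToHalfPrecision(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= 0xFFFFE000u;              // clear the low 13 mantissa bits
    std::memcpy(&x, &bits, sizeof bits);
    return x;
}

int main()
{
    float v = 0.123456789f;
    std::printf("full precision:    %.9f\n", v);
    std::printf("partial precision: %.9f\n", TruncateToHalfPrecision(v));
    // For colour math the difference is usually invisible, which is why
    // _pp is tempting; for texture coordinates or long dependent chains
    // it can show up as banding or shimmering.
    return 0;
}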