ATI - Full Tri Performance Hit

radar1200gs said:
It's the exact same argument fanATIcs used against nVidia and their optimizations. You can't have your cake and eat it too, you know.
And--no, wait for it--"the same" thing nVidiots spent a year and a half forgiving at every turn, but which is now both the supreme cheat and the only concern anyone should have... Am I right? Oh, I'm right, aren't I?

Wow, that game is both fun and completely pointless! I shall regurgitate it wherever I go, I shall!
 
I don't think you will find that many who forgave the behaviour; some may have accepted it. Some of us chose not to purchase nV3x because of it.
 
jvd said:
http://www.beyond3d.com/forum/viewtopic.php?p=302916#302916


As it looks, we were somewhat too hasty. Full trilinear AF is only applied across all texture stages if the application requests it or an external tool is used to force it. AF forced via the driver only filters the first texture stage with trilinear AF; the remaining stages only get bilinear AF - similar to what the ATi driver does if you go without external tools.

From how it reads, it seems they thought the new nVidia drivers gave the option to use trilinear filtering across all texture stages on the GeForce 6800 Ultra and FX series.

This is not the case. It just allows you to turn off brilinear. It still only uses trilinear on one texture stage, like ATi.

They went on to force trilinear on all texture stages on the Radeons, which increased the load on them.

Then they compared those to what they thought were equivalent benchmarks for nVidia (I assume they had benchmarked with these drivers somewhere else).

That is why the update is there.

That is what I gather from the broken English.

Don't know if you've been corrected on that yet, but the article is divided into two parts: one concerned with the leaked nVidia driver and its function, at least enabling trilinear filtering on all stages with AF turned off (after the mistake with AF became known), and the second part solely with ATi.

Even if you just look at the benchmarks, they consist purely of the X800 at 1600x1200 and the R9600XT at 800x600 - neither of which is produced by nVidia, AFAIK.

edit:
If there are more translation issues or things that could be misunderstood, please let me know and I'll try to put them into understandable (albeit not perfect) English.
 
Even if you just look at the benchmarks, they consist purely of the X800 at 1600x1200 and the R9600XT at 800x600 - neither of which is produced by nVidia, AFAIK.


Yes. But I thought they were referring to these X800 benchmarks as being comparable to the 6800 benchmarks done previously. Could be wrong.

Still doesn't explain why this was done.

They show no reason to disable the optimization. What was the reason? Did it lower performance? No, it increased it. Did disabling it improve IQ over leaving the optimization enabled? No, I don't believe so; otherwise they would have told us.

So why these numbers? All they show is how slow you can get an X800 and a 9600 to run if you turn off optimizations which seem to have (at least in UT2k3) no image quality problems.

So what's the point of doing it, I ask again?
 
jvd said:
Then exactly what does the update in the article mean?

The bad translation via online translators is quite annoying, agreed. At least you do not blame the author but the translation - not everyone is so wise. :)

As for your question:

- The update only concerns the leaked nVidia 61.23 drivers. They do allow full-tri AF over all stages, but only if the application requests AF (no matter where and on how many stages), thus complying with the DX rules, where the application decides which kind of filtering each texture stage gets. Some apps, like Aquanox 2 for example, will request tri-AF only on texture stage 0, leaving the others with bi-AF, which is fine as long as the app itself, and thus the developer, decides it. OK, granted, AN2 does look better with real tri-AF forced onto it. ;)
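To illustrate the "application decides" part, here is a rough Direct3D 9 sketch (my own illustration, not from the article or the driver; the 'device' pointer and the choice of 8x AF are just assumptions) of an app requesting trilinear AF on stage 0 and only bilinear AF on stage 1 - the per-stage sampler states a driver is supposed to honour:

// Rough Direct3D 9 sketch (not from the article): the application itself
// decides which filtering each texture stage gets via per-stage sampler states.
#include <d3d9.h>

void RequestAppFiltering(IDirect3DDevice9* device)  // 'device' assumed valid
{
    // Stage 0: anisotropic min filter + linear mip filter = "trilinear AF".
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8);

    // Stage 1: anisotropic min filter + point mip filter = "bilinear AF",
    // as an app like AN2 might choose for its secondary stages.
    device->SetSamplerState(1, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
    device->SetSamplerState(1, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(1, D3DSAMP_MIPFILTER, D3DTEXF_POINT);
    device->SetSamplerState(1, D3DSAMP_MAXANISOTROPY, 8);
}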

- The performance diagrams solely concern ATi's cards. The UT benchmarks were done on the X800 with different levels of AF. RED is what you'll get if you let the driver decide. In UT2003 there is quite visible mip-banding in many detail textures, which only receive bilinear AF this way. GREEN is what you get using application-requested AF (or AF forced via rTool and similar programs).
Bilinear AF (the longer grey bar) can also be forced via rTool, for example, and represents maximum performance, including heavy mip-banding (the same would of course apply to nVidia here), reduced trilinear filtering, a changed threshold within the base texture ("before" Mip1, so to speak) where a higher number of samples is taken, and a slight negative LOD on all further mip boundaries, which saves additional fillrate.

800x600 was chosen for the 9600XT, as it represents one quarter of the pixels to render (480,000 vs. 1,920,000 at 1600x1200), analogous to the 9600XT only having one quarter of the pipelines available to render that quarter of the pixels.

But the diagrams already use English terms, so I do not fully understand what could make them misleading.
 
Okay. I get the deal about the beta drivers.

This is what I don't get:

- The performance diagrams solely concern ATi's cards. The UT benchmarks were done on the X800 with different levels of AF. RED is what you'll get if you let the driver decide. In UT2003 there is quite visible mip-banding in many detail textures, which only receive bilinear AF this way. GREEN is what you get using application-requested AF (or AF forced via rTool and similar programs).
Bilinear AF (the longer grey bar) can also be forced via rTool, for example, and represents maximum performance, including heavy mip-banding (the same would of course apply to nVidia here), reduced trilinear filtering, a changed threshold within the base texture ("before" Mip1, so to speak) where a higher number of samples is taken, and a slight negative LOD on all further mip boundaries, which saves additional fillrate.

I get that the first is bilinear only.

The second is application preference, which means there is no aniso going on, correct?


The third is with aniso on?


And this:
X800 XT PE Full trichloroethylene AF
is that what you get with what the game asks for?

Or does it just apply it over all stages no matter what (since it's a hack in the drivers)?

Also, what is the point of benchmarking this mode if there is no image comparison? They claim it will depend on the user? Why is that - because there is no image difference?

Also, why did they change the LOD?

You mention it, but why did they change it? Did it affect image quality?

It sounds like they are just trying to find worst-case scenarios with the drivers.

(it doesn't help that the article makes no sense through a translator)
 
Oh, OK, I see. Online translation seems worse than I thought when it changes terms which were already in English... :(

Ok, here we go.

First bar (longer grey one) is bilinear AF (real bilinear on all stages with all the other optimizations I mentioned turned on, i.e. maximum performance).

Second bar (red) is what the driver gives you if you use the control panel without third-party tools and/or registry hacks. (This is known as TS-optimized AF, where only the first texture stage gets "trilinear" AF, albeit with all optimizations like brilinear, negative LOD etc. left ON.)

Third bar (shorter grey one) is what you get if you let your application decide (i.e. edit the ut2003.ini - see the snippet below). Normally, this can be forced with third-party tools like rTool and results in "trilinear" AF on all texture stages.

Last bar (green one) is with application AF enabled and the three mentioned registry keys set, so this value is comparable to the degree of image quality the Radeon 9700/9800 delivered if AF was chosen by the application or forced via rTool.
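For what it's worth, the "let the application decide" route in UT2003 usually means editing the [D3DDrv.D3DRenderDevice] section of ut2003.ini; roughly like the following (from memory, so the exact setting names and values may differ between patch levels - check your own ini):

[D3DDrv.D3DRenderDevice]
; Request trilinear mip filtering and 8x AF from within the game itself,
; so the driver sees an application-requested setting rather than a
; control-panel override.
UseTrilinear=True
LevelOfAnisotropy=8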

edit:
jvd said:
Also, what is the point of benchmarking this mode if there is no image comparison? They claim it will depend on the user? Why is that - because there is no image difference?
Also, why did they change the LOD?
You mention it, but why did they change it? Did it affect image quality?
It sounds like they are just trying to find worst-case scenarios with the drivers.
(it doesn't help that the article makes no sense through a translator)
Image quality always depends on the user. As you've seen in some launch reviews where IQ comparisons were made, most people looked at some piece of a 200%-zoomed (still!) screenshot. With brilinear filtering, in some areas you get "sharper" textures, which would be great if that sharpness were not the result of underfiltering, which causes texture shimmering.
Texture shimmering, unfortunately, is not visible on screenshots, and neither are the mip boundaries that are not properly trilinear filtered. You need to see it in motion. Unfortunately, I do not yet see a way to capture a sufficiently detailed video of some scenes.

The LOD was changed back to its default value, which is used, for example, in the (non-optimized) AF testers and was used by the whole Radeon 9700/9800 generation. LOD is not a criterion you can play around with at will; reducing LOD without applying suitably better texture filtering results in texture shimmering and aliasing.
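As a rough illustration of why LOD is not a free parameter (my own sketch, not from the article; the function name and the isotropic simplification are mine): the mip level follows from the screen-space texel footprint, and a negative bias pushes sampling past the Nyquist limit, i.e. more than one texel per pixel, which shows up as shimmering once things move.

// Simplified, isotropic sketch of mip-level selection and LOD bias.
// Real hardware uses full per-pixel derivative math; this is only meant to
// show why a negative bias means underfiltering.
#include <algorithm>
#include <cmath>

float MipLevel(float texelsPerPixel, float lodBias)
{
    // Nyquist-style rule: choose the mip where at most ~1 texel covers 1 pixel,
    // i.e. lambda = log2(texels per pixel) for footprints larger than a texel.
    float lambda = std::log2(std::max(texelsPerPixel, 1.0f));

    // A negative bias selects a more detailed mip than the footprint justifies,
    // so several texels fall under one pixel -> aliasing/shimmering in motion.
    return std::max(lambda + lodBias, 0.0f);
}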
And sorry to say it, but Radeons especially should, of all cards, not change anything remotely connected with texture artifacts in the "wrong direction", because that area, as opposed to anti-aliasing or power consumption, is definitely not one of their strengths.

Again, this is barely conveyable using screenshots only.

And no, I do not think ATi is right to claim that their optimizations do not reduce IQ unless proven otherwise. They "changed" the use of mathematically defined formulas and principles in their products, so they should try to prove how the mathematicians are wrong.
 
Well, last questions:

Last bar (green one) is with application AF enabled and the three mentioned registry keys set, so this value is comparable to the degree of image quality the Radeon 9700/9800 delivered if AF was chosen by the application or forced via rTool.

Why is there no proof showing that the quality is the same?

All the settings are now equal, but is the image quality the same?

Third bar (shorter grey one) is what you get if you let your application decide (i.e. edit the ut2003.ini). Normally, this can be forced with third-party tools like rTool and results in "trilinear" AF on all texture stages.

How does this compare to the 9700s? Is the image quality the same?


I'm still asking why this was done if they can't show image quality problems with the X800s.

(thanks for helping me understand the translation)
 
WaltC said:
My primary question is: what's the purpose of disabling optimizations which you cannot demonstrate to cause IQ degradation? Doing so would seem to me to be tantamount to crippling the drivers, as the only reason to want to disable an optimization in the first place is because it clearly degrades IQ. Apart from that, I can find no reason to desire to disable them.

Dare I say "apples to apples, oranges to oranges"? This forum has only been going on about it for the last XXX months.

ATi made a bad decision a long time back. Rather than tout the merits of what seems a very nice optimisation that allowed higher frame rates, someone decided it would be better to claim it was still unoptimised and beat nVidia in "apples to apples" tests by a larger margin (maybe they thought the reviewers would turn it off if they told them, and then people would not see the benefit). They even went to the lengths of telling people to level the playing field by running non-optimised filtering in tests.

Actually, instead of apples to apples, ATi had slipped everyone a big banana. And they knew it and kept quiet until it came out in the wash. Then they claim the patent thing...


So that's why they should turn it off now, so it is apples to apples again: because if you do not, and you allow testers to select which is a good optimisation and which is not, then you are testing people's minds and viewpoints and not the hardware.

The best thing to do is to test with and without optimisations and show screenshots, so people know the "pure" result and can then work out which "unpure" trade-off they want to select.
 
jvd,
please see my edited bits regarding your question. It took a little while to write since, as you might guess, I am not a native English speaker.

The IQ for the green bar is the same as what the Radeon 9700/9800 products delivered, yes.

p.s.:
you're welcome :)
 
opts.gif


(Dunno if it was posted already.) Optimisation info for the new ForceWare.

I think it's time for ATI to stop thinking that all their customers have some kind of eye defect and won't see their trilinear optimisation (including the really old one with bilinear on >0 texture stages). Give us a way to control it!!! (Like NV just did ;))
 
Quasar said:
jvd,
please see my edited bits regarding your question. It took a little while to write since, as you might guess, I am not a native English speaker.

The IQ for the green bar is the same as what the Radeon 9700/9800 products delivered, yes.

p.s.:
you're welcome :)

I know that's what they are saying.

But is it true? :)

They say

Whether and how strongly this affects image quality is something we deliberately do not want to comment on, since in-game screenshots hardly allow for conclusive statements here, and individual perception also plays a role.

Doesn't sound like the images back up what they are saying. It sounds like the images aren't convincing enough to show the difference with this extremely slow forced version of trilinear on the X800s. :)

If they can't show in screenshots (or movies) that the image quality has increased by going from trylinear AF on the X800 to this increased LOD with trilinear forced on all texture stages, then what is the point? If the image quality is so close that you can't see a difference, and in the end it's personal feelings and bias that decide whether it looks the same or worse, what is the point?

Thank you again. *BTW, that is how the translator rendered the whole article.*
 
jvd,

I think we are in the same place as in the discussion regarding the first ComputerBase article on X800 texture filtering, where two UT2003 screenshots in PNG format were shown.

First some people said they thought those looked identical; then some people made a difference image between the two, which of course resulted in the groundbreaking discovery that different architectures have bit-wise differences all over the place.
Then it was pointed out where to look for the differences you could see with the naked eye (and without 200% zoom applied). Then some people defended their position of not seeing any difference, while others denied that this difference would be visible "in-game" and thus considered it a valid optimization.

I don't know about you, but I think it's of no use to spin that old carousel again.

And on this matter I agree with you that it's debatable where valid optimization ends and "invalid" optimization begins - but on a highly subjective level.

*shudder_about_online_translation_quality*
 
Yes, I remember those pictures.


Thanks for clearing it up.

Going to be a long product cycle, that's for damn sure.
 
Just one note... is the performance hit when enabling full trilinear via rTool the same in other games/engines?

Afaik UT2003/2004 is the worst case.
 
WaltC said:
The kind of approach they've consistently demonstrated seems to me an inductive process: they first saw a difference in DX rasterizer results and improperly concluded it meant a degradation in IQ which they have improperly concluded was caused by ATi's automatic trilinear optimizing algorithms (when we have an unknown employee quoted verbatim on Tech Report and THG saying M$ hasn't yet updated its DX rasterizer software for the newest generation of 3d hardware.)

So, they started with the idea that IQ was degraded by the optimizations before proving such IQ degradation actually existed, and they have been working from the same inductive premise ever since--namely, that a Trilinear optimization must cause IQ degradation.

Not quite, WaltC.
First, there was not a single DX RefRast screenshot or anything similar used, not even "standards" from another IHV - it was purely ATi-internal.
Secondly, as I've told jvd before, the same thing regarding IQ was brought up during the first round of discussion, with responses ranging from "dunno" through "don't care" to "don't see" - there is no point in "proving" things people can argue about endlessly, especially if they do not want to see.
Additionally, most people might think that on still shots "sharper" textures automatically mean better filtering - dead wrong, if they make a general rule out of it.

If you think there is no IQ degradation, then please be so kind as to explain to us why the Nyquist criterion used so far (and not only by ATi) for determining the correct LOD (up to the edge of apparent texture shimmering) is no longer valid with the emergence of the R420 chips?
 
crusher_pt said:
Just one note... is the performance hit when enabling full trilinear via rTool the same in other games/engines?

Afaik UT2003/2004 is the worst case.

Depends on how near you come to the border of being fillrate-limited. In most flight sims, disabling the optimizations would yield no lower frame rates, I guess.
 
Quasar said:
If you think there is no IQ degradation, then please be so kind as to explain to us why the Nyquist criterion used so far (and not only by ATi) for determining the correct LOD (up to the edge of apparent texture shimmering) is no longer valid with the emergence of the R420 chips?

Possibly for the same reason M$ says the current DX rasterizer isn't really valid for nV40? What is it about nV40 which would cause that to happen, do you think? And, if M$ hasn't updated its rasterizer to support the capabilities of nV40, because nV40 is capable of producing a better image than nV3x, then might not the same thing hold true for the current DX rasterizer and R420? Perhaps you can answer your own question by pondering that a bit.

Then, too, there's this issue: despite the wishful thinking of some, R4x0 is indeed a new architecture for ATi, and so we ought to expect many things to improve over the next few months in terms of Driver Maturity--another concept worth pondering here.

The initial reports I read when this "story" broke had to do with using the DX9 rasterizer, finding small differences in pixels, and making conclusions based on those pixel differences. I believe Dave B. here at B3d ran the same tests and got similar results. It was at that point that ATi began publicly talking about its adaptive trilinear optimization. Only then did the Search for IQ Degradation begin...;)

Last, can I take it from your comments here that, since you apparently can't find any visible mipmap boundaries relative to ATi's trilinear optimization, you have shifted the focus to The Search For Shimmering Textures? ...:D
 
If driver maturity is meant to be an issue, I'd say ATi would have had to go for the less complicated and more proven trilinear & AF style of the R300, not a complex, twitchy optimized implementation.
 
jvd said:
I recall nVidia saying the same thing about not using dynamic branching with the 6800s. I will try to bring up the quote.

You missed my point. What I mean is that ATI itself admits its intention of supporting SM 3.0. That, coupled with NVIDIA actually supporting SM 3.0 and the ease of development it gives, leads to the conclusion that SM 3.0 will be widely used.

rgrds
 