ATI - Full Tri Performance Hit

jvd said:
Ante P said:
jvd said:
This is not the case. It just allows you to turn off brilinear. It still only uses trilinear on one texture stage, like ATI.

Not true. With the new drivers it's fully possible to get full trilinear on all texture stages.

I'm using 61.41 right now and it has an option called anisotropic optimization as well as trilinear optimization.
If you turn on High Quality, both these options are set to Off and you get full trilinear on all texture stages, both with and without aniso.

(Still no possibility of turning off angle dependency, though.)

Are you sure?

Yes.

IMAGE
 
trinibwoy said:
Ardrid said:
Isn't it pushing 2x as many pixels?

Nope, it's pushing twice as many pixels in each dimension (horizontal/vertical).

16x12 = 4x8x6 :D

Yeah, I realized that the minute I posted, which is why I deleted the post. You were just too fast for me :)
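
Spelling out the arithmetic for anyone skimming: 800x600 is 480,000 pixels per frame and 1600x1200 is 1,920,000, so doubling each dimension quadruples the pixel count (2 x 2 = 4). That's why the per-frame load is 4x, not 2x.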
 
Yes, but that program is asking for trilinear across all stages. Unreal does not.

Which is what I'm asking.

It obviously works when the program asks for it. That website is saying it doesn't when the program doesn't ask for it.

So if you set the AF test to just trilinear on the first texture stage, will the driver override that and use trilinear on all stages?
 
jvd said:
Yes, but that program is asking for trilinear across all stages. Unreal does not.

Which is what I'm asking.

It obviously works when the program asks for it. That website is saying it doesn't when the program doesn't ask for it.

So if you set the AF test to just trilinear on the first texture stage, will the driver override that and use trilinear on all stages?

Aha, you mean forcing it. Dunno about that. What's the command line for UT2004 to color the various mip map levels? I always forget.
(I also always forget which key brings down the console :) )

Hmm, UT2004 does indeed "ask" for full trilinear on all texture stages by default,
hence the large difference between forcing "quality" AF on ATI's boards and just enabling it through the app.
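
For anyone who hasn't looked at the API side of this: what an app "asking" for trilinear on a given texture stage boils down to in Direct3D 9 is the per-sampler mip filter state. Here is a minimal sketch, assuming the fixed-function sampler-state path; the function names and the device pointer are just for illustration, not anything UT2004 or the AF tester actually contains:

```cpp
#include <d3d9.h>

// Ask for trilinear on every stage: linear filtering *between* mip levels.
// (With AF you would use D3DTEXF_ANISOTROPIC as the min filter and set
// D3DSAMP_MAXANISOTROPY, but the mip filter is what decides bi vs. trilinear.)
void RequestTrilinearAllStages(IDirect3DDevice9* device, DWORD numStages)
{
    for (DWORD stage = 0; stage < numStages; ++stage)
    {
        device->SetSamplerState(stage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    }
}

// Ask for trilinear on stage 0 only; the rest get bilinear
// (point sampling between mip levels).
void RequestTrilinearFirstStageOnly(IDirect3DDevice9* device, DWORD numStages)
{
    device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);

    for (DWORD stage = 1; stage < numStages; ++stage)
    {
        device->SetSamplerState(stage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MIPFILTER, D3DTEXF_POINT);
    }
}
```

Roughly speaking, "forcing" trilinear in the control panel or with a tweaker overrides whatever mip filter value the app set, whereas "application preference" leaves it alone.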
 
Ante P said:
jvd said:
Yes, but that program is asking for trilinear across all stages. Unreal does not.

Which is what I'm asking.

It obviously works when the program asks for it. That website is saying it doesn't when the program doesn't ask for it.

So if you set the AF test to just trilinear on the first texture stage, will the driver override that and use trilinear on all stages?

Aha, you mean forcing it. Dunno about that. What's the command line for UT2004 to color the various mip map levels? I always forget.
(I also always forget which key brings down the console :) )

Hmm, UT2004 does indeed "ask" for full trilinear on all texture stages by default,
hence the large difference between forcing "quality" AF on ATI's boards and just enabling it through the app.

Then what exactly does the update in the article mean?

It's saying that they were too hasty and that the driver works like ATI's driver and only applies tri to the first texture level?

It's pissing me off because it's such a bad translation.

I figured that they were forcing it on all stages on the Radeons through a reg hack, and they thought they were doing it for NVIDIA through NVIDIA's control panel, but it turned out that they needed a third-party program, as UT2K4 only does the first texture stage unless the program asks for it.
 
WaltC said:
Do you really think it might be an object of "scientific curiosity" to discover how unnecessarily slow we can *force* each product to run those APIs and games? Heh....;) I'm not the least bit interested in that...;)

Neither are my customers, it's all about FPS ;)
 
WaltC said:
Do you really think it might be an object of "scientific curiosity" to discover how unnecessarily slow we can *force* each product to run those APIs and games? Heh....;) I'm not the least bit interested in that...;)

First, the numbers from ComputerBase are annotated with something like 'these numbers were obtained in an unrealistic, fillrate-limited situation; it's a worst-case scenario. In a typical game situation the performance loss is more like 5-14%'.

Second, I would like to judge IQ for myself, so I want control. With the raw power of the new generation I don't see the need to go with anything less than the maximum filtering quality possible on a given accelerator in many, if not most, situations.

At least as long as we're going the multisampling AA route, I consider texture-filtering quality a high priority, with a need to improve further.
It's disappointing in this regard that NV40 and R420 offer no improvements but are actually steps backwards.
 
WaltC said:
Do you really think it might be an object of "scientific curiosity" to discover how unnecessarily slow we can *force* each product to run those APIs and games? Heh....;) I'm not the least bit interested in that...;)

Yes your point is well understood but given this forum's premise I would think that FPS would not be the be all and end all of all discussions. If it were a gaming forum or some reseller's review page that's another story. My understanding of filtering methods on 3D hardware has been enhanced by all the press on this ATI trilinear issue. All I am saying is that there is merit in their investigation which you seem to have dismissed completely. ;)
 
Tim said:
hstewarth said:
Very interesting article. From what I can tell from the poor translation, the X800 XT without optimizations is about the same as a default 9600. This might confirm that the X800 is not much more than a slightly upgraded 9800.

The X800 runs at 1600x1200; the 9600 XT only runs at 800x600. The X800 pushes 4 times as many pixels per frame.

The graphs in that article can be misleading, but it would be interesting to do the same thing with the 9800 XT and 6800 at the same resolutions for better accuracy. Still, I believe the X800 is not much more than a slightly upgraded 9800. If it had SM 3.0 support, the story would be different.
 
hstewarth said:
Tim said:
hstewarth said:
Very interesting article. From what I can tell from the poor translation, the X800 XT without optimizations is about the same as a default 9600. This might confirm that the X800 is not much more than a slightly upgraded 9800.

The X800 runs at 1600x1200; the 9600 XT only runs at 800x600. The X800 pushes 4 times as many pixels per frame.

The graphs in that article can be misleading, but it would be interesting to do the same thing with the 9800 XT and 6800 at the same resolutions for better accuracy. Still, I believe the X800 is not much more than a slightly upgraded 9800. If it had SM 3.0 support, the story would be different.

So what is the problem with that?

No one complained about the GeForce 4, which is nothing but a faster GeForce 3 or GeForce 3 Ti.

How about the GeForce 2 Ultra? Nothing but a GeForce 2 Pro clocked faster, which was nothing but a GeForce 2 clocked faster.
 
hstewarth said:
But I still believe the X800 is not much more than a slightly upgraded 9800. If it had SM 3.0 support, the story would be different.
Slightly upgraded? Please tell me you are kidding here. How can a "slightly" upgraded card walk all over the previous one?

Considering that optimizations were turned off on all texture stages, I think the X800 did very well.
 
jvd said:
No one complained about the GeForce 4, which is nothing but a faster GeForce 3 or GeForce 3 Ti.

Well, that's not true.

This forum was full of people ranting about how it sucks that the GF4 doesn't support PS 1.4, that the R8500 is superior because of TruForm support, etc.

(Not to mention the 3DMark bashing because the APS test ran on the GF4 _and_ because it wasn't included in the score.)
 
Hyp-X said:
jvd said:
No one complained about the GeForce 4, which is nothing but a faster GeForce 3 or GeForce 3 Ti.

Well, that's not true.

This forum was full of people ranting about how it sucks that the GF4 doesn't support PS 1.4, that the R8500 is superior because of TruForm support, etc.

(Not to mention the 3DMark bashing because the APS test ran on the GF4 _and_ because it wasn't included in the score.)
Well, TruForm and PS 1.4 made it more advanced.

But almost everyone recommended the GeForce 4 Ti 4200.

Of course, when the 8500 was first released it was the best card (against the Ti 500).
 
jvd said:
Well, TruForm and PS 1.4 made it more advanced.

No argument there.

But almost everyone recommended the GeForce 4 Ti 4200.

Except HB, who insisted that the R8500 was faster ;)

Right, but my answer was to your comment about complaining - not about recommendations.

It's similar now - many complain about the X800 not having PS 3.0 support - but it's still a card of choice to buy.
 
Except HB, who insisted that the R8500 was faster ;)

Right, but my answer was to your comment about complaining - not about recommendations.

It's similar now - many complain about the X800 not having PS 3.0 support - but it's still a card of choice to buy.

Actually, very few complained about the GeForce 4. Most said PS 1.4 wasn't much of a jump or would never be used.

Much like now with SM 3.0.

I think history is going to repeat itself, with NVIDIA being the one with the barely supported shader model.

Which is not a problem, as the 6800 Ultra is still not a bad card. Though I don't think it's worth $500; it should be around the price of the X800 Pro (speed, power requirements, space).

If I were to recommend right now for future games:

$100: 9600 Pro (or wait for the X600s)

$200: 9800 Pro

$300: GeForce 6800 GT

$400: X800 Pro or GeForce 6800 Ultra (if you can get it for that), though I would tell you to save the extra $100, as the jump to the XT is more than worth it

$500: X800 XT

Unless NVIDIA gains a ton of speed in drivers without affecting image quality, I can't say the 6800 Ultra is a great buy at its price.
 
trinibwoy said:
Yes your point is well understood but given this forum's premise I would think that FPS would not be the be all and end all of all discussions. If it were a gaming forum or some reseller's review page that's another story. My understanding of filtering methods on 3D hardware has been enhanced by all the press on this ATI trilinear issue. All I am saying is that there is merit in their investigation which you seem to have dismissed completely. ;)

Well, I don't think that fps is the be-all, end-all, certainly...;) The point is that defeating performance optimizations which don't degrade IQ is more of a dead-end, get-nowhere-fast kind of thing, imo. Again, there are driver performance optimizations of all kinds which do not degrade IQ, and there are those which do. While I can see the benefit to defeating optimizations which clearly degrade IQ, there doesn't seem to be a benefit to defeating those which increase performance without sacrificing IQ. Defeating non-IQ related performance optimizations seems tantamount to deliberately slowing down performance for no good reason at all--which is senseless, I think. The flaw in some thinking here seems to be that all optimizations must degrade IQ, so since ATi has admitted to including a trilinear optimization in its drivers there must be IQ degradation, somewhere, which will become apparent if only we look for it hard enough...;)

To that end people are looking so hard that they are creating silly 2d "movies" in obscenely low res, which skip frames like crazy, suffer from artifacts internal to the conversion process in which lots of pixels are either dropped or distorted or doubled or halved, and which are limited to the exclusive constraints of the 2d encoding and playback programs they use, all for the strange and bizarre purpose (to me) of attempting to find that which they cannot find in screen shots taken directly from actual gameplay, or actually see while playing the game...;)

I predict that they will find bunches of artifacts using such methods--but may not realize (or want to realize) the artifacts have been created by their methods as opposed to reflecting artifacts visible in actual gameplay using these products. That's the problem with setting out to prove an assumption not clearly in evidence: the methods and tests and settings used to demonstrate the hypothesis may be manipulated, consciously or otherwise, to reflect the premise instead of the reality. Can you live with that? If you can, fine, but I can't.

Take the original case of the discovery that nVidia had special-case optimized its drivers to defeat trilinear filtering on detail textures in UT2K3, regardless of control-panel or in-game settings made by the user to the contrary, in which the trilinear filtering of detail textures should have occurred, but didn't. It was discovered by way of visually noticing filtering artifacts while playing UT2K3 that shouldn't have been there if detail textures were being filtered correctly; namely, mipmap boundaries which were visible but shouldn't have been. Screen shots were also easily made which proved the premise--it surely was never necessary to make low-res 2d "movies" to demonstrate what was happening, was it?
It was only after the degradation to IQ was observed while playing the game, and clearly evidenced in direct screen shots, that other tests, like the DX rasterizer, were employed to analyze the issue at a deeper level and to further confirm its root cause as being nVidia's trilinear optimization--which originally affected UT2K3 gameplay and benchmarking and nothing else.

But what happened in the present case? Basically, a web site said, "While we can't detect an IQ degradation in playing these games with our naked eyes, or through an examination of screen shots, use of the DX rasterizer shows some pixel differential that in our opinion ought not be there." Then came ATi's statement about its adaptive trilinear filtering method--and next followed all of these "looking for artifacts in a haystack" efforts based wholly on an assumption that because the optimization exists so must visible artifacts relating to its use also exist. Next, a very interesting statement was made by an unknown M$ employee and was first quoted by Tech Report, and later by THG, plainly stating that the DX rasterizer had not yet been updated to reflect the capabilities of the new generation of 3d gpu/vpus.

This should have immediately caused the original site to *discard* its assumptions about the current DX rasterizer pixel differentials for R420 on the basis of the fact that if the DX9 rasterizer currently in use by M$ did not accurately reflect the rendering capability of nV40, which M$ said is better than its current DX9 rasterizer reflects, then it also was not accurately reflecting similar rendering capability for R420, either (this is actually what the unknown M$ employee said in his statement, even though it was made in an nV40-specific context.) But sadly, such was not the case, and objectivity was thrown out of the window in the quest to find The Artifacts Not Actually There, more or less, and the rest is history.

The basis of the theory seems to be this: if an IHV admits to any kind or type of trilinear filtering optimization then it must be true that visible IQ degradation results. Presumably, the theory is based solely on the fact that IQ degradation was visible when nVidia first used such an optimization for trilinear with nV3x for UT2K3. However, the theory conveniently overlooks the fact that all driver performance optimizations need not produce IQ degradation, and in fact many of them certainly do not; and overlooks the fact that nVidia's own Trilinear optimizations since slipping them into UT2K3 way back then have improved dramatically in terms of visible IQ degradation.

Simply put, it just isn't true that trilinear optimizations must degrade IQ when used correctly or intelligently (as ATi's algorithms attempt to do.) Way back when all of this started relative to nVidia, Kyle B. of [H] made his now infamous proclamation (paraphrased): "If you can't see the cheating it isn't cheating." The only disagreement I had with KB about that then was that it was because we could see the cheating that we objected...;) I think what Kyle might have meant to say at the time was, "If you are not supposed to see the cheating, it isn't cheating." Heh...;) We weren't supposed to take the camera off track in 3dMk03, you see, and if we'd have done what we were supposed to do we'd never have seen the cheating and it wouldn't have been cheating...;) Just doesn't quite fly, though, still, does it?...;)

Anyway, I agree with KB to this extent: if I can't see any visible difference when playing a game between brilinear and trilinear (when I'm playing the game, mind you and not watching a 3d camera on a fixed track), then, yes, I agree with KB that if I can't see the cheating then it isn't cheating, and I'm pleased to accept the performance benefits the optimization provides with no cost to IQ. This is where this issue stands for me at present.
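
As an aside on methodology: the "pixel differential" comparisons being argued about here (hardware output vs. the DX reference rasterizer, or forced vs. default filtering) boil down to something like the sketch below. This is not any particular site's tool, just a generic illustration of the idea; the buffer contents are made up:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct DiffStats {
    size_t differingPixels = 0;  // pixels with any channel mismatch
    int    maxChannelDelta = 0;  // worst single-channel difference (0-255)
};

// Compare two same-sized raw 8-bit RGB buffers pixel by pixel.
DiffStats ComparePixels(const std::vector<uint8_t>& a,
                        const std::vector<uint8_t>& b)
{
    DiffStats stats;
    if (a.size() != b.size() || a.size() % 3 != 0)
        return stats;  // buffers must be same-sized RGB data

    for (size_t i = 0; i < a.size(); i += 3) {
        int dr = std::abs(int(a[i])     - int(b[i]));
        int dg = std::abs(int(a[i + 1]) - int(b[i + 1]));
        int db = std::abs(int(a[i + 2]) - int(b[i + 2]));
        int worst = std::max(dr, std::max(dg, db));
        if (worst > 0) {
            ++stats.differingPixels;
            stats.maxChannelDelta = std::max(stats.maxChannelDelta, worst);
        }
    }
    return stats;
}

int main()
{
    // Tiny synthetic example: two 2x2 RGB "screenshots" differing in one pixel.
    std::vector<uint8_t> refRender = { 10,10,10, 20,20,20, 30,30,30, 40,40,40 };
    std::vector<uint8_t> hwRender  = { 10,10,10, 20,20,20, 30,30,30, 40,40,44 };

    DiffStats s = ComparePixels(refRender, hwRender);
    std::printf("%zu pixel(s) differ, max channel delta %d\n",
                s.differingPixels, s.maxChannelDelta);
    return 0;
}
```

The point of contention upthread is, of course, that a non-zero delta by itself says nothing about whether the difference is visible while actually playing.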
 
jvd said:
Yes, but that program is asking for trilinear across all stages. Unreal does not.

Which is what I'm asking.

It obviously works when the program asks for it. That website is saying it doesn't when the program doesn't ask for it.

So if you set the AF test to just trilinear on the first texture stage, will the driver override that and use trilinear on all stages?
Well, in that case (I don't play Unreal Tournament myself), just look at their application-controlled results and you will have results comparable to what the NV40 did (assuming they didn't set the NV40 to all-trilinear with a tweaker as well), since if you read the text you will see that in app-preference mode the ATI driver will only apply trilinear AF to the stages the app asks for.
 
radar1200gs said:
jvd said:
Yes, but that program is asking for trilinear across all stages. Unreal does not.

Which is what I'm asking.

It obviously works when the program asks for it. That website is saying it doesn't when the program doesn't ask for it.

So if you set the AF test to just trilinear on the first texture stage, will the driver override that and use trilinear on all stages?
Well, in that case (I don't play Unreal Tournament myself), just look at their application-controlled results and you will have results comparable to what the NV40 did (assuming they didn't set the NV40 to all-trilinear with a tweaker as well), since if you read the text you will see that in app-preference mode the ATI driver will only apply trilinear AF to the stages the app asks for.

Sigh. I'm not going to post the update again, but if you read it they say they were too hasty, and that they thought it was applying it to all stages on both cards: the drivers doing it for NVIDIA, and a registry hack on the other.

But they did not see any IQ problems, nor did they want to talk about whether they did or did not, except to say it's up to the user's opinion.

So the unhacked version of ATI's drivers is fine for comparison.

Since there is no IQ decrease compared to NV40 with trilinear enabled.

If you do not like that, then please show me where in that article it states that the image quality was affected due to ATI's method of optimizing, and I will gladly take the second-to-last results and compare them to NVIDIA's trilinear scores. If not, the third-from-last still stands as valid.

It's very simple.
 
Read the graphs, jvd. They tested the ATI cards in several configurations. One of those configurations was app preference, which will yield the same filtering you claim the NV40 was doing (tri AF on the first stage, bi thereafter) unless the app specifies otherwise.

If the app specifies otherwise, then there is no forced slowdown going on - the driver is merely obeying the app.
 
jvd said:
Actually, very few complained about the GeForce 4. Most said PS 1.4 wasn't much of a jump or would never be used.

Much like now with SM 3.0.

I think history is going to repeat itself, with NVIDIA being the one with the barely supported shader model.

I recall something like this from the notes of an ATI presentation: "try to steer people away from dynamic branching at least until R500 shows up with decent performance", or so - I am writing from memory.

So IMHO your statement about graphics manufacturers' planned SM 3.0 support does not accurately reflect reality.
 