ATi is cheating in Filtering

Borsti said:
Well, there's another interesting point. NV40 is losing performance with colored mipmaps in UT2004 as well:

...

all the engine does when you use colored miplevels is replace the color data. Texture format, dimension and all other properties remain unchanged so you shouldn't see a change in performance if everything is handled according to the specification.
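To make the quoted point concrete, here is a minimal sketch (a NumPy model of a texture, not the engine's actual code; build_mip_chain and colorize_mips are made-up names). Colored miplevels only swap the texel data; dimensions, format and level count stay identical, so a spec-conforming sampler has exactly the same work to do either way.

Code:
import numpy as np

def build_mip_chain(base):
    # Box-filter each level down to 1x1, like auto-generated mips.
    chain = [base]
    while chain[-1].shape[0] > 1:
        p = chain[-1].astype(np.uint16)
        chain.append(((p[0::2, 0::2] + p[1::2, 0::2] +
                       p[0::2, 1::2] + p[1::2, 1::2]) // 4).astype(np.uint8))
    return chain

def colorize_mips(chain):
    # Replace each level's texels with a solid debug color.
    # Shape, dtype ("format") and level count are untouched.
    colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
    out = []
    for i, level in enumerate(chain):
        solid = np.empty_like(level)      # same dimensions, same format
        solid[...] = colors[i % len(colors)]
        out.append(solid)
    return out

normal = build_mip_chain(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
colored = colorize_mips(normal)
assert all(a.shape == b.shape and a.dtype == b.dtype
           for a, b in zip(normal, colored))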


I don't think that has to be true.
You don't know whether their filtering algorithms produce different results with different data.

EDIT: oops - my fault
 
This is quite fishy. After a year of very good behaviour from ATi compared to nVidia, I wouldn't have expected them to pull this stunt if it is true. Finally nVidia gave users a choice between real trilinear and brilinear, and now ATi pulls an NV3x. :(
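For anyone fuzzy on the terms: full trilinear blends the two nearest mip levels across the whole fractional LOD range, while "brilinear" stays bilinear (one level only) for most of it and only blends in a narrow band around the transition. Below is a toy model of that weight curve; the real band width is not public, so the 0.25 here is invented.

Code:
def trilinear_weight(f):
    # f = fractional LOD between mip N and N+1.
    # Full trilinear: always blend both levels.
    return f

def brilinear_weight(f, band=0.25):
    # Mostly bilinear: where the weight is pinned to 0 or 1, the
    # second mip level never needs fetching (the bandwidth win).
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if f <= lo:
        return 0.0
    if f >= hi:
        return 1.0
    return (f - lo) / (hi - lo)   # blend only near the seam

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f, trilinear_weight(f), brilinear_weight(f))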
 
certainly looks 'interesting'. :)


Bug (could just be a simple mixup in tex pref settings) or not, it won't look too good for ATI.
 
So DX9/PS2.0/FP24 are all marketing ploys, same for Valve and Gabe's "my miracle DX7 + PS2.0 engine-with-shadows-going-the-wrong-way-from-light-sources runs 87698764% faster on ATI hardware".

Too bad ATI and Valve don't have the gray matter to turn the tables on OpenGL, id Software and Nvidia. Even M$ gave up on that...


It's always nice to watch BIASED ppl hanging from their own ropes :)

So everyone does optimize a bit // such BIG news for BIASED ones..

[]'s
 
meh... this doesn't prove anything
 
micron said:
meh... this doesn't prove anything

This proves 76565653454 million ZEALOTS are plain wrong, and all their arguments are WORTHLESS.

The fact that they are POINTLESS was proved long ago.. especially by multiplayer gamers..

This is all "nerd bliss".

Welcome to the "unknown"

[]'s
 
Well, maybe we will start to see "apples to apples" AF tests from now on...


The disparity of the results in AF tests always shocked me. Maybe ATI has more powerful hardware, maybe its AF algorithm is better... maybe not.
 
I'll wait until Terry or OGL Guy makes an explanatory post before I pass any judgement. I hope it's a good explanation. Either way this is going to be spread around just like Quack all over again.

Can anyone summarize what exactly is the problem here? I'm seeing these color difference shots and I'm not sure what I'm supposed to be seeing. Whatever it is, I understand it's a little sketchy no matter what :)
 
Hehe, I figured something was fishy with ATI's AF; the performance hit with it enabled was a bit too suspicious in my mind, and I can see how you could write a driver to detect colour mip maps and enable full trilinear, certainly not a difficult feat.
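Something along these lines would do it. This is pure speculation, a sketch of one possible heuristic (looks_like_custom_mips is an invented name), not ATI's actual driver logic:

Code:
import numpy as np

def box_downsample(level):
    # What an auto-generated next mip level would look like.
    p = level.astype(np.uint16)
    return ((p[0::2, 0::2] + p[1::2, 0::2] +
             p[0::2, 1::2] + p[1::2, 1::2]) // 4).astype(np.uint8)

def looks_like_custom_mips(chain, tolerance=8):
    # If level N+1 is far from a box-filtered level N, the app
    # probably uploaded hand-made (e.g. colored) mip levels.
    for finer, coarser in zip(chain, chain[1:]):
        expected = box_downsample(finer).astype(np.int16)
        if np.abs(expected - coarser.astype(np.int16)).mean() > tolerance:
            return True
    return False

def pick_filter_mode(chain):
    # Hypothetical decision: colored mips would expose the trick in
    # screenshots, so revert to full trilinear for those and keep
    # the cheap path for everything else.
    return "full trilinear" if looks_like_custom_mips(chain) else "optimized"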

At the moment it seems we can only compare these cards without any AF applied; enabling such things on both cards produces different results, and it's impossible to have an apples-to-apples comparison. I'm really disappointed with both card manufacturers. Each seems to be pulling shifty schemes to win your dollar: hacked drivers, endorsing specific benchmarks tuned to their hardware, paying developers to endorse hardware without any factual proof to back it up, bribing review sites for favourable reviews... the list goes on. This industry is rife with corruption.
 
Once again the vigilantes are out to have the hanging before the trial. Some pretty good sleuths are on the case, and the defendant has not yet had an opportunity to explain his side.

There will be plenty of time a week from now to find a sturdy tree branch to throw a rope over -- there is absolutely no reason to rush to do it now unless you have an interest in seeing the defendant dead before all the facts are in.
 
mozmo said:
Hehe, I figured something was fishy with ATI's AF; the performance hit with it enabled was a bit too suspicious in my mind, and I can see how you could write a driver to detect colour mip maps and enable full trilinear, certainly not a difficult feat.

Given that there is nearly a 9600XT's (or GF4 Ti4200's) worth of fill-rate difference between an X800 XT and 6800U, if you can get cache hits right then it's not unreasonable to assume that the X800 XT would have a lower penalty.
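(Back-of-the-envelope numbers behind that remark, using the commonly quoted core clocks and pipe counts; treat them as approximations:)

Code:
# Theoretical pixel fill rate = core clock (MHz) x pixel pipes.
cards = {"X800 XT": (520, 16), "6800 Ultra": (400, 16), "9600 XT": (500, 4)}
gpix = {name: mhz * pipes / 1000.0 for name, (mhz, pipes) in cards.items()}
print(gpix)  # X800 XT: 8.32, 6800 Ultra: 6.4, 9600 XT: 2.0 (Gpix/s)
print(gpix["X800 XT"] - gpix["6800 Ultra"])  # 1.92, i.e. roughly one 9600 XT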

At the moment it seems we can only compare these cards without any AF applied

This is not about AF, this is about Trilinear implementations.

Anyway, I've just tried the Cat 3.7s with a 9600 and the output is the same, so this has been there for some time.
 
What I would like to see now are benchmarks with brilinear enabled on the GF6800 vs the X800 with disabled texture stage optimization. The X800 might not assrape the GF6800 with AF enabled after all.
 
Given that there is nearly a 9600XT's (or GF4 Ti4200's) worth of fill-rate difference between an X800 XT and 6800U, if you can get cache hits right then it's not unreasonable to assume that the X800 XT would have a lower penalty.

The X800Pro (which has more than a 10% deficit in fillrate to the 6800 Ultra) has just as small a penalty on UT2004 and FarCry when moving from 4xAA to 4xAA/8xAF as the X800XT, so I'm not sure that this argument really holds up.

Interestingly, in the OpenGL game IL-2 Sturmovik, we see some different results. When moving from 4xAA to 4xAA/8xAF, the X800XT's performance decreases by about 20%, the X800Pro's performance decreases by about 30%, and the 6800U's performance decreases by about 15%.

http://www.firingsquad.com/hardware/ati_radeon_x800/page14.asp

http://www.firingsquad.com/hardware/ati_radeon_x800/page15.asp
 
Again, we're looking at AA cases - different architectures will do different things in different applications. Second, NV40 has a pixel shader performance hit when utilising AF; R420 does not.
 
Well, my statement still stands, because the 6800U only has a deficit in fillrate vs the X800XT and not the X800Pro. Also see my comments above about how this situation is different in IL-2, where the R4xx cards see a decrease in performance of 20-30% when enabling 4xAA/8xAF vs 4xAA.

You see only a 3-4 fps difference for the X800XT and X800Pro when moving from 4xAA to 4xAA/8xAF, even up to 1600x1200 and in games such as UT2004 and FarCry, and the most you can say is that "different architectures will do different things"? That doesn't sound very satisfying to me as an explanation of why we are seeing what we see in these two games.
 
You see only a 3-4 fps difference for the X800XT and X800Pro when moving from 4xAA to 4xAA/8xAF, even up to 1600x1200 and in games such as UT2004 and FarCry, and the most you can say is that "different architectures will do different things"? That doesn't sound very satisfying to me as an explanation of why we are seeing what we see in these two games.

I'm saying you need to eliminate as many variables as possible, as different architectures will do different things dependent on how they are constructed and where their particular bottlenecks are in a given game. Comparing the drop for AF with AA is not a good test, because AA can often hide the performance issues associated with AF. Likewise, introducing shaders to a title also has an effect on AF performance, as NV40 takes a shader performance hit and R420 doesn't.

Look for a title that shows a performance drop on the two without AA, just using the fixed-function pipeline, and you'll probably get a clearer picture with other variables removed. For instance, if you compare our two reviews and look at the CoD and SS:SE tests, you'll see that both R420s in fact have a larger AF performance penalty at high res (this isn't 100% comparable, as the R420 is at 16X and the NV40 at 8X; however, the performance drop between those two modes is often fairly small).
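(For anyone redoing the comparison, the drop figures being traded here are just simple percentage math; the fps values below are placeholders, not numbers from either review.)

Code:
def af_penalty(fps_no_af, fps_af):
    # Percent performance lost when AF is switched on.
    return 100.0 * (fps_no_af - fps_af) / fps_no_af

print(af_penalty(80.0, 68.0))  # 15.0, i.e. a "15% AF hit"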
 