ATi is Cheating in Filtering

I should have paid better attention to what you were saying in that original 9600 thread... I wouldn't be so shocked about what I'm reading here today.
Thanks for uploading those pics.
 
Isn't it already well established that there are some legitimate trilinear optimizations possible in hardware on mipmaps that are generated in the "normal" way (i.e., bilinear down-sampling the top level to generate the lower-detail levels)?

If I remember right, there is already plenty of precedent dating back to 1998 (the S3 Savage3D or Savage4 was the first) where you get lower performance with "non-standard" mipmaps like colored mips or mips generated with any other technique. I also remember reading here that the DeltaChrome may be doing this too.
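For what it's worth, here is a minimal sketch in Python (the function names are invented for illustration; this is not anyone's actual driver logic) of how a driver could distinguish a "standard" box-filtered mip chain, where a trilinear shortcut is arguably legitimate, from colored or hand-authored mips that would expose it:

```python
import numpy as np

def box_downsample(level):
    # Average each 2x2 texel block: the "normal" way to build the next mip.
    # Assumes power-of-two level dimensions for simplicity.
    h, w = level.shape[:2]
    return level.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def looks_like_standard_chain(mips, tolerance=2.0):
    # Heuristic: the chain counts as "standard" if every level is close to
    # a box-filtered copy of the level above it. Colored mipmaps fail this.
    for upper, lower in zip(mips, mips[1:]):
        if np.abs(box_downsample(upper) - lower).mean() > tolerance:
            return False  # non-standard mips: fall back to full trilinear
    return True  # levels are near-copies: a cheaper blend may be defensible
```

If something like this is going on, it would line up with exactly the old Savage-era symptom: colored mips hit the slow path while ordinary game textures don't.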

Unless I missed it, I find it surprising that this thread has gotten this long and yet none of the Beyond3D gurus have pointed this out...

I wouldn't necessarily jump to any conclusions that ATI is "concealing" anything.


On a related note, it is very plausible that in the future we may see performance vary dramatically based on the content of a texture... Imagine a future lossless compression algorithm: a texture with a low-frequency black-and-white checkerboard pattern may run much faster than some high-frequency image. In fact, all credit to ATI if they are already doing things along these lines.
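To make that concrete, here is a toy Python example (a naive run-length encoder, purely for illustration) showing how content alone can swing the compressed size, and therefore the bandwidth cost, by an order of magnitude:

```python
import itertools
import random

def rle_size(texels):
    # Byte count of a naive run-length encoding: one (value, length) pair per run.
    return 2 * sum(1 for _ in itertools.groupby(texels))

# Low-frequency checkerboard row: long runs of identical texels compress well.
checker = [0 if (i // 32) % 2 == 0 else 255 for i in range(1024)]
# High-frequency noise row: almost every run has length 1.
noise = [random.randrange(256) for _ in range(1024)]

print(rle_size(checker))  # 64 bytes for 1024 texels
print(rle_size(noise))    # roughly 2040 bytes for the same 1024 texels
```

Real texture compression is far more sophisticated, but the principle, that low-frequency content is cheaper to move around, is the same.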
 
Bob3D said:
fallguy said:
Does anyone else find Deco-Rj's post hard to understand, and pretty much useless?

Looks like another nvidiot.
Deco-Rj, what's your vid card?

No man, really, call me an "optimized idiot" if you will.

The funny thing?

I'm actually defending ATI as much as I ever stuck up for Nvidia on those issues.

Are ATI's AF "hat tricks" noticeable in real-life gameplay?


no..

I'm speaking out as a gamer here. I love Nvidia cards, but I'd never EVER nitpick ATi for their BRILLIANT approach to AF/IQ.

They suck for putting out R42x hardware that they know will be replaced by SM3.0 hardware as soon as ATi/TSMC can deliver.

They SUCK for throwing pennies at Valve.

But their FU**ING ANISO approach is FINE; the fps vs. IQ trade-off is GREAT...

I'm an NVIDIOT, OK, but ATi rocks on filter OPTIMIZATIONS. They benefit everyone:

Image quality can't be measured by rasterizers and screenshots, but a 20-30% fps difference is sure appreciated.

*yes, English is not my 1st language



[]'s
 
Oh good Grief... :rolleyes:

Then I suppose Nvidia sucks 10x worse because of all the money they throw at id and Epic, and the STALKER developer, and EA, and the Tomb Raider publisher, and the list could go on and on and on.

Did Nvidia suck for the entire NV3x line then as well?

It is also a little irritating how it can be explained as plain as day that Nvidia is losing performance because of the way they share the load on ALU1 in certain situations while ATi is fully parallel... yet people ask the same insinuating questions over and over and over.

As for the driver issue here with AF, I am bordering on a little irked currently and am waiting for ATi's official response before I decide one way or the other. On another note, let's not forget that Nvidia is hacking AF in OpenGL and quality is being affected; they are also hacking AF in several other application-specific cases like UT. And it's apparent that their hacks reduce quality noticeably.
 
IgnorancePersonified said:
The silence is deafening...

Honestly, you wouldn't expect to hear anything specific from either company on a Sunday night, would you? :)
 
Hellbinder said:
On another note, let's not forget that Nvidia is hacking AF in OpenGL and quality is being affected; they are also hacking AF in several other application-specific cases like UT. And it's apparent that their hacks reduce quality noticeably.
So ATi gets caught doing something screwy, and you bring up nVidia's past to make everything seem OK?...

**edited**
 
Dave, would you mind closing this bullshit thread, banning the fanboys/trolls, and then starting a nice thread where we can have a sane conversation about what is going on here?
 
The Baron said:
Dave, would you mind closing this bullshit thread, banning the fanboys/trolls, and then starting a nice thread where we can have a sane conversation about what is going on here?

It's a good idea.

It's sad to see that it's kind of hard to have a sane discussion here nowadays because of fanboi talk.
 
Deco-Rj said:
They suck for putting out R42x hardware that they know will be replaced by SM3.0 hardware as soon as ATi/TSMC can deliver.

They SUCK for throwing pennies at Valve.

[]'s

Sorry, but this is stupid.
What about all the TWIMTBP titles?
If anyone here is "cheating", it's Nvidia.
Last year I bought an FX5900, and I see all my friends with R3xx cards running TWIMTBP titles with better speed/IQ.
I really like GeForce cards/drivers, but I don't want to get "cheated" again by nVidia.
 
I see we have a new test for observing the filtering being used, which I find kind of brilliant because you don't need colored mipmaps to see the differences. Now what happens when you change the level of detail (LOD)? It seems like that will mess up the results, since the mipmap distances will change, so the test will be confined to the same LOD settings, I take it. Still, 8xAF using bilinear should be comparable to 8xAF using trilinear samples, where the differences will be evident in the gradient filtering between the mipmaps (which should be there). In brief, a good tool to use to show what each design/card/driver/etc. is doing with filtering.
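For anyone trying to picture what such a test measures, here is a tiny Python sketch (my own simplification, not the actual test tool) of the difference the gradient reveals. Each mip level is reduced to a single intensity; trilinear blends the two levels straddling the LOD, while bilinear-only snaps to the nearest one, which is exactly the missing gradient the test looks for:

```python
def filtered_value(mips, lod, trilinear=True):
    # mips: one representative intensity per mip level.
    lo = min(int(lod), len(mips) - 2)   # lower of the two straddling levels
    frac = lod - lo                     # how far we are toward the next level
    if trilinear:
        return mips[lo] * (1 - frac) + mips[lo + 1] * frac  # smooth gradient
    return mips[lo] if frac < 0.5 else mips[lo + 1]         # hard snap

# Bright base level, black lower mips: trilinear fades smoothly to black,
# bilinear-only shows an abrupt step at the mip boundary.
mips = [1.0, 0.0, 0.0, 0.0]
for lod in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(lod, filtered_value(mips, lod), filtered_value(mips, lod, trilinear=False))
```

Anything in between full trilinear and pure bilinear (a narrowed blend band, for instance) would show up as a compressed version of that gradient.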

As for ATI cheating/deceiving?? I think ATI was very clear in the past that in-game settings control the filtering, that is, if the drivers are set to Application Preference. If trilinear is called for in a game, ATI will give trilinear filtering. Now has that changed? Or does ATI's trilinear filtering have a new algorithm which can also be adaptive? Colored mips, with one distinct color per mipmap level, do indeed require more blending work compared to very similar smaller versions of the same texture, don't they? Is it application detection, or is it smarter filtering? Is ATI looking at the color information between the two mipmaps and adjusting the trilinear algorithm adaptively? I think ATI needs to answer this. What is going on here?
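Purely as a guess at what such an adaptive algorithm could look like (the function name and band parameters below are invented for illustration, not ATI's actual method): measure how alike two adjacent mip levels are, and narrow the trilinear blend band when they are near-copies, since the levels a full blend would mix are almost identical anyway:

```python
def adaptive_blend_weight(frac, similarity, min_band=0.125):
    # frac: position between two mip levels, in [0, 1].
    # similarity: 1.0 = adjacent levels nearly identical (standard chain),
    #             0.0 = very different (e.g. colored mips -> full trilinear).
    band = min_band + (1.0 - similarity) * (1.0 - min_band)
    edge = (1.0 - band) / 2.0                       # center the blend band
    return min(max((frac - edge) / band, 0.0), 1.0)

print(adaptive_blend_weight(0.4, similarity=0.0))  # 0.4: textbook trilinear
print(adaptive_blend_weight(0.4, similarity=1.0))  # 0.0: mostly bilinear here
```

On colored mipmaps the similarity would be near zero, so the result would look like textbook trilinear; on ordinary game textures most of the range would be served by cheap bilinear samples. That would match the screenshots in this thread, but again, this is speculation until ATI answers.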

Maybe I should be in PR or something . . . ;)
 
DaveBaumann said:
Again, we're looking at AA cases - different architectures will do different things in different applications. Second, NV40 has a pixel shader performance hit when utilising AF, R420 does not.
How could AF affect the shader performance? I'm really curious. :?:
 
Hellbinder said:
It is also a little irritating how it can be explained as plain as day that Nvidia is losing performance because of the way they share the load on ALU1 in certain situations while ATi is fully parallel... yet people ask the same insinuating questions over and over and over.
Would this ALU load sharing be necessary for SM 3.0 support? I really don't know... so that's why I'm asking. If it is, I think you could hardly criticize the NV40 architecture over something like this if ATi would have to move to something similar to support it in the future.
 
How could AF affect the shader performance? I'm really curious.

What I get is that during the filtering stage, when bilinear samples are being taken from a texture, the texture unit in the pixel pipeline, while in use, prevents ALU or PS operations on the NV40. As you increase the AF level, more bilinear samples are being processed, and therefore there are more clock cycles in which you have no PS ops. ATI allows some parallel operation between pixel shading and texture sampling while Nvidia doesn't. The higher the AF, the more filtering is required, the more texture ops are required, blocking PS ops for that pixel until the texture filtering is done.

Now I don't see any reason for the PS op to be held up by the filtering operation on the source texture. The PS op only modifies the color value of the pixel after the texture filtering for that pixel. Except when you are doing some branching in PS 3.0; in that case, isn't it necessary to wait if you have a conditional that depends on the value of the pixel you are working with? In other words, the PS is waiting to see the value of the pixel before it decides which branch to take. So for PS 3.0-type operations, conditional branching more specifically, this seems to be the limiting case. Unless I am way off on this.

edit: I think I need to clarify what I said above since it is in error, and I have revised it here (probably more confusing than ever now), so here is take two:

What I get is that during the filtering stage, when bilinear samples are being taken from a texture, the texture unit in the pixel pipeline, while in use, prevents PS operations on ALU1 on the NV40. As you increase the AF level, more bilinear samples are being processed, and therefore there are more clock cycles in which you have limited PS ops. ATI allows more parallel operation between both ALU/pixel shading units and texture sampling, while Nvidia's method ties up ALU1. The higher the AF, the more filtering is required, the more texture ops are required, blocking ALU1 PS ops for that pixel until the texture filtering is done.
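If that reading is right, a toy per-pixel cost model (the numbers and the one-op-per-cycle assumption are invented; real pipelines hide latency in ways this ignores) shows why the gap grows with the AF level:

```python
def shader_cycles(alu_ops, bilinear_samples, shared_alu1):
    # shared_alu1=True : texture addressing occupies ALU1 (the NV40-style
    #                    sharing described above), so those cycles cannot
    #                    also issue ALU1 math.
    # shared_alu1=False: texturing overlaps the ALUs (the R420-style
    #                    fully parallel case).
    if shared_alu1:
        return alu_ops + bilinear_samples
    return max(alu_ops, bilinear_samples)

# A 10-op shader at rising AF levels (roughly 1..16 bilinear samples/pixel):
for samples in (1, 2, 8, 16):
    print(samples, shader_cycles(10, samples, True), shader_cycles(10, samples, False))
```

The shared design only pulls even again once the shader is long enough that ALU work dwarfs the sampling, which is basically the point DaveB makes in the review quote below.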


Good grief :?
 
This is what DaveB said in his 6800U review regarding the ALU unit:

The "Superscalar" nature of the pixel shaders should provide an extra level of flexibility that NVIDIA's driver compiler should be able to take hold of and increase the shader performance. Although NVIDIA have opted to stick with an ALU that's also used for texture address processing, this is probably no bad thing either, because as shader lengths increase, the ratio of texture accesses to ALU use will drop, reducing the need for a dedicated texture address processing unit.
 
noko said:
What I get is that during the filtering stage, when bilinear samples are being taken from a texture, the texture unit in the pixel pipeline, while in use, prevents ALU or PS operations on the NV40. As you increase the AF level, more bilinear samples are being processed, and therefore there are more clock cycles in which you have no PS ops. ATI allows some parallel operation between pixel shading and texture sampling while Nvidia doesn't. The higher the AF, the more filtering is required, the more texture ops are required, blocking PS ops for that pixel until the texture filtering is done.
Would it be necessary to issue an instruction for every texture sample? I thought the sampling is taken care of in the TMU; correct me if I'm wrong.
 
Wow!

I actually read all seven pages and wanted to post a quick thanks to Quasar, Dave, and all the other reasonable people exploring this.

Gonna be an interesting morning tomorrow, glad I caught this thread tonight! :D
 
Umm, why are you waiting for ATI's reply? All they will give you is PR crap about how it's just a bug in the driver and will be fixed in a future Catalyst release. Just like Nvidia does. This industry is becoming such a joke.
 