ATI Filter tricks at 3Dcenter

What part of this article wasn't already known 12 months ago? Or did I miss something? Seems like they're a little late to the party.
 
From an "ethics" point of view (whatever that means to ATI and Nvidia), the competition can easily reduce image quality through drivers to squeeze a bit of extra performance out of the chips and keep up.
This line is terrible - talk about levels of difference!
ATI follows the minimum spec in something, so we slam them, and then say nVidia's driver cheats that offer much further reduced IQ are OK.
WTF? Praise nVidia for following higher precision than required, but it's not a terrible thing to follow the spec - if the spec isn't good enough, BITCH ABOUT THE SPEC!
 
vrecan said:
What part of this article wasn't already known 12 months ago? Or did I miss something? Seems like they're a little late to the party.
The rather newish issue is the reduced precision for the interpolation weights, i.e. plain bilinear filtering.
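To put the weight-precision point in concrete terms, here's a minimal C sketch (my own, not anything taken from the article or a driver) of a bilinear blend where the fractional weights are quantized to a given number of bits. The 5-bit vs. 8-bit comparison is purely an illustrative assumption:

Code:
#include <stdio.h>

/* Illustrative sketch: bilinear blending with the interpolation weights
 * quantized to a fixed number of bits. The bit counts are hypothetical,
 * not vendor-confirmed values. */
static float quantize_weight(float w, int bits)
{
    int steps = (1 << bits) - 1;            /* e.g. 5 bits -> 31 steps */
    return (float)(int)(w * steps + 0.5f) / steps;
}

/* Bilinear blend of four texels with weights quantized to 'bits' bits. */
static float bilerp(float t00, float t10, float t01, float t11,
                    float fx, float fy, int bits)
{
    fx = quantize_weight(fx, bits);
    fy = quantize_weight(fy, bits);
    float top    = t00 + (t10 - t00) * fx;
    float bottom = t01 + (t11 - t01) * fx;
    return top + (bottom - top) * fy;
}

int main(void)
{
    /* Sample between two texel centers and compare 8-bit weights
     * against 5-bit weights; fewer bits means coarser blend steps. */
    for (int i = 0; i <= 8; ++i) {
        float fx = i / 8.0f;
        printf("fx=%.3f  8-bit=%.4f  5-bit=%.4f\n", fx,
               bilerp(0.0f, 1.0f, 0.0f, 1.0f, fx, 0.0f, 8),
               bilerp(0.0f, 1.0f, 0.0f, 1.0f, fx, 0.0f, 5));
    }
    return 0;
}

The coarser the weights, the more the blend between neighbouring texels snaps to visible steps - which is exactly the kind of thing you'd only catch under heavy magnification.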

The article is a bit drawn out and hard to follow in places, but I think that's the (intended) point.
 
Ridiculous nitpicking IMO. Seems like they had to publish an anti-ATI article after the series of NV-critical articles and dug until they found something (although there are real issues with ATI hardware).

Even a 10-bit filter shows artifacts if you zoom in enough. The point is that by the time you zoom in far enough to actually see these artifacts, the texture is already blocky as hell anyway. There is no practical game situation where this could result in reduced image quality (as they admit themselves in their article), which shows that ATI did some wise things in their hardware.

Kudos at least for finding a way to show the artifacts with special textures and utilities.
 
*puts on patented Defender Of 3DC hat*
indio said:
Talk about nit-picking!
That's intentional! :D
Althornin said:
This line is terrible - talk about levels of difference!
ATI follows the minimum spec in something, so we slam them, and then say nVidia's driver cheats that offer much further reduced IQ are OK.
WTF? Praise nVidia for following higher precision than required, but it's not a terrible thing to follow the spec - if the spec isn't good enough, BITCH ABOUT THE SPEC!
NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure :p

At no point does the article state that NVIDIA's driver meddlings are "ok". NVIDIA already had their fair share of "bashing". They're both cutting corners. You be the judge.

The gist - as I see it - is that both companies currently somewhat disappoint (!=suck).

Mephisto said:
Ridiculous nitpicking.

Even a 10-bit filter shows artifacts if you zoom in enough. The point is that by the time you zoom in far enough to actually see these artifacts, the texture is already blocky as hell anyway. There is no practical game situation where this could result in reduced image quality (as they admit themselves in their article), which shows that ATI did some wise things in their hardware.
Exactly. So you do agree with the article?
aths@3dc said:
ATI hardware is very carefully designed. The cut corners in texture filtering we've been criticizing are hardly noticed in practice.

This is nitpicking, and it never pretends to be something else.
 
The GeForce card exhibits imperfections the size of a quad in this example (a quad is a 2x2 pixel block, and LOD calculations are performed per quad, not per pixel). The Radeon card, on the other hand, produces a chaotic pattern with wildly varying LOD. Apparently the LOD calculation was implemented with as few transistors as possible, sacrificing accuracy.

AFAIK R300's calculations are per quad as well; however, I don't know why you got the pattern that you did.
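For reference, here's a rough sketch of the textbook per-quad LOD calculation the article alludes to: the texcoord deltas across a 2x2 pixel block stand in for the derivatives, and all four pixels share the resulting LOD. This is the idealized floating-point version, not R300's or NV25's actual fixed-point circuit:

Code:
#include <math.h>
#include <stdio.h>

/* Textbook-style per-quad LOD: approximate the derivatives with
 * texcoord deltas across the 2x2 quad and compute one LOD value
 * shared by all four pixels. Real hardware uses cheaper fixed-point
 * approximations of this. */
static float quad_lod(const float u[2][2], const float v[2][2],
                      float tex_width, float tex_height)
{
    float dudx = (u[0][1] - u[0][0]) * tex_width;
    float dvdx = (v[0][1] - v[0][0]) * tex_height;
    float dudy = (u[1][0] - u[0][0]) * tex_width;
    float dvdy = (v[1][0] - v[0][0]) * tex_height;

    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);
    float rho = len_x > len_y ? len_x : len_y;

    return log2f(rho);   /* one LOD for the whole 2x2 quad */
}

int main(void)
{
    /* A quad whose texcoords minify the texture by roughly 4x in x. */
    float u[2][2] = { {0.0000f, 0.0156f}, {0.0000f, 0.0156f} };
    float v[2][2] = { {0.0000f, 0.0000f}, {0.0039f, 0.0039f} };
    printf("quad LOD = %.3f\n", quad_lod(u, v, 256.0f, 256.0f));
    return 0;
}

If the hardware evaluates something like this once per quad, any imprecision shows up as quad-sized blocks; a sloppier approximation would produce a different-looking pattern.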
 
WRT Trilinear LOD precision, here's something that may be of interest:

GeForce 4: GF4.png
GeForce FX: GFFX-52-16.png
Radeon: Radeon9700Pro.png
Refrast: RefRast.png
 
"NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure "
------------------------------------------------------------------------------------
Neither party has every capability. Try rendering some nice floating-point render targets with a GF FX - it's not disabled in the drivers, it's a limitation of the hardware. Not saying ATI is any better; the R300 lacks W-buffer support, fer chrissakes! Both companies are lacking features, and whether they are limited by hardware or software, we (end users, as both consumers, programmers, etc.) don't have access to them.


What got me about the article is this bit...

"This is somewhat disappointing: whatever area of R300 we were poking at, we've always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon, that we simply haven't found yet."


Competition's textbook quality? Huh... funny how that doesn't exactly mesh with...

"But enough rambling, "perfect" AF is virtually not available on any current hardware."


My only other gripe is that they assume that since bilinear is filtered incorrectly (or sloppily), those irregularities will automatically be passed on to trilinear and anisotropic filtering. While this may be the case, and it logically and likely does occur, they offer no proof. What if ATI's bilinear optimizations are only applied when bilinear filtering is requested by the software, and different optimizations are performed when trilinear filtering is requested?

c:
 
DaveBaumann said:
The GeForce card exhibits imperfections the size of a quad in this example (a quad is a 2x2 pixel block, and LOD calculations are performed per quad, not per pixel). The Radeon card, on the other hand, produces a chaotic pattern with wildly varying LOD. Apparently the LOD calculation was implemented with as few transistors as possible, sacrificing accuracy.

AFAIK R300's calculations are per quad as well; however, I don't know why you got the pattern that you did.

I've noticed the same thing on my 8500 and 9800.
 
DaveBaumann said:
WRT Trilinear LOD precision, here's something that may be of interest:

Refrast: RefRast.png

I've been told (I'm too lazy to look at the code myself) that the refrast limits its trilinear blend precision to only 5 bits. I find this surprising <shrug>
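For a sense of what a 5-bit blend fraction implies, here's a quick sketch of the arithmetic: the LOD fraction is snapped to one of 32 steps before the two mip samples are blended. This is just the implied math, not refrast's actual code:

Code:
#include <stdio.h>

/* Trilinear blend with the mip blend fraction held in a limited number
 * of bits. With 5 bits there are only 32 distinct positions between two
 * mip levels, so the transition happens in steps rather than a
 * continuous fade. Not refrast's actual code, just the arithmetic. */
static float trilinear(float sample_mip_n, float sample_mip_n1,
                       float lod_fraction, int blend_bits)
{
    int steps = 1 << blend_bits;                   /* 32 for 5 bits */
    float f = (float)(int)(lod_fraction * steps) / steps;
    return sample_mip_n * (1.0f - f) + sample_mip_n1 * f;
}

int main(void)
{
    /* Sweep the LOD fraction and compare 5-bit vs 8-bit granularity. */
    for (int i = 0; i <= 10; ++i) {
        float frac = i / 10.0f;
        printf("frac=%.2f  5-bit=%.4f  8-bit=%.4f\n", frac,
               trilinear(0.0f, 1.0f, frac, 5),
               trilinear(0.0f, 1.0f, frac, 8));
    }
    return 0;
}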
 
Doomtrooper said:
NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure

Explain 'simpler hardware'
5 bits < 8 bits. This isn't about shading capabilities or render targets, it's about the texture filter circuitry.

Dave,
The author is aware of refrast's 5 bits. He still would like higher precision filter circuitry.

Ailuros said:
*whistles and walks away with a nasty smile...*
Look what you've done! This is all your fault! :p
see colon said:
What got me about the article is this bit...

"This is somewhat disappointing: whatever area of R300 we were poking at, we've always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon, that we simply haven't found yet".
NV25 is the reference here (refer to the screen shots). The same author isn't exactly pleased with NV3x either, I've already posted the link.
 
zeckensack said:
*puts on patented Defender Of 3DC hat*
NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure :p

At no point does the article state that NVIDIA's driver meddlings are "ok". NVIDIA already had their fair share of "bashing". They're both cutting corners. You be the judge.
I still disagree.
nVidia's "driver hacks" are more noticeable, gameplay-wise. The line I quoted puts the two on an even playing field - it says driver hacks are equal to lower-precision (but still within spec) hardware. What horseshit.
 
The main problem with comparing to Nvidia here is that the FX series is a great example of how NOT to allocate transistors. The NV35 has ~25% more transistors than the R350, yet is much slower clock for clock. Maybe that's because they wasted them on things like 8-bit LOD and bilinear filtering precision that aren't visible without special tools...
 
I guess I don't understand the "issue" here. To fit as many transistors as they could onto a .15u process they had to make some tradeoffs. To me, it seems like they made decent ones that most people will never notice. Why is that a "bad" thing?
 