This line is terrible - talk about levels of difference!

From an "ethics" point of view (whatever that means to ATI and Nvidia), the competition can easily reduce image quality through drivers to squeeze a bit of extra performance out of the chips and keep up.
vrecan said:
What part of this article wasn't already known 12 months ago? Or did I miss something? Seems like they are a little late to the party.

The rather newish issue is the reduced precision for interpolation weights, i.e. plain bilinear filtering.
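For anyone wondering what "interpolation weights" means in practice: the fractional part of the sample position becomes the blend weight between neighbouring texels, and hardware stores that weight with a limited number of bits. Below is a little toy program that shows the effect; the bit counts are just parameters to play with, not a claim about what any particular chip uses, and all the names are made up for the example.

```c
#include <stdio.h>

/* Toy model of bilinear filtering with the interpolation weights
 * quantized to a fixed number of fractional bits. Illustrative only;
 * the bit widths are parameters, not the widths of any real GPU. */
static int bilerp(int t00, int t10, int t01, int t11,
                  double fu, double fv, int weight_bits)
{
    double step = 1.0 / (1 << weight_bits);
    double qu = (int)(fu / step) * step;   /* quantized horizontal weight */
    double qv = (int)(fv / step) * step;   /* quantized vertical weight   */

    double top    = t00 + qu * (t10 - t00);
    double bottom = t01 + qu * (t11 - t01);
    return (int)(top + qv * (bottom - top) + 0.5);
}

int main(void)
{
    /* Slide across two maximally different 8-bit texels. With few weight
     * bits the ramp advances in coarse steps, which shows up as banding
     * when the texture is magnified; with more bits it stays smooth. */
    for (int i = 0; i <= 10; i++) {
        double fu = i / 10.0;
        printf("fu=%.1f  4-bit:%3d  8-bit:%3d\n", fu,
               bilerp(0, 255, 0, 255, fu, 0.0, 4),
               bilerp(0, 255, 0, 255, fu, 0.0, 8));
    }
    return 0;
}
```

The worst-case error from truncating the weight to n bits is on the order of (difference between the two texels) / 2^n, which is also why you normally only notice it under heavy magnification.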
indio said:
Talk about nit-picking!

That's intentional!
NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure

Althornin said:
This line is terrible - talk about levels of difference!
ATI follows the minimum spec in something, so we slam them, and then say nVidia's driver cheats that offer much further reduced IQ are ok.
wtf? Praise nVidia for following higher precision than required, but it's not a terrible thing to follow the spec - if the spec isn't good enough, BITCH ABOUT THE SPEC!
Mephisto said:
Ridiculous nitpicking. Even a 10-bit filter shows artifacts if you zoom in enough. The point is, if you zoom in far enough to actually see these artifacts, the texture is already blocky as hell anyway. There is no practical game situation where this could result in reduced image quality (as they admit themselves in the article), which shows that ATI did some wise things in their hardware.
aths@3dc said:
ATI hardware is very carefully designed. The cut corners in texture filtering we've been criticizing are hardly noticed in practice.
zeckensack said:
Exactly. So you do agree with the article?
The GeForce card exhibits imperfections the size of a quad in this example (a quad is a 2x2 pixel block, and LOD calculations are performed per quad, not per pixel). On the other hand, the Radeon card produces a chaotic pattern, with wildly varying LOD. Apparently the LOD calculation was implemented with as few transistors as possible, sacrificing accuracy.

DaveBaumann said:
AFAIK R300's calculations are per quad as well; however, I don't know why you got the pattern that you did.
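For readers who aren't sure what "per quad" means here: the mip LOD is normally derived from the screen-space derivatives of the texture coordinates, and those derivatives are approximated by taking differences inside a 2x2 pixel block, so the natural granularity of the result is one value per quad. Here's a rough textbook-style sketch of that idea; it is nobody's actual hardware, and the names are invented for the example.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative sketch of mip-LOD selection for a 2x2 pixel quad.
 * Texture coordinates are given in texel units for the four pixels
 * (indexing: [y][x]). Real hardware differs in many details; this just
 * shows why one LOD value per quad produces quad-sized blocks at
 * mipmap transitions. */
typedef struct { double u, v; } TexCoord;

static double quad_lod(const TexCoord q[2][2])
{
    /* Screen-space derivatives approximated by differences inside the quad. */
    double dudx = q[0][1].u - q[0][0].u, dvdx = q[0][1].v - q[0][0].v;
    double dudy = q[1][0].u - q[0][0].u, dvdy = q[1][0].v - q[0][0].v;

    /* Textbook LOD: log2 of the larger of the two footprint axes. */
    double rho = fmax(sqrt(dudx * dudx + dvdx * dvdx),
                      sqrt(dudy * dudy + dvdy * dvdy));
    return log2(fmax(rho, 1e-12));  /* clamp to avoid log2(0) */
}

int main(void)
{
    /* A quad whose texture footprint is two texels per pixel step.
     * All four pixels share this one LOD value, so any inaccuracy or
     * coarse quantization in it shows up as 2x2 blocks on screen. */
    TexCoord q[2][2] = { { {0.0, 0.0}, {2.0, 0.0} },
                         { {0.0, 2.0}, {2.0, 2.0} } };
    printf("shared quad LOD = %.2f\n", quad_lod(q));
    return 0;
}
```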
DaveBaumann said:
WRT trilinear LOD precision, here's something that may be of interest:
Refrast
Doomtrooper said:
NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure
Explain 'simpler hardware'

5 bits < 8 bits. This isn't about shading capabilities or render targets, it's about the texture filter circuitry.
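To put some flesh on "5 bits < 8 bits": in trilinear filtering the fractional part of the LOD is the weight used to blend the two adjacent mip levels, and that fraction is stored with a fixed number of bits. The toy model below only illustrates the difference in step size; the bit counts come from the post above, everything else (names, values) is made up for the example.

```c
#include <stdio.h>

/* Toy model of trilinear filtering's final blend step: the fractional
 * part of the LOD is the weight mixing the two bilinear results from
 * adjacent mip levels. With a 5-bit fraction the weight moves in steps
 * of 1/32; with an 8-bit fraction, in steps of 1/256. Coarser steps mean
 * mip transitions fade in visible bands rather than smoothly. */
static double trilinear_mix(double mip_a, double mip_b,
                            double lod_frac, int frac_bits)
{
    int steps = 1 << frac_bits;
    double w = (double)(int)(lod_frac * steps) / steps;  /* quantized weight */
    return mip_a + w * (mip_b - mip_a);
}

int main(void)
{
    /* Two mip levels that happen to differ a lot at this texel (0 vs 255).
     * Sweep the LOD fraction and compare a 5-bit fraction with an 8-bit one. */
    for (int i = 0; i <= 10; i++) {
        double f = i / 10.0;
        printf("lod_frac=%.2f  5-bit:%6.1f  8-bit:%6.1f\n", f,
               trilinear_mix(0.0, 255.0, f, 5),
               trilinear_mix(0.0, 255.0, f, 8));
    }
    return 0;
}
```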
Look what you've done! This is all your fault!

Ailuros said:
*whistles and walks away with a nasty smile...*
see colon said:
What got me about the article is this bit...
"This is somewhat disappointing: whatever area of R300 we were poking at, we've always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon that we simply haven't found yet."

NV25 is the reference here (refer to the screenshots). The same author isn't exactly pleased with NV3x either; I've already posted the link.
zeckensack said:
*puts on patented Defender Of 3DC hat*

NVIDIA cuts hardware capabilities through driver restrictions. ATI designs simpler hardware incapable of higher precision. Go figure

At no point does the article state that NVIDIA's driver meddlings are "ok". NVIDIA already had their fair share of "bashing". They're both cutting corners. You be the judge.

I still disagree.