ATI Filter tricks at 3Dcenter

zeckensack said:
5 bits < 8 bits. This isn't about shading capabilities or render targets, it's about the texture filter circuitry.

I know, but you stated ATI hardware is simpler... I disagree. The R300 is a more complete chip, offers more functionality, doesn't require partial precision for speed, has MRT and FP Render Targets.

The FX requires a lot more tinkering, and the 5600s and 5800s have lots of issues... besides spending most, if not all, of their rendering in non-compliant DX9 FP16 mode. Only the newer cores use full precision, and only sometimes.
 
Doomtrooper said:
zeckensack said:
5 bits < 8 bits. This isn't about shading capabilities or render targets, it's about the texture filter circuitry.

I know, but you stated ATI hardware is simpler... I disagree. The R300 is a more complete chip, offers more functionality, doesn't require partial precision for speed, has MRT and FP Render Targets.

The FX requires a lot more tinkering, and the 5600s and 5800s have lots of issues... besides spending most, if not all, of their rendering in non-compliant DX9 FP16 mode. Only the newer cores use full precision, and only sometimes.
Believe me, when I said "simpler" I was referring to the texture filter circuitry only. I very much like my 9500Pro. I think I should buy an FX5700 for testing purposes, but they haven't yet come down to 100€ - which is exactly what they're worth, IMO. That's where I stand.

We've been discussing this internally (I did most of the translation, which put me in a bit of a conflict). While I think that the whole thing could have been structured better - and that the issues mentioned really don't affect me - it's only fair to point out the facts. That's how I see it now, after much initial disagreement. You really shouldn't take it as anti-ATI or anything.
 
This is somewhat disappointing: whatever area of R300 we were poking at, we've always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon, that we simply haven't found yet.
This was my favorite part of the whole article... I think someone feels a need to make up for the past few anti-NV articles, lest they appear biased against NV. ATI has done a wonderful job with the R300, but some people just can't seem to acknowledge it without first pointing out any fault it has, in an attempt to downplay the R300's superiority over the competition's part.

As others have already mentioned, this is getting really, really picky and seems rather pointless to me, especially when it has been shown that the R300 produces the same output as RefRast. While the NVxx does look better, does it really matter in this case? The R300 is producing the desired output, and it just boggles my mind why this had to be made into a huge article if the output is the same as RefRast. I think we all know by now that the R300 (and all ATI hardware, for the most part) is designed to follow the DX specs exactly and produce the same output as RefRast, so does this really surprise anyone? It could only be considered dubious, or a cheat, if the output weren't the same.

Unfortunately some sites will grab hold of this article and plaster it all over their front pages, crying about the much better image quality of an NV card and how great those cards are compared to an ATI card. My personal feeling is that this article is mostly misleading without a RefRast picture or an idea of what is really going on overall. Anyways, I just got back from dealing with stupid people at work and it is time for me to head out for Thanksgiving with the rest of my family. Have a safe and happy Thanksgiving everyone!
 
zeckensack said:
The author is aware of refrast's 5 bits. He still would like higher precision filter circuitry.

Does he not also say "The cut corners in texture filtering we've been criticizing are hardly noticed in practice."? This being the case, why would he like higher quality filtering? Of course, if you ask for better filtering you are likely to end up with something else being worse, so if the differences are "hardly noticed in practice", is it worthwhile jeopardising something else in order to gain quality that you won't notice? ;)

The critical point is: "ATI hardware is very carefully designed." And that's probably the truest statement, since they have made decisions on the filtering front that are not the same as their competitor's, but "are hardly noticed in practice" for their primary target application of games.

The same can be said to be true of some of NVIDIA's filtering option choices - while the earlier preset Brilinear in UT2003 was visible, it does appear to be much less so now, and much more difficult to point out, which is good for the end user if it results in little visible quality loss and higher performance. (Although it should be noted that ATI does actually meet the specification - so this is not a shortcut, it's just not as good as the best their competition could offer - whereas Brilinear under the 52.16 drivers doesn't meet the DX specification.)
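For illustration, here's a minimal sketch in C of the general idea behind a "Brilinear" blend weight - an assumed model, not NVIDIA's actual hardware logic, and the 0.25 band half-width is an invented value:

/* Widen the band around each mip transition in which the blend weight
 * snaps to pure bilinear (0.0 or 1.0), so fewer pixels pay for a full
 * two-mip blend. t is the assumed half-width of that band. */
float brilinear_weight(float lod_frac)   /* fractional LOD, in [0,1) */
{
    const float t = 0.25f;
    if (lod_frac < t)        return 0.0f;   /* lower mip only */
    if (lod_frac > 1.0f - t) return 1.0f;   /* upper mip only */
    /* rescale the remaining range so the blend still spans 0..1 */
    return (lod_frac - t) / (1.0f - 2.0f * t);
}

With t = 0.25, half of the LOD range collapses to a single-mip lookup, which lines up with the roughly 50% Bilinear region Dave describes for the FX later in the thread.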
 
"This is somewhat disappointing: whatever area of R300 we were poking at, we've always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon, that we simply haven't found yet".
|
\/
NV25 is the reference here (refer to the screenshots). The same author isn't exactly pleased with NV3x either; I've already posted the link.
--------------------------------------------------------------------------------------
but the "reference" piece here is not perfect either...

"The GeForce card exhibits imperfections the size of a quad in this example "
+
"GeForce cards aren't outright perfect either, we can produce situations where they, too, show "dithering" patterns. "

yet it's still "textbook quality"? and in "whatever area of R300 we were poking at"? they should first define "textbook quality". who says that the nVidia solution is doing it right? it definitely looks better in this case, but even they point out it isn't perfect. show me some shots from a matrox or powervr (which they state in the article does perfect anisotropic), or refrast. that's if they are using d3d, but i don't know what api they are using for the test; it's never mentioned in the article. (or maybe a nice software opengl renderer, if that's the case)

or how about this... do ati cards show the same artifacts in both d3d and opengl? it wouldn't be the first time a piece of hardware reacted differently to the same function depending on the api. if it's truly a limitation of the hardware, it should show similar artifacts regardless of api.

and again, since bilinear filtering is the basic foundation of the other filtering methods, there should have been tests done in the other methods (trilinear, anisotropic) to determine whether the artifacts are indeed magnified, or visible at all.
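(for illustration, a sketch in c of the usual textbook composition - the texture type and bilinear_sample() helper are hypothetical stand-ins - showing why any precision limit in the bilinear stage carries into the other modes:)

typedef struct texture texture;   /* opaque; details don't matter here */
typedef struct { float r, g, b, a; } color;

/* hypothetical helper, assumed to exist elsewhere */
extern color bilinear_sample(const texture *tex, int level, float u, float v);

static color lerp(color a, color b, float t)   /* componentwise blend */
{
    color c = { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
    return c;
}

/* trilinear is just a blend of two bilinear samples, so any precision
 * limit in the bilinear weights carries straight into trilinear; aniso
 * in turn averages several such probes along the axis of anisotropy */
color trilinear_sample(const texture *tex, float u, float v, float lod)
{
    int   level = (int)lod;             /* integer part picks the mip pair */
    float frac  = lod - (float)level;   /* fractional part is the blend weight */
    color lo = bilinear_sample(tex, level,     u, v);
    color hi = bilinear_sample(tex, level + 1, u, v);
    return lerp(lo, hi, frac);
}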

it's not that i hate the article; in fact, i rather like it. the issue is a valid one, and i'm also glad some people in the press don't have blinders on regarding image quality from ati. i just find it lacking in detail about how something "should" look. i want a clear point of reference; this article seemed more like an nVidia vs Ati battle for filtering quality.
c:
 
"ATI has done a wonderful job with the R300 but some people just can't seem to acknowledge it without first pointing out any fault it has in an attempt to downplay the R300's superiority over the competitions part."
-------------------------------------------------------------------------------------
why not point out the faults of the r300? it's a fantastic piece of engineering and design, but it is in some ways flawed. it's missing features that were in the r200! i'm a happy r3xx and nv3x owner, but neither card is perfect. in fact, some days i really miss my voodoo3.
c:
 
991060 said:
Dave, could you explain what the color means in the pics? :D

These images display the level of Trilinear filtering LOD accuracy. If you look at the difference between the R300/RefRast and GeForce 4 images, you can see that the GeForce 4 has much smoother gradients from one colour to the next. In fact the GeForce 4 has 256 gradients, which equates to 8 bits of precision, whereas the R300 only has 32 gradients, which equates to 5 bits of precision (which is evidently what the DX reference rasteriser uses, so this is probably the minimum that DX specifies).

What you might want to pay attention to is the areas of a "pure" colour - these are the areas between mip-map transitions where you are only filtering from one map. The point here is that if you are only filtering from one map for these areas, you only need a Bilinear sample, which means lower bandwidth usage and slightly higher performance (the FX uses both its texture units for Trilinear, while the R300 uses two cycles - this cost is halved with Bilinear).

With the GeForce 4 there is only one step of the 256 which will be sampled from just one mip layer, so less than 0.5% is Bilinear sampled; with the 5-bit filter on the R300 there are 2 of only 32 steps, which means about 3% of the area is Bilinear filtered. Although the FX maintains the 8-bit filter, they have extended the Bilinear region to about 50%, giving roughly the same performance as a 2-bit filter!
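A quick back-of-the-envelope sketch in C of the precision difference described above; the modelling is an assumption for illustration (the exact number of "pure" steps depends on how the hardware rounds at the band edges), not the actual filter circuitry:

#include <stdio.h>

int main(void)
{
    int bits;
    for (bits = 8; bits >= 5; bits -= 3) {  /* 8-bit (GeForce) vs 5-bit (R300) */
        int steps = 1 << bits;              /* 256 or 32 LOD-fraction gradients */
        /* a fraction that quantises to exactly 0 collapses to a single
         * mip map, i.e. a plain Bilinear lookup */
        printf("%d-bit filter: %d gradients, ~%.1f%% of the LOD range "
               "is Bilinear-only\n", bits, steps, 100.0 / steps);
    }
    return 0;
}

This reproduces the "less than 0.5%" and "about 3%" figures above.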
 
Just thought it worthwhile to add:
This sort of thing is not noticeable in real(tm) applications, as the two adjacent primary colors from Dave's screenshots would simply be two mip-map levels of the same texture (i.e. looking rather similar), unless one does something fishy to the smaller mip-map levels (e.g. emulating distance fog).
 
The article is written from a perfectionist POV. I don't agree with the opinion stated, but I do understand the way of thinking.

I think ATI did a great job with the R300, and they did a great job at cleverly cutting corners most of the time. My only gripe is with the AF implementation, though even if all those clever ideas interacting with each other produced some visible artifacts, we probably couldn't tell.

What I think is most interesting about the article is that it shows some of the reasons why ATI managed to get more (performance) with less (transistors), which ultimately leads to the clearly different design philosophies used by ATI and NVidia. NVidia clearly aimed at a chip family fit for "professional use", so they probably never even thought of cutting corners, instead putting the Real Thing(tm) in, even if it's slower.
 
maven said:
Just thought it worthwhile to add:
This sort of thing is not noticeable in real(tm) applications, as the two adjacent primary colors from Dave's screenshots would simply be two mip-map levels of the same texture (i.e. looking rather similar), unless one does something fishy to the smaller mip-map levels (e.g. emulating distance fog).
If you have a high-contrast edge where the smaller mip-map texels are a blend of the colors on both sides of the edge, you could possibly notice it. But it's a rather hypothetical situation.
 
GraphixViolence said:
The main problem with comparing to Nvidia here is that the FX series is a great example of how NOT to allocate transistors. The NV35 has ~25% more transistors than the R350, yet is much slower clock for clock. Maybe that's because they wasted them on things like 8-bit LOD and bilinear filtering precision that aren't visible without special tools...
I completely disagree.

These artifacts in texture LOD selection manifest themselves in a very specific way that can be very, very visible: texture aliasing.

Aliasing is the bane of 3D graphics rendering, but it is notoriously hard to pinpoint in a screenshot (particularly texture aliasing). You absolutely need to use contrived scenarios to show texture aliasing in a single screenshot.

This is because texture aliasing is most noticeable in motion.

I, for one, have noticed vastly more texture aliasing on the Radeon 9700 than I ever saw on the GeForce4.

So, these differences are easy to see in motion. They're just not easy to show somebody in a specific screenshot, and certainly not in a randomly-chosen in-game shot (since texture aliasing is one of those things that is only very noticeable every once in a while).

Side note: I had always assumed that the increased texture aliasing was due to more aggressive LOD selection, but the inaccurate LOD selection could easily account for the increased texture aliasing.
 
Am I the only person with an R3xx card who plays every game with trilinear on every texture stage?

I also like how they call brilinear filtering nVidia's toy - ATI uses brilinear filtering under quality AF.
Of course I play with TF on every texture stage, so what do I have to worry about?

Chalnoth, I have yet to see any texture aliasing in any game I've played over the last year with my R300. That may be because I use trilinear filtering on every texture stage. :D
 
Xmas said:
The article is written from a perfectionist POV. I don't agree with the opinion stated, but I do understand the way of thinking.

I think ATI did a great job with the R300, and they did a great job at cleverly cutting corners most of the time. My only gripe is with the AF implementation, though even if all those clever ideas interacting with each other produced some visible artifacts, we probably couldn't tell.

What I think is most interesting about the article is that it shows some of the reasons why ATI managed to get more (performance) with less (transistors), which ultimately leads to the clearly different design philosophies used by ATI and NVidia. NVidia clearly aimed at a chip family fit for "professional use", so they probably never even thought of cutting corners, instead putting the Real Thing(tm) in, even if it's slower.

I agree with you, Xmas. I think it would be fair to say that ATI made all the right decisions in designing the R3xx. There are compromises, of course, but what processor doesn't have them? To me it seems strange to fault the VPU for specific compromises without looking at the context of why these decisions were made in the first place. That's where this article fails: it never mentions the benefits derived from these compromises. As Dave mentioned, ATI felt (correctly) that the transistors could be better used elsewhere (or omitted completely).
 
K.I.L.E.R said:
Am I the only person with an R3xx card who plays every game with trilinear on every texture stage?
Do you use Rivatuner to enable this?

Chalnoth, I have yet to see any texture aliasing in any game I've played over the last year with my R300. That may be because I use trilinear filtering on every texture stage. :D
Some people just aren't as sensitive to texture aliasing as I am. I see it all too frequently. Though I will have to admit that with some games, it's the fault of the game developer (some developers have this strange idea that setting a negative LOD bias globally is a good thing).
 
Zecki,

I plead guilty 8)

The idea for the article must have originated from a long-winded debate where my points were more or less in the following department:

What I think is most interesting about the article is that it shows some of the reasons why ATI managed to get more (performance) with less (transistors), which ultimately leads to the clearly different design philosophies used by ATI and NVidia.

The critical point is: "ATI hardware is very carefully designed." And that's probably the truest statement, since they have made decisions on the filtering front that are not the same as their competitor's, but "are hardly noticed in practice" for their primary target application of games.

Quotes borrowed from this very thread on purpose.
 
I, for one, have noticed vastly more texture aliasing on the Radeon 9700 than I ever saw on the GeForce4.

While "vastly" is a tad exaggerated IMO, aliasing is definitely not on the same level. There I'd like to add that both the NV3x models I've tried with different driver revisions had more texture aliasing than the NV25 too.

In the majority of cases, using high resolutions - or in extreme cases, reducing the mipmap LOD to "quality" - cures most of it on the R3xx.

Some people just aren't as sensitive to texture aliasing as I am. I see it all too frequently. Though I will have to admit that with some games, it's the fault of the game developer (some developers have this strange idea that setting a negative LOD bias globally is a good thing).

A good example would be NFS PU. Someone had the funky idea that a -3.0 LOD bias on some road textures looks fine and left it there. Even the NV2x had problems there, and only a positive 0.5 offset combined with 8x aniso would get rid of the moiré and aliasing patterns. An alternative solution would have been a 4xS/2xAF combination, but then you'd obviously be limited to medium resolutions.

At the same resolution and default settings the R3xx looks worse, yet if you increase the resolution, enable AF and offset the mipmap LOD by +0.5, it goes away.
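For reference, a minimal sketch of how an application itself could apply such an offset, assuming OpenGL 1.4 headers (older headers expose the same control with an _EXT suffix via EXT_texture_lod_bias):

#include <GL/gl.h>

/* A positive LOD bias selects smaller (blurrier) mip levels earlier,
 * trading a little sharpness for less texture aliasing; a negative bias
 * does the opposite, which is why a global -3.0 causes shimmering. */
void apply_mipmap_lod_bias(void)
{
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, 0.5f);
}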

If we really want to get rid of texture aliasing entirely, then an SSAA/AF combination would do the best job IMO; yet we all know how feasible that is, given all the fillrate considerations...
 
K.I.L.E.R said:
Man, you must have microscope sensitivity. :LOL:
Heh, I don't think so. Perhaps I just pay attention to the filtering while I'm actually playing the game. (Maybe that's because of some of the games I play? I play a lot of RPGs... which tend to have more situations where you're not doing much except looking at the scenery.)

Anyway, the way I see it, my arguments against ATI's increased texture aliasing are every bit as valid as arguments against nVidia's FSAA being lower-quality than ATI's.

In other words, if you're complaining about nVidia's edge AA, you should definitely be complaining about ATI's texture filtering, since that texture filtering is applied to much more of the screen.
 
In other words, if you're complaining about nVidia's edge AA, you should definitely be complaining about ATI's texture filtering, since that texture filtering is applied to much more of the screen.

If you compare the NV25 to the R3xx, then yes. In any other case - since the thread is about 3DCenter - the site in question has more than one extensive article about NV3x texture filtering degradation.

***edit:

same author, just about a month ago:

http://www.3dcenter.de/artikel/2003/10-26_a_english.php
 
Ailuros said:
If you compare the NV25 to the R3xx, then yes. In any other case - since the thread is about 3DCenter - the site in question has more than one extensive article about NV3x texture filtering degradation.
Yes, the NV3x anisotropic filtering is also a disappointment. But then again, ATI also drops to bilinear filtering for all but the first texture stage, so I'm not sure you can really say that what nVidia is doing with the NV3x is any worse than what ATI is doing to optimize anisotropic filtering at the software level.
 