ATI Filter tricks at 3Dcenter

+/- 15% sounds like an extremely lax requirement; with good rounding, you can achieve that kind of result with something like 2 bits of subtexel precision.

And it should be possible to test whether trilinear behaves according to the standard even if the standard is lax; this does however require tests that are a bit more complicated than just comparing to the refrast. (I seem to remember that an old version of 3DMark - 99? 98? - had tests for whether trilinear was implemented correctly, even taking into account the large slack allowed by the then-current standards.)
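Such a test is easy to sketch. Here is a toy version (the numbers and function names are purely illustrative, not from any actual spec or benchmark): quantize the trilinear blend fraction to n bits with round-to-nearest and check the worst-case deviation against a +/- 15% band.

```python
def quantized_fraction(frac, bits):
    """Round the trilinear blend fraction (0..1) to `bits` bits,
    round-to-nearest -- the 'good rounding' mentioned above."""
    steps = 1 << bits
    return round(frac * steps) / steps

def max_deviation(bits, samples=10001):
    """Worst-case deviation of the quantized fraction over [0, 1)."""
    return max(
        abs(quantized_fraction(i / samples, bits) - i / samples)
        for i in range(samples)
    )

# Round-to-nearest with n bits gives a worst case of 2^-(n+1):
# 2 bits -> 12.5%, which already fits inside a +/- 15% tolerance,
# while 1 bit (25% worst case) would fail.
assert max_deviation(2) <= 0.15
assert max_deviation(1) > 0.15
```

A real conformance test would of course sample actual rendered output rather than an ideal fraction, but the arithmetic above is why 2 bits of precision can squeak under a +/- 15% band.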
 
bloodbob said:
5. The ability to switch between MSAA and SSAA "on the fly."
Is that per frame or per pixel?
Per triangle. It would be useful for managing branching pixel shaders or alpha-test surfaces with a minimized performance hit, and without having to deal with reordering surfaces.
 
Exxtreme said:
Almost every image quality comparison with RefRast is useless because most hardware implementations are too different from the RefRast. RefRast renders every(!) shader with FP32. ATi's R200 and R300 don't support FP32, a GF1/2/3/4 supports integer precision only, etc.
Much like the NV30, the GeForce3/4 supported FP32 in one unit, and had a separate unit for integer calculations. The FP32 unit was solely for texture ops, while the integer unit was for the majority of color operations.
 
Re: ATI Filter cheats at 3Dcenter

nelg said:
http://www.3dcenter.de/artikel/2003/11-21_a_english.php
Damn. Only just seen this thread. Oh my.

Folks, this article is NOT about filtering "cheats". Far from it. It's a theoretical / technical analysis of filtering behaviour on ATI hardware. I personally don't agree with all of aths's conclusions, but said conclusions weren't the aim of the article. Just a, hmm, editorial comment (i.e. opinion) on aths's part. As far as I'm concerned, I probably wouldn't have phrased them the way they were, but then, the English translation sounds harsher to me than the German original, too. And I'm in the comfortable position of knowing what aths wanted to say to begin with. *shrug*

IMO, aths should have limited the editorial part(s) to his main gripe: the possibility that future generations of VGAs may still cut the corners that are NOW acceptable in most relevant situations, simply because "they worked back then, so why drop them?"

z-bag, Xmas, Demirug, Exx: thanks for your clarifications. :)

93,
-Sascha.rb
 
Razor04 said:
This is somewhat disappointing: whatever area of the R300 we poked at, we always found something that could have been done better, as demonstrated by the competition's textbook quality. It's likely that there are even more filtering simplifications on the Radeon that we simply haven't found yet.
This was my favorite part of the whole article...I think someone feels a need to make up for the past few anti-NV articles lest they appear biased against NV.
That is because I am, let's say, censorious toward NV (though, btw, I have owned a GeForce Ti 4600 since summer '02, and still use it). I am censorious toward ATI, too. After all, I stated the fact that some particular R3x0 features are better solved by the competition. My intention was not to make a list of features where "here ATI rules" and "there NV rules". That ATI's filters do not deliver "full" quality is something very few people know. After I saw the strange artifacts in ATI's BF, I felt this could be worth a short article. It is purely accidental that the release came just after the "brilinear" article. For another reason I was just studying a paper about MIP-map selection and trilinear filtering, and I fully understand that for technical feasibility's sake some compromise is inescapable. But I cannot accept a bilinear filter that is not "quasi"-perfect. (IMHO, BF should be limited only by the fact that MIP-level calculation is per quad, and by the color resolution of the texture and frame buffer.) I don't give a damn whether it is noticeable in "real games". For me, a GPU is more than just an acceleration tool for some games.
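To make the "quasi-perfect" demand concrete, here is a toy 1-D sketch (my numbers and function are purely illustrative, not any hardware's actual datapath): with full-precision blend weights a magnified gradient ramps smoothly, while coarsely quantized subtexel weights collapse it into visible bands.

```python
import math

def linear_filter(texels, u, weight_bits=None):
    """1-D linear texture filter (the 1-D analogue of bilinear).
    If weight_bits is set, the subtexel blend weight is quantized
    first, as cheap hardware might do. Illustrative only."""
    x = u - 0.5                      # texel centers at integer + 0.5
    base = math.floor(x)
    frac = x - base
    if weight_bits is not None:
        steps = 1 << weight_bits
        frac = round(frac * steps) / steps
    a = texels[max(base, 0)]
    b = texels[min(base + 1, len(texels) - 1)]
    return a * (1.0 - frac) + b * frac

# Magnifying a two-texel gradient: full-precision weights give a smooth
# ramp; coarse 2-bit weights collapse it into just 5 bands.
tex = [0.0, 1.0]
smooth = {round(linear_filter(tex, 0.5 + i / 64.0), 6) for i in range(65)}
banded = {round(linear_filter(tex, 0.5 + i / 64.0, weight_bits=2), 6) for i in range(65)}
assert len(smooth) == 65 and len(banded) == 5
```

The banding from coarse subtexel weights is exactly the kind of artifact that only color resolution and per-quad MIP selection should be allowed to introduce, per the argument above.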


Razor04 said:
ATI has done a wonderful job with the R300 but some people just can't seem to acknowledge it without first pointing out any fault it has in an attempt to downplay the R300's superiority over the competitions part.
Yeah, that's true. Since the R300, ATI has been king of the hill; in about 1 1/2 years, big N has not been able to release an "overall better" GPU. Of course, R300 and R350 are still far from "perfection". Take the shader limitations, e.g. It may sound a bit harsh, but for calculating texture coordinates, I consider FP24 "lowered precision", too.

For an article, I have to focus on one particular matter. This article was about ATI's texture filtering "optimizations". So things like shader limitations, but also NV's "brilinear" (I refuse to consider a graphics card whose driver does not allow simple things like full trilinear filtering an accessory for a gamer), had to stay out. (So I didn't talk about other simplifications in the R300 either, e.g. the matter with alpha blending. Finding harsh words is my style :(, but this article was not intended to bash ATI. :))


Ailuros said:
There's a lot missing in that article and aths knows it.
With a reason. There will be more filter-related stuff "soon" (let's say, early in 2004).
 
this article is imo very hmmmmmmmmm :D
have to read it several times to deal with it.

real ingame shots to prove this "less quality" would have been nice, but this seems not to be possible. :(
so I don't care about ATI's "optimization".

or is there an application which benefits from the higher precision of the nvidia cards?
 
I had to register here to show you these screenshots.

I even made a small article about it half a year ago:
It's in Finnish (edit: added a short English summary), but let the two images do the talking:

First, an image of how the ATI Radeon 9700 and the refrast render frame 450 in the 3DMark 2003 Mother Nature test:

http://jt7-315a.tky.hut.fi/mendel/ref450.html
http://jt7-315a.tky.hut.fi/mendel/ref450.jpg
edit: close-up of the turtle texture:
http://jt7-315a.tky.hut.fi/mendel/kuori.jpg


Then how the GeForce FX does the same thing.
http://www.skenegroup.net/common/images/5800_ultra_vs_R300/3dmark_450_5800_ultra.jpg

Look at the aliasing in the turtle texture!

This should prove it can be seen! Now it finally makes sense. I was very confused about why the refrast produces the same lower quality as the FX, but now I know the refrast also does filtering in 5 bits! Thanks for clearing it up!

edit: Oh no no no, I was jumping to conclusions too fast. The Futuremark patch for 3DMark03 seems to make the FX render the scene a lot more like the refrast. So now I'm just very confused about what causes the aliasing here. Any thoughts?
 
Re: ATI Filter cheats at 3Dcenter

nggalai said:
As far as I'm concerned, I probably wouldn't have phrased them the way they were, but then, the English translation sounds harsher to me than the German original, too.
Hm? :oops:

Feel free to join the proofreading here :D
Current standings:
1 complete brain fart (German paragraph :oops: )
5 run-of-the-mill mistakes
1 semantics quibble ("even" vs "only"; I think I can win this one)
 
mapel110 said:
this article is imo very hmmmmmmmmm :D
have to read it several times to deal with it.

real ingame shots to prove this "less quality" would have been nice, but this seems not to be possible. :(
so I don't care about ATI's "optimization".

or is there an application which benefits from the higher precision of the nvidia cards?

As Chalnoth already mentioned, it would be close to futile to try to capture this 'marginally' (actually 37.5%, i.e. 3 of 8 bits ;)) lessened precision in screenshots, for it would most likely only be visible as texture aliasing, which I also noticed, especially in direct comparison with the NV2x.
Good examples are SeSam (which is a complete mess regarding textures anyway) and Ultima IX which, in its own right, is even worse.
But in both of these games the textures do look much more unsteady on ATi hardware than on the Green Californians'.
 
16 degrees.

Noticeability is always dependent upon the scene being rendered. So, I just go with a simple definition: an implementation is only as good as the worst-case scenario. From what I've seen, the R300 is capable of far more texture aliasing and far less detail with its anisotropic filtering than the GeForce4 I used to have (stupid thing died...).

Funny, I always thought it was 22 degrees (exactly in the middle between 0 and 45), but if it is 16, correction acknowledged; it's of minimal importance to me anyway.

As far as texture aliasing goes, I think I've addressed it in my former post already, and how the NV3x behaves in contrast, too. The keyword here should be usability. As I said, I was lucky if I could use 2xAF on the NV25 in very high resolutions, apart from some corner cases. Mine died too, but it was actually consuming (I know it sounds absurd) at least as much power as the R300 does today (Leadtek A250TD); maybe Leadtek should have considered molex connectors even back then...

As Chalnoth already mentioned, it would be close to futile to try to capture this 'marginally' (actually 37.5% ) lessened precision in screenshots, for it would most likely only be visible as texture aliasing, which I also noticed, especially in direct comparison with the NV2x.

Good examples are SeSam (which is a complete mess regarding textures anyway) and Ultima IX which, in its own right, is even worse.

You mean SS:SE probably, since it wasn't as bad in the original SS. I had the in-game texture LOD one notch towards "blur" too, even on the NV25, and used Quincunx with 8xAF in 1152*864*32 and forced 24-bit Z/8-bit stencil (howdy CroTeam :) ). On the R300 it's rather 1280*960*32 and beyond with 4xAA/16xAF, in-game texture LOD one notch towards "blur" and driver mipmap LOD at "quality". As to which resulting IQ is actually better or worse, even from a texture aliasing perspective, I leave it open to anyone's judgement.

F1 2002, as another example, is an even worse case than SS:SE.

@aths,

To avoid any possible misunderstandings: I was chuckling a few pages ago at the thought of where all this came from, and not about anything else.

With a reason. There will be more filter-related stuff "soon" (let's say, early in 2004).

The article still missed its original purpose. I find it at least worrisome, from my POV, that people are missing your original intentions and are starting to carry your article around as proof of ATI supposedly cheating with texture filtering. Of course I expect future hardware investigations, and very detailed ones at that.

There's one more instance you could sit down and write an article about; sadly all hints so far must have passed you by. You know where to find me don't you? :)
 
Chalnoth said:
GraphixViolence said:
The main problem with comparing to Nvidia here is that the FX series is a great example of how NOT to allocate transistors. The NV35 has ~25% more transistors than the R350, yet is much slower clock for clock. Maybe that's because they wasted them on things like 8-bit LOD and bilinear filtering precision that aren't visible without special tools...
I completely disagree.
Of course you do. :rolleyes:
These artifacts in texture LOD selection manifest themselves in a very specific way that can be very, very visible: texture aliasing.
Where's your evidence that this 5-bit accuracy causes texture aliasing? In fact, it's possible that it reduces texture aliasing because you stay in the same mip level longer.
I, for one, have noticed vastly more texture aliasing on the Radeon 9700 than I ever saw on the GeForce4.
Of course, because the GeForce 4 image quality sucks eggs.
Side note: I had always assumed that the increased texture aliasing was due to more aggressive LOD selection, but the inaccurate LOD selection could easily account for the increased texture aliasing.
But you don't know that it does.

-FUDie
 
FUDie said:
Where's your evidence that this 5-bit accuracy causes texture aliasing?
-FUDie

Did you not check my screenshots a few posts up, or did you just not agree with my findings?

Also, didn't someone mention this could cause aliasing in certain types of fog? Well, have you ever played Silent Hill 2? The banding is just terrible.
 
Ailuros said:
Funny I always thought it's 22 degrees
16 degrees, as in the maximum degree of anisotropy supported. It has nothing to do with the shortcomings of ATI's method.
 
Mendel said:
I had to register here to show you these screenshots.

I even made a small article about it half a year ago:
It's in Finnish (edit: added a short English summary), but let the two images do the talking:

First, an image of how the ATI Radeon 9700 and the refrast render frame 450 in the 3DMark 2003 Mother Nature test:

http://jt7-315a.tky.hut.fi/mendel/ref450.html
http://jt7-315a.tky.hut.fi/mendel/ref450.jpg
edit: close-up of the turtle texture:
http://jt7-315a.tky.hut.fi/mendel/kuori.jpg


Then how the GeForce FX does the same thing.
http://www.skenegroup.net/common/images/5800_ultra_vs_R300/3dmark_450_5800_ultra.jpg

Look at the aliasing in the turtle texture!

This should prove it can be seen! Now it finally makes sense. I was very confused about why the refrast produces the same lower quality as the FX, but now I know the refrast also does filtering in 5 bits! Thanks for clearing it up!

Let's not forget this feature Valve detected in the 5x.xx drivers....

http://www.beyond3d.com/misc/hl2/index.php?p=2


Rendering the scene differently (correctly) when a screen grab is detected
 
aths said:
Take the shader limitations, e.g. It may sound a bit harsh, but for calculating texture coordinates, I consider FP24 "lowered precision", too.

Can you show an example of how this could be true? What kind of texture coordinates would you not be able to address with FP24? :?

-----------------

While ATI does support the "minimum" spec, it should also be noted that they support the whole spec, unlike Nvidia.

The problem is that Nvidia provided FP32, but is too slow to use it. They provided more pixel shader instructions and a longer instruction queue, but the pixel shaders are too slow to utilize them.
 
rwolf said:
aths said:
Take the shader limitations, e.g. It may sound a bit harsh, but for calculating texture coordinates, I consider FP24 "lowered precision", too.

Can you show an example of how this could be true? What kind of texture coordinates would you not be able to address with FP24? :?
Texture coordinate calcs are done in 32 bit, I thought.
 
Mendel said:
This should prove it can be seen! Now it finally makes sense, I was very confused about why refrast produces the same lower quality than FX but now I know also refrash does filtering in 5 bits! Thanks for clearing it up!
Interesting screenshots, but I think you're probably jumping to conclusions a bit too quickly. In this case the rendering from the R300 and refrast apparently match up closely while the 5800 rendering differs, but that doesn't automatically imply that the reason is the various choices of LOD precision. In fact, if the filtering article told you anything, it should have been that large, easily visible differences in rendering are generally very unlikely to be caused by LOD precision (unless you had a stupidly small number of bits for this). Don't forget the 5 bits only refer to the mipmap selection part of the filtering, not the in-level filtering.

I'm not saying that you can't be right about this, just that it's relatively unlikely that this is the cause. There are certainly enough other differences between the various architectures involved that there could be other explanations.

[EDIT - just did a few experiments, and I'm now pretty convinced that the differences are not anything to do with LOD or filtering precision issues in R300 or Refrast. Try dumping out the same image from a 5800 using patch 340]
 
Althornin said:
Texture coordinate calcs are done in 32bit, i thought.

Coords in VS are 32 bit (128 bit xyzw quads).

You can do texture look-ups in PS with whatever precision you have available.
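To put rough numbers on the FP24 concern (a simplified model with made-up helper names; FP24's usual description is s16e7, i.e. 16 explicit mantissa bits, and the idealized rounding below is not any chip's actual behaviour):

```python
import math

def round_to_mantissa(x, mant_bits):
    """Round a positive float to `mant_bits` explicit mantissa bits
    (implicit leading 1, round-to-nearest). A crude stand-in for
    storing x in a reduced-precision float such as FP24 (16 mantissa
    bits) or FP32 (23 mantissa bits)."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(x))
    scale = 2.0 ** (mant_bits - e)
    return round(x * scale) / scale

def worst_coord_error(tex_size, mant_bits):
    """Largest sampled error, in texels, for coordinates near the far
    edge of a tex_size-texel texture after passing through the format."""
    worst = 0.0
    for i in range(1, 1000):
        u = 1.0 - i / 100000.0       # coordinates just below 1.0
        err = abs(round_to_mantissa(u, mant_bits) - u) * tex_size
        worst = max(worst, err)
    return worst

# On a 4096-texel texture, 16 mantissa bits leave only about 1/64 texel
# of addressing accuracy near u = 1.0; 23 bits keep it effectively exact.
assert worst_coord_error(4096, 16) > 0.01
assert worst_coord_error(4096, 23) < 0.001
```

So for very large textures the subtexel position genuinely has fewer bits to work with under FP24 than under FP32; whether roughly 1/64 of a texel ever matters in practice is the open question.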
 
Several problems with this article:

First of all, there's the constant use of "textbook" and "standard" when describing nVidia's implementation. You defined these terms that way just because nVidia uses this method, even though 5 bits is the minimum spec.

We don't think it's okay to go below textbook quality on bilinear filters.
And just why not? Why is five bits not OK with you? Just because eight bits looks better at extreme zooms? How about ten bits? If it is not OK, then why is it in the specifications?

The title "ATIs Filtertricks" and the use of the word "optimizations" in quotes are designed to imply that ATI is somehow cheating here. Hell, even the title of this thread, "ATI Filter cheats at 3Dcenter", indicates the effect this article has on people's perception. To imply that ATI is trying to "trick" you is outrageous.

And just what is the point of the entire article? To my knowledge, ATI has never claimed that they were using eight bits! They have never implied it! Five bits is part of the specs! ATI is not lowering quality below the minimum specs just to gain performance.

If you want to make the argument that nVidia's method produces better quality than ATI's because they are using eight bits instead of only five, that is fine. But to imply that ATI is somehow trying to deceive people, when they never made this claim and are at least reaching the minimum specifications, is IMHO unethical. :(
 