ATI Filter tricks at 3DCenter

Chalz, my NV20 didn't have filtering IQ anywhere near as good as my R300 does.

I encourage you to use 16xAF + TF on every texture stage. :D
 
Chalnoth said:
Ailuros said:
If you compare NV25 to R3xx then yes. In any other case, since the thread is about 3DCenter, the site in question has more than one extensive article about NV3x texture filtering degradations.
Yes, the NV3x anisotropic filtering is also a disappointment. But, then again, ATI also drops to bilinear filtering for all but the first texture stage, so I'm not sure you can really say that what nVidia is doing with the NV3x is any worse than what ATI is doing to optimize anisotropic filtering at the software level.

With the slight difference that if application-specific settings don't work, the user can, with the aid of 3rd-party utilities such as rTool, actually force trilinear on all texturing stages if he wants to.

Both IHVs need to put an end to this, and if the user sets trilinear AF, he should receive full trilinear AF.

I encourage you to use 16xAF + TF on every texture stage.

Playing devil's advocate here, it's actually up to 16 sample AF, for accuracy's sake. It's fairly well known that at 22- and 67-degree angles only about 2x sample AF is being used. Where it then usually leads to endless debates is in which cases it's actually noticeable and to what degree. That was part of the original debate from which the article in question apparently originated.
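To picture what that angle dependence means, here's a hypothetical sketch. The fall-off curve and the numbers are my own illustration of the described behaviour (full anisotropy near the "preferred" angles, dropping toward ~2x in between), not ATI's actual hardware logic:

```python
def effective_max_af(surface_angle_deg, requested_af=16):
    """Hypothetical sketch of angle-dependent AF: full anisotropy near
    0/45/90-degree surface angles, dropping toward ~2x near the
    worst-case angles (about 22 and 67 degrees)."""
    # distance (in degrees) to the nearest "preferred" angle
    preferred = [0.0, 45.0, 90.0]
    d = min(abs(surface_angle_deg % 90 - p) for p in preferred)
    # linearly fade from the requested degree down to 2x as we
    # approach the worst case at 22.5 degrees off-axis
    t = d / 22.5  # 0 at a preferred angle, 1 at the worst case
    return max(2, round(requested_af * (1 - t) + 2 * t))

for angle in (0, 22, 45, 67, 90):
    print(angle, effective_max_af(angle))  # 16, 2, 16, 2, 16
```

With this toy model, a floor or a wall seen head-on gets the full 16x, while a banked surface at 22 degrees falls back to 2x, which matches the "wall hugging / banking aircraft" cases where the difference can show up.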

Unless I resort to some extreme wall-hugging or bank an aircraft in a flight sim once too often, say, I cannot notice a difference (or I'm obviously blind), but I'm personally forgiving about the overall quality/performance ratio and tradeoffs. Heck, I used to be happy if I could combine 2xAA/2xAF on the NV25 in high resolutions, so it is a step forward and not backwards in the grander scheme of things. All of course IMHO.

One point on which aths and I share the same mindset completely is that future hardware should have single-cycle trilinear. There's a lot missing in that article, and aths knows it.
 
I notice texture aliasing quite clearly in Max Payne 2 when I play it at 1024x768 with 4xFSAA and 4x performance anisotropic filtering.
 
sonix666 said:
I notice texture aliasing quite clearly in Max Payne 2 when I play it at 1024x768 with 4xFSAA and 4x performance anisotropic filtering.

Haven't played MP2 yet, but is fillrate so limiting that you can't pick a higher resolution? (regardless of what card you're using)
 
Ailuros said:
Playing devil's advocate here, it's actually up to 16 sample AF, for accuracy's sake. It's fairly well known that at 22- and 67-degree angles only about 2x sample AF is being used. Where it then usually leads to endless debates is in which cases it's actually noticeable and to what degree. That was part of the original debate from which the article in question apparently originated.
16 degree.

Noticeability is always dependent upon the scene being rendered. So, I just go with a simple definition: an implementation is only as good as the worst-case scenario. From what I've seen, the R300 is capable of far more texture aliasing and far less detail with its anisotropic filtering than the GeForce4 I used to have (stupid thing died...).

One point on which aths and I share the same mindset completely is that future hardware should have single-cycle trilinear. There's a lot missing in that article, and aths knows it.
Yes, but not only single cycle trilinear.

I think future hardware should include (note that ATI or nVidia has already implemented many of these):

1. Greater sub-pixel precision for texture addressing (talking about ATI here...)
2. Full trilinear filtering on all texture stages where it is requested.
3. At least 16-degree maximum anisotropy
4. Programmable/sparse AA sample patterns
5. The ability to switch between MSAA and SSAA "on the fly."
6. Gamma correct FSAA

Anyway, I'm sure I could make a much longer list if I thought about it for a while, but this is just a few things.
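On item 6: gamma-correct FSAA means averaging the coverage samples in linear light rather than directly on the gamma-encoded values. A minimal sketch, assuming a simple power-law gamma of 2.2 (function names are my own):

```python
def resolve_aa(samples, gamma=2.2, gamma_correct=True):
    """Average AA samples (values normalized to 0..1). A gamma-correct
    resolve converts the gamma-encoded samples to linear light first,
    averages, then re-encodes; a naive resolve averages directly."""
    if not gamma_correct:
        return sum(samples) / len(samples)
    linear = [s ** gamma for s in samples]
    avg = sum(linear) / len(linear)
    return avg ** (1.0 / gamma)

# An edge pixel half-covered by white over black:
naive = resolve_aa([0.0, 1.0], gamma_correct=False)  # 0.5
correct = resolve_aa([0.0, 1.0])                     # ~0.73
```

The gamma-correct result is visibly brighter, which is why edges resolved without gamma correction tend to look too dark on one side.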

As for single-cycle trilinear filtering, I'm really not sure it's that prudent to dedicate more transistors to standard texture filtering. After all, since more and more surfaces are going to be shaded with pixel shaders, fixed-function texture filtering will be used less and less in comparison to the amount of time spent doing other operations in the pixel shader. So, though accelerated standard texture filtering could help significantly with today's games, and may help games in the near future quite a bit too, it isn't a very forward-looking way to design the hardware.
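For context on the cost being discussed: a trilinear tap is essentially two bilinear taps (one per adjacent mip level) blended by the fractional LOD, which is why doing it in one cycle roughly doubles the filtering hardware. A rough sketch, with my own names and a list-of-lists texture format:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def bilinear(tex, u, v):
    """Bilinear filter on a 2D list-of-lists texture (edge-clamped)."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = lerp(tex[y0][x0], tex[y0][x1], fx)
    bot = lerp(tex[y1][x0], tex[y1][x1], fx)
    return lerp(top, bot, fy)

def trilinear(mips, u, v, lod):
    """Trilinear = two bilinear taps blended by the fractional LOD --
    twice the work of a single bilinear tap."""
    lo = min(int(lod), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    return lerp(bilinear(mips[lo], u, v), bilinear(mips[hi], u, v), lod - lo)
```

A chip with one bilinear unit per pipe therefore needs two cycles per trilinear tap, unless extra filtering logic is spent to do both mip taps at once.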
 
I have to say, I think using the phrase "cutting corners" for something designed to meet the minimum spec is rather misleading. nVidia chose to do more; pretending ATi decided to do less is nonsense when it's fairly obvious they held to the minimum spec wherever they could.
 
Razor04 said:
As others have already mentioned, this is getting really, really picky and seems rather pointless to me, especially when it has been shown that the R300 produces the same output as the RefRast.
The R300/R350 can never deliver the same output as the RefRast, because the RefRast uses 32-bit FP.

P.S. I haven't noticed any difference between 5-bit LOD and 8-bit LOD in games. This point is irrelevant to me. The angle dependence of the anisotropic filter was IMHO a very bad decision. I've noticed the difference between ATI AF and NV AF in many games.
 
The R300/R350 can never deliver the same output as the RefRast, because the RefRast uses 32-bit FP.

I'm not so sure it doesn't just use integer if integer ops are called for (i.e. < PS2.0 / fixed function). However, this is hardly likely to make much difference where fixed function texturing alone is concerned, which is obviously what the poster was talking about.
 
DaveBaumann said:
The R300/R350 can never deliver the same output as the RefRast, because the RefRast uses 32-bit FP.

I'm not so sure it doesn't just use integer if integer ops are called for (i.e. < PS2.0 / fixed function). However, this is hardly likely to make much difference where fixed function texturing alone is concerned, which is obviously what the poster was talking about.
The RefRast always uses FP32, even with DX8 shaders.
 
bloodbob said:
5. The ability to switch between MSAA and SSAA "on the fly."
Is that per frame or per pixel?

Per-object should do the job. For a start, this could solve alpha-test AA problems.

It would be nice if a pixel shader could decide dynamically, for each pixel, whether it should run only per pixel or per subpixel. But I don't believe we'll see this soon.
 
Exxtreme said:
DaveBaumann said:
The R300/R350 can never deliver the same output as the RefRast, because the RefRast uses 32-bit FP.

I'm not so sure it doesn't just use integer if integer ops are called for (i.e. < PS2.0 / fixed function). However, this is hardly likely to make much difference where fixed function texturing alone is concerned, which is obviously what the poster was talking about.
The RefRast always uses FP32, even with DX8 shaders.

You're seriously saying it doesn't have integer support? That's lame. I would have hoped that the RefRast would support both signed 32-bit ints and doubles. If Microsoft isn't willing to support optional data types, why should IHVs?
 
bloodbob said:
Exxtreme said:
DaveBaumann said:
The R300/R350 can never deliver the same output as the RefRast, because the RefRast uses 32-bit FP.

I'm not so sure it doesn't just use integer if integer ops are called for (i.e. < PS2.0 / fixed function). However, this is hardly likely to make much difference where fixed function texturing alone is concerned, which is obviously what the poster was talking about.
The RefRast always uses FP32, even with DX8 shaders.

You're seriously saying it doesn't have integer support? That's lame. I would have hoped that the RefRast would support both signed 32-bit ints and doubles. If Microsoft isn't willing to support optional data types, why should IHVs?
Errm, the RefRast is more of a diagnostic tool for developers. I don't think it is suitable for image quality comparisons.

*Edit*
@ Threadstarter
I think, "cheat" is the wrong term for this case. "Intentional hardware-limitation" is IMHO better.
 
Exxtreme said:
Errm, the RefRast is more of a diagnostic tool for developers. I don't think it is suitable for image quality comparisons.

I strongly disagree. Yes, it is not intended for end-users, but it is the "reference" against which the IHVs have to stack up. Thus it is perfectly valid to do image quality comparisons IMO.
 
maven said:
Exxtreme said:
Errm, the RefRast is more of a diagnostic tool for developers. I don't think it is suitable for image quality comparisons.

I strongly disagree. Yes, it is not intended for end-users, but it is the "reference" against which the IHVs have to stack up. Thus it is perfectly valid to do image quality comparisons IMO.
Almost every image quality comparison with the RefRast is useless, because most hardware implementations are too different from the RefRast. The RefRast renders every(!) shader with FP32. ATi's R200 and R300 don't support FP32, a GF1/2/3/4 supports integer precision only, etc.
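A small illustration of why integer hardware can't bit-match an FP32 reference: rounding to 8-bit steps after every blend operation accumulates error that full-precision math avoids. The function names and the multiply chain below are my own sketch, not actual RefRast code:

```python
def quantize8(x):
    """Round to the nearest 8-bit fixed-point step (multiples of 1/255)."""
    return round(x * 255) / 255

def modulate_chain(color, factors, bits8=True):
    """Multiply a color by several blend factors, either keeping full
    float precision throughout (RefRast-style FP32) or rounding to
    8-bit integer precision after every operation (GF1-4-style)."""
    for f in factors:
        color = color * f
        if bits8:
            color = quantize8(color)
    return color

fp = modulate_chain(0.5, [0.5, 0.5, 0.5], bits8=False)  # 0.0625 exactly
i8 = modulate_chain(0.5, [0.5, 0.5, 0.5], bits8=True)   # 16/255, a bit higher
```

Even this short chain already diverges; with longer multi-texture combiner setups the gap only grows, so a pixel-exact match with an FP32 reference is simply not achievable on integer hardware.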
 
maven said:
Exxtreme said:
Errm, the RefRast is more of a diagnostic tool for developers. I don't think it is suitable for image quality comparisons.

I strongly disagree. Yes, it is not intended for end-users, but it is the "reference" against which the IHVs have to stack up. Thus it is perfectly valid to do image quality comparisons IMO.

...and that's exactly what the WHQL tests do: greater than x percent of the pixels match equals a WHQL pass.
 
PSarge said:
...and that's exactly what the WHQL tests do: greater than x percent of the pixels match equals a WHQL pass.
Which is a very stupid kind of test.

Quitch said:
I have to say, I think using the phrase "cutting corners" for something designed to meet the minimum spec is rather misleading. nVidia chose to do more; pretending ATi decided to do less is nonsense when it's fairly obvious they held to the minimum spec wherever they could.
It is fairly misleading to say they designed the R300 according to "the minimum spec" (I presume you mean the DX9 spec), which didn't even exist at that time.

I'm not sure whether there is such a thing as explicitly stated precision requirements for texture filtering, LOD calculation, anisotropic filtering and alpha blending in the first place.
 
If WHQL is testing against the RefRast, then I'd wager there are minimum specs for things like base texture filtering. I wonder what DX8 specified the trilinear LOD accuracy to be?
 
If you take a look at the current "WHQL Test Specification", you will see that the RefRast is not used for everything.

In the case of texture filtering, only the bilinear filter is compared against the RefRast (with up to 15 percent error tolerance). Trilinear is not tested at all. The aniso filter is only tested against itself.

There is no test to check the results of pixel shader operations at the moment.

HCT 12 BETA has a new test called the "ACT-R Test". This test runs different apps, captures some frames, and compares them with the RefRast (up to 15 percent error tolerance). But this is a new test, and it is not certain how it will work in the final version of HCT 12.
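For illustration, a WHQL-style framebuffer comparison might look roughly like this. The pass threshold is an assumption on my part (the spec only says "greater than x percent"), and the function and parameter names are my own:

```python
def whql_style_pass(rendered, reference, tolerance=0.15, required_match=0.85):
    """Hypothetical sketch of a WHQL-style framebuffer comparison:
    a pixel 'matches' if it is within `tolerance` of the reference
    value (both normalized to 0..1), and the test passes when at
    least `required_match` of all pixels match. Both thresholds
    here are illustrative assumptions."""
    matches = sum(1 for r, ref in zip(rendered, reference)
                  if abs(r - ref) <= tolerance)
    return matches / len(rendered) >= required_match
```

A test of this shape tolerates small per-pixel precision differences (FP32 vs. integer, LOD rounding, etc.) while still failing renderings that diverge from the reference wholesale, which is why it says little about fine image quality.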
 