StealthHawk said:
demalion said:
So now you're saying that in games without off angles, ATI's AF performance hit is like that of nVidia doing the same degree of AF?
No, that's not really what I'm saying. I'm saying that in games without off angles ATI is doing more work than in games with off angles.
OK, but that still doesn't mean that representing the image quality as uniformly weighted purely by one angular orientation factor allows you to say two implementations are offering the same image quality. I.e., the 4x versus 16x example.
We need to be clear on this: does the reduction in ATI's performance hit depend on the off angles being present, or not?
Yes, I think so.
How does that coincide with your commentary concerning games not having significant off angles, but still performing with less performance hit for ATI cards?
While I have admitted that ATI's algorithm is more efficient/aggressive than NVIDIA's,
I'm asking for
one clear answer, but this appears to be two contradictory ones...to my understanding, this is a different answer than you just gave.
This contradicts what you just said unless you are asserting that the off angle image quality reduction and performance efficiency are the same thing, yet...
when there are off angles present that workload is going to decrease even more in comparison to the work being done by NVIDIA.
...you are again saying that the performance increase you are proposing from the off angles is in addition to this efficiency. Well, if there is a performance gain from the off angle image quality degradation, it has to be the
only method of reducing performance penalty for Quitch's comment to be contradicted.
Are you simply proposing that the image quality degradation is resulting in performance gain, but no longer stating you disagree with Quitch's commentary with regard to the 16x versus 4x comparison?
...
The point of Quitch's commentary was that "it does more work so it's better image quality" is an invalid precept. Your assertion was "NVIDIA's AF method is doing more work because it filters at all angles, while ATI does not. ATI is saving work by not using the maximum degree of AF at certain angles when those angles are present.", which counters this proposal (as you stated by "I don't see how this is the same thing at all." in conjunction with this) if the performance difference is due only to the image quality being degraded...it does
not do so if there is a performance difference outside of that which is resulting from the image quality reduction (because then it is apparent that it is saving work even without reducing image quality).
Again: is the performance due to the occurrence of image quality degradation, or due to the methodology (which also happens to sometimes result in image quality reduction)?
But what if they are gaining a great deal of performance even when image quality is not lost? Perhaps this indicates that the determining factor for the performance increase is not the lost image quality? That was Quitch's comparison, AFAICS...what do you propose is showing that this does not apply? AFAIK, this is exactly what is shown, though texture stage/trilinear issues (from both vendors) do cloud the issue.
It is apparent that ATI is gaining a lot of performance in UT2003 and IQ is lost.
When is IQ lost? When is it gaining a "lot" of performance? If there isn't a dependency of the second on the first, this observation doesn't address these questions.
There are also people who think that ATI's AF has more texture aliasing than NVIDIA's AF. This would be lower quality.
Well, then at least you are evaluating the image quality in the scene by the results, instead of proposing to compare a count in a completely different scene to represent the image quality of an implementation...
This was my point.
Show me where else ATI is gaining a great deal of performance by not doing trilinear on certain texture stages where IQ is not affected
SH...ATI doesn't seem to have a higher performance trilinear filtering implementation that I know of. So...this doesn't seem to relate clearly to my commentary and proposals...?
...
...
I am proposing quite simply that the performance hit between AF on NVIDIA and ATI hardware would be closer when there are no off angles, and bigger when there are off angles. I think we can both agree that regardless, ATI's performance hit is lower than NVIDIA's.
This seems fairly unambiguous(?), and I think I've explained how this agreement you mention supports Quitch's commentary...
...
Because the GF still has a larger performance hit with bilinear AF than ATI does, even in the many games we agree don't have much manifestation of the "off angles"?
Are you sure that this isn't just a coincidence?
I thought we were both sure it wasn't?
Again, do you know of bilinear AF performance data for applicable nVidia cards and with complete AF application? I know "Balanced" mode inflicts a significantly greater penalty, and I have the impression that this performance is close to the performance of bilinear filtering. Also, how do nVidia's current "Performance" and ATI's "Performance" mode compare in performance penalties?
I can establish with greater clarity that the performance I know of for ATI with bilinear filtering and 16x AF ranges from 89% of the no-AF framerate, and sometimes reaches even higher efficiency.
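To be concrete about how I'm reading that figure (illustrative numbers only, not benchmark data):

```python
# Illustrative: if a card renders 100 fps with no AF and 89 fps with
# bilinear 16x AF, then AF performance is 89% of the no-AF framerate,
# i.e. an 11% performance hit. The framerates here are hypothetical.
no_af_fps = 100.0   # hypothetical framerate with no AF
af_fps = 89.0       # hypothetical framerate with bilinear 16x AF

relative = af_fps / no_af_fps          # fraction of no-AF performance
hit_percent = (1 - relative) * 100     # performance hit in percent
print(f"AF performance is {relative:.0%} of no AF ({hit_percent:.0f}% hit)")
```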
Exxtreme said:
A new version of AF Tester for D3D is available:
http://demirug.bei.t-online.de/D3DAFTester.zip
This version has a new function. It can count how many filtered texels are used.
My R9700 Pro is using 4395179 texels at 16x tri-AF.
I have my own numbers from 2x AF and 4x AF, taken with an R9500 Pro. 2x AF is using 3918231 texels and 4x AF is using 4274985 texels. In real world games, do you see 16x AF having essentially the same performance hit as 4x AF?
Well, I presume you didn't mean to compare a 9500 Pro to a 9700 Pro and you meant some type of GeForce card, but this logic seems to be broken in several ways unless completing an AF'd pixel has identical latency on each architecture and the tunnel case is a suitable universal representation. And, again, this assertion on your part seems to contradict what you said earlier and propose again later...?
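For what it's worth, the ratios in those tunnel-test counts work out like this (simple arithmetic on the numbers quoted above; note it is a texel count, not a performance model, which is exactly the problem):

```python
# Texel counts quoted above from the D3D AF Tester tunnel scene.
texels = {
    "2x AF (R9500 Pro)": 3918231,
    "4x AF (R9500 Pro)": 4274985,
    "16x AF (R9700 Pro)": 4395179,
}

# Express each count relative to the 2x AF baseline. The step from 4x
# to 16x adds only a few percent more texels in this one scene, which
# says nothing about per-texel cost on a different architecture.
base = texels["2x AF (R9500 Pro)"]
for mode, count in texels.items():
    print(f"{mode}: {count} texels, {100 * (count / base - 1):+.1f}% vs 2x AF")
```

Run as-is, this shows 4x AF using about 9% more texels than 2x, and 16x about 12% more, in the tunnel scene alone.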
As stated, most games don't feature off angles in large quantity.
Right, and does ATI have less performance hit for AF in most games when doing AF compared to nVidia doing AF in the same circumstances (using trilinear, applying to all the same texture stages, etc)?
Yes.
Wait, maybe you don't construe "more efficient" as indicating "less work is done even when results are similar" (i.e., you are considering the "work" to be the result alone, and universally pertinent, and not the process)?
...
Another thing which I didn't point out in my last post was that ATI's Quality AF does take a much bigger hit than ATI's Performance AF. The only difference between them is that one is using trilinear filtering.
So, trilinear filtering is responsible for the large performance hit then...right? Were you meaning to say something else?
The whole mythos of ATI's free AF...
Who said free AF here?
...seems to have started because NVIDIA had big hit trilinear AF while ATI had low hit bilinear AF.
But I don't think that was the only performance issue with nVidia's AF, SH, and you seem to be agreeing sometimes.
The thing is, once ATI cards start using trilinear too, AF doesn't seem to be as free.
True, because ATI doesn't have a more efficient methodology of (full) trilinear implementation at this time AFAIK. I wasn't ever proposing that they did, AFAICS.
You're making a case, but I'm not convinced yet with the evidence provided. It just doesn't make sense to me why the adaptivity of ATI's AF needs to be dependent on the angle.
Nor will it until we reinvent or reverse-engineer ATI's "secret" AF algorithm, which I think is a bit more than detecting off angles and reducing workload for them...or else ATI could fairly trivially just remove the detection criteria used to decide to reduce quality and get their non-off angle performance gains while retaining flawless image quality (or switch it off for aniso tester applications...).
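To illustrate what I mean by the naive reading of "detecting off angles and reducing workload" — here is a sketch of what such a trivial heuristic would look like (entirely hypothetical; the function and thresholds are my invention, and my point is precisely that ATI's actual algorithm is probably not reducible to this):

```python
def naive_af_degree(surface_angle_deg, max_degree=16):
    """Hypothetical 'off-angle' heuristic: keep full AF near multiples
    of 45 degrees and cut the degree in between.

    This is NOT ATI's actual algorithm -- if it were this simple, the
    detection step could trivially be removed (or disabled for aniso
    tester applications) to get full quality everywhere.
    """
    # How far (in degrees) the surface orientation is from the nearest
    # multiple of 45 degrees.
    distance_from_aligned = 22.5 - abs((surface_angle_deg % 45) - 22.5)
    if distance_from_aligned < 5:      # near an "aligned" angle
        return max_degree              # full degree of AF
    else:                              # an "off angle"
        return max_degree // 4         # reduced workload, reduced IQ

for angle in (0, 22.5, 45, 67.5, 90):
    print(f"{angle:5} deg -> {naive_af_degree(angle)}x AF")
```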