A look at IQ at Firingsquad...

Bjorn said:
Sign me up on this. These things should be in the games since the usable settings can differ so much depending on the game.
I think even you lucky Radeon 9800 Pro users will notice this when HL2 and Doom3 come out :)

The other thing is that it can be rather annoying to test which settings you prefer when you have to restart the game and go to the CP every time you want to change something. I remember that it was a joy to set up the FSAA in Neverwinter Nights since you could change and test the settings within the game. And the result was immediately visible when moving the slider. Don't remember if you could do the same with AF though.

Yes, it would be great if we could simply set the drivers to AP and leave them there!....;) I agree that having to go back and forth to the Cpanel in a game is cumbersome, and to have to do it several times among several different games can be downright annoying.

NWN does do it somewhat better than most--but still...if you go into the NWN.ini you'll see the on/off entry for "Enable AnisotropicFiltering=1/0"...First, why couldn't they put this in the GUI along with their FSAA settings...and second, why couldn't they use 4x/8x/16x in the GUI (as they do for FSAA)--instead of just turning AF on or off...? What does this tell me about the behavior of the engine in relation to how it handles AF...? Not much, right? Yep, developers could do a lot better, even the ones who allow you to set IQ from within their games--the included documentation on these things is so scant as to make it useless in most cases.
 
WaltC said:
StealthHawk said:
It's a global texture stage optimization...which only seems to affect UT2003 performance. And Max Payne by 5%. And Aquanox 2 by ~10%. And everything else by 0% (in benchmarks I have seen. I have not seen benchmarks which show performance gains in any other games. Not to say they don't exist, but I haven't seen any evidence to the contrary). The question has always been, for me at least: do these two other games that get increased performance lose any IQ?

I am surprised by what convolutions people can make out of something as simple as Cpanel options....;)

Cpanel options are not meant to replace in-game options settings. They exist to *force* options in games which do not offer internal control for those options through their respective game engines. The proper way to use the Cpanel in the case of games which include their own settings is to turn the Cpanel control *off* by way of setting it to Application Preferences, and using the in-game controls, instead. When this is done with the Cats in UT2K3 the game displays full trilinear. There is no optimization in the Cats which prevents the proper texture-stage treatment when the user turns on full trilinear in the game (unless the user fails to set the Cpanel to AP, in which case the global texture-stage treatment is forced, instead.)
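The Cpanel-versus-application logic described above can be sketched as a toy decision function. All names and the eight-stage count here are illustrative assumptions, not actual driver internals:

```python
# Toy model of how a driver might resolve filtering settings.
# "AP" = Application Preference; names are illustrative only.

def resolve_filtering(cpanel_mode, app_requested, num_stages=8):
    """Return the filter applied on each texture stage."""
    if cpanel_mode == "application_preference":
        # Driver stays out of the way: the game's own setting wins,
        # applied to every stage the engine asks to filter.
        return [app_requested] * num_stages
    # Forced via the control panel: the common "best fit" is full
    # trilinear on stage 0 only, bilinear on the remaining stages.
    return ["trilinear"] + ["bilinear"] * (num_stages - 1)

# In-game trilinear with the panel set to AP -> trilinear everywhere.
print(resolve_filtering("application_preference", "trilinear"))
# Panel forcing instead -> only stage 0 gets trilinear.
print(resolve_filtering("forced_trilinear", "trilinear"))
```

The point of the sketch is the branch: when the panel is set to AP the application's request reaches every stage, and only the forced path applies the single-stage "best fit" treatment.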

Right. How many times do I have to say that I am *not* claiming this is a UT2003-specific optimization, and *not* a thinly veiled attempt at a UT2003 optimization, before you believe me? I don't know *what* it is. That is what I have been trying to determine.

When an IHV forces something through the Cpanel, by nature it is not application-specific. Neither is it an "optimization" in any sense of the word. The Cpanel forces texture-stage treatment on a single stage simply because that is the "best fit" for 99% of the 3d games in existence which do not offer their own in-game controls for such settings.

To call it a "global optimization" is in error because it was not implemented to benefit the 2-3 games you mention, but rather it was implemented to serve the dozens of 3d games for which the treatment of a single texture-stage is all that is required. The key to understanding it is not looking at the 2-3 exceptions, but rather looking at the dozens of 3d games for which it works perfectly.

Ok, I am not sure which party line you subscribe to. This seems to be the prevailing theory though: performance will be lower if trilinear is done in all texture stages when it is not needed, and IQ will *not* subsequently increase, so why do trilinear if bilinear is all that is required? The control panel should be used for legacy applications which do not offer AF options, and when doing tri/bi these legacy applications will not lose IQ.

Sure, that sounds good on paper. This is why it has been called a global optimization. But it can only be called an optimization if performance increases. Furthermore, it only makes sense to do this globally if a) most applications benefit from such behavior and/or b) no application suffers from this behavior.

Where are the applications that gain performance but do not lose IQ? If there are no such applications, then why implement this AF method at all? Why not just do trilinear in all stages? It really makes no sense.

This is what we are seeing, again:
1) almost all games are unaffected by doing trilinear, it seems.
2) some games gain a lot of performance by doing tri/bi, but IQ is lost.
3) an equally small number of games may gain performance but not lose IQ.

A good 3d game will include internal controls for all of its IQ settings. A good set of IHV drivers will respond correctly to those in-game controls. A poor 3d game will include few, if any, in-game controls for IQ, which forces the user to drop back to an attempt to force them through the IHV control panel. This is an imperfect solution and so it should not be surprising that settings forced through the Cpanel only work properly 95% of the time, depending on the specific game engine in which they are trying to force those settings.

IMO, the IHVs have done such a good job of forcing IQ settings through their Cpanels that some end users have reached the erroneous conclusion that the Cpanel is the proper place to set up IQ for a game, and many software developers have decided to let the IHV handle IQ settings instead of doing it themselves like they should be doing. I would hope that going forward the IHVs would begin to forcefully stress to developers the importance of doing their own engine settings for IQ in their games, and that the IHVs would provide the help many of these developers obviously need in doing it.

I entirely agree with you. Games should offer controls for IQ aspects.
 
StealthHawk said:
demalion said:
...

You are looking at the result, which can be said to be (A+B)/2, and saying that this is the same thing as the processing, even in the latter case.

The result is the IQ.

Yes, and the texel count is a measure related to the detail in the specific scene (in the sense of counting distinct colors in a picture tells you how detailed it is). They represent the effect of the process, not the process.

So what is the process then, the number of sample points?

No, the method used to arrive at the anisotropically filtered pixel output. The number of sample points is...a count of the number of sample points.

"If you average a set of 100 numbers, that indicates you did more processing, yes? But if A and B are represented equally in that set, (A+B)/2 is still the result. That doesn't mean your processing consisted of just the steps of setting up and adding A and B and then dividing by 2, nor does counting the number of unique values tell you the only way you could have processed values to reach this result."
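That averaging hypothetical can be made concrete. This is a toy illustration of process versus result only, not a claim about either vendor's actual algorithm:

```python
# Two different processes that produce the identical result (A+B)/2.

def average_two(a, b):
    # Minimal process: one addition, one division.
    return (a + b) / 2

def average_hundred(a, b):
    # Much more processing: build a 100-element set with A and B
    # equally represented, then sum it and divide by the count.
    samples = [a] * 50 + [b] * 50
    return sum(samples) / len(samples)

print(average_two(4, 10))      # 7.0
print(average_hundred(4, 10))  # 7.0 -- same result, ~50x the additions
```

Measuring only the output (7.0 in both cases) tells you nothing about which of the two processes produced it.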

Anyways, have you decided whether you are sticking with:

StealthHawk said:
I don't see how this is the same thing at all. NVIDIA's AF method is doing more work because it filters at all angles, while ATI does not. ATI is saving work by not using the maximum degree of AF at certain angles when those angles are present.

:?: When you keep saying "work is the result", you seem to be returning to this, even though you say you agree with "work is the process" and have indicated something that seems to contradict what I initially replied to. I'm still not clear.

You are saying that the result is not the same thing as the work. How do you measure the work then?

:?: Well, there are plenty of different ways to measure work. If you have an inefficient shader, and an efficient one, and they have the same output, you can measure the output (equal), or you can measure the number of steps or time taken, etc. Also, you can analyze different measurements and come up with another measure like efficiency. If you understand this, I believe your question is answered...?

For reference: I tried to cover this exactly in the post where I asked you whether you meant "work" as the result or the process. At the time, you indicated "process" in reply to me, but your usage continues to be "result". You continue to arbitrarily switch the specific meaning of "work" you are using, to say that the "process" and the "result" are the same thing by virtue of "work" being the same word as "work", regardless of meaning.

This seems to be summarized by your saying you're not caring about the cause, as I've addressed.

How can you determine how much work is being done, and if the amount of work increases or not?

Well, defining the process allows you complete knowledge with regard to the result, but defining the result doesn't allow you complete knowledge of the process. Causality, as I've mentioned more than a few times.

Earlier you stated that if ATI AF did full degree filtering that the result would be increased texels filtered, but that the work would not increase.

No, I stated that a measure of process (time taken) demonstrably did not increase in accordance with considering only the measure of the result (i.e., even with off angles being rarely present) in comparison to a different process, and that this indicates that there is a problem with only measuring one set of results for characterizing the process.

What data proves this, and how do you know for sure?

SH, I've answered this question for what I actually did say, including what I am sure of and not sure of, and what I think is reasonably indicated, and what is not. If you understand you've missed some things as you later indicated, this would be a good time to revisit some of my shorter posts earlier.

...optional "rhetoric" relating to specific details of your commentary...feel free to disregard if the above is sufficient answer for understanding...

The rest of this post is an effort to offer every opportunity to move forward afforded by your own post. Your choice of whether to disregard the questions and explanations involved, if they are still necessary after the above, is a response that is in your hands, not mine. Disregarding doesn't mean simply disagreeing, it means stating you disagree while treating what I've said and asked you exactly as if they had never been said or asked.

Surely some concrete irrefutable numbers exist which support this theory and have been interpreted to form this theory.

I don't subscribe to your list of adjectives (explained later here, as well as by prior statements), but I will point out you seemed to change your mind from agreeing that ATI's AF had a performance advantage even without off angles being present, and that it didn't result in changing to the GeForce AF performance penalty level in such a situation. I proposed the information I had, and asked you if you had something indicating otherwise than what I proposed and that I thought you had agreed with.

To restate this again:

I don't personally have a GeForce card to evaluate.
I do know where to find results for ATi using Performance mode, and to my knowledge this is one way to clearly resolve the issue of bi/tri mixtures as an issue with performance comparison for non-off-angle filtering.
I presume I can find (but didn't find in quick searches) benchmarks with nVidia cards using Performance mode, which I understand performs closely to bilinear (and I ask if you have knowledge contradicting this). However, to my knowledge this does not resolve the issue of nVidia's full/2x AF mixtures and/or the issue of application detection for popular benchmarks, and I'd have to further find either a benchmark for which there is proof that these issues are not evident, or specific anti-detect benchmarks comparing bilinear-filtering AF performance penalty hits.

I don't think this uncertainty disproves my recollection, I think this uncertainty is related to my recollection being accurate and nVidia working to change the representation of it in benchmarking.

...

The situation with nVidia drivers prevents me from providing information fitting your set of adjectives, and your sometimes agreeing that ATi doesn't have the same performance penalty even without off angles being present led me to believe we could continue without it. This is something I've covered already.

I did find this example of a comparison of performance hit between a GF4 using the 40.41 drivers and a 9700 using Catalyst 2.3 (I believe this was before the tri/bi issue for ATI was introduced, but if this is incorrect, I am sure it can be pointed out), in the UPT and Comanche 4. I do think this is the general indication of performance benchmarks, even without depending on off angles, for offering a lesser performance hit.
I also recall at least one bilinear AF performance comparison between vendor implementations, in a scene where off angles were not in evidence as a determining factor, from some time ago, though I did not keep a URL for reference to such threads or articles.

Again, if you have any information to the contrary, feel free to share. If you simply and clearly disagree, it would be nice to have it clearly stated, as I haven't even been able to get one clear answer on this point from you so far.

:?: How many ways should I say that the result and processing are not the same, and how what you've specified is saying that they are?

Honestly it would help a lot if you cut down on the rhetoric, weren't as long winded, and just got to the point.

It would help if you at least answered questions and brief examples proposed to progress conversation instead of requiring me to repeat explanations already offered...instead of ignoring them, asking me to explain again, and complaining when I do. I've "gotten to the point" I don't know how many times, but it hasn't done any good.

SH, the beginning of this post covered the issue extremely briefly, and you completely bypassed the brief discussion to ask me to repeat things I'd already said.

No offense, but you write a lot, and it's not easy to sift through it.

You are failing to "sift through" both the brief statements and the "long" ones. Perhaps the long ones are not the problem, but simply your failing to sift through?
I can pose questions and provide opportunities for clarification, but you can also reply by asking me to explain something again, or repeating what I've proposed is an error without regard to my prior reply to it. I don't think that is a problem in asking you questions or providing an explanation.

I see now that if I don't read everything over carefully I can miss a lot. Which is not a fun or enjoyable process.

I didn't start out by, for example, explaining the difference between process and result, I spent words explaining it because you demonstrated that it was necessary for me to do so. I also asked you at each junction what you failed to understand about what I'd said, or if something I'd said was incorrect, and why. Why are you blaming me for your failures to answer these questions and define your statements and instead simply asking me to explain again?

OK, then as far as I understand it, your stipulation is incompatible with communicating clearly about the characteristics of hardware functionality at all. :-?

I think debating over something that is intangible and inscrutable is a waste of time.

Apparently what you mean is that anyone else responding to your statements in disagreement is a waste of time. Or you wouldn't have disagreed with Quitch, right? :-?

If there is no evidence that supports or refutes or can be interpreted then what are we to dwell on?

There is evidence, but there isn't the degree of certainty you've recently stipulated (at least, for what I've found). Why are your comments not subject to being evaluated for uncertainty? What do you think reasoning is, if not a way to work out which uncertainty is likely to be "certain"?

Either there is evidence which can be presented(which thus far has not been), or everything is complete speculation.

I'm not one who is fond of the "everything is speculation" defense of an argument when it is the conclusion of having simply disregarded observations, about flaws in your own statements, that do have evidence. :-?

Yes, but there is the organization of the process as well as the processing itself, as far as hardware implementation, clock cycles, and the time it takes to reach the result. Again: what if the cause is the organization or some other characteristic that saves the work and time? Don't the performance characteristics indicate exactly that? Please consider this in relation to the commentary of yours I mentioned again above.

You have as much admitted that the organization of the process and the process are unknown to us and cannot be proven!

I keep asking where the things you assert are said, and you keep ignoring my question to pose new statements for me to address. What I said was that measuring the result does not measure the process, nor measure the "work" when "work" is referring to the process. I did not refer to organization of the process and the process as things that cannot be proven, but as things that are not proven by only measuring the result.

What was wrong with my explanation of "work", "process", and "result"?

So how can you be so sure you are correct?

Well, believing I am correct and being sure I am correct mean different things to me. I've answered why I do believe I am correct, assuming that is what you mean here...the questions I posed to you were not there as rhetoric to support that I was "certainly" right, but as opportunities for you to respond. You taking the opportunity to answer them or correct any errors in what I've said is part of my assumption in holding a conversation.
 
demalion said:
:?: Well, there are plenty of different ways to measure work. If you have an inefficient shader, and an efficient one, and they have the same output, you can measure the output (equal), or you can measure the number of steps or time taken, etc. Also, you can analyze different measurements and come up with another measure like efficiency. If you understand this, I believe your question is answered...?

Oi. See, you are giving vague and general answers, instead of giving specific ones. You are saying "the process works like this...but I have no proof." How can you come to a conclusion without proof? There are two ways to make a postulate, through deduction or through induction. Hereby an induction is improbable or impossible, because we cannot know the exact nature of the process unless ATI tells it to us. So we are left with a deduction. What specific evidence is there that supports your premise?

This seems to be summarized by your saying you're not caring about the cause, as I've addressed.

I was saying this in regard to texture aliasing, as that was a side discussion and not what we were primarily talking about. Diverging to dwell on another topic when the one at hand is already lengthy does not seem prudent to me. Nor is it an issue I care about enough to discuss, as I am not one of those people who is bothered by claims of ATI's AF providing better or worse IQ at angles where full-degree AF is applied.

How can you determine how much work is being done, and if the amount of work increases or not?

Well, defining the process allows you complete knowledge with regard to the result, but defining the result doesn't allow you complete knowledge of the process. Causality, as I've mentioned more than a few times.

In other words, it is indeterminable. Again though, you came to your theory by deduction. Now I am asking for specific evidence.

Earlier you stated that if ATI AF did full degree filtering that the result would be increased texels filtered, but that the work would not increase.

No, I stated that a measure of process (time taken) demonstrably did not increase in accordance with considering only the measure of the result (i.e., even with off angles being rarely present) in comparison to a different process, and that this indicates that there is a problem with only measuring one set of results for characterizing the process.

Isn't this what you said? "As measured by this application, in this scene, and as representing unique texel samples, yes. As representing what the methodology processed, and how much processing took place to achieve that result, no." Isn't this the same thing as saying the number of filtered texels would increase but the work would not?

What data proves this, and how do you know for sure?

SH, I've answered this question for what I actually did say, including what I am sure of and not sure of, and what I think is reasonably indicated, and what is not. If you understand you've missed some things as you later indicated, this would be a good time to revisit some of my shorter posts earlier.

You have never given specific evidence where off angles are isolated from full AF angles and vice versa.

The situation with nVidia drivers prevents me from providing information fitting your set of adjectives, and your sometimes agreeing that ATi doesn't have the same performance penalty even without off angles being present led me to believe we could continue without it. This is something I've covered already.

Hold on a second. I have never disagreed that ATI's performance hit for AF is lower than NVIDIA's.

:?: How many ways should I say that the result and processing are not the same, and how what you've specified is saying that they are?

Honestly it would help a lot if you cut down on the rhetoric, weren't as long winded, and just got to the point.

It would help if you at least answered questions and brief examples proposed to progress conversation instead of requiring me to repeat explanations already offered...instead of ignoring them, asking me to explain again, and complaining when I do. I've "gotten to the point" I don't know how many times, but it hasn't done any good.

Apparently what you mean is that anyone else responding to your statements in disagreement is a waste of time. Or you wouldn't have disagreed with Quitch, right? :-?

Not at all. I have not seen results that support your theory. That is what I have been saying since the beginning.

If there is no evidence that supports or refutes or can be interpreted then what are we to dwell on?

There is evidence, but there isn't the degree of certainty you've recently stipulated (at least, for what I've found). Why are your comments not subject to being evaluated for uncertainty? What do you think reasoning is, if not a way to work out which uncertainty is likely to be "certain"?

Let me rephrase. I can see how AF results may tend to support your theory. But this seems like something after the fact. Like I said, if the evidence led me to believe the same thing that you do we wouldn't be having this conversation, would we ;) In other words, the theory supports the evidence, more than the evidence supports the theory.

I keep asking where the things you assert are said, and you keep ignoring my question to pose new statements for me to address. What I said was that measuring the result does not measure the process, nor measure the "work" when "work" is referring to the process. I did not refer to organization of the process and the process as things that cannot be proven, but as things that are not proven by only measuring the result.

You said it right here:

demalion said:
me said:
You're making a case, but I'm not convinced yet with the evidence provided. It just doesn't make sense to me why the adaptivity of ATI's AF needs to be dependent on the angle.

Nor will it until we reinvent or reverse-engineer ATI's "secret" AF algorithm, which I think is a bit more than detecting off angles and reducing workload for them...or else ATI could fairly trivially just remove the detection criteria used to decide to reduce quality and get their non-off angle performance gains while retaining flawless image quality (or switch it off for aniso tester applications... ).

What was wrong with my explanation of "work", "process", and "result"?

Nothing is wrong with your explanation of "result." You still haven't told me what "work" or the "process" actually are. The "process" is AF, but what does that consist of. You refer me back to your hypothetical illustrative equations with sample sets of numbers, but what are these numbers supposed to represent?


A little backtracking then. Please agree or disagree. Elaborate on your answer if you feel it is necessary.

Whether or not off angles are in the scene, the process is the same.

Whether or not off angles are in the scene, the work done is the same.

When off angles are present, the process and the work done are the same as when off angles are not present, but the result changes.

If full degree AF was used at all angles then:

a) performance would globally decrease, whether or not former off angles are present.

b) performance would only decrease when off angles are present.
 
I did find this example of a comparison of performance hit between a GF4 using the 40.41 drivers and a 9700 using Catalyst 2.3 (I believe this was before the tri/bi issue for ATI was introduced, but if this is incorrect, I am sure it can be pointed out), in the UPT and Comanche 4. I do think this is the general indication of performance benchmarks, even without depending on off angles, for offering a lesser performance hit.

On the other hand, the 9600 Pro can take as much as a 46% performance hit in UT2003. But like you, I had a hard time finding any good comparisons of performance and quality AF (although this time for the R9500+ series), which is rather annoying. And unfortunately, even then we wouldn't really know if the "difference in difference" is caused by the new AF implementation or something else. But I would still be interested to see the performance hit for e.g. UT2003, 1600*1200, performance AF on a 9600 Pro.

And another thing, going back to the results I posted earlier.
The R9000 Pro takes a 6% perf hit in UT2003 at 16x AF, while the 9600 Pro takes a 46% perf hit in the same game and resolution (again, not knowing if B3D used the same benchmark). The 9000 Pro even outperforms the 9600 Pro by a couple of fps with those settings.
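For reference, the performance-hit percentages being compared come from a simple calculation. The fps figures below are made up purely for illustration; only the formula matters:

```python
# Performance hit = how much the frame rate drops when AF is enabled,
# relative to the no-AF baseline, expressed as a percentage.

def perf_hit(fps_no_af, fps_with_af):
    return (fps_no_af - fps_with_af) / fps_no_af * 100

# Hypothetical numbers only: one card losing 6%, another losing 46%.
print(round(perf_hit(50.0, 47.0), 1))   # 6.0
print(round(perf_hit(100.0, 54.0), 1))  # 46.0
```

Note the caveat in the thread still applies: the percentages are only comparable if both cards were measured with the same benchmark and settings.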

Correct me if I'm wrong, but doesn't this indicate that the massive difference in AF performance between the GF3 and the R8500 series was much more related to the 8500 not doing trilinear than to its "adaptive sampling"?
 
StealthHawk said:
....
Where are the applications that gain performance but do not lose IQ? If there are no such applications, then why implement this AF method at all? Why not just do trilinear in all stages? It really makes no sense.

This is what we are seeing, again:
1) almost all games are unaffected by doing trilinear, it seems.
2) some games gain a lot of performance by doing tri/bi, but IQ is lost.
3) an equally small number of games may gain performance but not lose IQ...

I think we are really in agreement about the main points...but it seems to me the above comments might be where we disagree. I think you are basing your opinions on the one benchmark test you referenced in another post. I'm skeptical of that testing, its methodology, and its results.

I think that, generally speaking, in most cases if you apply trilinear filtering to texture stages that the game engine doesn't render visibly to the camera, you will definitely lose performance while gaining nothing in IQ. This is obviously the central and only motivation for nVidia to make the Dets incapable of doing full trilinear in UT2K3--there exists no other motivation for nVidia to do this, aside from the performance they gain (which presumably they hope translates to UT2K3 benchmarks and results in more sales of their hardware).

I think any testing which indicates that there's no performance difference relative to the number of texture stages treated with trilinear is erroneous. The question is not of how many stages should be treated, but of whether or not all the treated stages are visible when playing the game. If they are not visible, there's no reason to treat them with trilinear filtering. I would think that any testing which purports that it doesn't matter how many texture stages are treated with trilinear because performance won't be affected is inherently flawed in some way. It's not providing the correct results, in other words.
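The visible-stage argument can be put into a toy cost model: trilinear costs more per stage than bilinear, but only improves image quality on stages actually visible to the camera. All numbers and names here are illustrative assumptions:

```python
# Toy cost model: trilinear costs twice bilinear per stage (arbitrary
# units), but only visibly rendered stages contribute to image quality.

BILINEAR_COST, TRILINEAR_COST = 1, 2

def cost_and_iq(stage_filters, stage_visible):
    cost = sum(TRILINEAR_COST if f == "tri" else BILINEAR_COST
               for f in stage_filters)
    iq = sum(1 for f, vis in zip(stage_filters, stage_visible)
             if vis and f == "tri")
    return cost, iq

visible = [True, False, False, False]  # only stage 0 is ever seen

# Trilinear everywhere: maximum cost, same IQ as treating stage 0 alone.
print(cost_and_iq(["tri"] * 4, visible))                # (8, 1)
print(cost_and_iq(["tri", "bi", "bi", "bi"], visible))  # (5, 1)
```

Under these assumptions, any test showing no performance difference between the two configurations would be suspect, which is the argument above.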

It's like you say--why implement it at all if there's no performance difference? Obvious answer is that there is such a performance difference, but that the tests you saw which maintained there isn't were themselves flawed.
 
Quitch said:
Isn't this "it does more work so it's better image quality" the same thing that was shot down in the trilinear argument, when it was pointed out that you could do work on texture levels that weren't visible, and while this would increase the workload, it wouldn't improve the image quality?

Woah! I've been away for a while, and thus unable to reply. Shame, I missed fierce debate over a tiny little statement I made. Nifty :)

I've had to read the whole topic again just to find what triggered this statement, and what exactly the issue was.

I believe it was this post:

Exxtreme said:
Look at the numbers at 4X AF and above. This means that Nvidia's 4X Tri-AF is superior to ATi's 16x Tri-AF.

This was a post that counted how many filtered texels were used. This was followed by a further post:

Exxtreme said:
This number is a sum of all filtered texels in this one 3d scene. Higher number means more texels are used... more work and higher image quality.

I then posed my question, which StealthHawk followed up on, with what appeared to be the answer I wanted. However, do not the two companies use different AF routines... ones which work on these texels? Therefore would working on more texels mean you always had the best image?

I didn't think this was true, hence my statement.
 
StealthHawk said:
demalion said:
:?: Well, there are plenty of different ways to measure work. If you have an inefficient shader, and an efficient one, and they have the same output, you can measure the output (equal), or you can measure the number of steps or time taken, etc. Also, you can analyze different measurements and come up with another measure like efficiency. If you understand this, I believe your question is answered...?

Oi. See, you are giving vague and general answers, instead of giving specific ones.

Pardon me? What exactly is vague there? This answer is very specific, it is just "not as simple as you believed". That is why I'm disagreeing with you.

You are saying "the process works like this...but I have no proof."

Pardon? I'm saying "the process isn't suitably characterized by your simplification, and here is the information I have". This information is something you've agreed with off and on, but continue to refuse to provide a specific and consistent answer to each time I ask.

How can you come to a conclusion without proof?

Are you ignoring the information I provided, or maintaining the information I provided isn't "proof" while proposing that you yourself are free to propose your own conclusions without it? Am I supposed to be taking your commentary a different way? What about your self-contradiction, and the questions regarding it that you simply continue to skip over? Could you perhaps begin to effectively answer the questions I pose to help with my understanding if I am misunderstanding you? There is a rather significant backlog at the moment. :-?

There are two ways to make a postulate, through deduction or through induction.

Which are you practicing with your conclusion? Or does this criteria not apply to yourself? Anyways, I don't think something like abduction should be ruled out as long as it isn't abused to propose certainty.

Here an induction is improbable or impossible, because we cannot know the exact nature of the process unless ATI tells us.

Eh? Another thing that doesn't make sense to me. Induction is specific->general, deduction is general->specific.

Just because these specifics aren't as simple as you'd like, and I don't like to automatically propose what I believe by reasoning as being a "certainty", doesn't mean that I am not performing inductive reasoning to arrive at what I believe. It just means I'm trying to hold a conversation that isn't one way.

So we are left with a deduction. What specific evidence is there that supports your premise?

Isn't deduction starting from a premise to make a re-statement that is defined by the stated premises (ack...this plural strikes me as odd), i.e., a specific observation indicated by the general premise propositions? It depends on these propositions being correct.
This fits your initial statement, with you simply continuing to propose that I can't say that your premise and reasoning is incorrect without providing evidence fitting your criteria, while ignoring that you didn't provide such evidence for your own premise and reasoning in the first place, and have moved forward with providing less "specifics" than I have. :-?

There is some deduction in my own statements as well, and you are free to discuss how my premises or reasoning is flawed.

Proposing that induction is "impossible" simply works as a device to preclude disagreement by someone who doesn't blithely propose their premise as a given, or who introduces details you wish to ignore.

I've already listed exactly what "specific evidence" I offer. Where is your own to counter it?

This seems to be summarized by your saying you're not caring about the cause, as I've addressed.

I was saying this in regards to texture aliasing.

...

You were saying that in regards to not caring whether the texture aliasing was a result of AF or LOD, as part of "proving" that ATI's AF implementation demonstrated more texture aliasing. This is directly connected to a discussion about AF, yes?

StealthHawk said:
demalion said:
Why wouldn't the texture aliasing "people claim" be related to a higher level of detail for textures (the default for Direct3D is "highest" last I checked)? What about people who "claim" otherwise?
...
To me, the cause here is irrelevant. Something is affecting IQ, whether positive or negative, and that something is at least influenced by the way ATI performs AF. Therefore I conclude that AF is responsible.

Did I miss text that changes the meaning, misunderstand, or do you just completely fail to hold yourself accountable to your own statements?

How can you determine how much work is being done, and if the amount of work increases or not?

Well, defining the process allows you complete knowledge with regard to the result, but defining the result doesn't allow you complete knowledge of the process. Causality, as I've mentioned more than a few times.

In other words, it is indeterminable.

Umm...no. It just isn't determined by only measuring the result you were looking at. How are you confusing words like "complete", and my various phrasings of "there is more to look at", with saying this is "indeterminable"?

Again though, you came to your theory by deduction. Now I am asking for specific evidence.

I don't know why you keep proposing flawed statements to support your initial premise and conclusion, have me discuss the flaws in them, avoid engaging in that discussion, and then propose a new flawed statement...instead of beginning to consider that just maybe a different conclusion is warranted, or that something about your initial premise and proposed relationship might be invalid. Your stipulation about deductive and inductive reasoning is just the latest manifestation, AFAICS. :-?

Earlier you stated that if ATI AF did full degree filtering that the result would be increased texels filtered, but that the work would not increase.

No, I stated that a measure of process (time taken) demonstrably did not increase in accordance with considering only the measure of the result (i.e., even with off angles being rarely present) in comparison to a different process, and that this indicates that there is a problem with only measuring one set of results for characterizing the process.

Isn't this what you said? "As measured by this application, in this scene, and as representing unique texel samples, yes. As representing what the methodology processed, and how much processing took place to achieve that result, no. " Isn't this the same thing as saying, the number of filtered texels would increase but the work will not?

No, it is the same thing as saying I think that: in that specific application, scene, and measuring work by unique texels sampled, it would; as representing (universally, as I discussed at the time and in contrast to the first statement) what the methodology processed, and how much processing took place to achieve that result, it would not.

OK, I don't understand why I had to repeat myself here, so obviously I didn't get something across. At the risk of being "long winded": please look at your statement, which makes no distinction between specific example and universal applicability of it, then please look at mine, which does. How many ways should I have to say this?

Pardon me where I skip instances of where my replying to a question would consist of me repeating myself and pointing out something in this fashion.

...

The situation with nVidia drivers prevents me from providing information fitting your set of adjectives, and your sometimes agreeing that ATi doesn't have the same performance penalty even without off angles being present led me to believe we could continue without it. This is something I've covered already.

Hold on a second. I have never disagreed that ATI's performance hit for AF is lower than NVIDIAs.

StealthHawk said:
demalion said:
We need to be clear on this: does ATI's performance hit being reduced depend on the off angles being present, or not?
Yes, I think so.

How am I supposed to deal with such self-contradiction? Upon your providing this reply, should I point out that it is only to reach this point that the evidence you ask for applies, and that I've provided reasoning from this point on already?

<snipped where, AFAICS, either you misquoted and left my reply to you embedded as if we were each other, or you quoted my own text at me and didn't realize you were replying to yourself>

Apparently what you mean is that anyone else responding to your statements in disagreement is a waste of time. Or you wouldn't have disagreed with Quitch, right? :-?

Not at all. I have not seen results that support your theory. That is what I have been saying since the beginning.

OK, which theory do you mean? The "theory" I proposed as providing support for my reasoning was that ATI's AF implementation retained a lesser performance hit, even without off angles being present, which you, again, seem to have agreed with above. The rest is reasoning, which the result you just said you agree with supports. What are you asking for?

If there is no evidence that supports or refutes or can be interpreted then what are we to dwell on?

There is evidence, but there isn't the degree of certainty you've recently stipulated (at least, for what I've found). Why are your comments not subject to being evaluated for uncertainty? What do you think reasoning is, if not a way to work out which uncertainty is likely to be "certain"?

Let me rephrase. I can see how AF results may tend to support your theory.

OK, then. But it really confuses me with regard to what you just asked, as what you just asked confused me with regard to what you'd just agreed with before it. This is what I mean about self-contradiction, which you don't seem willing to address or resolve.

But this seems like something after the fact.

This phrase, "after the fact"...how do you mean it? Such results support the theory. You can't just agree they support the theory in one place, and then go on about my not having provided results that support the theory in another. :oops:

Like I said, if the evidence led me to believe the same thing that you do we wouldn't be having this conversation, would we ;)

I've covered one problem in the direct answers to your questions about "measuring work". If you wish to abandon what appears to me to be clear self-contradiction, which I hope you understand is poor logic, and focus on resolving the issue with discussing the measurement of work, please do so.

In other words, the theory supports the evidence, more than the evidence supports the theory.

:?: This looks like a bass ackwards statement, like "That was true because you said it, more than you said it because it was true", in response to someone saying something like "it is raining". :-?

I keep asking where the things you assert are said, and you keep ignoring my question to pose new statements for me to address. What I said was that measuring the result does not measure the process, nor measure the "work" when "work" is referring to the process. I did not refer to organization of the process and the process as things that cannot be proven, but as things that are not proven by only measuring the result.

You said it right here:

demalion said:
me said:
You're making a case, but I'm not convinced yet with the evidence provided. It just doesn't make sense to me why the adaptivity of ATI's AF needs to be dependent on the angle.

Nor will it until we reinvent or reverse-engineer ATI's "secret" AF algorithm, which I think is a bit more than detecting off angles and reducing workload for them...or else ATI could fairly trivially just remove the detection criteria used to decide to reduce quality and get their non-off angle performance gains while retaining flawless image quality (or switch it off for aniso tester applications... ).

OK, pardon me for quoting fully, but could you highlight the part of this text that says "the organization of the process and the process cannot be proven" instead of "the organization of the process and the process cannot be proven by only measuring the result", except as you continue to equate "result" with "process", ignoring everything I've said on the subject?! :-?

What was wrong with my explanation of "work", "process", and "result"?

Nothing is wrong with your explanation of "result." You still haven't told me what "work" or the "process" actually are.

Well, I'm only trying to get so far as to establish that they are not completely defined by counting texels, and that the count of texels is what is completely defined by counting texels.

You can view this as deduction, concerning progressing by establishing what the AF implementation is not, or as induction, concerning the various benchmarks and including such deductions in trying to continue to define the AF process. Simply not having completely defined the process (which is what you are demanding is necessary to address your own commentary :oops:) does not serve to define the process as something else.

The "process" is AF, but what does that consist of.

You want a formula representation and analysis of how ATI implements their AF, then? What about your statements do you think requires me to specify such in order to be able to refute? What I've provided seems sufficient, and complaining that I haven't completed reverse engineering ATI's implementation doesn't do anything to address that.

You refer me back to your hypothetical illustrative equations with sample sets of numbers, but what are these numbers supposed to represent?

An example. Is there something I should explain about what an example is, now?

A little backtracking then. Please agree or disagree. Elaborate on your answer if you feel it is necessary.

Whether or not off angles are in the scene, the process is the same.

Sure.

Whether or not off angles are in the scene, the work done is the same.

As far as ATI applying the same process, and it being characteristically faster/more efficient than the GeForce method, yes. This is why comparing the GeForce method and ATI's method by counting texels is not a valid universal comparison, similar to trilinear/bilinear, or how 16 samples does not necessarily reduce edge aliasing more than 6.

As far as measuring the results by counting texels, no.

When off angles are present, the process and the work done are the same as when off angles are not present, but the result changes.

OK, you seem to understand then...:?:

If full degree AF was used at all angles then:

a) performance would globally decrease, whether or not former off angles are present.

If full degree AF was used at all angles by implementing things with the same process and implementation organization as the GeForce, yes.

b) performance would only decrease when off angles are present.

If full degree AF was formerly used at non-"off" angles by implementing things with the same process and implementation organization as the GeForce, and this was then changed to apply to off angles where it didn't before, yes.
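To make the a)/b) distinction concrete, here's a speculative toy model of the angle-dependent behavior we've been describing. ATI's real algorithm is secret, so every number here (the 45-degree period, the 11.25-degree threshold, the 16x/2x degrees) is an assumption modeled only on the publicly observed behavior, not their code:

```python
# Speculative toy model of angle-dependent ("adaptive") AF: full degree
# near multiples of 45 degrees, reduced degree (2x) at the "off angles"
# in between. Not ATI's actual algorithm, which is not public.

def af_degree(surface_angle_deg, max_degree=16, off_angle_degree=2):
    """Return the AF degree this model applies to a surface at a given angle."""
    # 22.5 = perfectly aligned with a 45-degree multiple, 0 = worst off angle
    distance_from_worst = abs((surface_angle_deg % 45) - 22.5)
    return max_degree if distance_from_worst > 11.25 else off_angle_degree

# Axis-aligned and 45-degree surfaces get full degree...
assert af_degree(0) == 16 and af_degree(45) == 16 and af_degree(90) == 16
# ...while off angles get the reduced degree (the visible quality loss):
assert af_degree(22.5) == 2

# Under this model, a crude cost proxy (summed degree over surfaces) only
# drops when off angles are actually present in the scene:
aligned_cost = sum(af_degree(a) for a in [0, 45, 90, 135])    # no off angles
mixed_cost = sum(af_degree(a) for a in [0, 22.5, 45, 67.5])   # off angles present
assert aligned_cost > mixed_cost
```

In this model, scenario b) corresponds to raising `off_angle_degree` to `max_degree`: cost only rises in scenes that contain off angles. Scenario a) corresponds to swapping out the whole function for a different, uniformly more expensive process.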
 
Bjorn, I covered bilinear+AF versus trilinear+AF before, right?

Bjorn said:
Correct me if I'm wrong, but doesn't this indicate that the massive difference in AF performance between the GF3 and the R8500 series was much more related to the 8500 not doing trilinear than its "adaptive sampling"?

No, it indicates that the difference in performance hit between the 8500 and the 9600 is the result of trilinear filtering; as far as we know, the AF implementations are otherwise related.

I could observe that a truck slows down a certain amount when pulling a certain weight in tow, another similar truck pulls a lighter load much faster, and a 3rd truck of different design pulls a load similar to the first and also slows down a lot. You can't just disregard the load and propose that the 3rd truck would pull the lighter load just as quickly as either the 2nd truck or first would. Engine and transmission designs, behavior on inclines, etc., in short, trucks, aren't necessarily that simple.
 
WaltC said:
StealthHawk said:
....
Where are the applications that gain performance but do not lose IQ? If there are no such applications, then why implement this AF method at all? Why not just do trilinear in all stages? It really makes no sense.

This is what we are seeing, again:
1) almost all games are unaffected by doing trilinear, it seems.
2) some games gain a lot of performance by doing tri/bi, but IQ is lost.
3) an equally small number of games may gain performance but not lose IQ...

I think we are really in agreement about the main points...but it seems to me the above comments might be where we disagree. I think you are basing your opinions on the one benchmark test you referenced in another post. I'm skeptical of that testing, its methodology, and its results.

Ok, fair enough. What specifically is wrong with that testing, its methodology, and its results?

I think that, generally speaking, in most cases if you apply trilinear filtering to texture stages that the game engine doesn't render visibly to the camera, you will definitely lose performance while gaining nothing in IQ. This is obviously the central and only motivation for nVidia to make the Dets incapable of doing full trilinear in UT2K3--there exists no other motivation for nVidia to do this, aside from the performance they gain (which presumably they hope translates to UT2K3 benchmarks and results in more sales of their hardware).

Hmm, this begs the question as to why NVIDIA is only doing what they are doing in UT2003 and not in all games then, doesn't it? Hypothetically in other games where IQ would not be affected by using bilinear, wouldn't they be able to gain a good deal of performance by only doing 2x bilinear AF instead of full degree trilinear AF? The optimization is hardly innocuous in UT2003, yet if I understand you right it should work well in other games with no IQ loss!

I think any testing which indicates that there's no performance difference relative to the number of texture stages treated with trilinear is erroneous. The question is not of how many stages should be treated, but of whether or not all the treated stages are visible when playing the game. If they are not visible, there's no reason to treat them with trilinear filtering. I would think that any testing which purports that it doesn't matter how many texture stages are treated with trilinear because performance won't be affected is inherently flawed in some way. It's not providing the correct results, in other words.

I agree that filtering things that you cannot see is pointless. What I'm trying to distinguish is how performance is affected.

Are you saying that rTool is not doing what it is alleging that it is doing? (allowing the possibility of trilinear being used in texture stages where bilinear would normally be used when Quality AF is selected)
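For what it's worth, the cost arithmetic behind Walt's point can be sketched like this. The 4 and 8 texels per sample are the textbook per-sample costs for bilinear and trilinear; whether the extra fetches actually show up in real frame rates (caching, bandwidth limits, etc.) is exactly what's being disputed above, so treat this as the naive model only:

```python
# Naive cost sketch: trilinear samples two mip levels (8 texels per
# sample) where bilinear samples one (4 texels), so forcing trilinear
# onto texture stages that never reach the screen spends texel fetches
# with no image-quality return. Real hardware costs will differ.

BILINEAR_TEXELS = 4   # 2x2 footprint, one mip level
TRILINEAR_TEXELS = 8  # 2x2 footprint on each of two mip levels

def frame_texel_cost(stage_filters, samples_per_stage=1_000_000):
    """Total texels fetched for one frame, given a filter per texture stage."""
    per_sample = {"bilinear": BILINEAR_TEXELS, "trilinear": TRILINEAR_TEXELS}
    return sum(per_sample[f] * samples_per_stage for f in stage_filters)

# Trilinear only on the visible base stage vs. trilinear forced on all stages:
tri_base_only = frame_texel_cost(["trilinear", "bilinear", "bilinear"])
tri_everywhere = frame_texel_cost(["trilinear", "trilinear", "trilinear"])
assert tri_everywhere > tri_base_only  # extra fetches; IQ unchanged if
                                       # only stage 0 is visible
```

Under this model Walt is right that the extra stages cost fetches; StealthHawk's observation is that benchmarks of most games don't show that cost in frame rate, which the model doesn't capture.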
 
demalion said:
Which are you practicing with your conclusion? Or does this criteria not apply to yourself? Anyways, I don't think something like abduction should be ruled out as long as it isn't abused to propose certainty.

Does it matter what I am doing? I am asking about your theory. If it is not clear yet, you have already debunked my evidence. I was using the filtered texels as evidence, but if work is the same regardless of how many texels are filtered then my use of evidence is flawed. So I have no evidence to prove or disprove my theory. The problem is, I don't see any evidence to prove or disprove yours either.

You were saying that in regards to not caring whether the texture aliasing was a result of AF or LOD, as part of "proving" that ATI's AF implementation demonstrated more texture aliasing. This is directly connected to a discussion about AF, yes?

Is LOD the same whether or not AF is enabled?

Umm...no. It just isn't determined by only measuring the result you were looking at. How are you confusing words like "complete", and my various phrasings of "there is more to look at", with saying this is "indeterminable"?

Then what is it determined by?

StealthHawk said:
demalion said:
We need to be clear on this: does ATI's performance hit being reduced depend on the off angles being present, or not?
Yes, I think so.

How am I supposed to deal with such self-contradiction? Upon your providing this reply, should I point out that it is only to reach this point that the evidence you ask for applies, and that I've provided reasoning from this point on already?

Ok, let me clarify what I mean. Yes, ATI's performance hit is lower than NVIDIA's. But it gets even lower when off-angles are present, that is my stipulation.

OK, which theory do you mean? The "theory" I proposed as providing support for my reasoning was that ATI's AF implementation retained lesser performance hit, even without off angles being present, which you, again, seem to have agree with above.

No, your theory is that the off angle problem stems from whatever process ATI uses when AF, is it not? That is a consequence of the algorithms being used, so that in fact, they are essentially one and the same.

There is evidence, but there isn't the degree of certainty you've recently stipulated (at least, for what I've found). Why are your comments not subject to being evaluated for uncertainty? What do you think reasoning is, if not a way to work out which uncertainty is likely to be "certain"?

Because I never said my theory wasn't wrong. You seem to be saying that your theory must be right and my theory must be wrong(because yours is right) :?:

This phrase, "after the fact"...how do you mean it? Such results support the theory. You can't just agree they support the theory in one place, and then go on about my not having provided results that support the theory in another. :oops:

General results support your theory, not specific ones. ie, you are looking at the lower performance hit of ATI's AF and saying that the off angles occur because of ATI's aggressive/efficient process. But we already agreed that most games don't use very many off angles, so using such data doesn't prove anything about off angles(to me).

In other words, the theory supports the evidence, more than the evidence supports the theory.

:?: This looks like a bass ackwards statement, like "That was true because you said it, more than you said it because it was true", in response to someone saying something like "it is raining". :-?

No, it is more like your theory does not fall apart when looking at general cases. But when looking at specific cases, it might. But we don't have any specific cases to look at! i.e., where off angles are isolated, that is a specific case.

OK, pardon me for quoting fully, but could you highlight the part of this text that says "the organization of the process and the process cannot be proven" instead of "the organization of the process and the process cannot be proven by only measuring the result", except as you continue to equate "result" with "process", ignoring everything I've said on the subject?! :-?

If it is not proven by looking at the result, then what is it proven by?

Well, I'm only trying to get so far as to establish that they are not completely defined by counting texels, and that the count of texels is what is completely defined by counting texels.

Ok, that's fine. So "work" and the "process" are not completely defined by filtered texels. I thought we covered that. I'm asking what "work" and the "process" are defined by. Do you not understand that?

You want a formula representation and analysis of how ATI implements their AF, then? What about your statements do you think requires me to specify such in order to be able to refute? What I've provided seems sufficient, and complaining that I haven't completed reverse engineering ATI's implementation doesn't do anything to address that.

No, I am asking for what variables (specifically, what these variables represent) would make up such formulas, since you say that filtered texels are the result and not part of the process, and that the number of samples taken is not part of the process. What IS part of the process?

An example. Is there something I should explain about what an example is, now?

A more specific example that tells me how these arbitrary numbers used in your previous example are related to AF?

a) performance would globally decrease, whether or not former off angles are present.

If full degree AF was used at all angles by implementing things with the same process and implementation organization as the GeForce, yes.

b) performance would only decrease when off angles are present.

If full degree AF was formerly used at non-"off" angles by implementing things with the same process and implementation organization as the GeForce, and this was then changed to apply to off angles where it didn't before, yes.

So you are saying that it is possible to perform full degree AF at "off angles" where currently only 2x AF is performed, without losing performance in cases where such "off angles" don't exist.

You are also saying that it is possible to implement AF another way so that performance will globally decrease.

So....if it is possible to separate the off angle problem from the "normal" AF ATI is currently doing, why is it impossible that the off angle problem is unrelated to the way ATI is doing AF? I have said from the beginning that I thought the off angle problem was mutually exclusive from "normal operation" of AF, and you disagreed.
 
Found some numbers for both the 9600 Pro performance AF and the 9800 Pro with full trilinear. Both reviews had some rather annoying problems though, despite being B3D reviews.
These problems might not be the reviews' fault, but they're annoying when you're trying to look at the performance hit for AF.

Ati 9800 Pro review, B3D

This review has UT2003 numbers with full trilinear; problem is that it doesn't say if the review used the botmatch, flyby or a custom demo for the AF benchmarks. I guess it wouldn't make that much sense to use the botmatch since it's so CPU limited in this case. On the other hand, 63% seems to be a rather large performance hit for the 9800 Pro, but I guess Dave B can shed some light on which demo was used. Maybe it's actually stated in the review but I missed it?

Flyby, 0X AF: 121 fps
Botmatch, 0X AF: 69.5 fps
?? 16X AF: 44.7 fps

A nice 35% or 63% performance hit depending on which demo they used. Makes it easy to understand why Ati is also trying to avoid full trilinear if possible.
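For reference, here's how those two percentages fall out of the quoted frame rates (a quick Python check; the botmatch baseline actually rounds to 36%, close to the ~35% above):

```python
# Performance hit as a percentage drop from the no-AF baseline to the
# 16X AF figure, using the frame rates quoted above.
def perf_hit(no_af_fps, af_fps):
    return round(100 * (no_af_fps - af_fps) / no_af_fps)

assert perf_hit(121.0, 44.7) == 63  # if the flyby demo was the baseline
assert perf_hit(69.5, 44.7) == 36   # if botmatch was (~35% as stated)
```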

On to 9600 Pro performance AF:

Triplex Radeon 9600 Pro, B3D

Shows a 14% performance hit going from 2X AF to 16X AF (UT2003, 1600*1200, performance AF). Annoyingly, there were no 0X AF numbers in the review. There were no numbers for full trilinear either, only for quality set through the CP. Again, I don't know if the same demos were used as in the 9000 Pro review, and we also know that these things can differ depending on the card (fillrate, bandwidth....), which means it's not really hard evidence. But it seems to indicate that the new AF implementation has a larger performance hit than the old one, even when only doing AF + bilinear.

It would be nice if e.g. B3D did some investigation into these things. Hopefully (but maybe not likely at this moment :)), Nvidia will add some options for full trilinear in their CP. But it would be interesting to see the difference between full trilinear and bilinear even with only Ati cards, e.g. between the 9000 Pro, 9200, 9600 and 9800.
 
DaveBaumann said:
Maybe it's actually stated in the review but I missed it?

* cough *

Same level that you used for the screenshots then, I presume?

The level used is a custom level built from the UT2003 Editor, by Beyond3D regular "jb", specifically to highlight the texturing properties.

I'm a bit lazy sometimes, and it's just that both the other benchmarks (flyby and botmatch) had this clearly stated in the benchmarks (e.g. "UT2003 (flyby)"), but the AF one had just "UT2003 - 16X AF". So sorry if I'm missing the obvious :)

Edit: Looked at the test setup page where it mentions two demos, standard and custom. So I guess you used the custom demo for AF then? But as I said, it doesn't say which one was used when you look at the benchmarks, which would have been a good thing to include.
 
As it says at the top of page 10, anything beyond there uses the custom demo. It's not the same as the screenshots, since that is actually just a box, designed to show off the issue.
 
DaveBaumann said:
As it says at the top of page 10, anything beyond there uses the custom demo. It's not the same as the screenshots, since that is actually just a box, designed to show off the issue.

Ok. This makes it a bit hard to check the performance hit when using AF though. So I suggest an upcoming article where you compare the AF performance on the whole 9000+ lineup, using e.g. Half Life 2 as a benchmark :)

PS: I know that B3D doesn't like to have shootouts between different IHVs; that's why I suggested between Ati cards only DS
 
K.I.L.E.R said:
DaveBaumann said:
My Spidey senses are suggesting that the AF performance under HL2 may mirror TR:AOD somewhat...

Is that bad?

No, more like AF comes for "free" because it'll be shader limited. (Dave has mentioned that HL2 will be mostly shader limited, if I remember things correctly.)

From the Triplex 9600 Pro review (TR:AOD):

Trilinear benchmarks are not provided because it is less than a frame faster than anisotropic at all resolutions and AA levels.

Will be very interesting to see if this will be the same for Doom3 also, since it's supposedly not as shader heavy as HL2 (although it looks WAY better despite that, all IMO of course, and I'm probably biased since I've tried the Doom3 alpha and I've only watched videos of HL2).
 