Trilinear Filtering Comparison (R420 vs R360 vs NV40 vs NV38)

trinibwoy said:
Stryyder said:
Good point, but my point is that comparing X800 method 1 to X800 method 2 is meaningless. What really matters is comparing the X800 against the NV40 and the R3xx series; if it shows improvement over the R3xx and is on par with or better than the NV40, this is a moot point.

Whoa!!! :LOL: You're changing the rules on the fly man. Let's assume we have R420A (optimized) and R420B (regular). Now we don't know if R420B has better IQ than R420A cause ATI is being stingy. But let's assume that R420B has better IQ since I don't feel like taking ATI's word for it. You would prefer R420A over R420B if A has an extra 2-5% advantage over NV40? Why does this comparison have to be made in context of this discussion at all?

1) We know that there can be up to a 20% performance difference in R420 between method A and method B, based on the article that started this up.

2) If R420A is better quality than anything else available and faster, I would naturally prefer it, as it is giving me up to 20% better performance and is still better than anything else I could get.

If you are suggesting that I should be upset because the possibility exists that I could be getting even better IQ out of method B, since the card is capable of it, then from a practical standpoint that doesn't bother me.

If this is your argument, we should be talking about NVIDIA 2xAA and 4xAA not being up to snuff for years. I am sure they are capable of the same IQ as ATI's 2xAA and 4xAA but don't do it because of the performance hit at those levels of IQ. Now we have ATI providing NV40 IQ levels or better in AF: they set the bar, they are claiming to maintain it, and there is no competing product doing better. What's the issue?
 
Stryyder said:
1) We know that there can be up to a 20% performance difference in R420 between method A and method B, based on the article that started this up.

2) If R420A is better quality than anything else available and faster, I would naturally prefer it, as it is giving me up to 20% better performance and is still better than anything else I could get.

If you are suggesting that I should be upset because the possibility exists that I could be getting even better IQ out of method B, since the card is capable of it, then from a practical standpoint that doesn't bother me.

That is exactly what I'm suggesting. And although you may not be upset, some people may prefer the higher quality over the FPS mania. I'm not trying to paint ATI with a holy brush like you are... just looking at the situation as it stands. I can't find the quote, but I recall seeing some ATI PR on "providing the best image quality possible", not "providing the best image quality that is better than our competition".

If this is your argument, we should be talking about NVIDIA 2xAA and 4xAA not being up to snuff for years. I am sure they are capable of the same IQ as ATI's 2xAA and 4xAA but don't do it because of the performance hit at those levels of IQ. Now we have ATI providing NV40 IQ levels or better in AF: they set the bar, they are claiming to maintain it, and there is no competing product doing better. What's the issue?

My GOD how do I respond to this? :oops: Please check the topic of THIS THREAD. :rolleyes:
 
For my part, I wouldn't be surprised if we saw a 5 to 10 percent increase in X800 numbers over the following months as the drivers mature.

Hypothetically speaking, of course...
 
The way I see it, ATI has implemented adaptive trilinear filtering. They compare the two mipmaps to see how different they are, then apply the correct amount of trilinear filtering to get the job done. They are not detecting color mipmaps... color mipmaps are just so different that they need the full amount of trilinear filtering. This is the same concept as dynamic branching in PS3.0: there is no reason to run all the pixel shader code when part of it does not apply to the situation at hand. Is this cheating? No, it's being smart. ATI applies trilinear filtering until the job is done, then stops. This frees the card to do other things that can further increase image quality, like providing you with "free" 16x anisotropic filtering. If you think ATI is cheating with this method, then you also think PS3.0 is cheating on nVidia cards, as they are not running the full shader code that ATI is running.
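To make that idea concrete, here is a minimal sketch in C of what such an adaptive decision could look like. It is purely illustrative: the comparison metric, the names (mip_difference, needs_full_trilinear) and the threshold value are all invented here, since ATI has not published its actual criteria.

Code:
#include <math.h>

/* Illustrative only: decide per mip transition whether the full trilinear
 * blend is worth doing, based on how different two adjacent mip levels are.
 * ATI's real metric and threshold are not public; these are placeholders. */

typedef struct { float r, g, b; } Texel;

/* Mean per-channel difference between corresponding texels of two adjacent
 * mip levels (the finer level assumed already downsampled to the coarser
 * level's resolution, so both arrays have length n). */
static float mip_difference(const Texel *fine, const Texel *coarse, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        sum += fabsf(fine[i].r - coarse[i].r)
             + fabsf(fine[i].g - coarse[i].g)
             + fabsf(fine[i].b - coarse[i].b);
    }
    return sum / (3.0f * (float)n);
}

#define TRI_THRESHOLD 0.02f  /* invented value for illustration */

/* Nearly identical levels get the cheaper blend; levels that differ a lot
 * (colored mipmaps being the extreme case) get the full trilinear blend. */
static int needs_full_trilinear(const Texel *fine, const Texel *coarse, int n)
{
    return mip_difference(fine, coarse, n) > TRI_THRESHOLD;
}

Nothing about this corresponds to real hardware; it is just the "only do the expensive blend where it can be seen" idea expressed in code.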
 
Stryyder said:
trinibwoy said:
Stryyder said:
Good point, but my point is that comparing X800 method 1 to X800 method 2 is meaningless. What really matters is comparing the X800 against the NV40 and the R3xx series; if it shows improvement over the R3xx and is on par with or better than the NV40, this is a moot point.

Whoa!!! :LOL: You're changing the rules on the fly man. Let's assume we have R420A (optimized) and R420B (regular). Now we don't know if R420B has better IQ than R420A cause ATI is being stingy. But let's assume that R420B has better IQ since I don't feel like taking ATI's word for it. You would prefer R420A over R420B if A has an extra 2-5% advantage over NV40? Why does this comparison have to be made in context of this discussion at all?

1) We know that there can be up to a 20% performance difference in R420 between method A and method B, based on the article that started this up.


The same levels ran slower even on NV40 with colored mipmaps, so 20% is a bogus number. Changing the contents of an application's texture maps and expecting to get the same performance is a naive assumption... For starters, changing the alpha channel of a texture so that all the pixels get alpha-killed will boost the performance a lot... You just won't see anything drawn on the screen, though :D
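A toy example of that last point, sketched in C under the assumption of a simple software-rasterizer-style span loop (none of this is real driver code): the per-pixel cost depends directly on what is in the texture, so editing the textures also changes the workload being measured.

Code:
typedef struct { unsigned char r, g, b, a; } Texel8;

/* Illustrative span loop: texels that fail the alpha test are killed before
 * any shading or blending happens, so they cost almost nothing. Edit the
 * texture so every texel fails and the "benchmark" speeds up dramatically,
 * while drawing nothing at all. */
void shade_span(const Texel8 *texels, unsigned int *framebuffer, int count,
                unsigned char alpha_ref)
{
    for (int i = 0; i < count; ++i) {
        if (texels[i].a < alpha_ref)
            continue;                 /* alpha kill: skip all per-pixel work */

        /* The expensive part (filtering, shading, blending) is reduced here
         * to a single framebuffer write for brevity. */
        framebuffer[i] = ((unsigned int)texels[i].r << 16) |
                         ((unsigned int)texels[i].g << 8)  |
                          (unsigned int)texels[i].b;
    }
}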

The whole premise of the original article is based on this performance difference, and here we are, four days later, discussing and debating without anyone showing an actual image quality issue.
 
ATI is making graphics accelerators. The goal is to produce the best possible experience, which means the best graphical presentation possible at acceptable performance levels... No one needs or wants picture-perfect at 2 FPS.
 
Mintmaster said:
You continually created a huge ruckus about NVidia's brilinear, but now that ATI is doing the same thing, you're defending them vehemently.

There is a big difference. NV's brilinear was very easy to pick out in static screenshots and even worse in motion; thus it degraded IQ for speed. While ATI's method does not appear to be true "trilinear", it has yet to be shown to degrade IQ while increasing speed.


ATI is doing the same filtering, but they're asking for it to be turned off on NVidia cards! Completely unfair, and very deceitful. Now, it is entirely possible that ATI's benchmarking guide was written by people not aware of this, or they put this comment in their guide before R420 was introduced and forgot to remove it. Either way, several sites, like Tech-Report and HardOCP, disabled trilinear optimisations on NV40 when comparing to R420.


ATI has already admitted to that being a document error. They claimed it was a cut-and-paste from the older docs and was not proofread. Normally I might not buy that... however, seeing the released RH slides with notes on them leads me to believe that they could screw up like that again :)

And in those comparisons, did you see any difference in IQ (meaning tri on/off)?

ATI should have a box in their driver with three options for trilinear filtering optimization: "Always on", "Always off", and "Auto".

Yeah, that would be nice... as well as a supersampling option, but I am not holding my breath :)
 
What I would like to see is a poll with two unlabeled pictures. After 100 votes it should be clear whether people are really seeing the difference.
 
trinibwoy said:
Whoa!!! :LOL: You're changing the rules on the fly man. Let's assume we have R420A (optimized) and R420B (regular). Now we don't know if R420B has better IQ than R420A cause ATI is being stingy. But let's assume that R420B has better IQ since I don't feel like taking ATI's word for it. You would prefer R420A over R420B if A has an extra 2-5% advantage over NV40? Why does this comparison have to be made in context of this discussion at all?

We have A/B/C/D and of course E.

A and B have been around for a long time. C is new. D and E are both new and from the same product. D is equivalent to A and B, and at least equivalent to C. E has not been seen nor tested; it may or may not be better than D.

If the cone of comparison is against A/B/C vs D does it matter what E is?

Aaron Spink
speaking for myself inc.
 
aaronspink said:
We have A/B/C/D and of course E.

A and B have been around for a long time. C is new. D and E are both new and from the same product. D is equivalent to A and B, and at least equivalent to C. E has not been seen nor tested; it may or may not be better than D.

If the cone of comparison is against A/B/C vs D does it matter what E is?

Yeah but my cone is bigger than your cone :p :LOL: :LOL: :LOL:
 
trinibwoy said:
aaronspink said:
We have A/B/C/D and of course E.

A and B have been around for a long time. C is new. D and E are both new and from the same product. D is equivalent to A and B, and at least equivalent to C. E has not been seen nor tested; it may or may not be better than D.

If the cone of comparison is against A/B/C vs D does it matter what E is?

Yeah but my cone is bigger than your cone :p :LOL: :LOL: :LOL:

The point being that if a tree falls in the woods but no one sees nor hears it, does it matter?

We don't know what option E is. We don't need to know. We may/will never know. It doesn't change the issue at hand: is D as good as, if not better than, A/B/C? If yes, then this is a lot of nothing; if no, then we need to see E.

But there is no need to see E if D is adequate. Or are you next going to complain that everyone doesn't implement 32x jittered SSAA as an option? Obviously the quality would be better; you may only get 1 FPS, but damn, that quality is great.

Aaron Spink
speaking for myself inc.
 
I can't tell a difference, I just can't, no matter how hard I try. :(

Is there a "worst case scenario" screenshot that will really show off/highlight the difference? :|
 
croc_mak said:
Changing the contents of an application's texture maps and expecting to get the same performance is a naive assumption...

According to Carmack, there should be no performance difference in the Q3 engine between colored mips and normal ones (of course assuming the same filtering is being done, which is not the case here and is why performance differs now).
 
We don't know what option E is. We don't need to know. We may/will never know

YOU don't need to know. ;)


It doesn't change the issue at hand: is D as good as, if not better than, A/B/C? If yes, then this is a lot of nothing; if no, then we need to see E.

But there is no need to see E if D is adequate. Or are you next going to complain that everyone doesn't implement 32x jittered SSAA as an option? Obviously the quality would be better; you may only get 1 FPS, but damn, that quality is great.


Come on. Don't be so arrogant. We have no idea what the performance hit is on R420 with 'real' TRI. Would you be so gung-ho if you found out that you could get real TRI with a 2% drop in performance and a noticeable image quality improvement? These are all unknowns, but your ridiculous 32X SSAA analogy just goes to show the lengths you will go to in order to evangelize this optimization. :!:
 
Mintmaster said:
ATI should have a box in their driver with three options for trilinear filtering optimization: "Always on", "Always off", and "Auto".

The problem is, though, that ATi and nVidia haven't actually done the same thing at all. If you'll recall, nVidia started down that slippery slope initially with a set of drivers geared to provide brilinear on detail textures in exactly one game: UT2K3, because of its use as a benchmark, obviously, and their concern that nV3x look better compared to R3x0 in benchmarks. What happened is that nVidia made it application-specific initially, which is something ATi did not do. You had no trouble getting full trilinear under those drivers with nV3x in U2, for instance. It was just UT2K3-specific, in the beginning.

Only later, as I recall, after much criticism about that particular behavior with respect to UT2K3, did nVidia enlarge this behavior to encompass all games, so that full trilinear in effect became impossible in any game--as I understand it--regardless of whether or not the CP was bypassed with the "application" setting to allow the game to instruct the drivers as to which texture layers to treat with trilinear. While ATi's CP settings generically restricted tri to the first layer, this was easily bypassed by setting the CP to "application" and letting the game decide which texture layers ought to receive trilinear treatment. It was still much later before nVidia deigned to make this option selectable by way of a CP tickbox (which now allows the user to defeat brilinear, supposedly, when ticked, as the driver default is brilinear.)

Even in the latest incarnation of the Forcenators, though, all you get apparently is "on-off," which I will assume here actually works as advertised. But the ATi approach is to provide an algorithm the purpose of which is to at least semi-intelligently turn ATi's brand of brilinear on and off "automatically" dependent on certain criteria relative to IQ and filtering that ATi has selected for (which they've explained.) The nVidia approach to brilinear does not seem the same at all to me, as ostensibly it is entirely set up to be either on all the time or off all the time (assuming the bri defeat switch actually does defeat it in all cases, which I'm not sure has been verified yet.) So that's the first difference--ATi's approach is algorithmic and is never "on" all the time, depending on the filtering conditions it encounters, whereas nVidia's is a manual on-off only.

The second difference to me is that I read several reports of people seeing evident mipmap boundaries with nVidia's approach to bri, especially in the beginning with UT2K3, but so far the only thing I've seen of the same for ATi is a misreported case of CoD in which plain bilinear was actually being done and was wrongly assumed to be an example of ATi's brilinear.

So, I think there are some major differences, apparently, between the two approaches, and we ought not let the temptation to generalize get in our way of uncovering the facts relative to each method, and its efficacy or lack thereof.
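Putting WaltC's algorithmic-versus-manual distinction together with Mintmaster's earlier "Always on / Always off / Auto" suggestion, here is a hypothetical sketch in C of what such a driver control could look like, layered over an adaptive decision like the one sketched earlier in the thread. The enum, names, and logic are invented for illustration and do not correspond to any real driver code.

Code:
/* Hypothetical tri-state control for the filtering optimization. */

typedef enum {
    TRIOPT_ALWAYS_OFF,  /* optimization off: full trilinear everywhere         */
    TRIOPT_ALWAYS_ON,   /* optimization on everywhere (manual, brilinear-like) */
    TRIOPT_AUTO         /* adaptive: decide per texture / per mip transition   */
} TrilinearOptMode;

/* Returns nonzero when the full trilinear blend should be used. The
 * mips_differ_noticeably flag stands in for an adaptive test such as the
 * needs_full_trilinear() sketch above. */
int use_full_trilinear(TrilinearOptMode mode, int mips_differ_noticeably)
{
    switch (mode) {
    case TRIOPT_ALWAYS_OFF: return 1;
    case TRIOPT_ALWAYS_ON:  return 0;
    case TRIOPT_AUTO:       return mips_differ_noticeably;
    }
    return 1;  /* unknown mode: fall back to full quality */
}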
 
Stryyder said:
ATI is making graphics accelerators. The goal is to produce the best possible experience, which means the best graphical presentation possible at acceptable performance levels... No one needs or wants picture-perfect at 2 FPS.

Are you speaking for everybody here? :oops:
 
trinibwoy said:
We don't know what option E is. We don't need to know. We may/will never know

YOU don't need to know. ;)

Exactly. Thanks for agreeing.

Come on. Don't be so arrogant. We have no idea what the performance hit is on R420 with 'real' TRI. Would you be so gung-ho if you found out that you could get real TRI with a 2% drop in performance and a noticeable image quality improvement? These are all unknowns, but your ridiculous 32X SSAA analogy just goes to show the lengths you will go to in order to evangelize this optimization. :!:

We don't need to know. Seriously, if the currently exposed technique has equivalent or better image quality than previous trilinear methods, and better performance in general, does it matter if there is another method available, and if so, why?

And I'm not going to any lengths to evangelize this. Show me an issue and I'll deal with it, but the arguments so far are simply illogical. There is a method that is used, that has higher performance, and that appears to have equivalent or better image quality.

So far no one has been able to demonstrate an issue with the quality available.

I'll repeat, so far no one has been able to demonstrate an issue with the quality available.

It appears that some people have their britches in a twist because they weren't personally informed of this change.

Compare/contrast this with brilinear, where people were quickly able to produce screenshots from real gameplay showing noticeable image quality degradation.

Aaron Spink
speaking for myself inc.
 
Miksu said:
What I would like to see is a poll with two unlabeled pictures. After 100 votes it should be clear whether people are really seeing the difference.

I'd love to see a poll comparing two pics taken with the same card, only minutely apart in position, so some pixels are off, to see what the conclusion would be... :devilish:
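For what it's worth, that kind of comparison is easy to sketch. Here is a rough C example; the function name and tolerance are made up, and loading the two raw RGB screenshots is assumed to happen elsewhere. Even two captures from the same card, taken a tiny camera nudge apart, will report plenty of differing pixels.

Code:
#include <stdlib.h>

typedef struct { unsigned char r, g, b; } Pixel;

/* Count pixels whose color differs by more than `tolerance` in any channel
 * between two same-sized raw RGB screenshots of n pixels each. */
size_t count_differing_pixels(const Pixel *a, const Pixel *b, size_t n,
                              int tolerance)
{
    size_t diff = 0;
    for (size_t i = 0; i < n; ++i) {
        if (abs((int)a[i].r - (int)b[i].r) > tolerance ||
            abs((int)a[i].g - (int)b[i].g) > tolerance ||
            abs((int)a[i].b - (int)b[i].b) > tolerance)
            ++diff;
    }
    return diff;
}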
 
aaronspink said:
The point being that if a tree falls in the woods but no one sees nor hears it, does it matter?

We don't know what option E is. We don't need to know. We may/will never know. It doesn't change the issue at hand: is D as good as, if not better than, A/B/C? If yes, then this is a lot of nothing; if no, then we need to see E.

But there is no need to see E if D is adequate. Or are you next going to complain that everyone doesn't implement 32x jittered SSAA as an option? Obviously the quality would be better; you may only get 1 FPS, but damn, that quality is great.

Aaron Spink
speaking for myself inc.

Aaron, we're talking about the top of the range here. The biggest whizz-bang from ATI, with the accompanying big-bang hit on the bank account. If I'm paying that kind of money for a graphics card, I would like to not only try D and E depending on my mood on any given day, but also know whether it can do F, G, H, ad infinitum when they come out.

Going back to your analogy, if I'm paying that kind of money, I would like one, two, no three stooges to go there and listen, see, measure the hell out of that tree falling and report back to me :devilish:

edit: typo
 
Drak said:
Going back to your analogy, if I'm paying that kind of money, I would like one, two, no three stooges to go there and listen, see, measure the hell out of that tree falling and report back to me :devilish:

LOL. For real though, by Aaron's reasoning, if Nvidia never improves IQ, neither should ATI. That is simply false by all standards.
 