Good explanation of filtering (must read for beginners)

DaveBaumann said:
ATI are concerned about the number of options in the control panel, so they don't seem keen on adding another checkbox. My suggestion to them has been to just keep this as the default option but have it as one notch down the slider, with the full notch as standard trilinear. I think there is some feeling at their end that this is tantamount to a "remove-some-performance-for-no-IQ-gain" option, so they weren't too keen on the idea initially. However, if people can spot the differences and they are highlighted, hopefully they will recant on this.

Wouldn't you rather they fixed it? Just look at how long it's taking anyone to find anything... and it's not like there aren't a fair old number of 9600s out there. I'd say the number of cases might be quite limited, if there really are any that are noticeable (and frankly, if it ain't 8x AF minimum I don't care about it).

I'd rather the technique were refined rather than abandoned because a few enthusiasts threw the toys out of the pram.
 
Stryyder said:
the fact is that ATi's X800 currently cannot handle trilinear filtering and NVIDIA's GeForce 6800 delivers a theoretically better filter quality, at least when the trilinear optimization is disabled in the driver

So much wrong with this statement, I don't know where to begin...

Congrats on picking apart the entire article to find something to bitch about :LOL: :LOL: :LOL:

Anyway, what exactly is so wrong with the statement? Isn't full trilinear theoretically better than optimized trilinear? I know that the texture stage optimization can be bypassed if AF is set in the game (very few titles) and not via the console, but do you know of some way to disable the trilinear optimization and demonstrate the X800's capability (hardware + driver) of full trilinear in all situations?
 
trinibwoy said:
Stryyder said:
the fact is that ATi's X800 currently cannot handle trilinear filtering and NVIDIA's GeForce 6800 delivers a theoretically better filter quality, at least when the trilinear optimization is disabled in the driver

So much wrong with this statement, I don't know where to begin...

Congrats on picking apart the entire article to find something to bitch about :LOL: :LOL: :LOL:

Anyway, what exactly is so wrong with the statement? Isn't full trilinear theoretically better than optimized trilinear? I know that the texture stage optimization can be bypassed if AF is set in the game (very few titles) and not via the console, but do you know of some way to disable the trilinear optimization and demonstrate the X800's capability (hardware + driver) of full trilinear in all situations?


Are you looking for a way other than the tool that ATI has provided to Dave Baumann and others to do the comparisons that have been in about half a dozen threads in this forum, or are you just ignorant of the tool's existence?
 
Stryyder said:
Are you looking for a way other than the tool that ATI has provided to Dave Baumann and others to do the comparisons that have been in about half a dozen threads in this forum, or are you just ignorant of the tool's existence?

Nah I know of it but you seem to have the market cornered on ignorance anyway ;)

I don't feel like engaging in another pointless ATI rulez/ATI sucks discussion so I'll just leave it here. I don't play with 'tools that ATI has provided to Dave', I play games. Show me a game with forced full trilinear across the board and your point will have been made 8)

We should implement a dedicated flaming thread so all the junkies can get their fix in one place.
 
trinibwoy said:
Stryyder said:
Are you looking for a way other than the tool that ATI has provided to Dave Baumann and others to do the comparisons that have been in about half a dozen threads in this forum, or are you just ignorant of the tool's existence?

Nah I know of it but you seem to have the market cornered on ignorance anyway ;)

I don't feel like engaging in another pointless ATI rulez/ATI sucks discussion so I'll just leave it here. I don't play with 'tools that ATI has provided to Dave', I play games. Show me a game with forced full trilinear across the board and your point will have been made 8)

We should implement a dedicated flaming thread so all the junkies can get their fix in one place.

First off, don't be a prick. I didn't imply you were ignorant; I asked if you were aware of something. If I thought you were ignorant I would have said that to you, not asked you. Secondly, I think it has been widely accepted here that the burden of proof is on those who claim the optimization results in worse image quality. And as I originally said about Lars' article, it was a very good explanation of the technology issue, especially for the layman, but the conclusions were in my opinion biased and ironic considering their silence on true NVIDIA cheating issues in the past. Remember that Lars is the same guy who said:

http://www20.tomshardware.com/graphic/20030127/geforce_fx-29.html

NVIDIA takes the crown! No question about it - the GeForceFX 5800 Ultra is faster than the competition from ATI's Radeon 9700 PRO in the majority of the benchmarks. However, its lead is only slight, especially compared to the distance that ATI put between its Radeon 9700 PRO and the Ti 4600. Still, when compared to its predecessor, the GeForce4 Ti, the FX represents a giant step forward.

And we remember how that turned out, don't we? I don't remember ever seeing the follow-ups to that review or the promised in-depth IQ comparisons.

Regardless, if your point is that ATI went about this in the wrong way and that they should have had better disclosure on the issue, that's a good point. As to my point being made, it has been made by the half dozen threads in which no one can conclusively say that the image quality is worse, and by the fact that 9600 variant owners have not noticed it for over 6 months.

As to the flaming issue, if I intended to flame you I would have, and you would have known it. But don't pretend that you haven't already thrown stones in discussions extremely similar to this one in other threads.
 
jvd said:
But nvidia did worse than what ati has done. They had lowered iq noticeably. Yet their fans defended them.

You are defending ATI now as a fan; it seems kind of silly to bash others for what you are doing yourself. Furthermore, looking at Dave's polls on methods of filtering, it certainly seems like a wash all around until people are told what to vote for.
jvd said:
If the dev asks for 32bit fp then they should get it. It's not up to nvidia to choose when a dev gets what he asks for.

Now, when does a dev ask for fp32? And if they do ask for that, are you suggesting that ATI cards should just have a pop-up window saying "oops, sorry, we can't do it"?

If you are really trying to imply that when they ask for full precision, which you view as fp24 or fp32, they should get one or the other, then that is what you should say, instead of implying that they are actually asking for fp32, or full trilinear *cough cough*.
 
BAH! I'm getting tired of the whole (over)enthusiast scene, particularly where all these cheats/optimizations are concerned. Both IHVs are optimizing. We are being thrown into a situation where it is becoming increasingly difficult to make apples-to-apples comparisons. ATI's "trylinear" filtering gets an OK from me until it is proven to make drastic IQ changes. Yes, that's right, drastic. If I have to use some software proggy to see the difference outside of a normal gaming environment then the whole matter is moot to me. I don't like how "brilinear" affects IQ, never have. The whole matter of forced FP16 I never liked either.
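For anyone unsure what the terms mean in practice: below is a minimal C sketch of the general idea, assuming the usual description of a "brilinear"-style optimisation as a narrowed blend band between mip levels. The function names and the band parameter are invented purely for illustration; ATI's adaptive "trylinear" is more involved than this, so treat it as a sketch of the concept, not either vendor's actual algorithm.

Code:
#include <math.h>

/* Full trilinear: blend two adjacent mip levels using the fractional LOD. */
float trilinear_weight(float lod)
{
    return lod - floorf(lod);   /* 0 = lower level only, 1 = upper level only */
}

/* Reduced ("brilinear"-style) blend: only blend within a narrow band around
 * the level transition; elsewhere fall back to plain bilinear from one level.
 * 'band' is an assumed tuning knob, e.g. 0.25 instead of the full 1.0. */
float reduced_trilinear_weight(float lod, float band)
{
    float f  = lod - floorf(lod);
    float lo = 0.5f - 0.5f * band;
    float hi = 0.5f + 0.5f * band;
    if (f <= lo) return 0.0f;        /* pure bilinear from the lower level */
    if (f >= hi) return 1.0f;        /* pure bilinear from the upper level */
    return (f - lo) / (hi - lo);     /* short linear ramp across the band  */
}

The smaller the band, the more texels come from a single mip level (cheaper), and the more visible the mip transition can become; full trilinear is simply the band stretched across the whole interval.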

On and on these cheating accusations go, for which I believe NV gets the credit in the first place. I have always thought that ATI would be in a position to rationalize optimizations because their competition does. Now we are being put in a situation where we are less and less able to determine, without the use of subjective material, which IHV has the better solution, particularly at the high end. For example, ATI cards get XXX on such and such a benchmark... "oh well, that is because they use XXX filter implementation"; NV gets XXX on that bench... "but they use XXXX to degrade the output, increasing their FPS"; on and on.

I am beginning to see some really good logic in [H]'s take on the whole benchmarking scene. In the end it all comes down to a subjective end-user experience, and it is getting more and more difficult to quantify that experience. [H] did a fantastic job in their most recent review comparing the 6800U vs. the X800 series cards. I am of the opinion that other sites ought to follow that model or become irrelevant.
 
Yes, Sabastian, you are right in many ways.

And as JC is quoted as saying in THG, something like "It is actually a pretty sensible performance enhancement".

And yeah, I don't disagree with that. I just think, as always, it is nice if they give me, the consumer, more choices. Even if I chose to use trylinear all the time, I would feel better with a checkbox for trilinear, even though I personally can't tell much difference between any of them.

Heck, ATI could include a box that said it enabled trilinear and did no such thing, and I would probably never be the wiser :p Of course that would really make folks mad... Anyway, I have always said that I like most optimizations, but especially those that, like this one, apply across all games and all 3D apps.

But Sabastian, I differ in some respects: it is still very useful to be able to tell what capabilities a card has by testing specific parts of the architecture; inferences about the abilities of the card can then be made.
 
DaveBaumann said:
My suggestion to them has been to just keep this as the default option but have it as one notch down the slider, with the full notch as standard trilinear.

I'd like to comment on this since this is very close to something I've been thinking for a long time...

The only problem with that is that many users, most of whom won't be knowledgeable on the subject, will see that the slider is not all the way to the right and blindly turn it the rest of the way up. And if I understand correctly, this is actually ATI's concern: that users won't understand the quality is nearly identical between the two settings.

My proposed solution is to actually make the trylinear notch the full notch on the slider, with standard trilinear as a notch down.

While this may seem a bit backwards, it actually solves everyone's problems. It gives the enthusiasts and reviewers the option to turn it off, and helps keep non-enthusiasts from disabling something that ATI (and many users) feel doesn't degrade quality and improves performance.
 
The only time I would be interested in getting rid of performance-enhancing optimizations would be when performance is not a factor. For example, if I am running a game at over 100 FPS... WTF do I want that for? Anywhere below 50 FPS, though, I would be interested in optimizations that increase performance with little to no IQ disparity.

Scotty, I am not saying we ought to throw out the baby with the bathwater. I agree with you that specifics can be addressed, but I'm not interested in nitpicking filtering optimizations that affect a few pixels here and there and crying foul over them. Sure, hammer the optimizations that are app-specific and degrade IQ significantly. Otherwise, forget all this over-analyzing nonsense, particularly if there are no noticeable IQ disparities. It is getting ridiculous.
 
I thought the article was pretty good. I'm more of a gamer and only a mild tech-head, so it was nice to not have to put my safety helmet on when I read the article! :LOL:

In regards to the optimizations, I don't really have a problem with them as long as they are mentioned (documented) and the end user has the option to disable them. Doesn't seem like rocket science to me. :?

I am perhaps more concerned about game specific optimizations, and if they will affect the visuals of the games I play. :devilish:
 
Stryyder said:
First off, don't be a prick
But that's my only talent, you can't take that away from me :cry:

Secondly, I think it has been widely accepted here that the burden of proof is on those who claim the optimization results in worse image quality.
Ummm, we're not debating IQ here, just the validity of Lars' statement about full trilinear capability. Please don't change the topic ;)

And as I originally said about Lars' article, it was a very good explanation of the technology issue, especially for the layman, but the conclusions were in my opinion biased and ironic considering their silence on true NVIDIA cheating issues in the past.

Well, there may be some truth to that, but in my opinion, if you were to read the particular article at hand in isolation, you would need a sizeable chip on your shoulder to find any significant bias there. I guess when you go looking for something you'll eventually convince yourself that you've found it :?

As to my point being made, it has been made by the half dozen threads in which no one can conclusively say that the image quality is worse, and by the fact that 9600 variant owners have not noticed it for over 6 months.

Again, I was not discussing IQ or the end result/value of the optimization. Like you said, there are several other threads on this topic.

As to the flaming issue, if I intended to flame you I would have, and you would have known it. But don't pretend that you haven't already thrown stones in discussions extremely similar to this one in other threads.

I don't throw stones, only bottles....significant damage but less chance of killing somebody :LOL: :LOL:
 
Sabastian said:
The only time I would be interested in getting rid of performance-enhancing optimizations would be when performance is not a factor. For example, if I am running a game at over 100 FPS... WTF do I want that for? Anywhere below 50 FPS, though, I would be interested in optimizations that increase performance with little to no IQ disparity.

I think the point was made previously that the optimizations on the high-end cards are for bragging rights and not really for the consumer's experience. As long as the min/avg fps in a title is above a certain level (some say 30/60), why do we need optimizations which only give us a few extra fps with a possible negative impact on IQ? We sure don't need them, but it's a fact that bigger numbers rule, and it'll take a revolution to change that anytime soon.
 
trinibwoy said:
Stryyder said:
First off, don't be a prick
But that's my only talent, you can't take that away from me :cry:

Secondly, I think it has been widely accepted here that the burden of proof is on those who claim the optimization results in worse image quality.
Ummm, we're not debating IQ here, just the validity of Lars' statement about full trilinear capability. Please don't change the topic ;)

And as I originally said about Lars' article, it was a very good explanation of the technology issue, especially for the layman, but the conclusions were in my opinion biased and ironic considering their silence on true NVIDIA cheating issues in the past.

Well, there may be some truth to that, but in my opinion, if you were to read the particular article at hand in isolation, you would need a sizeable chip on your shoulder to find any significant bias there. I guess when you go looking for something you'll eventually convince yourself that you've found it :?

As to my point being made, it has been made by the half dozen threads in which no one can conclusively say that the image quality is worse, and by the fact that 9600 variant owners have not noticed it for over 6 months.

Again, I was not discussing IQ or the end result/value of the optimization. Like you said, there are several other threads on this topic.

As to the flaming issue, if I intended to flame you I would have, and you would have known it. But don't pretend that you haven't already thrown stones in discussions extremely similar to this one in other threads.

I don't throw stones, only bottles....significant damage but less chance of killing somebody :LOL: :LOL:

Damn, he is too charming to be pissed off at, isn't he?
 
I agree with you that specifics can be addressed, but I'm not interested in nitpicking filtering optimizations that affect a few pixels here and there and crying foul over them. Sure, hammer the optimizations that are app-specific and degrade IQ significantly. Otherwise, forget all this over-analyzing nonsense, particularly if there are no noticeable IQ disparities. It is getting ridiculous.

It's not only one optimisation on recent graphics accelerators when it comes to texture filtering, but rather a combination of numerous optimisations. If I get MIPmap banding combined with texture aliasing in scenery where it shouldn't actually be, then it's not a minor side-effect to me (it might be for everyone else). How would you get MIPmap banding? Texturing stage optimisations in an application that has the wrong textures in the wrong texturing stages. And how do you get more texture aliasing than necessary? LOD optimisations (I haven't seen them yet in real time and I'd need an R420 to judge; yet if it's true that negative LOD values are being used in some spots, then it does mean added aliasing, especially when you're restricted to multisampling). I already use, more often than not, a MIPmap LOD value of +0.5 on the R300 to get rid of aliasing, and I really can't see how a negative value lower than "0" would be for the better.
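To make the LOD point concrete, here's a minimal C sketch of standard mip-level selection as the GL/D3D specs describe it; the function name and parameters are just for illustration. A positive bias such as +0.5 reaches the smaller, pre-filtered mip levels sooner (slightly blurrier, less shimmer), while a negative bias keeps the more detailed level longer, which is exactly the added aliasing described above, and multisampling does nothing to hide it since it doesn't anti-alias texture detail.

Code:
#include <math.h>

/* rho is the texel-to-pixel scale factor of the texture footprint. */
float mip_lambda(float rho, float lod_bias, float max_level)
{
    float lambda = log2f(rho) + lod_bias;   /* base LOD plus application/driver bias */
    if (lambda < 0.0f)      lambda = 0.0f;  /* magnification: stay on the base level */
    if (lambda > max_level) lambda = max_level;
    return lambda;   /* integer part = mip level, fractional part = trilinear blend  */
}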

The only time I would be interested in getting rid of performance-enhancing optimizations would be when performance is not a factor. For example, if I am running a game at over 100 FPS... WTF do I want that for? Anywhere below 50 FPS, though, I would be interested in optimizations that increase performance with little to no IQ disparity.

In the case of an R420 or NV40 today, it's really not a matter of playable or unplayable anymore with optimisations on or off. It's rather a matter of the two IHVs striving to win the dreaded benchmark crown in reviews.

All I want is the luxury, as a user, of deciding for myself what I want to use and where. If I like an optimisation I'll use it; if not, I'll just disable it.

-----------------------------------------------------------------------

Finally, no matter whether reviewers get accused of having an agenda, or of mistakes, ignorance or whatever else, I'd personally prefer that the community encourage them to continue with IQ-analyzing reviews/articles. If they attempt it, they get lynched for being biased; if not, then sterile benchmark results in colourful graphs get hammered because there are no IQ comparisons.

What do we actually want in the end? I might not agree with each and every relevant article out there, but I don't just want them to exist, I want them to increase in the foreseeable future too. I want as much as possible exposed, and rest assured there will always be reviewers who tend more to one river bank or the other. What I then have as a user, though, is the luxury of knowing as early as possible what is really going on with each offering, and I can then decide what I can tolerate and what I can't.

Alas, if I have to find that out myself after the purchase, it would kill the purpose of reading any review/analysis for me. All I will have bought then is a benchmark winner, which I'm not really interested in.
 
Ailuros, I agree for the most part. I might differ a bit on the aliasing issue, depending on the degree of it. I couldn't agree more on the reviewer end of things, and that is why I pointed out the need to change the benches-only review format. My example being [H]'s 6800U vs. X800 review.
 
Sxotty said:
jvd said:
But nvidia did worse than what ati has done. They had lowered iq noticeably. Yet their fans defended them.

You are defending ATI now as a fan; it seems kind of silly to bash others for what you are doing yourself. Furthermore, looking at Dave's polls on methods of filtering, it certainly seems like a wash all around until people are told what to vote for.
jvd said:
If the dev asks for 32bit fp then they should get it. It's not up to nvidia to choose when a dev gets what he asks for.

Now, when does a dev ask for fp32? And if they do ask for that, are you suggesting that ATI cards should just have a pop-up window saying "oops, sorry, we can't do it"?

If you are really trying to imply that when they ask for full precision, which you view as fp24 or fp32, they should get one or the other, then that is what you should say, instead of implying that they are actually asking for fp32, or full trilinear *cough cough*.

You don't know how to read, do you?

No one here can show any reduced iq with ati's version. Yet they did with nvidia. That is a fact. Unless you have proof otherwise; and no, subtracting a picture from another picture is not proof, as no one can see the image quality difference. With the first few versions of nvidia's way you could see the horrible difference.


Also, it wasn't me who decided that fp24 counts as full precision; that was MS. It is not my fault that nvidia supports 16/32bit and not 24bit. That is nvidia's fault. For them, full precision under the dx9 specs is fp32.

A dev never asks for 32bit or 24bit. They ask for full precision or use pp hints. Full is 24/32bit and pp hints are 16bit.

Once again, it is not my fault that nvidia supports fp32 but cannot run it at a fast enough speed, so that they have to force 16bit, which is not full precision.
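To illustrate the full-precision vs. _pp distinction in code form, here is a hypothetical C sketch of how a shader's precision request could be resolved against what the hardware offers under the DX9 rules (full precision requires at least FP24, while a _pp hint permits dropping to FP16). The enum names and capability flags are invented for illustration and are not any vendor's actual driver code.

Code:
/* What the shader asks for: full precision, or a partial-precision (_pp) hint. */
typedef enum { REQUEST_FULL, REQUEST_PP } PrecisionRequest;
typedef enum { HW_FP16 = 16, HW_FP24 = 24, HW_FP32 = 32 } HwPrecision;

/* A format a conforming implementation may run the instruction at, given the
 * formats the hardware supports (the has_* flags are assumed inputs). */
HwPrecision resolve_precision(PrecisionRequest req,
                              int has_fp16, int has_fp24, int has_fp32)
{
    if (req == REQUEST_PP && has_fp16)
        return HW_FP16;   /* _pp hint: FP16 is permitted                */
    if (has_fp24)
        return HW_FP24;   /* R3xx/R4xx style: FP24 already meets "full" */
    if (has_fp32)
        return HW_FP32;   /* NV40 style: without FP24, full means FP32  */
    return HW_FP16;       /* hardware below spec: best it can do        */
}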
 