Good explanation of filtering (must read for beginners)

DaveBaumann said:
However, if people can spot the differences and they are highlighted hopefully they will recant on this.
Most 9600 or X800 owners won't have a chance to spot the differences (if there are any) without a software toggle for trylinear. Or does ATi mean for "people" to run two PCs side by side with different video cards, or to swap cards in the same rig? Screenshots can't fully capture how visible mipmap bands are in motion, so does ATi really expect people to create and compare videos of try and tri to spot the difference?
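
For anyone who does want to try the comparison themselves, below is a minimal sketch of a screenshot diff in Python. The filenames are hypothetical placeholders and it needs Pillow and NumPy; it measures per-pixel deltas between a trilinear capture and a "trylinear" capture and amplifies them so faint mip-transition bands show up, though it still can't show motion.

Code:
# Sketch: quantify the per-pixel difference between two screenshots taken
# with standard trilinear and with the adaptive ("trylinear") filter.
# The filenames are hypothetical placeholders; requires Pillow and NumPy.
import numpy as np
from PIL import Image

tri = np.asarray(Image.open("shot_trilinear.png").convert("RGB"), dtype=np.int16)
adaptive = np.asarray(Image.open("shot_trylinear.png").convert("RGB"), dtype=np.int16)

diff = np.abs(tri - adaptive)                       # per-channel absolute delta
print("max per-channel delta :", diff.max())
print("mean per-channel delta:", diff.mean())
print("pixels that differ    : %.2f%%" % ((diff.sum(axis=2) > 0).mean() * 100))

# Amplify and save the difference so faint mip-transition bands become visible.
Image.fromarray(np.clip(diff * 16, 0, 255).astype(np.uint8)).save("diff_x16.png")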
 
radar1200gs said:
FUDie said:
radar1200gs said:
The difference is how much data has to be moved around the chip at any one time. FP16 halves FP32's requirements in that regard and allows for more register space which can assist scheduling.
This bandwidth is free. End of discussion. Register space is something completely different. Your whole premise is flawed.
Nothing is ever free. Especially not register and memory bandwidth.
Yes, it is free because of the chip's design. Register bandwidth is free, this has been proven even on the NV3x. Register storage is not free, but that's not what you were saying.

At the beginning you claimed that partial precision doesn't refer to internal precision, which it, of course, does. Then you said that pixel shaders use bandwidth, which they don't. Then you said internal storage is reduced, which is true, but has nothing to do with what you started off as saying.

You can't use one correct statement to justify a whole host of incorrect ones.

The NV3x is one example we have that benefits from partial precision. Is it because of "reduced internal bandwidth"? No. It's strictly because the number of pixels in flight is increased. The chip has enough internal data lines to support FP32 at full speed, this has been shown. What it doesn't have is a large enough internal register file to keep enough pixels in flight to hide all the latency when FP32 is used.

-FUDie
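
A back-of-the-envelope version of that register-file argument, with made-up illustration numbers rather than actual NV3x figures: halving the per-register footprint roughly doubles the number of pixels that fit in flight, which is what hides texture latency.

Code:
# Back-of-the-envelope sketch of the register-file argument above.
# All sizes are made-up illustration values, not actual NV3x figures.
REGISTER_FILE_BYTES = 8192      # assumed on-chip register file per pipeline
FP32_REG_BYTES = 16             # one 4-component FP32 temporary
FP16_REG_BYTES = 8              # one 4-component FP16 temporary

def pixels_in_flight(live_temps: int, reg_bytes: int) -> int:
    """Pixels that fit in the register file when a shader keeps
    `live_temps` temporaries alive per pixel."""
    return REGISTER_FILE_BYTES // (live_temps * reg_bytes)

for temps in (2, 4, 8):
    print(f"{temps} live temps: "
          f"FP32 -> {pixels_in_flight(temps, FP32_REG_BYTES)} pixels in flight, "
          f"FP16 -> {pixels_in_flight(temps, FP16_REG_BYTES)} pixels in flight")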
 
whql said:
radar1200gs said:
In contrast ATi is forcing AF & Trilinear optimizations upon you whether you want them or not.
So is nvidia. On the boards they have sold in the past 14 months, they are only not forcing it for the 10 or so retail 6800s that have been sold, with one driver release so far. That's hardly a commitment.
And let's not forget the "special" 61.11 driver where the option to disable trilinear optimizations was "broken".

-FUDie
 
jvd said:
I'm still waiting for problem cases though. People keep saying they see them but never post any proof. They just ignore me asking for, and offering to host, their proof.

Kinda funny, huh?
Sometimes they also point to an article which shows the mathematical difference, or to made-up software so YOU can see the mathematical differences, with a conclusion of "how can it NOT cause serious IQ problems?"

Oh, and sometimes they post pics with differences far smaller than the usual ones between the differing techniques ATi, nVidia, and others always have in games, with a big "AHA!"

Hopefully that stuff will come in time, looking at a broad scale of games and situations. Regardless, I do think ATi should have made sure to keep people apprised of their "advanced trilinear algorithm(s)" and kept it optional, so there wouldn't be fuzziness about the situation and people would still be happy to "get what they want" while they try out something else for themselves. Heck, they'd get a lot more case testing done that way to refine the process, and gamers would judge at their leisure. There will be SOME differences, and each individual will react differently to them. (Though I imagine the vast majority would notice nothing at all.)

Otherwise, though, it's much like adaptive AF or getting stuck with only certain AA configurations where you prefer other methods--just something you have to live with.
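
For readers unsure what the filtering argument is actually about, here's a minimal sketch of full trilinear mip blending versus a reduced, "brilinear"-style blend. The 0.25 band is an arbitrary illustration value, not ATi's or nVidia's actual algorithm.

Code:
# Full trilinear blends adjacent mip levels using the fractional LOD directly;
# a reduced blend snaps most of the range to plain bilinear and only blends
# near the transition. The 0.25 band is an arbitrary illustration value.
def trilinear_weight(lod: float) -> float:
    """Blend weight between mip N and mip N+1 under full trilinear."""
    return lod - int(lod)               # fractional part of the LOD

def reduced_weight(lod: float, band: float = 0.25) -> float:
    """Blend only within +/- `band` of the mip transition point."""
    f = lod - int(lod)
    if f < 0.5 - band:
        return 0.0                      # pure bilinear from the finer mip
    if f > 0.5 + band:
        return 1.0                      # pure bilinear from the coarser mip
    return (f - (0.5 - band)) / (2 * band)

for lod in (3.0, 3.1, 3.3, 3.5, 3.7, 3.9):
    print(f"LOD {lod:.1f}: trilinear={trilinear_weight(lod):.2f}  "
          f"reduced={reduced_weight(lod):.2f}")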

FUDie said:
Why do you insist on defending a position which is indefensible?
When you say things enough times, it's true!
 
whql said:
radar1200gs said:
In contrast ATi is forcing AF & Trilinear optimizations upon you whether you want them or not.
So is nvidia. On the boards they have sold in the past 14 months, they are only not forcing it for the 10 or so retail 6800s that have been sold, with one driver release so far. That's hardly a commitment.

It's more of a commitment than ATi has made so far, and ATi has been optimizing since the 9600 was released.
 
radar1200gs said:
whql said:
radar1200gs said:
In contrast ATi is forcing AF & Trilinear optimizations upon you whether you want them or not.
So is nvidia. On the boards they have sold in the past 14 months, they are only not forcing it for the 10 or so retail 6800s that have been sold, with one driver release so far. That's hardly a commitment.

It's more of a commitment than ATi has made so far, and ATi has been optimizing since the 9600 was released.

nvidia has been optimizing since the FX series was released!!!

Not only that, but in the almighty 6800 Ultra they put in AF optimizations!!!!!

Oh no, radar, what are you going to do? Your god has been doing it for almost 2 years!!!!!!

Oh wait, you're a troll, so it won't matter to you.

Good day. I said good day!
 
FUDie said:
whql said:
radar1200gs said:
In contrast ATi is forcing AF & Trilinear optimizations upon you whether you want them or not.
So is nvidia. On the boards they have sold in the past 14 months, they are only not forcing it for the 10 or so retail 6800s that have been sold, with one driver release so far. That's hardly a commitment.
And let's not forget the "special" 61.11 driver where the option to disable trilinear optimizations was "broken".

-FUDie

They are beta drivers. Wait for the official release, then criticize.
 
radar1200gs said:
It's more of a commitment than ATi has made so far, and ATi has been optimizing since the 9600 was released.
...and nobody noticed. Though brilinear was noticed immediately, complained about immediately, and those concerns ignored stoically. (And then it was applied universally to get around one of their own internal optimization guidelines! Whee!)
 
1 checkbox - "Disable all optimizations."

How difficult is that? Not much clutter, puts forth tons of goodwill. I used to hate Nvidia for their stupid brilinear optimization, but this time, I'm going with the 6800U because they give users the option to turn it off.
 
jvd said:
nvidia has been optimizing since the FX series was released!!!

Now, I do kind of remember that ATI first optimized AF with the 8500. Angle-dependent AF is an optimization in my book, since my graphics reference books paint a different picture of what AF results should look like.

Nvidia going down the same route with adaptive AF is a step backwards, but here's hoping we can expose their non-angle-dependent AF in their drivers again.
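
As a rough illustration of what angle dependence means, here's a toy model in which the effective AF degree is clamped for surfaces whose anisotropy axis lies away from the horizontal and vertical. The clamp thresholds are invented for illustration; neither vendor's real selection logic is being shown here.

Code:
# Toy model of angle-dependent AF: full anisotropy along the horizontal and
# vertical axes, clamped for off-angle surfaces. The clamp thresholds are
# invented for illustration only.
import math

def max_aniso_degree(angle_deg: float, requested: int = 16) -> int:
    """Effective max AF degree for a surface whose anisotropy axis sits at
    `angle_deg` from horizontal."""
    off_axis = abs(math.sin(math.radians(2 * angle_deg)))  # 0 on-axis, 1 at 45 deg
    if off_axis > 0.7:
        return min(requested, 2)
    if off_axis > 0.3:
        return min(requested, 4)
    return requested

for angle in (0, 10, 22.5, 45, 67.5, 80, 90):
    print(f"{angle:5.1f} deg -> up to {max_aniso_degree(angle)}x AF")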
 
Smurfie said:
1 checkbox - "Disable all optimizations."

How difficult is that?

Err... incredibly difficult. It's an impossible facility to cater for without compromise.
 
Smurfie said:
jvd said:
nvidia has been optimizing since the FX series was released!!!

Now, I do kind of remember that ATI first optimized AF with the 8500. Angle-dependent AF is an optimization in my book, since my graphics reference books paint a different picture of what AF results should look like.

Nvidia going down the same route with adaptive AF is a step backwards, but here's hoping we can expose their non-angle-dependent AF in their drivers again.

Right, but I'm not the one claiming one company is better than the other. :)

I'm just telling radar that he is wrong, because nvidia has been doing this for a long time. (See also the TNT image quality decrease with the Detonator series, and then the GeForce 1 image quality decrease.)
 
Smurfie said:
I used to hate Nvidia for their stupid brilinear optimization, but this time, I'm going with the 6800U because they give users the option to turn it off.
I'd wait until the next driver--they may well be turning it off. (And we never know which way each will go.)

But yes, I'm always a fan of "more options." Disabling EVERYTHING each of us would see as an "optimization"--illicit or otherwise--is pretty much impossible, but some examples are easy to cover.
 
radar1200gs said:
They are beta drivers. Wait for the official release, then criticize.
They are beta drivers. Wait for official release before declaring that nvidia aren't defaulting to optimised trilinear. :rolleyes:
 
DaveBaumann said:
ATI are concerned about the number of options in the control panel, so they don't seem keen on adding another checkbox. My suggestion to them has been to just keep this as the default option but have it as one notch down the slider, with the full notch as standard trilinear. I think there is some feeling at their end that this is tantamount to a "remove-some-performance-for-no-IQ-gain" option, so they weren't too keen on the idea initially. However, if people can spot the differences and they are highlighted hopefully they will recant on this.
Another option would be to make it possible to disable it via a registry hack. Then those of us who care about it could use tweaker programs to turn it off if desired, but ATI's control panel wouldn't get more cluttered.

Problem solved. :)
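
A sketch of what such a tweaker could look like on Windows, using Python's winreg module. The key path and value name are hypothetical placeholders, not real documented Catalyst settings; the point is only that a single registry flag would keep the control panel uncluttered.

Code:
# HYPOTHETICAL sketch of a tweaker: flip a single driver flag in the registry
# instead of adding control-panel clutter. The key path and value name below
# are placeholders, not real documented Catalyst settings. Windows-only.
import winreg

KEY_PATH = r"SOFTWARE\ATI Technologies\Hypothetical"   # placeholder path
VALUE_NAME = "DisableAdaptiveTrilinear"                # placeholder value name

def set_adaptive_trilinear(disabled: bool) -> None:
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                             winreg.KEY_SET_VALUE)
    try:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, int(disabled))
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    set_adaptive_trilinear(True)    # request plain trilinear (hypothetically)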
 
Hmm... didn't I write an article waaay back then that covers this as part of texture mapping (but only in OpenGL)?
 
You have to optimize or else everything would run like a Parhelia. Not to knock that chip, but it is true. I don't mind optimizations, especially when they do not affect quality, or affect it only very slightly. You must take a middle road between performance and quality: you don't want crappy performance, but you also do not want an ugly picture. I think in this situation the optimization is unnecessary at this time. In a couple of years, maybe the X800 will benefit; right now, though, it is more than quick enough. Given that there seem to be few or close to no situations where quality is affected, I will not rag on ATI too much, and I'll say good job on optimizing while keeping quality.

Where I will knock ATI is on not disclosing what they did, and at some level deceiving people. Not to say they did it purposely, as we all know engineers and marketing are probably not the closest-knit group. We do not need to be upset with the engineers (they've done an awesome job lately), nor do we need to be mad at the marketing department (they probably were not keenly aware of the situation). Who we do need to question is the management, who have the RESPONSIBILITY to teach the marketing department what the engineering department has done with the products, and to do so ETHICALLY. Does this mean boycott ATI? No, unless you are a dense person who cares nothing about yourself and only for a company. What this means is we make it clear to ATI what we want, in a CIVILIZED manner. If everyone floods ATI with emails asking for clarification or for an option to select/deselect the optimization (or even a slider), then they would be stupid not to respond, for fear of alienating customers.

Hopefully, one of the IHVs will catch on, and the one who does will gain more customers (as ATI has recently). Either company, or both, could easily take a U-turn and totally change policy if we make it monetarily beneficial for them to do so. Well, that's it. Sorry for the length; I get carried away sometimes.
 
jvd said:
radar1200gs said:
jvd said:
radar1200gs said:
Precision is precision, internal or external, and the same goes for bandwidth. It all adds up.

Pixel shaders require bandwidth also. Using the _PP hint can reduce that bandwidth.

We have heard plenty of coders who post in this forum state that they have to be careful when coding not to run into the limits of the R3xx's shading bandwidth.
Well, for bandwidth's sake, I guess ATI should just force bilinear all the time and get rid of AF?

If the dev asks for 32-bit FP then they should get it. It's not up to nvidia to choose when a dev gets what he asks for.

When has nVidia recently forced precision changes upon us in drivers? That hasn't happened in a long time. Get over the past.

In contrast ATi is forcing AF & Trilinear optimizations upon you whether you want them or not.

Have you played Far Cry recently?

Not to mention how much else they do that we can't notice or haven't noticed.

Actually, the devs are the ones who have made Far Cry run at the shader precision it's using. The Nvidia cards run at FP32 when no PP hints are used, but basically every Shader 2.0 game seems to use PP hints now, so FP16 is on by default.
 
Actually, the devs are the ones who have made Far Cry run at the shader precision it's using. The Nvidia cards run at FP32 when no PP hints are used, but basically every Shader 2.0 game seems to use PP hints now, so FP16 is on by default.

No, you would be wrong. There are many threads about this; go read them.
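
For readers wondering what the _PP hint actually trades away, here is a small numerical illustration using NumPy's float16/float32 types: FP16 keeps roughly 10 mantissa bits versus FP32's 23, so large texture-coordinate-style values lose fractional precision while typical 0..1 colour math usually survives. The sample values are arbitrary.

Code:
# Compare FP32 against a round trip through FP16 (what the _PP hint permits).
# Uses NumPy's float16/float32 types; the sample values are arbitrary.
import numpy as np

for x in (0.1, 1.0 / 3.0, 100.25, 2048.3, 65519.0):
    as32 = float(np.float32(x))
    as16 = float(np.float16(x))         # round-trip through half precision
    print(f"{x:>10.4f}  fp32={as32:.7g}  fp16={as16:.7g}  "
          f"abs err={abs(as32 - as16):.3g}")

print("fp16 machine epsilon:", np.finfo(np.float16).eps)   # ~0.000977
print("fp32 machine epsilon:", np.finfo(np.float32).eps)   # ~1.19e-07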
 