ATi is Cheating in Filtering

Suspicious said:
1. They haven't used numerical optimizations that give ~3.8x speedup and introduce random one-bit errors that could also be deemed acceptable.
well.. i'm still waiting for more detail on what they've implemented, but the only obvious one is, of course, brilinear-style stuff. then again, it can be a small-range error in certain cases, which they try to detect.

2. They haven't invented a new algorithm for full trilinear filtering that works ~14.5x faster and gives the same result, as in the Chudnovsky + FFT vs. Gauss-Legendre case I mentioned.
they won't get such a factor out of it, as long as they do scanline rasterizing. it's still a once-per-pixel job. but there are ways they could; whether they do yet, we don't know. (discussions about one-mip-level trilinear, i guess you followed them).

Instead they "invented" an algorithm which reduces the precision of full trilinear filtering, or even falls back to bilinear in some cases, which is clearly NOT what I requested through the game or control panel settings. And I don't like decisions being taken away from me, and I guess neither do you? In case you really don't care, then let me decide what you should think about it -- even if what they did was not bad, they did it in a bad way. 8)
well, the way they did it is not bad, it's rather good. it worked for over a year until somebody noticed it, and only with great effort.

yes, of course, it affects the output, and yes, of course we should be informed. but we have lived with that quite fine till now, why can't we continue? we have fp24, non-IEEE-conforming INSTRUCTIONS (yeah, the data format is the same, but the calculation quality is not), we have adaptive aniso, funny antialiasing. we're so loaded with fakes. adaptive trilinear replacements don't really hurt there. but yes, you are fooled, and allowed to cry. you cry about something you would never have noticed (unlike brilinear at the start), but so what.

i, as a developer, don't care. and i know why. there are much more evil things going on on a gpu; this doesn't really matter. i'm on the other side, working on raytracing, global illumination, and high-quality offline (and one day online? :D) renderers. gpus are just shit. :D loaded with such optimisations. if it's not visible, it doesn't hurt me. i never expect a gpu "to work as expected". it is not meant to. it is meant to give the gamer the illusion that he sees something complex and realistic. that's all. scientific calculations on gpus are fine, but i would not do them if i needed the precision, with ANY gpu existing today. there is just no 100% definition of how each part of it really works.

that's my opinion on it. except for that russian page, there is nothing showing that it visually hurts. and no user complaints prior to the bit-comparisons. that's good enough to be a valid optimisation; it passed the "be in the range of near enough to the original" test. what else do you need? user control. okay. then again, we can't compare pixel-shading hw at all. they don't work identically, nor do they do the same workload. so what? it never hurt, as long as the output looks good. with all the _pp, it didn't. with the premature per-app optimisations, it was cheating. now, with the shader optimizer, it is a valid optimisation. and no one bothers.
 
AlphaWolf said:
I have news for you, it's all an optical illusion. Your display is only capable of displaying 2d images. You have been cheated all along.

Yes, and it's really amazing how many people stumble over this important and fundamental point. Everything about "3d" along the z axis is entirely an illusion, and all "3d" is in reality completely 2d...;) We are still limited to x and y, it's just that "3d" provides an illusory z axis.

Looking at the difference between bilinear and trilinear filtering, for instance, in both instances the separate mipmaps, and their boundaries, are always present. The difference between them is that we can see the boundaries in bilinear, but with trilinear an optical illusion is created such that the boundaries *appear* to disappear. They are still there, of course, it's just that the filtering blends them better so that we can't plainly see them, as we can with bilinear.

People are acting as if trilinear filtering for the sake of trilinear filtering is what is important, and that making the mipmap boundaries appear to disappear is entirely secondary...! Heh...;) They've got it exactly backwards, as the whole point to trilinear filtering is expressly to create the illusion that the mipmap boundaries have disappeared. That's the only value to trilinear filtering traditionally that I know of.
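The blend WaltC describes can be sketched in a few lines. This is a minimal illustration of the textbook technique, not any IHV's actual hardware path: trilinear is just a linear interpolation between the bilinear results from the two mip levels that bracket the computed level of detail, which is exactly what makes the boundary *appear* to vanish.

```python
# Minimal sketch of bilinear vs. trilinear sampling.
# Mip levels are plain 2D lists of floats; all names here are illustrative.

def bilinear_sample(mip, u, v):
    """Bilinear lookup in one mip level, with u, v in [0, 1]."""
    h, w = len(mip), len(mip[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = mip[y0][x0] * (1 - fx) + mip[y0][x1] * fx
    bot = mip[y1][x0] * (1 - fx) + mip[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def trilinear_sample(mips, u, v, lod):
    """Blend the two mip levels that bracket `lod` -- the blend weight
    is the fractional part of lod, so the transition between levels is
    continuous instead of a visible boundary."""
    lo = min(int(lod), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    return (bilinear_sample(mips[lo], u, v) * (1 - f)
            + bilinear_sample(mips[hi], u, v) * f)
```

Bilinear, by contrast, would snap to `mips[round(lod)]`, so the per-pixel jump in `lod` across a surface shows up as a visible mipmap boundary.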

The point with respect to nVidia's initial "brilinear" substitution for trilinear is that it did not always blend mipmap boundaries as effectively as nVidia's standard trilinear, and we were seeing mipmap boundaries where we wouldn't have seen them with standard trilinear. That seems to me to be the only reasonable objection to have about "brilinear," and from what I understand nVidia's improved from that initial point a great deal. So my opinion is that if the IHVs can achieve the same degree of mipmap boundary blending with their respective "brilinear" methods that they have traditionally done with standard trilinear--I say More Power To 'Em and I'm all for it.
 
davepermen said:
we have fp24, non-IEEE-conforming INSTRUCTIONS (yeah, the data format is the same, but the calculation quality is not), we have adaptive aniso, funny antialiasing. we're so loaded with fakes. adaptive trilinear replacements don't really hurt there. but yes, you are fooled, and allowed to cry.
i wonder who invented all these so-called "adaptive" techniques....
will anyone here claim that "adaptive aniso" gives better quality, for example? I think adaptive trilinear is OK quality-wise, but i do have something against LIES
 
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.
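Xmas's point can be made concrete. The sketch below follows the sample-count approximation described in the OpenGL EXT_texture_filter_anisotropic extension (N = min(ceil(Pmax/Pmin), maxAniso)); real hardware differs in detail, and the function name and derivative arguments are illustrative:

```python
import math

def af_sample_count(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Every AF implementation is 'adaptive' in this sense: the number of
    samples follows the measured degree of anisotropy of the pixel's
    texture-space footprint, given by the texcoord derivatives along
    screen x and y."""
    px = math.hypot(dudx, dvdx)   # footprint extent along screen x
    py = math.hypot(dudy, dvdy)   # footprint extent along screen y
    p_max, p_min = max(px, py), min(px, py)
    if p_min == 0:
        return max_aniso
    # nearly isotropic footprint -> 1 sample; elongated -> more, clamped
    return min(math.ceil(p_max / p_min), max_aniso)
```

A head-on wall (derivatives roughly equal) gets 1 sample; a floor receding at a steep angle gets the full clamped count. Angle-dependence of the *maximum* degree, as in ATI's implementation, is a separate, additional restriction on top of this.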
 
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.
 
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?
 
Evildeus said:
tEd said:
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?
Their naming perhaps? :?

what naming?

I have the impression that when people heard ati has adaptive AF, and later found out that their AF is angle-dependent, they thought angle-dependency = adaptive. It looks to me like the usual assumption turning into quasi-truth over time, not ati's PR.

Happens all the time though
 
chavvdarrr said:
i wonder who invented all these so-called "adaptive" techniques....
will anyone here claim that "adaptive aniso" gives better quality, for example? I think adaptive trilinear is OK quality-wise, but i do have something against LIES

Well, consider for a moment that most people, including me, do not run AF in the absence of FSAA, but rather run both at the same time. Really, the only reason to run AF sans FSAA is if your FSAA sucks but your AF is good (which was the situation for me with GF2/3/4). If both are good in combination, most people, I think, will use both.

Considering that, what is the purpose in examining AF separately, or FSAA separately, from AF and FSAA in combination? I don't see much purpose in that, actually, since I always use them in combination, and don't turn off FSAA to run AF, and don't turn off AF to run FSAA. Yet, many hardware testing sites still insist on testing AF and FSAA separately, as if it were primarily an either-or situation, which of course it isn't. If the combined FSAA and AF implementation in a product is good, then most people will use both in combination.

My point here is that, yes, the angle-dependency of angle-dependent AF is sometimes visible when testing in the absence of FSAA, contrasted to the exclusive testing of non-angled AF, but when testing the combination of FSAA & AF, the combination of the two clearly provides better IQ in R300+ than the combination provides in nV30+. The responsibility of the IHV is not only to consider the separate implementation of AF and FSAA, but the implementation when a combination of them is used. I.e., the IQ resulting from a combination of AF and FSAA is not less important than an examination of the IQ produced by FSAA and AF when invoked separately, to the exclusion of each other. This is where "adaptive" techniques come to the fore.

In fact, you could argue that all 3d is entirely "adaptive" in the strict sense of the requirement to balance IQ with frame-rate performance generally. If an IHV errs and degrades IQ to increase benchmark frame-rate performance, it's a negative. If an IHV errs to increase IQ at the expense of smooth and playable frame-rate performance, it's also a negative. The idea is to achieve the best balance possible between the two, without swinging wildly to either side of the equation. So then, "adaptive" approaches are not unusual or strange or false in some fashion--rather, they are the bedrock foundation on which successful IHVs rest. The idea is to achieve both maximal frame rate performance and maximal IQ, which means that adaptive approaches are really quite normal and traditional in the 3d gaming industry.

Regarding IQ specifically, though, it should never be examined as if frame-rate performance is irrelevant--because in 3d gaming frame-rate performance is equal in importance to IQ--I'd say the proper ratio should be 50-50 between the two. Because the two are of equal importance, in my opinion, I am far more interested in the resulting IQ than I am in the methods which an IHV might choose to produce that IQ. Many things in "3d" are by nature compromises or illusions, but that to me is not nearly as important as how convincing the resulting illusions of "real-time 3d gameplay" turn out to be. As such, I see the only rational criticism to be made of an "adaptive" approach to be when such an approach produces IQ and/or gameplay that is less convincing than a more traditional, less-adaptive approach would be.
 
tEd said:
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?

Because they have been marketing their AF solution as adaptive from the very beginning (R100?). And their solution was no more adaptive than Nvidia's, apart from the angle dependency, that is.
 
chavvdarrr said:
davepermen said:
we have fp24, non-IEEE-conforming INSTRUCTIONS (yeah, the data format is the same, but the calculation quality is not), we have adaptive aniso, funny antialiasing. we're so loaded with fakes. adaptive trilinear replacements don't really hurt there. but yes, you are fooled, and allowed to cry.
i wonder who invented all these so-called "adaptive" techniques....
will anyone here claim that "adaptive aniso" gives better quality, for example? I think adaptive trilinear is OK quality-wise, but i do have something against LIES

adaptive always means just one thing: adapting the required workload to the situation. i.e. where it doesn't matter, don't do anything; where it does, do a lot.

adaptive stuff is faster. with a clever algo it can thus be tuned up to give higher quality at the same performance, as well.. but it's simply there to remove unneeded work.

and it's something dynamic. the angle-dependent 'feature' of ati's aniso (doesn't gf6 have that now, too?) is a static feature, not adaptive.

while questionable, it is an understandable optimisation. most walls and floors are flat => they get full max aniso, and the ones that are not, won't. and as a matter of fact, those angles are much more costly to sample anyway, because they touch texels in a less cache-friendly way.

the temporal aa is sort of an adaptive solution, too.. it does 2 samples if fps are low. if fps get high, it starts using 4 samples by toggling, combining each two frames. the result, in this case, is a quality enhancement if there is enough speed. quality adapts to speed.

in the case of the trilinear, speed adapts to quality: it always measures how much loss it would have, and based on that, chooses the filtering.
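The heuristic davepermen describes could look something like this. This is purely a speculative sketch of the *idea* (measure the loss, then pick the filter), not ATI's actual algorithm; the error metric and threshold are invented for illustration:

```python
# Speculative sketch of an adaptive trilinear heuristic: estimate how
# different the two bracketing mip levels are; if the difference is
# under a threshold, plain bilinear from the nearer level is "near
# enough" and the second mip fetch can be skipped.

def choose_filter(mip_lo, mip_hi_upsampled, threshold=2.0):
    """mip_lo and mip_hi_upsampled are same-sized 2D lists of 0-255
    values (the coarser level upsampled to the finer level's size).
    Returns 'bilinear' or 'trilinear'."""
    total, n = 0.0, 0
    for row_lo, row_hi in zip(mip_lo, mip_hi_upsampled):
        for a, b in zip(row_lo, row_hi):
            total += abs(a - b)
            n += 1
    mean_err = total / n  # mean absolute difference between the levels
    return 'bilinear' if mean_err < threshold else 'trilinear'
```

With near-identical mip levels (e.g. flat-colored textures) the blend contributes nothing visible, so skipping it is "in the range of near enough to the original"; with detailed textures the full blend is kept.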

same for adaptive anisotropic filtering: it measures how many samples are required (with a wrong algo that gives more samples to walls, floors and ceilings), depending on the orientation of the surface.

what else did i forget? and that's before we even start to talk about how z-buffers, stencil buffers, and all the rest try to 'cheat' to get better performance (which results in high speed peaks, but different performance characteristics, which in turn result in different ways developers have to optimize..).

and of course, we can call nvidia's shaders sort of adaptive, as they try to _pp where they can. and you can call ati's shading 'optimisation' there static: fp24, that is, which is a constant cost-optimisation for ati compared to fp32.

there are tons of things. some are adaptive (meaning no two boards are bitwise equal), some are static (meaning either good enough, or not (brilinear was not; nowadays, it gets better)), and most important: the combination of both means never-identical images on different hw.

that leads to the final conclusion: don't expect anything, except overall quality (and thus, just such a slider), as an end user. all those terms just overcomplicate one fact: they work together, and only together, well balanced, can they give the best performance/quality ratio.

there is still the developer-side issue of having, or not having, direct control over whether a texture is trilinear or not. but this, imho, is an api-design-fault issue. you could either specify a specific filtering, or simply a quality value (characters get high-quality filtering, particles low quality, etc..).
and this matters to me, too.

i'm not 100% okay with what ati did. but i still can't call it a cheat. for me, only app-dependent optimisations are a cheat. that means: if a completely identically described image sent to the drivers does not result, drawn on the gpu, in the same image at the same speed, just because one is my app and the other a huge app from some big gamedevs (quake runs faster than my quack, while i draw exactly the same thing the same way).
this is a cheat

and of course, overall image quality degradation below a certain threshold (aka "i can't see which is better") is cheating the user.
not delivering the promised best performance is, as well. the nv30 cheated a lot of people who paid the money, in that sense. the gfFX5200 never cost much, but cheats a lot of people who expected it to be a fine-performing gpu (something along the lines of gf4, or so)..



sooo.. that was my rant:D i guess? yeah. have fun reading, quoting, flaming :D
 
davepermen said:
for me, only app-dependent optimisations are a cheat. that means: if a completely identically described image sent to the drivers does not result, drawn on the gpu, in the same image at the same speed, just because one is my app and the other a huge app from some big gamedevs (quake runs faster than my quack, while i draw exactly the same thing the same way).
this is a cheat

Well, i disagree of course :) It's annoying for benchmarking reasons, but not a cheat unless there is an IQ loss.

and of course, overall image quality degradation below a certain threshold (aka "i can't see which is better") is cheating the user.
not delivering the promised best performance is, as well. the nv30 cheated a lot of people who paid the money, in that sense. the gfFX5200 never cost much, but cheats a lot of people who expected it to be a fine-performing gpu (something along the lines of gf4, or so)..

The FX5200 is perhaps "cheating" the buyers who didn't read any reviews of it and thought that they got a decent DX9 PS2.0 card. But it's a pretty good card otherwise, though you get what you pay for, of course.

And the problem with "image quality degradation below a certain threshold" is: who will be the judge of that threshold?

That's the benefit of being able to force trilinear, for example :)
 
Bjorn said:
tEd said:
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?

Because they have been marketing their AF solution as adaptive from the very beginning (R100?). And their solution was no more adaptive than Nvidia's, apart from the angle dependency, that is.

any proof to back that up?
 
davepermen said:
while questionable, it is an understandable optimisation. most walls and floors are flat => they get full max aniso, and the ones that are not, won't. and as a matter of fact, those angles are much more costly to sample anyway, because they touch texels in a less cache-friendly way.
I have to disagree here, on both points. The optimization is understandable from a mathematical POV, but if the cases where it makes a difference were rare, there wouldn't be much gain, and if they occurred often, there would be too much IQ drop.
And why would it be less cache-friendly? The outer texels can still be reused for neighboring pixels, just as with other angles. And the line of anisotropy depends on the texture alignment, too.
 
of course. i'd say if you create a difference image against the highest-quality reference image (all settings maxed in a refrast which is 100% bugfree), and can't see anything non-black without contrast enhancement, then it definitely is good enough.

now, how much you see should get defined by some values. for measurement purposes, of course.
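The acceptance test described above is easy to sketch. This is a minimal illustration of the idea, with an invented visibility threshold; the function name and image format are assumptions:

```python
# Sketch of the proposed test: diff the frame from the hardware under
# test against a max-quality reference render; the optimisation is
# "good enough" if the worst per-pixel difference stays below what
# would be visible without contrast enhancement.

def passes_reference_diff(test_img, ref_img, visible_threshold=4):
    """Images are same-sized 2D lists of 0-255 grey values.
    Returns True if no pixel differs by more than the threshold."""
    worst = 0
    for row_t, row_r in zip(test_img, ref_img):
        for t, r in zip(row_t, row_r):
            worst = max(worst, abs(t - r))
    return worst <= visible_threshold
```

The "some values" for measurement purposes would be exactly this threshold (and perhaps a mean-error bound alongside the worst-case one), so different reviewers could at least agree on what was measured.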

why do you disagree with the app-dependent stuff? if an app has a certain way to get better performance, it's either a patch that the app can apply, or, if not, a design bug in the api, as it cannot expose certain optimisation features.

app-dependent optimisations and settings behind the user's back are not allowed. user-defined settings per app, of course, are :D

there are two ways you can cheat: comparing to yourself and failing to be equal, and comparing to some reference and failing to be near enough.

one is app-dependent optimisations, the other is quality degradation.

and the rest is marketing cheating. (the fx5200, in that case)
 
Xmas said:
I have to disagree here, on both points. The optimization is understandable from a mathematical POV, but if the cases where it makes a difference were rare, there wouldn't be much gain, and if they occurred often, there would be too much IQ drop.
in natural scenes, you don't notice the blurriness quite as fast, and this is where rounded, non-x|y|z-aligned surfaces can occur often => the image quality loss doesn't hurt the eye directly.
in indoor scenes, the iq gets quite high on the main parts, floors and walls, which fill most of the screen. non-x|y|z-aligned objects are either moving (characters, objects that got hit, boxes and such bouncing around), or small (rounded edges at floors, ceilings, and other stuff), or both.

the trick is, aniso gets the most visual attention on flat, big surfaces. and those tend to be x|y|z aligned. this is not only a mathematical point of view, but also a psychological one.
And why would it be less cache-friendly? The outer texels can still be reused for neighboring pixels, just as with other angles. And the line of anisotropy depends on the texture alignment, too.
because it goes in the direction of the scanline (or perpendicular to it) => it can reuse those samples 100% in neighbouring pixels. (my brain hurts from thinking about all of it at the same time, so i hope i don't mess up namings, logic, and all the rest :D)
 
Bjorn said:
tEd said:
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?

Because they have been marketing their AF solution as adaptive from the very beginning (R100?). And their solution was no more adaptive than Nvidia's, apart from the angle dependency, that is.

want to see that proven as well; i remember otherwise (yes, angle dependency existed, but no, as far as i know, nvidia did a constant sample count, while ati did none where not needed.. but you can of course prove me wrong, as you can back this statement up.. i guess)
 
davepermen said:
want to see that proven as well; i remember otherwise (yes, angle dependency existed, but no, as far as i know, nvidia did a constant sample count, while ati did none where not needed.. but you can of course prove me wrong, as you can back this statement up.. i guess)

http://www4.tomshardware.com/graphic/20021024/ati-08.html

Interestingly enough, ATI has recently begun calling its filtering technique "adaptive," or "adaptive anisotropic filtering."

That was pretty recent though.
 
Bjorn said:
tEd said:
Bjorn said:
Xmas said:
Some people use "adaptive AF" as synonym for "angle-dependent AF" - which is wrong. All AF implementations are "adaptive" in that they vary the number of samples used depending on the degree of anisotropy.

I think we have Ati's PR department to blame for this.

why?

Because they have been marketing their AF solution as adaptive from the very beginning (R100?). And their solution was no more adaptive than Nvidia's, apart from the angle dependency, that is.

Right, so what's your point? AFAIK it IS adaptive, regardless of what nVidia does. People simply heard the term, found out there was angle-dependency, and confused the two. This is no IHV's fault.
 
Quitch said:
Right, so what's your point? AFAIK it IS adaptive, regardless of what nVidia does. People simply heard the term, found out there was angle-dependency, and confused the two. This is no IHV's fault.

As has already been mentioned, AF is by its very nature adaptive, so if you're saying "adaptive AF", it implies something more than what's implied by AF itself. And Ati had a much lower perf hit when doing AF, and thus called its AF adaptive to make it out as something NVidia didn't have. They did have something Nvidia didn't have, but that part was hardly adaptive.
 