ATi is ch**t**g in Filtering

what i stated. there is only a definition of what the result should look like. how, exactly, each vendor/driver/hw combination does it doesn't matter, and is always slightly different. they all try some micro-optimisations here and there, just to get the last percent of speed. what matters is: if those micro-optimisations fail, does it result in a bad image, or fall back to the ordinary way? that is what matters. does it hurt the output? or does it only help to speed things up where possible?

and, as i stated, this isn't a new thing. this has ALWAYS been the case in all sorts of places, in every vendor/driver/hw combination in existence. and they are allowed to do so, even encouraged.
 
This is roughly what trilinear involves.

1. calculate the texture co-ordinate for the pixel.
2. calculate the level of detail (LOD) required (this depends on the texture, the depth and lots of other things).
3. compute ceiling(lod) and floor(lod), and do a bilinear sample from the mipmap at each of ceiling(lod) and floor(lod).
4. final colour = |actual lod - floor(lod)| × the ceiling sample + |actual lod - ceiling(lod)| × the floor sample

What ATI and Nvidia are doing is this: if the LOD is close to the floor or the ceiling they don't bother blending, they simply do one lookup at whichever level is closest to the actual LOD. When it's close to the middle they do the blending as well. So when it's not close to the middle of an LOD transition they only do one bilinear texture lookup, saving them bandwidth. A rough sketch of the difference is below.
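Here's a minimal Python sketch of that difference. The band width of 0.2 is invented for illustration; the vendors' actual thresholds aren't public.

```python
import math

def trilinear_weights(lod):
    """Full trilinear: always two bilinear lookups, blended by the
    fractional part of the LOD."""
    lo, hi = math.floor(lod), math.ceil(lod)
    frac = lod - lo  # 0.0 at floor(lod), 1.0 at ceiling(lod)
    # (mip level, weight) pairs: the floor sample is weighted by
    # |lod - ceiling(lod)|, the ceiling sample by |lod - floor(lod)|.
    return [(lo, 1.0 - frac), (hi, frac)]

def brilinear_weights(lod, band=0.2):
    """'Brilinear': near either end of the transition, snap to a single
    bilinear lookup and skip the blend; only blend in the middle."""
    lo, hi = math.floor(lod), math.ceil(lod)
    frac = lod - lo
    if frac < band:          # close to the floor level
        return [(lo, 1.0)]
    if frac > 1.0 - band:    # close to the ceiling level
        return [(hi, 1.0)]
    return trilinear_weights(lod)

# trilinear_weights(3.1) -> roughly [(3, 0.9), (4, 0.1)]  (two lookups, always)
# brilinear_weights(3.1) -> [(3, 1.0)]                    (one lookup saved)
```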

It appears as if ATI are being more conservative than NV by default in skipping texture lookups.
 
bloodbob said:
It appears as if ATI are being more conservative than NV by default in skipping texture lookups.

Nice post. Makes all this techno-babble seem really simple for a change :LOL: However, if that is all ATI is doing, what is the basis of their 'sophisticated algorithm' claim? And how does this account for their determination of the difference between mipmaps?
 
there are more ways to do filtering in between, like doing only a point sample on one level and a bilinear on the other, or vice versa, blending this and that, and so on.

there are some (but not many) documents about other ways to improve filtering, to get lower-cost near-trilinear filtering that isn't as bad as bilinear. search for fliptri and flipquad or something (a cheap antialiasing algo). they have a pdf in which they explain some different compression algos and filtering algos, and how they combine and work together.

there is more, but that's just the one i remember off the top of my head that is on the net.

none of these is real trilinear, of course. but you can restate that statement:
none of them is the correct filtering. including trilinear. and that's why the big "WE WANT TRILINEAR" doesn't bother me. trilinear is NOT the end of the line. far from it. it's just one possibility, with certain defined performance and quality trade-offs.

well, it's the weekend now.. finally :D
 
trinibwoy said:
bloodbob said:
It appears as if ATI are being more conservative than NV by default in skipping texture lookups.

Nice post. Makes all this techno-babble seem really simple for a change :LOL: However, if that is all ATI is doing, what is the basis of their 'sophisticated algorithm' claim? And how does this account for their determination of the difference between mipmaps?

That is the bit that does one of three things:

1) If the mipmaps are not automatically generated with a box filter then trilinear is performed. Otherwise brilinear is performed.

2) If the difference between mipmaps is greater than a certain threshold then trilinear is performed. Otherwise brilinear is performed.

3) Depending on the difference between the mipmaps the region of bilinear is expanded or contracted according to some algorithm to give the highest blend of image quality and performance.
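A rough Python sketch of what options 2 and 3 might look like, assuming some per-transition "difference" score between adjacent mipmap levels has already been computed (see the box-filter comparison sketch later in the thread). This is pure speculation; the threshold and scale factors are invented.

```python
def option_2(difference, threshold=0.01):
    # Hard switch: mipmaps that deviate noticeably from an auto-generated
    # chain get full trilinear, box-filter-like ones get brilinear.
    return "trilinear" if difference > threshold else "brilinear"

def option_3(difference, scale=10.0, full_band=0.5):
    # Adaptive: widen the blended region of each LOD transition as the
    # mipmaps diverge, up to a full trilinear blend at full_band.
    return min(full_band, difference * scale)
```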
 
Jabbah said:
2) If the difference between mipmaps is greater than a certain threshold then trilinear is performed. Otherwise brilinear is performed.

3) Depending on the difference between the mipmaps the region of bilinear is expanded or contracted according to some algorithm to give the highest blend of image quality and performance.

Thanks. I'm not too clear on the nature of the 'difference' between mip-maps. Since there is a threshold this difference must be quantifiable somehow. Is this just resolution or something else too?
 
Jabbah said:
2) If the difference between mipmaps is greater than a certain threshold then trilinear is performed. Otherwise brilinear is performed.

3) Depending on the difference between the mipmaps the region of bilinear is expanded or contracted according to some algorithm to give the highest blend of image quality and performance.
I really doubt this actually happens. ATI haven't provided us with any proof they are doing this. Remember it's *proprietary* (although if they really did file a patent we will find out after a while).

If ATI are doing this it could actually cause visual artifacts. Say you have two textures on two triangles side by side, and the difference between mipmaps is greater in one than in the other. In that case the blending occurs in different places, and you may end up with two bands of different lengths sitting next to each other.

There are also some other problems. Do you decide when blending should start on a per-texel basis? Do you decide on the worst texel case and apply it to the whole mipmap? Do you decide on the average case for the whole mipmap transition?

If you apply a different threshold for each texel, that's a whole lot more data you have to store. Though I guess if you're deciding when to start blending uniformly across an entire mipmap, the storage cost won't be too bad.

Coloured mipmaps are not automatically generated, so they also don't get the algorithm applied.
 
trinibwoy said:
Jabbah said:
2) If the difference between mipmaps is greater than a certain threshold then trilinear is performed. Otherwise brilinear is performed.

3) Depending on the difference between the mipmaps the region of bilinear is expanded or contracted according to some algorithm to give the highest blend of image quality and performance.

Thanks. I'm not too clear on the nature of the 'difference' between mip-maps. Since there is a threshold this difference must be quantifiable somehow. Is this just resolution or something else too?

Basically you have a base texture whose resolution is a power of 2, i.e. 2^x by 2^y. Mipmaps are then generated from it, where each subsequent map is half the size in each dimension, i.e. 2^(x-1) by 2^(y-1), so resolution alone can't be used to determine a difference. E.g. base texture 1024x1024, mipmap 1 512x512, mipmap 2 256x256, etc. Every four pixels in one map are averaged to one pixel in the next.

So the only thing they could base a difference on is the difference in colour between the maps. They would probably do that by applying a box filter (4 pixels to 1) to one map and comparing the result with the next map. If the mipmaps were auto-generated there would be no difference, but if they were generated using different filters there would be slight differences, and those could be used to determine what kind of, and how much, filtering to do.
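A small Python sketch of that comparison, treating each mip level as a plain 2D list of grayscale values (a real driver would work on RGBA texture data; the function names here are made up):

```python
def box_reduce(level):
    """Average each 2x2 block of texels into one texel of the next level."""
    h, w = len(level) // 2, len(level[0]) // 2
    return [[(level[2*y][2*x] + level[2*y][2*x+1] +
              level[2*y+1][2*x] + level[2*y+1][2*x+1]) / 4.0
             for x in range(w)]
            for y in range(h)]

def mip_difference(upper, lower):
    """Mean absolute difference between the stored smaller level and a box-
    filtered reduction of the level above it. Zero means the smaller level
    was (in all likelihood) auto-generated with a box filter."""
    reduced = box_reduce(upper)
    total = sum(abs(reduced[y][x] - lower[y][x])
                for y in range(len(lower))
                for x in range(len(lower[0])))
    return total / (len(lower) * len(lower[0]))
```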
 
I'm lovin' this big time! :LOL:

Now we should be able to see reviews of videocards with full "optimizations" and "subtle IQ loss" from here on out. 8)

Now it apparently doesn't matter any longer as there are no "standards" in either Bi-Linear or Tri-Linear filtering methods. This is a huge plus now as it allows freedom to be more creative.

However, I am really confused now. Didn't everyone want some form of "standards" in 3D programming and technology? Does this mean the big push to have Nvidia oppose DirectX 9 requirements is alright now? Does it mean we will see a Cg compiler from ATi now?

Holy cow Batman, things just got eons easier for hardware reviews now. Will there be any need for programs like 3DMark any more?

A big thank you goes out to everyone who couldn't see a difference in this poll, and to those who have now decided that this whole non-optimized way of benchmarking is a waste of time and resources.

I can totally see those 25 page previews/reviews falling back down to their appropriate 5 to 10 pages of shock and awe. :D

Kudos for making it happen everyone, I really can't wait till the mid-range battles appear. Hopefully I won't have to wait till the NV45 or NV50 vs. R423 or R500. 8)
 
Chalnoth said:
MrGaribaldi said:
Would this be like the problem you say we're likely to see wrt to textures and ATI's 24 bit precision?
No, I think it'll be closer to Matrox' FAA.

Ok, thanks for the answer :)


Now, why would it be impossible for ATI to update the algorithm so that the corner cases where it currently doesn't work (where it's not giving trilinear and its trylinear doesn't work) will either get full trilinear or have the trylinear problem fixed?

Since it's not a hardware implementation, but software (based on what ATI has said), why couldn't this, given time, become a solution without any corner cases and work to our satisfaction?

(I'm just trying to figure out why you think this is impossible)
 
bloodbob said:
Jabbah said:
2) If the difference between mipmaps is greater than a certain threshold then trilinear is performed. Otherwise brilinear is performed.

3) Depending on the difference between the mipmaps the region of bilinear is expanded or contracted according to some algorithm to give the highest blend of image quality and performance.
I really doubt this actually happens. ATI haven't provided us with any proof they are doing this. Remember it's *proprietary* (although if they really did file a patent we will find out after a while).

Yes, I also think option 1 is more likely.

If ATI are doing this it could actually cause visual artifacts. Say you have two textures on two triangles side by side, and the difference between mipmaps is greater in one than in the other. In that case the blending occurs in different places, and you may end up with two bands of different lengths sitting next to each other.

I don't think that would be a problem or cause noticeable artifacts, as even with full trilinear you will get a difference in mip levels, so one triangle would appear blurrier earlier than the other. To see such an artifact you would also need to have obvious mip boundaries.

There are also some other problems. Do you decide when blending should start on a per-texel basis? Do you decide on the worst texel case and apply it to the whole mipmap? Do you decide on the average case for the whole mipmap transition?

Maybe this is what makes the algo so wonderful :) But I would use some statistical function relating to the worst case, applied to the whole mipmap.
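Purely as an illustration of that idea, a sketch of a worst-case-weighted score for a whole mip transition (the names and the weighting are invented):

```python
def transition_score(deviations, worst_weight=0.7):
    """Collapse the per-texel deviations for one mip transition into a single
    conservative score, so one bad texel can push the whole mipmap towards
    full trilinear."""
    worst = max(deviations)
    average = sum(deviations) / len(deviations)
    return worst_weight * worst + (1.0 - worst_weight) * average
```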
 
bloodbob said:
Great, so then Nvidia wasn't cheating with Futuremark by replacing shaders, putting in clipping planes etc.?
I dunno. Is ATi taking knowledge of static results to manually change something that breaks the moment you change the case one iota from the norm? Or perhaps are they doing something similar to "adaptive AF" (which hasn't been "cheat-vs-legal," but "IQ-vs-IQ"), but in this case smarter, as it's much more adjustable? (And what would make adaptive AF better? The ability to read the situation and adjust on the fly to what would create better IQ results, whether that's shrinking a few more angles or increasing them. For the most part, just sticking an "optimization template" over a scene is an easy way to score numbers, but certainly never going to be the best method possible.)

It might be more relevant to ask "is nVidia cheating by manually re-programming developers' shaders to increase performance results with NV3x hardware without (or without seriously) impacting image quality?" Even from fanATIcs that answer has basically been "no"--just an admonition as to how performance would start out poorer and slowly roll up for NV3x, as developers weren't accounting for its peculiarities, and it made both their programming job and nVidia's driver programming (to help straighten out kinks) all the more difficult.
 
cthellis42 said:
It might be more relevant to ask "is nVidia cheating by manually re-programming developers' shaders to increase performance results with NV3x hardware without (or without seriously) impacting image quality?" Even from fanATIcs that answer has basically been "no"

I would say that the answer has been yes. But it seems that it's about to change now.
 
Just thinking about all this filtering stuff and what makes one technique higher quality than another.

There is one easy example where full trilinear would give better image quality than brilinear. At the moment the argument is that trilinear is not the best filtering method as it blurs more, due to blending over the full LOD transitions. Brilinear does not do this, so there is less blur and crisper textures in between. That all sounds good, and brilinear could be better.

Now imagine traveling down a long corridor with detailed textures on the walls, floor and ceiling. As you move, the LOD transitions will stay in the same place on the screen. With trilinear you will not notice much of a problem, as the textures are uniformly blurred from one LOD transition to the next. However with brilinear you will get more obvious banding, with the textures flowing in and out of blurred / sharper regions as you move down the corridor.

Trilinear may give you a slightly blurrier overall image, but in motion the quality will appear higher, as the eyes are more susceptible to varying changes in image quality.

Such a case may be rare in today's detailed games, but I believe it is one that would show a noticeable difference.
 
Bjorn said:
cthellis42 said:
It might be more relevant to ask "is nVidia cheating by manually re-programming developers' shaders to increase performance results with NV3x hardware without (or without seriously) impacting image quality?" Even from fanATIcs that answer has basically been "no"
I would say that the answer has been yes. But it seems that it's about to change now.

I think I mentioned in my last 3DMark article that replacement of shaders that provides consistent output is a valid optimisation for a single application. However, this isn't a particularly good general solution because it's not consistent across all applications, or even within a single application (i.e. one commonly used shader may be optimised, giving good performance, but less commonly used ones in the application might pull the performance down). App-specific optimisations like these are fairly flimsy because they don't cater for the majority of applications and can even break when patches are released. This is why it's important to provide generic optimisers.
 
bloodbob said:
I really doubt this actually happens. ATI haven't provided us with any proof they are doing this. Remember it's *proprietary* (although if they really did file a patent we will find out after a while).
Well, they did make it very clear that the driver is analyzing textures/mip-maps to decide which algorithm to use - using a more optimized algorithm *only* when the quality will be the same or better. If this is the case (and I've seen no evidence to the contrary), it's a legitimate optimization.

Your problems are well noted, but I think they could be the reason why this algorithm has taken so long to be realized. In my mind, the definition of mip-maps as (usually box-) filtered versions of the original texture leaves open a huge realm of mathematical (and algorithmic) simplifications that produce comparable results.

Now imagine traveling down a long corridor with detailed textures on the walls, floor and ceiling. As you move, the LOD transitions will stay in the same place on the screen. With trilinear you will not notice much of a problem, as the textures are uniformly blurred from one LOD transition to the next. However with brilinear you will get more obvious banding, with the textures flowing in and out of blurred / sharper regions as you move down the corridor.
Visible mip-map boundaries are very much the reason for trilinear and anisotropic filtering. As far as I understand though, what ATI is doing shouldn't be described as "brilinear" since they are only optimizing where it will not negatively affect the visual quality of the image.
 
Bjorn said:
cthellis42 said:
It might be more relevant to ask "is nVidia cheating by manually re-programming developers' shaders to increase performance results with NV3x hardware without (or without seriously) impacting image quality?" Even from fanATIcs that answer has basically been "no"

I would say that the answer has been yes. But it seems that it's about to change now.

it was a manual shader replacement, so it was cheating. now it's an automatic shader replacement, so that's not cheating. this is an automatic texture-resampling technique, so it's not cheating.

the only case where it is cheating again is if image quality gets worse. getting different is no issue, but getting worse is. brilinear gets worse; trylinear only gets different.
 
davepermen said:
it was a manual shader replacement, so it was cheating. now it's an automatic shader replacement, so that's not cheating. this is an automatic texture-resampling technique, so it's not cheating.

Well, i don't agree that "manual shader replacement" = cheating. Unless we're talking about synthetic benchmarks.

And automatic doesn't necessarily mean that it's not cheating imo.
 
manual means it does not apply generally: in certain special cases (be it quake3.exe, farcry.exe, or a specially written shader) you get much better performance. if you write another program doing the same thing, or another shader producing the same result but with a slight syntax difference, you get worse performance.

if it is an automatic shader replacer, the same optimisation applies to same-style input (not just identical input). this of course only holds if the replacer works well :D

if something only works in one special situation, then it's only put in to cheat. if it's not there to cheat, then it's there to bugfix a certain app, and THAT is something the developers of the app should have to solve, a.k.a. a patch.
 
mikechai said:
But then RV3x0 users have not experienced full trilinear yet on their cards. As you know the option is not available to them. Therefore, they can't compare.
My understanding is that initially RV3x0 was trilinear when set to Quality, and "trylinear" at lower settings, and that later drivers simply use "trylinear" all the time. If RV3x0 and R4x0 can do trilinear for corner cases, then surely the hardware capability is there; ATi just hasn't made it available via software (which is essentially the same thing to users, so your point stands).
 