Trilinear single mip level Optimization not always cheating

Scott C

Newcomer
I didn't have time to wade through several hundred posts (I despise this forum format for any sort of catch-up on discussions), but my point is simple.

If ATI is "disabling" the bilinear anisotropic or trilinear modes when color mip maps are used, this may not be cheating.

It is PERFECTLY ACCEPTABLE to sample from only one mip-map (the larger one) rather than linearly interpolate between two if trilinear is on, provided enough samples are taken on the larger mipmap, provided the smaller mipmaps are linear downsamples of the larger one, and provided the algorithm is correct.


---------
Scott's Trilinear optimization correctness theorem:

If a lower level mipmap is simply a straight linear down-filter of the higher level map, then all the data required to render a scene can be found in the high detail map, and using multiple mipmaps IS NOT NECESSARY provided the algorithm samples enough texels in the high detail mipmap to retrieve and integrate the information lost by neglecting the lower level mipmap. Additionally, to be correct, such an optimizing filtering algorithm must mimic filtering methods that interpolate between mipmaps and have no discontinuity at mip level boundaries.
---------
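To make the theorem concrete, here is a minimal numpy sketch (my own illustration, assuming the lower mip really is a plain 2x2 box downsample of the level above; it is not anyone's actual driver code). It computes a conventional trilinear sample, and then the same sample touching only the finest level, rebuilding the needed coarse texels on the fly:

```python
import numpy as np

def bilinear(tex, u, v):
    """Bilinear sample of a 2-D texture at normalized coords (u, v), edge-clamped."""
    h, w = tex.shape
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    def t(xi, yi):
        return tex[min(max(yi, 0), h - 1), min(max(xi, 0), w - 1)]
    return ((1 - fx) * (1 - fy) * t(x0, y0) + fx * (1 - fy) * t(x0 + 1, y0)
            + (1 - fx) * fy * t(x0, y0 + 1) + fx * fy * t(x0 + 1, y0 + 1))

def box_downsample(tex):
    """2x2 box filter -- the 'straight linear down-filter' the theorem assumes."""
    return 0.25 * (tex[0::2, 0::2] + tex[1::2, 0::2] + tex[0::2, 1::2] + tex[1::2, 1::2])

rng = np.random.default_rng(0)
level0 = rng.random((16, 16))        # finest mip level
level1 = box_downsample(level0)      # next level, a pure box downsample

def trilinear(u, v, lod_frac):
    """Conventional trilinear: blend bilinear samples from two adjacent levels."""
    return (1 - lod_frac) * bilinear(level0, u, v) + lod_frac * bilinear(level1, u, v)

def trilinear_level0_only(u, v, lod_frac):
    """Same result touching only level0: each level1 texel the coarse bilinear
    sample needs is rebuilt on the fly as the average of its 2x2 source block."""
    h1, w1 = level1.shape
    x, y = u * w1 - 0.5, v * h1 - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    def coarse_texel(xi, yi):
        xi, yi = min(max(xi, 0), w1 - 1), min(max(yi, 0), h1 - 1)
        return level0[2 * yi:2 * yi + 2, 2 * xi:2 * xi + 2].mean()   # == level1[yi, xi]
    coarse = ((1 - fx) * (1 - fy) * coarse_texel(x0, y0) + fx * (1 - fy) * coarse_texel(x0 + 1, y0)
              + (1 - fx) * fy * coarse_texel(x0, y0 + 1) + fx * fy * coarse_texel(x0 + 1, y0 + 1))
    return (1 - lod_frac) * bilinear(level0, u, v) + lod_frac * coarse

print(trilinear(0.37, 0.62, 0.4))
print(trilinear_level0_only(0.37, 0.62, 0.4))   # identical value
```

Both functions return the same value, which is the whole point: when the coarser level is a straight box downsample, it carries no information of its own.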

If their drivers simply detect when lower level mipmaps are close enough to simple downfiltered maps and enable this optimization, it is not cheating at all. Your visual quality will not be impacted by using only one mip level at a time since all the information in the lower level map is contained in the larger one.

That said, I'm not sure whether ATI's specific optimization passes my correctness test. But I've always thought that blending between two mipmaps was a pointless exercise if you already HAVE the data from one mipmap contained in another and the texels are cached and ready to be read.

This is in no way some sort of fanboyism, I've been an advocate for this since the S3 Savage3D did it.

The ONLY reason trilinear existed in the first place was to save bandwidth and processing power, by sampling and filtering only 8 texels (4 from each level) instead of 16 from the top level (assuming isotropic filtering). If processing power is free or nearly so (hardware interpolation) and caching means that bandwidth is also almost free for cached data, then such optimizations can make sense.
 
Malfunction said:
It's not cheating as long as it is not being called, "Full Tri-Linear Filtering." 8)

Except from what I understand, Scott's example was indeed FULL trilinear, just a different way of implementing it. Though sadly it does not seem that this is the technique that ATI is using, which doesn't mean that their method cannot be just as good. And as some people have pointed out, the purpose of trilinear is to merely make the mipmap transitions not visible. So if ATI's method does this, is it not also "full" trilinear?
 
Killer-Kris said:
Malfunction said:
It's not cheating as long as it is not being called, "Full Tri-Linear Filtering." 8)

Except from what I understand, Scott's example was indeed FULL trilinear, just a different way of implementing it. Though sadly it does not seem that this is the technique that ATI is using, which doesn't mean that their method cannot be just as good. And as some people have pointed out, the purpose of trilinear is to merely make the mipmap transitions not visible. So if ATI's method does this, is it not also "full" trilinear?

Well, this can go in so many directions... let's try a few, shall we? :D

*Going to be using the NV17 example here since Dave has already supplied the shots, and so forth.

1) Going by the shots of the NV17's "Tri-linear Filtering" method, do we assume that anything less than that example is not to be considered "Tri-Linear Filtering?"

2) Are we suggesting that ATi has now "redefined" what is considered "Tri-Linear Filtering" as not many can actually see the difference from the "Legacy" method (NV17 in this example)?

3) Is there even a need for the terms (Bi-Linear Filtering) or (Tri-Linear Filtering) any longer? Should they just be called, "Filtering Methods," or something to that effect?

4) Who draws the line between "subtle" IQ degradation and illegal "Optimizations?"

5) As a result of this, what is the definition of an *illegal "Optimization?" Some see the difference, some don't. If some can't tell the difference with mixed-mode FP16/FP32, are we now suggesting this is a valid optimization?

Inquiring minds would like to know.... 8)
 
a simple example:

if you know the higher (coarser) levels are simply downsampled from the lower (finer) levels, then each texel of the higher level is just the equally weighted (bilinear) combination of the corresponding 4 texels of the lower level.

if you now know, by measurement or by settings, that this is true, then you can sample just the 4 texels at the lower level and bilinear filter them twice: once with the texcoords, and once with (0.5, 0.5) to weight them equally. and _then_ you can linearly blend between those two results to do the full trilinear filtering.

now this is a linear combination of only 4 values. that means, if you adjust the texcoord with a weighted shift towards the center of the 4-sample quad, you get EXACTLY the same result as with the two-pass blend above.

except for one fact: the higher level contribution is just a point sample. nonetheless, it is near-to-trilinear with only 4 samples instead of 8, and about enough in a lot of cases, depending on the input data.


this _could_ be one way they do it.
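a little numpy sketch of that 4-tap idea (just my illustration of it, assuming box-filtered mips; the function names are made up):

```python
import numpy as np

def quad(tex, u, v):
    """Fetch the 2x2 texel quad and the fractional weights that bilinear would use."""
    h, w = tex.shape
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    xs = np.clip([x0, x0 + 1], 0, w - 1)
    ys = np.clip([y0, y0 + 1], 0, h - 1)
    return tex[np.ix_(ys, xs)], fx, fy           # 2x2 block: rows are y, columns are x

def weighted(q, fx, fy):
    """Bilinear combination of a 2x2 quad with weights (fx, fy)."""
    return ((1 - fx) * (1 - fy) * q[0, 0] + fx * (1 - fy) * q[0, 1]
            + (1 - fx) * fy * q[1, 0] + fx * fy * q[1, 1])

def near_trilinear_4tap(level0, u, v, lod_frac):
    """One 2x2 fetch from the fine level, filtered twice. The (0.5, 0.5) weighting
    stands in for a point sample of the coarser level, which is exact only when the
    quad happens to line up with a coarse texel's 2x2 source block -- hence
    'near-to-trilinear', not exact trilinear."""
    q, fx, fy = quad(level0, u, v)
    fine = weighted(q, fx, fy)                   # the normal bilinear result
    coarse_approx = weighted(q, 0.5, 0.5)        # equal 0.25 weights = quad average
    return (1 - lod_frac) * fine + lod_frac * coarse_approx

rng = np.random.default_rng(1)
level0 = rng.random((16, 16))
print(near_trilinear_4tap(level0, 0.3, 0.7, 0.5))
```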
 
Killer-Kris said:
And as some people have pointed out, the purpose of trilinear is to merely make the mipmap transitions not visible.
ATI may say so, but that doesn't suffice to make it true.
The whole purpose of texture filtering is to maximize material frequency within the constraints given by the Nyquist theorem. AFAIK there's exactly nothing wrong with "legacy" (?!) trilinear filtering in this respect.
 
zeckensack said:
The whole purpose of texture filtering is to maximize material frequency within the constraints given by the Nyquist theorem. AFAIK there's exactly nothing wrong with "legacy" (?!) trilinear filtering in this respect.

Solving the Nyquist problem is very, very hard (I was going to say impossible). That's because the frequency domain for synthetic textures is unbounded.

Consider a checkerboard pattern. A white-to-black transition between adjacent pixels is a step discontinuity. In the frequency domain, it has infinite harmonics, therefore no amount of samples will be sufficient.
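You can check that numerically: a 1-D slice through the checkerboard is a square wave, and its spectrum refuses to stay inside any finite band (a quick numpy illustration of the claim, nothing more):

```python
import numpy as np

# A 1-D slice through a black/white checkerboard is a square wave.
n = 1024
x = np.arange(n)
square = np.where((x // 32) % 2 == 0, 1.0, 0.0)        # period of 64 texels

mags = np.abs(np.fft.rfft(square - square.mean())) / n
nonzero = np.flatnonzero(mags > 1e-9)
print("non-zero frequency bins:", nonzero)             # odd multiples of the fundamental
print("magnitudes:", np.round(mags[nonzero], 4))       # decaying only slowly (~1/k)
# The harmonics run right up to the Nyquist limit and fall off only as ~1/k, so
# shrinking the image (e.g. building a mip level) without lowpass prefiltering
# must alias -- which is the point about step edges above.
```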
 
davepermen said:
a simple example:

if you know the higher (coarser) levels are simply downsampled from the lower (finer) levels, then each texel of the higher level is just the equally weighted (bilinear) combination of the corresponding 4 texels of the lower level.

if you now know, by measurement or by settings, that this is true, then you can sample just the 4 texels at the lower level and bilinear filter them twice: once with the texcoords, and once with (0.5, 0.5) to weight them equally. and _then_ you can linearly blend between those two results to do the full trilinear filtering.

now this is a linear combination of only 4 values. that means, if you adjust the texcoord with a weighted shift towards the center of the 4-sample quad, you get EXACTLY the same result as with the two-pass blend above.

except for one fact: the higher level contribution is just a point sample. nonetheless, it is near-to-trilinear with only 4 samples instead of 8, and about enough in a lot of cases, depending on the input data.


this _could_ be one way they do it.

And considering they only use it on driver generated mip-maps. Those mip-maps, along with the LOD bias, could be tweaked at loading time to minimize most drawbacks. Plus, using bilinear on the second (smaller) mip-map has pros and cons; the only reason it was used by default is that a bilinear sample is the only thing "free" on current hardware.

I think ATI should be praised for including extra hardware on chip for free trilinear. It is the first time in five years (?) that any IHV has included hardware for trilinear.

NV3x doesn't count because it included specialized hardware for not doing trilinear. (That was for Chalnoth.)
 
zeckensack said:
The whole purpose of texture filtering is to maximize material frequency within the constraints given by the Nyquist theorem. AFAIK there's exactly nothing wrong with "legacy" (?!) trilinear filtering in this respect.
I'm not sure but it seems to me that bilinear filtering either undersamples or oversamples information. And trilinear tries to average error.
 
UPO said:
I'm not sure but it seems to me that bilinear filtering either undersamples or oversamples information. And trilinear tries to average error.

Liked that :D
 
coredump said:
Consider a checkerboard pattern. A white-to-black transition between adjacent pixels is a step discontinuity. In the frequency domain, it has infinite harmonics, therefore no amount of samples will be sufficient.
that's why it's useful to prefilter (lowpass..) textures.
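A minimal sketch of what prefiltering buys you when shrinking a texture (a generic separable binomial lowpass here, not any particular driver's or tool's filter):

```python
import numpy as np

def lowpass(tex, kernel=(0.25, 0.5, 0.25)):
    """Separable lowpass (simple binomial filter) applied before downsampling."""
    k = np.asarray(kernel)
    pad = len(k) // 2
    padded = np.pad(tex, pad, mode='edge')     # clamp at the texture edges
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def downsample(tex):
    """Drop every other texel after prefiltering -- the prefilter removes the
    frequencies that would otherwise fold down (alias) at the lower resolution."""
    return lowpass(tex)[0::2, 0::2]

checker = np.indices((16, 16)).sum(axis=0) % 2 * 1.0   # worst-case texture
print(downsample(checker))   # flattens toward 0.5 instead of collapsing to a solid colour
```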
 
coredump said:
zeckensack said:
The whole purpose of texture filtering is to maximize material frequency within the constraints given by the Nyquist theorem. AFAIK there's exactly nothing wrong with "legacy" (?!) trilinear filtering in this respect.

Solving the Nyquist problem is very, very hard (I was going to say impossible). That's because the frequency domain for synthetic textures is unbounded.

Consider a checkerboard pattern. A white-to-black transition between adjacent pixels is a step discontinuity. In the frequency domain, it has infinite harmonics, therefore no amount of samples will be sufficient.
That's a question of perspective, I guess. A texture map is an array of discrete samples, with step discontinuities between every pair of samples, if you insist that each texel is a rectangular region and there's no distance between neighbors.
However, you can fit a frequency bounded curve over that array that will exactly touch each texel center. This represents another view on the map: a texel is a single point on the surface of the map, and there's non-zero distance between texels.

This model has no discontinuity concerns, and IMO this is the right way to look at it (not from an implementation's POV, but from a signal theory POV). After all, if the flush rectangle theory were correct, point sampling would be the preferred technique for texture magnification. It isn't.

UPO said:
I'm not sure but it seems to me that bilinear filtering either undersamples or oversamples information. And trilinear tries to average error.
Bilinear tends to underuse frequency headroom because, by the simplicity of its design, it can't introduce higher frequency information without running the risk of introducing artifacts (shimmering). Trilinear does a better job at using the frequency headroom, but has issues with inclined surfaces, as should be well known. This is the reason why we need anisotropic filtering.
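For what it's worth, the "frequency bounded curve that exactly touches each texel center" above is just band-limited (Whittaker-Shannon / sinc) reconstruction. A truncated 1-D sketch, ignoring windowing and wrap-around details:

```python
import numpy as np

texels = np.array([0.1, 0.9, 0.3, 0.7, 0.5, 0.2, 0.8, 0.4])   # one row of a texture

def sinc_reconstruct(samples, x):
    """Evaluate the band-limited interpolant at continuous position x (texel units)."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(x - n))    # np.sinc is sin(pi x)/(pi x)

# It reproduces every texel exactly...
print([round(sinc_reconstruct(texels, i), 6) for i in range(len(texels))])
# ...and gives a smooth, discontinuity-free value everywhere in between:
print(round(sinc_reconstruct(texels, 2.5), 4))
```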
 
nAo said:
coredump said:
Consider a checkerboard pattern. A white-to-black transition between adjacent pixels is a step discontinuity. In the frequency domain, it has infinite harmonics, therefore no amount of samples will be sufficient.
that's why it's useful to prefilter (lowpass..) textures.

For rendering, that's best. However, since we can see very high frequency information, it becomes a tradeoff as to how much you blur the scene to compensate for the aliasing problem.

Adding noise to the final image might be a good way of adding fake high-frequency data.

Filter texture -> Sample texture -> Add noise to sample
 
Pretty much. That kind of technique has been used for image processing before.

Someone could implement it as a smart shader and try it out.

One more thing: you probably need to know if the source texture had a high frequency component to begin with... it would look really silly to add noise to a sky texture (for example).
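Something along these lines, perhaps (a toy sketch only; add_detail_noise and gain are made-up names, and the standard deviation of the source region is merely a crude stand-in for measuring its high-frequency content):

```python
import numpy as np

def add_detail_noise(sampled, source_region, gain=0.5, rng=np.random.default_rng(2)):
    """Sketch of the 'filter -> sample -> add noise' idea with the caveat above:
    scale the noise by how much high-frequency energy the source region actually
    had, so a flat sky texture gets none. 'gain' is just a made-up tuning knob."""
    local_contrast = source_region.std()        # crude stand-in for HF content
    noise = rng.uniform(-1.0, 1.0) * gain * local_contrast
    return float(np.clip(sampled + noise, 0.0, 1.0))

data_rng = np.random.default_rng(7)
gravel = data_rng.random((4, 4))                # lots of texel-to-texel variation
sky = np.full((4, 4), 0.55)                     # perfectly flat region

print(add_detail_noise(0.50, gravel))           # perturbed in proportion to the contrast
print(add_detail_noise(0.55, sky))              # unchanged: zero local contrast, zero noise
```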
 
Re: Trilinear single mip level Optimization not always cheating

Scott C said:
If their drivers simply detect when lower level mipmaps are close enough to simple downfiltered maps and enable this optimization, it is not cheating at all. Your visual quality will not be impacted by using only one mip level at a time since all the information in the lower level map is contained in the larger one.

No, the visual quality will be impacted when the texture has lossy compression.

Reproducing the lower level mipmap from the lossy compressed higher level mipmap will have higher quality than the lossy compressed lower level mipmap itself.

That means the "fast-trilinear" filtering can actually improve IQ.
 
vb said:
And considering they only use it on driver generated mip-maps.

That's unlikely.
D3D doesn't support driver generated mip-maps.
Therefore there aren't many games that would be affected by it. ;)

Btw, ATI said their method works as long as the application used a box filter to generate the mip levels. The driver analyses the texture to detect such situations; it doesn't generate the mipmaps.
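So the check could look something like the following sketch (a guess at the kind of test a driver might run, not ATI's actual heuristic):

```python
import numpy as np

def looks_box_filtered(fine, coarse, tolerance=1.0 / 255.0):
    """Is every coarse texel within a small tolerance of the 2x2 box average of the
    finer level? Names and tolerance are illustrative, not ATI's real values."""
    box = 0.25 * (fine[0::2, 0::2] + fine[1::2, 0::2] + fine[0::2, 1::2] + fine[1::2, 1::2])
    return np.max(np.abs(box - coarse)) <= tolerance

rng = np.random.default_rng(3)
level0 = rng.random((8, 8))
level1_box = 0.25 * (level0[0::2, 0::2] + level0[1::2, 0::2] + level0[0::2, 1::2] + level0[1::2, 1::2])
level1_custom = np.clip(level1_box + 0.1, 0.0, 1.0)     # e.g. an artist-tweaked mip

print(looks_box_filtered(level0, level1_box))       # True  -> optimization applicable
print(looks_box_filtered(level0, level1_custom))    # False -> fall back to full trilinear
```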
 
One of the things I love about B3d is that when someone throws out "Nyquist theorem", it doesn't put the whole room to sleep, or cause a chorus of abusive "wtf!". Here eyes squint, hands start inching slowly toward belts, and a Sergio Leone soundtrack starts up. . .
 
zeckensack said:
That's a question of perspective, I guess. A texture map is an array of discrete samples, with step discontinuities between every pair of samples, if you insist that each texel is a rectangular region and there's no distance between neighbors.
However, you can fit a frequency bounded curve over that array that will exactly touch each texel center. This represents another view on the map: a texel is a single point on the surface of the map, and there's non-zero distance between texels.

This model has no discontinuity concerns, and IMO this is the right way to look at it (not from an implementation's POV, but from a signal theory POV). After all, if the flush rectangle theory were correct, point sampling would be the preferred technique for texture magnification. It isn't.

Good point. Then what is the best assumption for how the source texels were filtered to generate the source texture map? That should dictate how the texture should be reconstructed by the implementation.

Plus you need to factor in the tool that generated the scaled texture and mip maps.

Bring on the procedural textures!
 