Given the importance of AF in image quality...

Why do designers stick with using only bilinear-capable TMUs?

IIRC the GF3 (or 4?) had TMUs capable of single-cycle trilinear filtering, so such a thing is possible.

I would guess the interconnect routing and control logic would be easier to implement for 16 trilinear TMUs rather than 32 bilinear units, which, as chips become more complex and multithreading-capable, would have to be a consideration.

From my limited grasp of what happens under the hood, generally speaking a TMU would spend most of its life performing anisotropic filtering on anything up to 128 samples, so why continue to focus its design on bilinear? Is the relative cost to the transistor budget that great? Am I missing something?
 
I believe it was GF1 that had "free" trilinear b/c of a hardware boo-boo that they corrected with GF2.

As for sticking with mere bilinear, I'd guess it's b/c of scaling. They scale their designs down to mid-range and low-end GPUs, where transistors and bandwidth are at a greater premium, so HQ AF may not simply be a way to use up excess 3D power there. You can see from Hexus' texture filtering shots that NV's default "quality" filtering is more like brilinear, and "high quality" more like trilinear.

But there may be a more technical explanation, which I'm interested in, too. Actually, a question like this might be better served in the main 3D Tech forum, as it's a bit more involved than a simple beginner's Q, IMO. But let's see how many ppl read this sub-forum.
 
TWWCB said:
Why do designers stick with using only bilinear-capable TMUs?
Because it's the lowest common denominator, an essential building block for AF, and generally just used a lot. Even if you crank up AF, many texture samples won't get any because they don't need any. Likewise, if you use a trilinear filter, many samples won't need it because they are magnified, which clamps to the largest mipmap level, so there's no point in trilinear (blending between mipmap levels).
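Roughly, as a sketch in C (bilinear_fetch is a made-up stub standing in for the hardware fetch, not any real GPU's pipeline):

#include <math.h>

/* Hypothetical stub standing in for a hardware bilinear fetch from a
 * given mip level, just so the sketch is self-contained. */
static float bilinear_fetch(int level) { return (float)level; }

/* Sketch of per-sample mip selection, assuming lod is log2 of the
 * texel-to-pixel footprint computed from texture-coordinate gradients. */
float sample_trilinear(float lod, int max_level)
{
    if (lod <= 0.0f)
        /* Magnification: clamp to level 0. Only one mip level can
         * contribute, so "trilinear" collapses to one bilinear fetch. */
        return bilinear_fetch(0);

    int   lo   = (int)floorf(lod);
    float frac = lod - (float)lo;
    if (lo >= max_level)
        return bilinear_fetch(max_level); /* clamped at the smallest mip */

    /* Minification between two levels: blend two bilinear fetches. */
    return (1.0f - frac) * bilinear_fetch(lo)
         +          frac * bilinear_fetch(lo + 1);
}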
TWWCB said:
IIRC the GF3 (or 4?) had TMUs capable of single-cycle trilinear filtering, so such a thing is possible.
It was the Geforce 1, but that was more of an engineering accident than a deliberate feature.

S3's Deltachrome had something like that, but it wasn't really competitive with its peers anyway, so it won't help prove a point.
TWWCB said:
I would guess the interconnect routing and control logic would be easier to implement for 16 trilinear TMUs rather than 32 bilinear units, which, as chips become more complex and multithreading-capable, would have to be a consideration.
Huh?
You can optimize away a little math, i.e. if you really want to build a trilinear TMU it'll be smaller than two separate bilinear TMUs. But the bilinear TMU still wins the area contest.
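To put rough numbers on that (a C sketch just counting the lerp tree, with no claim about any actual TMU layout): a bilinear sample is 4 texel reads and 3 lerps, trilinear is 8 reads and 7 lerps, so the shared maths you can fold away is only the final level blend and some addressing logic.

static float lerpf(float a, float b, float t) { return a + t * (b - a); }

/* 3 lerps over a 2x2 texel footprint t[], fractional coords fu, fv. */
static float bilinear(const float t[4], float fu, float fv)
{
    return lerpf(lerpf(t[0], t[1], fu), lerpf(t[2], t[3], fu), fv);
}

/* 7 lerps: two full bilinear trees plus one blend between mip levels. */
static float trilinear(const float hi[4], const float lo[4],
                       float fu0, float fv0, float fu1, float fv1,
                       float lod_frac)
{
    return lerpf(bilinear(hi, fu0, fv0), bilinear(lo, fu1, fv1), lod_frac);
}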
 
hm... and so for a part where scaling isn't an issue (i.e. consoles), the issue becomes one of die space? Do you think their stance will change with the next wave of consoles, as a lack of AF would be even more obvious on higher-resolution displays? Presumably 1080p would be the absolute ceiling as far as resolution goes by that time, so I'd hope that other IQ features would be focused on... maybe?
 
Doing trilinear anisotropic filtering by taking multiple trilinear samples is wasteful. There is no need to take the same number of samples in both the higher and the lower mip level.
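Something like this, sketched in C with a made-up probe_bilinear helper (real hardware derives the probe positions and counts from the texture-coordinate gradients): the coarser mip's texels cover twice the footprint, so it only needs about half as many probes as the finer mip, whereas n full trilinear samples would fetch n probes from both levels.

/* Hypothetical stub: one bilinear probe at step i of 'steps' along the
 * line of anisotropy in mip 'level'. Placeholder value so it compiles. */
static float probe_bilinear(int level, int i, int steps)
{
    (void)level; (void)i; (void)steps;
    return 0.5f;
}

static float lerpf(float a, float b, float t) { return a + t * (b - a); }

float aniso_sample(int n, float lod_frac)
{
    int   n_coarse = n / 2 > 0 ? n / 2 : 1; /* half the probes suffice */
    float fine = 0.0f, coarse = 0.0f;

    for (int i = 0; i < n; i++)
        fine += probe_bilinear(0, i, n);
    for (int i = 0; i < n_coarse; i++)
        coarse += probe_bilinear(1, i, n_coarse);

    return lerpf(fine / n, coarse / n_coarse, lod_frac);
}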
 