Yeah, that's kind of along the lines of what I was thinking. But now that you say it like that, it makes some sense that this might actually be truly free trilinear. Anyway, I'm really excited about this architecture... definitely looks like lots of interesting changes!

The other possibility is simply that the rate at which you can filter is decoupled from the rate at which you can calculate texture addresses. (E.g. if you can only calculate N texel addresses per cycle but have 2N filtering units, then trilinear is "free" in the sense that if you don't enable it, you have bilinear units potentially going to waste.)
I wonder. Does it mean that nVidia has implemented an on-the-fly trilinear filtering technique? Or are the texture units just meant to perform two bilinear filters per clock? The first could be good for image quality/performance ratios (though the image quality will be somewhat lower than "true" trilinear, it should be much better than brilinear). The second would be a good optimization for anisotropic filtering, though nobody should expect trilinear to be free with AF enabled in that case.
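The "two bilinear samples per clock" idea can be sketched as follows. This is a toy model, not actual hardware or driver code; `bilinear_sample` is a hypothetical fetch function standing in for whatever the texture unit does internally:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t."""
    return a + (b - a) * t

def trilinear(bilinear_sample, u, v, lod):
    # Trilinear filtering is just a blend of two bilinear samples taken
    # from adjacent mip levels. If the hardware has twice as many bilinear
    # filter units as texture address calculators, both samples can arrive
    # in the same cycle and the final lerp makes trilinear look "free".
    level = int(lod)                       # lower of the two mip levels
    frac = lod - level                     # blend weight between the mips
    lo = bilinear_sample(u, v, level)      # bilinear fetch from mip `level`
    hi = bilinear_sample(u, v, level + 1)  # bilinear fetch from mip `level + 1`
    return lerp(lo, hi, frac)
```

Note that when `frac` is zero, the second sample contributes nothing, which is exactly the case where the spare bilinear unit goes to waste.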
Edit:
There is a third, much worse option: trilinear filtering isn't actually being enabled. I hope this isn't the case, but it is possible.
The ArchMark test methods page said:
"Any positive mipmap LOD bias, or an extreme (-6) negative LOD bias, will effectively disable trilinear filtering for large portions of the screen during this test. LOD bias driver controls should be set to their defaults (zero) for meaningful results."
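The mechanism behind that warning can be sketched roughly like this. It's a toy model under my own assumptions, not ArchMark's actual code: the biased LOD gets clamped to the valid mip range, and wherever the result lands exactly on a mip level, the blend weight is zero and only one bilinear sample contributes:

```python
def filter_mode(base_lod, bias, max_level):
    # Toy model of why extreme LOD bias defeats trilinear filtering:
    # the biased LOD is clamped to [0, max_level], and wherever the
    # clamped value is a whole number, the fractional blend weight is
    # zero, so the second bilinear sample contributes nothing.
    lod = min(max(base_lod + bias, 0.0), float(max_level))
    frac = lod - int(lod)
    return "trilinear" if frac > 0.0 else "bilinear only"
```

With a -6 bias, most of the screen clamps to mip 0 and degenerates to plain bilinear, which is why the defaults matter for a meaningful fill-rate comparison.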
Well, he could be right:
Do you have an 8800, Trini?
OK, folks!
It looks as though ArchMark likes to throw out a big bunch of numbers, so I've decided to put some order in the field. Here is the result, with a few gathered reference scores alongside the star of the day [G80]:
p.s.: it was a real pain in the a** to collect and organize all the data in the chart, but I hope it was worth it.
You may like to note that some texture caches may report differently depending on whether there are both L1 and L2 caches available.
That's not true for some texture formats on (at least) NV4x.

Well, on NV's hardware the L1 cache is used to store only uncompressed TEX data, so who knows for sure?