Smartfiltering may eventually replace bilinear/trilinear

K.I.L.E.R

http://www.hiend3d.com/smartflt.html

All rendering is implemented completely in software, and the performance hit for this "smart" texture filtering method is only about 10% compared to software bilinear filtering.

The current hardware does not support this texture filtering technique, but in the future, more advanced versions of programmable pixel shaders could make the hardware implementation possible.

Why don't we have something like this out already in hardware?
Why do we stick with bilinear/trilinear when it is so prehistoric?
 
Funny, that filtering method seems to be based on a 2d-version of Marching Cubes (Marching Squares).
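The Marching Squares connection can be sketched in a few lines (a hypothetical illustration of the classic algorithm, not code from the SmartFlt demo): threshold the four corners of each 2x2 texel cell and pack the results into a 4-bit case index that tells you how an edge crosses the cell — which is roughly the per-cell border information a SmartFlt-style filter cares about.

```python
# Sketch of the Marching Squares case index: threshold the four corners of
# a 2x2 texel cell and pack the results into a 4-bit case number (0..15)
# describing how an edge crosses the cell. Function name and threshold are
# illustrative, not from the SmartFlt demo.

def marching_squares_case(tl, tr, br, bl, threshold=128):
    """Return the 4-bit Marching Squares case (0..15) for one cell."""
    case = 0
    if tl >= threshold: case |= 1   # top-left corner "inside"
    if tr >= threshold: case |= 2   # top-right
    if br >= threshold: case |= 4   # bottom-right
    if bl >= threshold: case |= 8   # bottom-left
    return case

# Cases 0 and 15 mean "no edge in this cell" (all outside / all inside);
# any other case means a border crosses the cell.
print(marching_squares_case(0, 0, 0, 0))          # 0: no edge
print(marching_squares_case(255, 255, 0, 0))      # 3: edge crosses the cell
print(marching_squares_case(255, 255, 255, 255))  # 15: no edge
```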
 
Doesn't seem to work well with any texture though.

I changed the original texture into this:
pic2.png



Result was this:
pic1.png
 
LOL! Sorry Brit.

This became news to me an hour or so ago. I didn't know it had been out for so long.

After reading both threads fully, my questions still stand.

Neither thread has answered my question; all anyone was doing was arguing over advantages and disadvantages, which just about every filtering method has.

AFAICS smartfilter is far superior to anything we have in 3d.
 
It's not really a good method for most textures. It looks similar to the various methods used for emulators, such as the hq2x through hq4x filters they have on their website, which isn't a surprise.
 
This isn't so much about smart filtering as it is about smart upsizing... and sorry, I don't think that particular approach is all that smart. It might work for some cartoon stuff, but it isn't general purpose.
 
K.I.L.E.R said:
AFAICS smartfilter is far superior to anything we have in 3d.

Far superior? It's good for simple high-contrast shapes on low-contrast backgrounds, as evidenced by the picture on the page; beyond that, the evidence supporting your claim is sketchy at best - especially considering Carpediem's test.

Anything purporting to be far superior, or heck, even functional on a case-by-case basis, has to work well with arbitrary input, which is clearly not the case here, given the broken output in Carp's post. I don't see this thing beating trilinear with high-degree anisotropic sampling, not in a million years.
 
Carpediem,

I copied your test texture and saved it as a 32-bit TGA with transparency, and I am not seeing the rendering errors you show in your screen capture. The smartfilter is making a bit of a mess of the IE toolbar icons on the borders and of the window title bar with white text on blue, but other than that it is quite good. The text (the same text showing the error in your post) renders perfectly for me.

Sorry, no screenshot. I simply copied your image and set the blue background as transparent, leaving the full diamonds of IE.

QUICK UPDATE:

I now have it rendering flawlessly by using 24-bit (with RLE compression). A 16-bit TGA becomes a garbled mess, and 32-bit compressed exhibits my initial error. It seems compression is ok to use. So....

32-bit: Some corruption.
24-bit: Perfect (well, same as bilinear)
16-bit: Best described as chaos theory or a CBM-64 loading screen (for those who remember the good old days).
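For anyone wanting to check which of these modes a TGA was actually saved in, the relevant fields live in the 18-byte TGA header (offsets per the TGA file format spec; the helper name is my own):

```python
# Parse the fields of an 18-byte TGA header that matter here: byte 2 is the
# image type (2 = uncompressed true-color, 10 = RLE-compressed true-color)
# and byte 16 is bits per pixel (16, 24, or 32). Helper name is illustrative.

def tga_info(header_bytes):
    """Return (image_type, bits_per_pixel, is_rle) from a TGA header."""
    if len(header_bytes) < 18:
        raise ValueError("TGA header is 18 bytes")
    image_type = header_bytes[2]
    bpp = header_bytes[16]
    return image_type, bpp, image_type in (9, 10, 11)  # 9-11 are RLE types

# Fabricated header for a 32-bit RLE-compressed image (the mode the demo
# texture turns out to use): type 10, 32 bpp, 8 alpha bits in byte 17.
hdr = bytes([0, 0, 10] + [0] * 13 + [32, 8])
print(tga_info(hdr))   # (10, 32, True)
```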
 
Ok, I'm a dolt. I went back and tested re-saving the original texture to see which mode it uses, and it turns out it is 32-bit RLE compressed. Additionally, the bilinear and smartfiltered output are obviously not supposed to be the same, so when I said 24-bit was perfect, I really meant it looked right without corruption, but it was identical to the bilinear filter, which is not what we want.

As I am getting slight edge corruption when using Carpediem's IE texture, and it otherwise mostly looks like bilinear (especially the bottom half, which shows the corruption in the URL text in Carpediem's picture), I wish to retract everything I said and simply agree that there is corruption. Perhaps the application is tuned for the specific texture that is included, and using other textures of arbitrary size changes things behind our backs (like where the corruption occurs).

So, this is basically a bunch of text I wish I never wrote. Wow... smartfiltering...who woulda thought.... *cough* *whistle*

/me hides
 
Well, I tried various random textures from Doom 3 (mostly rock and sand textures, but also some other ones), and I can say there wasn't one damn texture where I could see a difference between bilinear filtering and "smart" filtering. Apparently normal textures don't have areas of high contrast. There was also an issue with this program where, for some textures, it would default to point sampling for everything.

Then I looked at the damn lizards texture and noticed it's using the alpha channel to figure out where it should and shouldn't do the smart filtering. Hehe, it is described on the page, though:

"This particular demo uses a little more complicated technique - 4 additional bits per texel helps to describe position of the border in each inter-texel interval more precise. So texture is stored as 32-bit TGA file, with 24 bits for color and 8 bits for filter description. All rendering implemented completely in software, and performance hit for this "smart" texture filtering method is only about 10% comparing to software bilinear filtering. The reason of such good performance is simple - for typical texture image, the number of "border" texels is much less then number of "inside" texels."

So as a result it's practically pointless to try other textures, unless we are supposed to store an edge-detection filter in the fourth channel.
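That "edge detection in the fourth channel" idea can be sketched roughly (my own crude stand-in: the demo's real 8-bit filter description encodes border *positions*, not just a binary edge flag): mark texels whose luminance differs sharply from a neighbor as "border" texels and store that mask as the alpha channel.

```python
# Crude sketch: flag "border" texels by comparing each texel's luminance
# against its right and bottom neighbors, and store the result as an alpha
# mask. Only an illustration of the idea - not the demo's actual encoding.

def edge_alpha(lum, threshold=64):
    """lum: 2D list of 0..255 luminance values. Returns a same-sized
    alpha mask: 255 on border texels, 0 elsewhere."""
    h, w = len(lum), len(lum[0])
    alpha = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and bottom neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(lum[y][x] - lum[ny][nx]) > threshold:
                    alpha[y][x] = alpha[ny][nx] = 255  # both sides are border
    return alpha

# A sharp vertical edge between the second and third columns:
tex = [[0, 0, 255],
       [0, 0, 255],
       [0, 0, 255]]
print(edge_alpha(tex))   # the two columns flanking the edge are marked 255
```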
 
If you read the page, you can quickly see that it's not supposed to replace bilinear but rather to be used alongside it. For textures with low contrast (most "natural"-looking textures) bilinear is far superior. Where this smartfilter method shines is sharp, high-contrast images (such as those produced by vector drawing programs like Illustrator, for example). I imagine you would implement a smartfilter pixel shader for only a few surfaces, such as monitors/control panels, team badges/decals, anything with text, road signs, etc. It would mean that in these situations you could get away with using smaller textures while still getting excellent quality when the player gets close to them. I like it.
 
For my undergraduate thesis, I built a SmartFlt hardware texture unit (along with the rest of a scanline rasterizer). You can find the paper here and the hardware here.

It turns out that the basic SmartFlt algorithm (the one that's presented on Maxim's website) requires very little hardware on top of bilinear filtering: just a few MUXes, two 8-bit adders (assuming 8-bit subtexel precision), a few gates of logic, and lots of wiring.
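A software sketch of what the basic per-axis decision looks like (the names and details here are an assumption for illustration, not code from the paper or demo): when the two texels being blended lie on opposite sides of a detected border, compare the subtexel fraction against the stored border position and snap to the nearer texel instead of blending across the edge. The fraction-vs-border comparison is the part the extra adder handles in hardware.

```python
# Edge-aware 1D interpolation sketch: blend normally unless a border lies
# between the two texels, in which case pick a side instead of blurring.
# This is an illustrative reading of the basic algorithm, not the reference
# implementation.

def smart_lerp(c0, c1, frac, border=None):
    """Blend texel colors c0 and c1 at subtexel fraction frac in [0, 1).
    If border (a position in [0, 1)) is given, the texels straddle an
    edge: snap to c0 or c1 instead of producing a blurred mix."""
    if border is None:
        return c0 + (c1 - c0) * frac    # ordinary bilinear along this axis
    return c0 if frac < border else c1  # sharp edge: pick a side

print(smart_lerp(0, 255, 0.5))               # 127.5: normal blurry blend
print(smart_lerp(0, 255, 0.4, border=0.5))   # 0: snapped to the near texel
print(smart_lerp(0, 255, 0.6, border=0.5))   # 255: snapped to the far texel
```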

The full SmartFlt (the algorithm implemented in the demo) would require some additional hardware: two units that can compute A*B + C*D and two ROMs. This is much more expensive, but also looks a lot better.

So why isn't SmartFlt implemented in today's GPUs? A few reasons:
- The full version is still somewhat expensive.
- It doesn't always blend nicely from LOD level 0 to 1. Some mixture of trilinear and SmartFlt would help, but that just makes it more expensive.
- Finally, the best reason why it's not implemented in hardware: it's not automatic.

Textures need to be specially preprocessed. Not only that, but the preprocessing requires *two* textures: the base texture and a color segmentation map. Although the segmentation can be derived from the base texture, the various algorithms are far from perfect, and are usually very slow.

As some people pointed out earlier in this thread, SmartFlt only improves the contrast on sharp edges. It will not help gradients.
 
Thanks Bob.

From reading your post if speed wasn't an issue it would be feasible to use it, am I correct?

Couldn't there be a stage specifically designed to preprocess the textures?
 
From reading your post if speed wasn't an issue it would be feasible to use it, am I correct?

Couldn't there be a stage specifically designed to preprocess the textures?
You should probably re-read my post. I already proposed a hardware addition to bilinear filtering. Speed is not the issue.
 
Goragoth said:
I imagine you would implement a smartfilter pixel shader for only a few surfaces such as: monitors/control panels, team badges/decals, anything with text, roadsigns, etc...

Yeah! And race cars. :)

MaxSt
 
Bob said:
- Finally, the best reason why it's not implemented in hardware: it's not automatic.

Textures need to be specially preprocessed.

Well, I don't think it's a big deal.

Alpha maps are not generated automatically. Usually it's the job of texture designers to make them. But hardware support for the alpha channel exists.

I think the same is true for bump maps, normal maps...

In Doom 3 for almost every .tga texture I see that a lot of additional information is stored in other (related) .tga files.

MaxSt
 
Well, I don't think it's a big deal.

Alpha maps are not generated automatically. Usually it's the job of texture designers to make them. But hardware support for the alpha channel exists.
The examples you give don't really apply in this case. Sure, hardware supports an alpha channel, dot3, shaders, etc, and using them often requires support from artists.

However, all of those are generic and have many more uses than "bump mapping technique #454". SmartFlt doesn't have that flexibility and generality.
 
Great, this is what we always wanted: aliasing inside textures. Wait, let's simply go back to point filtering! :rolleyes:

Ok, ok, it does seem very useful for 2D applications.

What might actually work ok in a 'scientific' way is to give every texel a weight. This should be simple to implement by having a second texture with the weights. It could be used, for example, to create a thin black line on a white texture without actually using grey texels. Just a quick idea... it probably has aliasing issues as well...
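The per-texel weight idea above can be sketched like this (my own formulation of that quick idea, under the assumption that weights scale the bilinear coefficients): a texel with weight 0 is simply ignored, so a thin black line can sit next to white texels without bleeding grey into the blend.

```python
# Weighted bilinear sketch: scale each texel's bilinear coefficient by its
# per-texel weight and renormalize. A weight of 0 excludes that texel from
# the blend entirely. Illustrative only - not an existing hardware mode.

def weighted_bilinear(colors, weights, fx, fy):
    """colors/weights: 4 values ordered [tl, tr, bl, br];
    fx, fy: subtexel fractions in [0, 1)."""
    coeffs = [(1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy]
    total = sum(c * w for c, w in zip(coeffs, weights))
    if total == 0:
        return colors[coeffs.index(max(coeffs))]  # all weights 0: fall back to nearest
    return sum(c * w * col for c, w, col in zip(coeffs, weights, colors)) / total

# Sampling exactly between white texels (weight 1) and black line texels
# (weight 0): the black texels are ignored instead of greying the result.
print(weighted_bilinear([255, 0, 255, 0], [1, 0, 1, 0], 0.5, 0.5))   # 255.0
```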
 