How does forward texture mapping work?

I didn't read the papers in detail, but it seems like they map texels to pixels rather than pixels to texels. Current hardware rasterizes pixels: for each pixel a texture coordinate is generated and that's looked up in a texture, and the texture is sampled according to the rate of change etc. using bilinear/trilinear/anisotropic filtering (which uses more texels to better approximate the area covered by the pixel). What Philips suggests is rasterizing in texture space: you map the polygon into the texture, then step through every texel within the mapped polygon area, and each texel is "splat" onto an area of the screen (a 4x4 pixel area with filter weights). This seems strange at first, since a texel can actually write outside the area of the actual polygon?
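To make that concrete, here's a minimal sketch of what such a texel-splat inner loop might look like in C++, assuming an affine texture-to-screen mapping and a separable 4x4 tent filter. All the names and the filter choice are my own guesses for illustration, not taken from the Philips paper:

    #include <algorithm>
    #include <cmath>

    // Affine texture-to-screen mapping: (x, y) = (a*u + b*v + c, d*u + e*v + f).
    struct Affine2D { float a, b, c, d, e, f; };

    // Splat one texel onto the screen. 'frame' accumulates weighted colour,
    // 'weight' accumulates the total filter weight per pixel.
    void forwardMapTexel(int u, int v, const Affine2D& m, const float texel[3],
                         float* frame, float* weight, int width, int height)
    {
        // Map the texel centre into screen space.
        float sx = m.a * u + m.b * v + m.c;
        float sy = m.d * u + m.e * v + m.f;

        // Splat into the 4x4 pixel neighbourhood around (sx, sy).
        int x0 = (int)std::floor(sx) - 1;
        int y0 = (int)std::floor(sy) - 1;
        for (int y = y0; y < y0 + 4; ++y) {
            for (int x = x0; x < x0 + 4; ++x) {
                if (x < 0 || x >= width || y < 0 || y >= height) continue;
                // Tent weights, radius 2 pixels in each axis.
                float wx = std::max(0.0f, 2.0f - std::fabs((x + 0.5f) - sx));
                float wy = std::max(0.0f, 2.0f - std::fabs((y + 0.5f) - sy));
                float w = wx * wy;
                int i = y * width + x;
                for (int c = 0; c < 3; ++c)
                    frame[i * 3 + c] += w * texel[c];
                weight[i] += w;  // needed to normalise overlapping splats later
            }
        }
    }

Note the splat footprint is centred on wherever the texel lands, not clipped to the polygon, which is exactly why a texel can write outside the polygon's screen area.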

The advantage they claim is that each texel is used only once, in a perfectly regular, predictable way (just as conventional architectures are perfectly predictable in the framebuffer). The problem is that each texel is written to multiple pixels...
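If the hardware resolves that overlap the way software splatting usually does, the fix is a final pass that divides the accumulated colour by the accumulated weight; that's a guess on my part, reusing the frame/weight buffers from the sketch above:

    // Normalise each pixel by its total splat weight so that overlapping
    // texel contributions average out rather than add up.
    void resolve(float* frame, const float* weight, int width, int height)
    {
        for (int i = 0; i < width * height; ++i)
            if (weight[i] > 0.0f)
                for (int c = 0; c < 3; ++c)
                    frame[i * 3 + c] /= weight[i];
    }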

All of this is nice for the older multitexture model, but with pixel shaders it all goes out of the window, and dependent read ops don't work, which they mention in the paper. Since each texel influences multiple pixels, it would become a mess to execute pixel shaders, IMHO... so it's a nice idea, but it doesn't seem to fit in with pixel shaders very well.

But then again I only had a quick look at it all...

K-
 
Backward texture mapping computes for every screen pixel which texture texel(s) it corresponds to.

Forward texture mapping computes for every texture texel which screen pixel(s) it corresponds to.
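For contrast with the forward sketch earlier in the thread, the backward loop is what current hardware runs: iterate over the pixels the polygon covers and fetch the texture at each computed coordinate. A rough sketch, point-sampled to keep it short, reusing the hypothetical Affine2D struct from above (here holding the screen-to-texture mapping):

    // Backward (inverse) mapping: one texture fetch per covered pixel.
    void backwardMapPixel(int x, int y, const Affine2D& inv,
                          const float* texture, int texW, int texH,
                          float* frame, int width)
    {
        // Map the pixel centre into texture space.
        float u = inv.a * (x + 0.5f) + inv.b * (y + 0.5f) + inv.c;
        float v = inv.d * (x + 0.5f) + inv.e * (y + 0.5f) + inv.f;
        // Clamp and point-sample (real hardware would filter here).
        int tu = std::min(std::max((int)u, 0), texW - 1);
        int tv = std::min(std::max((int)v, 0), texH - 1);
        int i = y * width + x;
        for (int c = 0; c < 3; ++c)
            frame[i * 3 + c] = texture[(tv * texW + tu) * 3 + c];
    }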

The tricky part is to avoid gaps with texture magnification, and avoid too many computations for minification. That's what the paper is about...
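You can see both problems from the mapping itself. With the affine mapping used in the sketches above, the screen-space distance between adjacent texels is constant along each texture axis: if it exceeds the splat kernel's support you get gaps (magnification), and if it is much less than a pixel, many texels fight over the same pixel (minification). Here's a guess at how one might pick a mip level to keep the step near one pixel; the paper's actual scheme may well differ:

    // Screen-space step between horizontally / vertically adjacent texels.
    float texelStepU(const Affine2D& m) { return std::sqrt(m.a * m.a + m.d * m.d); }
    float texelStepV(const Affine2D& m) { return std::sqrt(m.b * m.b + m.e * m.e); }

    // Each coarser mip level halves the texel count per axis, doubling the
    // per-texel screen step, so walk up until the step is near one pixel.
    int chooseMipLevel(const Affine2D& m)
    {
        float step = std::min(texelStepU(m), texelStepV(m));
        int level = 0;
        while (step < 0.5f) { step *= 2.0f; ++level; }
        return level;
    }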
 
From the initial look, the inverse texture mapping animations look as if the LOD bias has been set to a substantial negative value. The 2x2 and 4x4 AA pictures look like they are using ordered-grid AA (rotated or sparse grids would look a lot better than that). And the pixel fragment buffer suggested in the presentation seems to be doing highly sort-dependent anti-aliasing, although that can probably be fixed.

I seem to remember that Nvidia's old NV1 chip did forward texture mapping (albeit unfiltered) - is this correct?
 
I think the Sega Saturn and 3DO systems both implemented texturing in a similar way.

I'd guess it appeared more efficient: most polygons reduce the texture somewhat, so a fast linear scan through texture RAM plus a random walk through screen space would be faster than a linear scan through screen space plus a random walk through texture RAM.
(No mipmapping... remember)
 