I didn't read the papers in detail, but it seems like they map texels to pixels rather than pixels to texels. Current hardware rasterizes pixels: for each pixel a texture coordinate is generated and looked up in a texture, and the texture is sampled according to the rate of change of the coordinates using bilinear/trilinear/anisotropic filtering (which uses more texels to better approximate the area covered by the pixel). What Philips suggests is rasterizing in texture space instead: you map the polygon into the texture, step through every texel within the mapped polygon area, and "splat" each texel onto an area of the screen (a 4x4 pixel area with filter weights). This seems strange at first, since a texel can actually write outside the area of the actual polygon?
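To make the contrast concrete, here's a minimal C sketch of the two traversal orders as I understand them. Everything in it is invented for illustration (grayscale values only, a fixed axis-aligned 2x magnification standing in for the real polygon texture-to-screen mapping, tent filter weights), so don't read it as the paper's actual algorithm:

/*
 * Sketch only: pixel-order filtering vs. texel-order splatting under an
 * assumed 2x magnification. All names and constants are made up.
 */
#include <stdio.h>
#include <math.h>

#define TW 8
#define TH 8
#define FW 16
#define FH 16
#define SCALE 2.0f                 /* assumed texture-to-screen scale   */

static float tex[TH][TW];          /* source texture                    */
static float fb[FH][FW];           /* framebuffer, conventional path    */
static float acc[FH][FW];          /* splat path: sum of weight*color   */
static float wsum[FH][FW];         /* splat path: sum of weights        */

/* Bilinear fetch: one filtered read touches up to 4 texels (assumes u,v >= 0). */
static float sample_bilinear(float u, float v)
{
    int x0 = (int)floorf(u), y0 = (int)floorf(v);
    float fx = u - x0, fy = v - y0;
    int x1 = x0 + 1 < TW ? x0 + 1 : x0;
    int y1 = y0 + 1 < TH ? y0 + 1 : y0;
    return tex[y0][x0] * (1 - fx) * (1 - fy) + tex[y0][x1] * fx * (1 - fy)
         + tex[y1][x0] * (1 - fx) * fy       + tex[y1][x1] * fx * fy;
}

int main(void)
{
    for (int y = 0; y < TH; ++y)            /* fill with a gradient so   */
        for (int x = 0; x < TW; ++x)        /* the output is non-trivial */
            tex[y][x] = (float)(x + y) / (TW + TH);

    /* Conventional order: walk PIXELS, map each back to a texcoord and do
     * a filtered lookup. A texel may be read many times; each pixel is
     * written exactly once. */
    for (int py = 0; py < FH; ++py)
        for (int px = 0; px < FW; ++px)
            fb[py][px] = sample_bilinear(px / SCALE, py / SCALE);

    /* Texture-space order: walk TEXELS, forward-map each to the screen and
     * splat it over a 4x4 pixel footprint with tent weights. Each texel is
     * read exactly once, but it writes to several pixels. */
    for (int tv = 0; tv < TH; ++tv)
        for (int tu = 0; tu < TW; ++tu) {
            float sx = tu * SCALE, sy = tv * SCALE;     /* forward map   */
            float c  = tex[tv][tu];                     /* single read   */
            for (int dy = -1; dy <= 2; ++dy)
                for (int dx = -1; dx <= 2; ++dx) {
                    int px = (int)sx + dx, py = (int)sy + dy;
                    if (px < 0 || px >= FW || py < 0 || py >= FH)
                        continue;
                    float w = fmaxf(0.f, 1.f - fabsf(px - sx) / SCALE)
                            * fmaxf(0.f, 1.f - fabsf(py - sy) / SCALE);
                    acc[py][px]  += w * c;              /* many writes   */
                    wsum[py][px] += w;
                }
        }

    /* Normalise each pixel by its accumulated weight. */
    for (int py = 0; py < FH; ++py)
        for (int px = 0; px < FW; ++px)
            if (wsum[py][px] > 0.f)
                acc[py][px] /= wsum[py][px];

    printf("pixel (5,5): pixel-order %.3f, texel-order splat %.3f\n",
           fb[5][5], acc[5][5]);
    return 0;
}

With this particular mapping the two outputs should agree for interior pixels, since the normalised tent splat is effectively bilinear reconstruction run the other way round; that's roughly the equivalence the scheme seems to rely on.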
The advantage they claim is that each texel is fetched only once, and in a perfectly regular, predictable way (just like default architectures are perfectly predictable in the framebuffer). The problem is that each texel is written to multiple pixels...
All of this is nice for the older multitexture model, but with pixel shaders it all goes out of the window, and dependent texture reads don't work, which they mention in the paper. Since each texel influences multiple pixels, it would become a mess to execute pixel shaders IMHO... so it's a nice idea, but it doesn't seem to fit in with PS very well.
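To illustrate the dependent-read point, here's a tiny C sketch with two made-up lookup tables standing in for textures; the point is just that the coordinate into the second texture only exists after fetching from the first, so there's no fixed texel footprint to stream and splat:

/*
 * Sketch only: why a dependent read fights texel-order traversal.
 * Two small tables stand in for textures; all names invented.
 */
#include <stdio.h>

#define N 4

static int   offset_map[N][N] = {            /* "texture A": fetched offsets */
    {0, 1, 2, 3}, {3, 2, 1, 0}, {1, 3, 0, 2}, {2, 0, 3, 1}
};
static float color_map[N][N];                /* "texture B": indexed by data */

int main(void)
{
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            color_map[y][x] = (float)(y * N + x);

    /* Pixel-order shading copes fine: the coordinate into color_map is only
     * known after fetching from offset_map, but we are already sitting at
     * the pixel we want to shade. */
    for (int py = 0; py < N; ++py) {
        for (int px = 0; px < N; ++px) {
            int u = offset_map[py][px];      /* data-dependent coordinate */
            float c = color_map[py][u];      /* dependent texture read    */
            printf("%5.1f", c);
        }
        printf("\n");
    }

    /* Texel-order splatting of color_map has no regular footprint here:
     * which of its texels are needed, and for which pixels, depends on the
     * contents of offset_map, so color_map cannot be streamed once in a
     * fixed, predictable pattern. */
    return 0;
}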
But then again I only had a quick look at it all...
K-