Non-bitmapped texturing

Shifty Geezer

Here's a theoretical debate for you - how feasible is it to replace bitmap texturing with vector drawing functions? Is it just a matter of requiring stupid amounts of processing power well beyond what we can handle now?

The idea I have in mind is best illustrated with an example. Consider a quad with a 2D texture. Each texel is a memory access and interpolation. It looks good from a distance but lousy up close. Let's say we have a graphic design on this quad; I'm thinking of some scrollwork on the ship in Rogue Galaxy that was a mess in a cutscene due to low resolution. Now imagine this texture was created in an art package using vector drawing functions and saved out as a single-layer texture. It would be possible to perform those same vector drawing operations at render time, transformed to fit the orientation of the quad, saving the memory footprint and bandwidth of the texture but requiring more processing power to draw. Now, if you can draw 2D vector art onto a transformed quad using UV coords, you could do the same on any UV-defined surface, and thus render the textures on everything using mathematical draw functions rather than data lookups.
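As a rough sketch of what I mean (made-up types and names, not any real API), the "texture" becomes a list of draw operations evaluated at the interpolated UV instead of a texel fetch:

#include <cmath>
#include <vector>

struct Color { float r, g, b; };

// One hypothetical draw op: a filled disc defined in UV space.
struct Disc { float cu, cv, radius; Color fill; };

// The "vector texture" is a background plus a list of draw ops,
// not a grid of texels.
struct VectorTexture {
    Color background;
    std::vector<Disc> discs;
};

// Evaluate the vector texture at an interpolated UV coordinate.
// This replaces the bitmap lookup: pure maths, no per-texel memory fetch.
Color evaluate(const VectorTexture& tex, float u, float v) {
    Color out = tex.background;
    for (const Disc& d : tex.discs) {
        float du = u - d.cu, dv = v - d.cv;
        if (std::sqrt(du * du + dv * dv) <= d.radius)
            out = d.fill;                // later ops paint over earlier ones
    }
    return out;
}

The cost then scales with the number of draw ops touched per pixel rather than with texture resolution, which is where the extra processing power goes.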

Okay, you wouldn't be able to match the artistry of textures, but in principle it should work, and could allow some exceptional IQ in games with a suitable art style. Replacing textures with vector drawing would allow artefact-free close viewing, and could incorporate antialiased line-drawing algorithms. The question I have is: what could run such an engine? I guess GPU shaders aren't going to manage that complexity? I dunno how far they can go, although whole games can be run on shaders so they must be very versatile. Cell also looks an excellent fit, but I don't know how the workload would scale with complexity. What I'd hope for, though, is something that's basically jaggie-free: a cel-shader-type renderer that draws all objects, including the texture detail on them, with antialiased 2D vector drawing functions. A sort of cartoon renderer. It wouldn't have to be bitmap-free, but the starting place should be sans bitmaps.
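For the antialiasing side, one plausible route is analytic coverage: take a signed distance to the shape and blend it across one pixel's footprint, so edges stay crisp however close you get. A sketch, where pixelSize is a made-up stand-in for what fwidth() would give you in a shader:

#include <algorithm>
#include <cmath>

// Fractional coverage of a disc at UV point (u, v), blended over one pixel's
// footprint. pixelSize is the UV extent of one screen pixel. Returns 1 well
// inside the disc, 0 well outside, and a smooth ramp across the edge.
float discCoverage(float u, float v, float cu, float cv, float radius,
                   float pixelSize) {
    float du = u - cu, dv = v - cv;
    float signedDist = std::sqrt(du * du + dv * dv) - radius;
    return std::clamp(0.5f - signedDist / pixelSize, 0.0f, 1.0f);
}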

Is the idea plausible, and what would be the cons versus traditional renderers?
 
I gotta ask you: if textures are going to be drawn from vectors, why not use HOS at the geometry level?
The workload should be the same, without the restriction of planar "textures" (though those could be added with little trouble).
 
Yes, that'd be another thing to add, producing a result with no jaggies and no polygon edges. But there are issues with texturing and animating HOS objects. I'd like to just consider the drawing aspect for now.
 
Slightly OT:

Made me think of that crappy 3D renderer that you could get for AMOS (a really crappy but popular BASIC for the Amiga). It didn't support texture mapping (the original Amigas were too slow for that), so instead you could draw vector-image "textures" on your polygons. It didn't look too bad, even though the vector images weren't perspective correct.
 
Well, the problem will be how you efficiently draw those "textures" (I'll call them SVG-Textures for no apparent reason). For a triangle you can rather easily define the region of an SVG-Texture that's displayed.
The problem now is that you need to find out which curves you have to draw to accurately represent that region (painting them in 3 dimensions isn't a problem) - at worst you're going to need to paint ALL the curves. And I'd wager it could be easier to paint and clip every curve than to work out which ones you need. At the end you have successfully painted the surface of one triangle.
Rinse and repeat for every triangle in the scene - you need to paint every surface of the object separately, as in the sketch below. That's going to be very expensive, to say the least.
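A rough C++ sketch of that cost, with made-up types and a conservative bounding-box test standing in for real clipping:

#include <algorithm>
#include <optional>
#include <vector>

struct Vec2 { float u, v; };

// A quadratic Bezier segment of the SVG-Texture, in UV space.
struct Curve { Vec2 p0, p1, p2; };

// One triangle of the mesh, identified here only by its UV coordinates.
struct Triangle { Vec2 uv[3]; };

// Stand-in for real clipping: keep the curve if its control-point bounding
// box overlaps the triangle's UV bounding box.
std::optional<Curve> clipToTriangle(const Curve& c, const Triangle& t) {
    float cMinU = std::min({c.p0.u, c.p1.u, c.p2.u});
    float cMaxU = std::max({c.p0.u, c.p1.u, c.p2.u});
    float cMinV = std::min({c.p0.v, c.p1.v, c.p2.v});
    float cMaxV = std::max({c.p0.v, c.p1.v, c.p2.v});
    float tMinU = std::min({t.uv[0].u, t.uv[1].u, t.uv[2].u});
    float tMaxU = std::max({t.uv[0].u, t.uv[1].u, t.uv[2].u});
    float tMinV = std::min({t.uv[0].v, t.uv[1].v, t.uv[2].v});
    float tMaxV = std::max({t.uv[0].v, t.uv[1].v, t.uv[2].v});
    if (cMaxU < tMinU || cMinU > tMaxU || cMaxV < tMinV || cMinV > tMaxV)
        return std::nullopt;
    return c;   // a real implementation would trim the curve to the triangle
}

void paintCurve(const Curve&, const Triangle&) { /* rasterize the piece */ }

// Naive per-triangle approach: in the worst case every curve is clipped and
// painted once per triangle, i.e. O(triangles * curves) work.
void paintObject(const std::vector<Triangle>& mesh,
                 const std::vector<Curve>& svgTexture) {
    for (const Triangle& tri : mesh)
        for (const Curve& curve : svgTexture)
            if (auto clipped = clipToTriangle(curve, tri))
                paintCurve(*clipped, tri);
}

Worst case that's one clip-and-paint per curve per triangle, before any bucketing of curves by UV region.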

Again, to come back to HOS - you could take your polygonal object, replace the triangle surfaces with HOS (i.e. intersect the SVG-Textures with the boundaries and add them to the geometry) and have the same result. Animate the object just like you would before, but the advantage is that the SVG-Textures are now integrated directly into the geometry. You won't need to recreate the SVG-Textures at every angle, just paint the curves as they come.
 
Why draw the vectors per triangle? Instead, transform the vector data into screen space and draw directly onto the screen buffer. For each pixel, render the vector code for that 'texel', transformed by the surface's orientation.
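A rough sketch of that, with made-up types, for the planar-quad case: derive the affine map from the surface's UV space to its projected screen position from three of its vertices, push the curve control points through it once, then evaluate coverage per screen pixel. This is exact for an affine mapping; true perspective would need rational curves or per-pixel correction.

// Positions in UV space or in screen pixels; x,y stand for (u,v) or (px,py).
struct Vec2 { float x, y; };

// A quadratic Bezier segment of the vector "texture", in UV space.
struct Curve { Vec2 p0, p1, p2; };

// Affine map p -> (a*x + b*y + c, d*x + e*y + f).
struct Affine2 { float a, b, c, d, e, f; };

Vec2 apply(const Affine2& m, Vec2 p) {
    return { m.a * p.x + m.b * p.y + m.c,
             m.d * p.x + m.e * p.y + m.f };
}

// Solve for the affine map that sends three UV coordinates of the surface
// onto their three projected screen positions (assumes non-collinear points).
Affine2 uvToScreen(const Vec2 uv[3], const Vec2 screen[3]) {
    float du1 = uv[1].x - uv[0].x, dv1 = uv[1].y - uv[0].y;
    float du2 = uv[2].x - uv[0].x, dv2 = uv[2].y - uv[0].y;
    float det = du1 * dv2 - du2 * dv1;
    float sx1 = screen[1].x - screen[0].x, sy1 = screen[1].y - screen[0].y;
    float sx2 = screen[2].x - screen[0].x, sy2 = screen[2].y - screen[0].y;
    Affine2 m;
    m.a = (sx1 * dv2 - sx2 * dv1) / det;
    m.b = (sx2 * du1 - sx1 * du2) / det;
    m.d = (sy1 * dv2 - sy2 * dv1) / det;
    m.e = (sy2 * du1 - sy1 * du2) / det;
    m.c = screen[0].x - m.a * uv[0].x - m.b * uv[0].y;
    m.f = screen[0].y - m.d * uv[0].x - m.e * uv[0].y;
    return m;
}

// Map a curve's control points straight into screen space; affine maps commute
// with Bezier evaluation, so the mapped control points describe the same curve.
Curve toScreen(const Curve& c, const Affine2& m) {
    return { apply(m, c.p0), apply(m, c.p1), apply(m, c.p2) };
}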
 
There has already been quite a bit of work on this, albeit from a different direction: namely, using GPUs to accelerate vector drawing (e.g. SVG).

Hoppe was a co-author on a paper which may be interesting...
 
Jawed, didn't the folks doing sparse-sampled GI also have a vector-based technique for textures (FBU or something?) that would take a texture and add vectors, so that as you got closer they maintained their edge quality?
 
There's actually quite a lot of work in the area. MSR has done some really good stuff lately; there was also a cool font-rendering paper on the GPU done in 2006 (I3D) by Qin and McCool, and a much more generalized SVG rendering paper is coming this year - check out Zheng Qin's page for some downloads.
 
There's actually an OpenGL implementation of SVG called SVGL.

From the site:
- a lot of SVG 1.0 features are implemented: simple shapes, paths, gradients, clipping, viewBox, opacity, <use>, animations, etc.
- fonts are handled by the glft companion library, which allows autoscaling according to the current scale, and automatic choice between vectorized glyphs or FreeType2-rendered textures.
- texturized fonts are antialiased by FreeType, while all other drawing is FSAA'd by OpenGL.
- the next step is optimization using techniques like display lists, culling, and caching rendered output into textures.
 