Here's a theoretical debate for you - how feasible is it to replace bitmap texturing with vector drawing functions? Is it just a matter of requiring stupid amounts of processing power well beyond what we can handle now?
The description of the technology in mind is illustrated thus: consider a quad with a 2D texture. Each texel is a memory access and an interpolation. It looks good from a distance but lousy up close. Let's say we have a graphic design on this quad; I'm thinking of some scrollwork on the ship in Rogue Galaxy that was a mess in a cutscene due to low resolution. Now imagine this texture was created in an art package using vector drawing functions and saved out as a single-layer texture. It would be possible to perform those same vector drawing operations at render time instead, transformed to fit the orientation of the quad, saving the memory footprint and bandwidth of the texture but requiring more processing power to draw. And if you can draw 2D vector art onto a transformed quad using UV coords, you could do the same on any UV-mapped surface, and thus could render the textures on everything using mathematical draw functions rather than data lookups.
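To make the idea concrete, here's a minimal sketch (in Python rather than shader code, purely for illustration) of what "a texture as maths" could look like: a hypothetical design element defined as a signed-distance function in UV space, evaluated per fragment instead of fetched from a bitmap. The ring shape and all its parameters are my own stand-in for the scrollwork, not anything from a real engine.

```python
import math

# A "texture" defined procedurally in UV space: a ring as a
# stand-in for decorative scrollwork. No texel fetch, no fixed
# resolution -- the edge is exact at any viewing distance.

def ring_sdf(u, v, cx=0.5, cy=0.5, radius=0.35, thickness=0.05):
    """Signed distance from UV point (u, v) to a ring's surface."""
    d = math.hypot(u - cx, v - cy)
    return abs(d - radius) - thickness

def shade(u, v):
    """Binary 'texel': 1.0 on the ring, 0.0 off it."""
    return 1.0 if ring_sdf(u, v) <= 0.0 else 0.0

# Sample anywhere in UV space, at any magnification:
print(shade(0.5, 0.15))  # on the ring -> 1.0
print(shade(0.5, 0.5))   # ring centre -> 0.0
```

The point of the sketch is that `shade` replaces the texture lookup: the rasterizer still interpolates UVs across the surface as normal, but the colour comes from evaluating functions, so zooming in never exposes texels.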
Okay, you wouldn't be able to match the artistry of textures, but in principle it should work, and it could allow some exceptional IQ on games with a suitable art style. Replacing textures with vector drawing would allow artefact-free close viewing, and could incorporate antialiased line-drawing algorithms. The question I have is: what could run such an engine? I guess GPU shaders aren't going to manage that complexity? I dunno how far they can go, although whole games can be run on shaders so they must be very versatile. Cell also looks an excellent fit, but I don't know how the workload would scale with complexity. What I'd hope for, though, is something that's basically jaggie-free: a cel-shader-type renderer that draws all objects with antialiased 2D vector drawing functions, including the texture detail on them. A sort of cartoon renderer. It wouldn't have to be bitmap-free, but the starting place should be sans bitmaps.
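On the jaggie-free point: one attraction of having the shape as maths is that antialiasing can be analytic rather than brute-force supersampling. A rough sketch, again with hypothetical parameters of my own choosing: compute each fragment's coverage from its signed distance to the edge, feathered over roughly one pixel's width in UV space.

```python
import math

def smoothstep(e0, e1, x):
    """Standard cubic smoothstep, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def disc_coverage(u, v, radius=0.3, pixel_width=0.01):
    """Antialiased coverage of a disc centred at (0.5, 0.5).

    pixel_width stands in for the fragment's footprint in UV space
    (what a shader would get from screen-space derivatives)."""
    d = math.hypot(u - 0.5, v - 0.5) - radius  # signed distance to edge
    return 1.0 - smoothstep(-pixel_width, pixel_width, d)

print(disc_coverage(0.5, 0.5))    # deep inside -> 1.0
print(disc_coverage(0.5, 0.805))  # near the edge -> fractional coverage
print(disc_coverage(0.0, 0.0))    # far outside -> 0.0
```

Because the feather width scales with the fragment footprint, edges stay smooth at any zoom, which is the artefact-free close viewing described above. The cost is exactly the trade the post identifies: every fragment pays for evaluating the shape maths rather than one texel fetch.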
Is the idea plausible, and what would be the cons versus traditional renderers?