MfA said:
Both need the same solution IMO. Adaptive resolution texture maps ... no matter how much it sucks from an implementation point of view.
The nice thing is that in some cases it doesn't suck at all.
One can use a very low resolution geometry texture to encode a low resolution mesh derived from its high resolution counterpart.
The process of further subdividing the coarse mesh becomes much simpler once the mesh is completely regular, and a lot of different algorithms can be used here, from a standard midpoint insertion/quad split to subdivision surfaces.
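Just to make the "completely regular" point concrete, here is a rough sketch of my own (a toy, not any real implementation): one midpoint insertion/quad split step on a geometry texture seen as an NxN grid of positions. The names (Vec3, midpoint_subdivide) are mine and purely illustrative. With a regular grid the split is nothing more than averaging grid neighbours:

```cpp
// One midpoint-insertion step on a regular geometry texture,
// i.e. an N x N grid of 3D positions stored row-major.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 avg(Vec3 a, Vec3 b) {
    return { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z) };
}

// Input:  n x n grid of vertex positions.
// Output: (2n-1) x (2n-1) grid; original vertices kept, midpoints inserted
//         on every edge and at every quad centre.
std::vector<Vec3> midpoint_subdivide(const std::vector<Vec3>& in, int n) {
    const int m = 2 * n - 1;
    std::vector<Vec3> out(m * m);
    auto I = [&](int r, int c) -> const Vec3& { return in[r * n + c]; };
    auto O = [&](int r, int c) -> Vec3&       { return out[r * m + c]; };

    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c) {
            O(2 * r, 2 * c) = I(r, c);                                       // original vertex
            if (c + 1 < n) O(2 * r, 2 * c + 1) = avg(I(r, c), I(r, c + 1));  // horizontal edge midpoint
            if (r + 1 < n) O(2 * r + 1, 2 * c) = avg(I(r, c), I(r + 1, c));  // vertical edge midpoint
            if (r + 1 < n && c + 1 < n)                                      // quad centre
                O(2 * r + 1, 2 * c + 1) = avg(avg(I(r, c), I(r + 1, c + 1)),
                                              avg(I(r + 1, c), I(r, c + 1)));
        }
    return out;
}
```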
A smooth subdivision scheme can be slow at run time, but it has the potential to be coupled with a smoother displacement map that can be compressed (wavelets?) much more effectively, so trade-offs have to be made.
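The wavelet remark, in a nutshell (again just a sketch of mine, one level of a plain 1-D Haar transform, nothing specific to any scheme): the smoother the displacement signal, the closer the detail coefficients get to zero, and near-zero details are exactly what quantizes/compresses well.

```cpp
// One level of a 1-D Haar wavelet transform over a row of displacement samples.
// Returns {coarse averages..., detail coefficients...} for an even-length input.
#include <cstddef>
#include <vector>

std::vector<float> haar_forward(const std::vector<float>& d) {
    const std::size_t half = d.size() / 2;
    std::vector<float> out(d.size());
    for (std::size_t i = 0; i < half; ++i) {
        out[i]        = 0.5f * (d[2 * i] + d[2 * i + 1]); // coarse average
        out[half + i] = 0.5f * (d[2 * i] - d[2 * i + 1]); // detail: ~0 for smooth data
    }
    return out;
}
```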
In displaced subdivision surfaces each triangle (a quad in a GT framework) has a fixed resolution displacement map 'attached' to it, but it doesn't have to be that way. It may be desirable to store displacement maps of different resolutions (or other attributes, like vertex colors, normals, glossiness, and so on..) on each base quad in the GT. On a highly programmable (and highly parallel.. eheh) architecture you don't need to make random memory accesses in order to render a GT mesh: every memory access becomes streamlined, and adaptive texture maps (or better.. scalar fields associated with base quads) are much simpler to implement, imho.
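Something along these lines, purely as an illustration of the layout I have in mind (BaseQuad and sample_displacement are made-up names, not anybody's API): each base quad carries its own scalar fields at its own resolution, and rendering walks the quads in order, so every read inside a quad is a contiguous, streaming one rather than a random texture fetch.

```cpp
// Each base quad of the GT stores its own fixed-size scalar field
// (here a displacement map) at an independent, per-quad resolution.
#include <cstdint>
#include <vector>

struct BaseQuad {
    uint32_t res;              // per-quad resolution: res x res samples
    std::vector<float> displ;  // displacement field, row-major, res*res floats
    // other per-quad scalar fields (colour, normal, gloss..) would follow the same pattern
};

// Nearest-sample lookup inside a quad's own field; u, v in [0, 1].
float sample_displacement(const BaseQuad& q, float u, float v) {
    const uint32_t x = static_cast<uint32_t>(u * (q.res - 1) + 0.5f);
    const uint32_t y = static_cast<uint32_t>(v * (q.res - 1) + 0.5f);
    return q.displ[y * q.res + x];
}
```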
Obviously I ignored a lot of small and not-so-small problems here.. like seams and T-junction creation between different resolution scalar fields on adjacent base quads, topological restrictions on the high resolution mesh, etc..
ciao,
Marco