Formal work on triangulation of smooth surfaces?

Specifically, I'm curious about a possible information-theoretic approach to polygonal modeling (error quantization, Nyquist, the frequency domain, aliasing, etc., not rasterization).
It's probably nothing useful in practice, but do you guys have any pointers to such work/papers/publications?

It feels like the sort of thing that would be an early chapter of any ambitious theory-centric 3D graphics book, though of course I doubt it.
 
I know of two libraries that might have software implementations of what you're interested in, namely the Computational Geometry Algorithms Library (CGAL) and the GNU Triangulated Surface Library (GTS). I suppose another good source of information would be Blender's source code.
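
If you just want a concrete feel for the basic operation those libraries provide, here is a rough sketch (not their actual APIs; just scipy's Delaunay triangulation applied to uniform samples of a heightfield):

```python
import numpy as np
from scipy.spatial import Delaunay

# Rough illustration only -- not CGAL's or GTS's API. Triangulate uniform samples
# of a smooth heightfield z = f(x, y) by triangulating in the 2D parameter domain.
n = 40
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
zs = np.exp(-(xs**2 + ys**2))                    # the smooth surface

uv = np.column_stack([xs.ravel(), ys.ravel()])   # sample positions in the plane
tri = Delaunay(uv)                               # 2D Delaunay triangulation

vertices = np.column_stack([uv, zs.ravel()])     # lift the triangulation onto the surface
faces = tri.simplices                            # (m, 3) array of vertex indices
print(vertices.shape, faces.shape)
```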

Thanks, CGAL even has Python bindings.
However, I'm not sure there is a software implementation of what I'm looking for (at least in this thread).
For example, I'm interested in things like the constraints on the original smooth surface that allow perfect reconstruction from a given triangulated model; purely theoretical stuff, that is.
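To make the kind of constraint I mean concrete, take the simplest special case I can think of: a heightfield z = f(x, y) sampled on a uniform grid with spacing Δ (a big simplification compared to general triangulations, so take it only as the flavour of result I'm after). The 2D sampling theorem then gives an exact reconstruction condition:

```latex
% If the spectrum of f vanishes outside the band |\omega_x|, |\omega_y| < \pi/\Delta,
% the smooth surface is exactly recoverable from its grid samples:
f(x, y) \;=\; \sum_{m,n \in \mathbb{Z}} f(m\Delta,\, n\Delta)\,
  \operatorname{sinc}\!\left(\tfrac{x - m\Delta}{\Delta}\right)
  \operatorname{sinc}\!\left(\tfrac{y - n\Delta}{\Delta}\right),
\qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}.
```

That is the flavour of constraint I'm after, but for irregular triangulations of general surfaces rather than grid-sampled heightfields.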
 
How exactly do you see this being done in the modeling phase? There is no error to minimize; the error depends on the sampling (or in other words, yes rasterization). You could pick some ad hoc set of viewpoints and sampling densities and minimize average aliasing error for them I guess ... but now you have an algorithm which is at once impractical and ad hoc. What's the point of anyone even spending time on that?

You could do a search for "view-independent simplification" on Google, but by your very definition of the problem you rule out an information-theoretic approach IMO.
 
How exactly do you see this being done in the modeling phase?
I don't expect it.
There is no error to minimize,
The difference (by some metric) between the abstract smooth surface and the polygonised model (or between a high-poly and a low-poly model) is the error.
I haven't said anything about minimization though (since in almost all cases the minimum error is achieved simply by keeping the original model). But reconstructibility of the higher-poly model or the smooth surface from the simpler model is a different thing.
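
As a toy 1D stand-in for what I mean by that error (my own example, not from any paper): take a smooth profile, replace it with a piecewise-linear interpolant through a few knots, and measure the worst-case deviation:

```python
import numpy as np

# Toy 1D stand-in: the "smooth surface" is the graph of sin(x) on [0, pi],
# the "polygonised model" is the piecewise-linear interpolant through n knots.
def polygonisation_error(n_knots, n_dense=100_000):
    knots = np.linspace(0.0, np.pi, n_knots)
    dense = np.linspace(0.0, np.pi, n_dense)
    approx = np.interp(dense, knots, np.sin(knots))
    # Vertical L-infinity deviation -- a crude proxy for the geometric (Hausdorff) error.
    return np.max(np.abs(np.sin(dense) - approx))

for n in (5, 9, 17, 33):
    print(n, polygonisation_error(n))   # shrinks roughly 4x per doubling of knots
```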
the error depends on the sampling (or in other words, yes rasterization).
You could pick some ad hoc set of viewpoints and sampling densities and minimize average aliasing error for them I guess ...
That is a different "error". What I'm talking about would exist even in an infinite AA system, and has nothing to do with pixels or rasterization.
You could do a search for "view-independent simplification" on Google, but by your very definition of the problem you rule out an information-theoretic approach IMO.
Thanks for the pointer, but I don't see how my definition would rule that out.

BTW, I checked a couple of papers, and they seem to use simple quadratic error metrics (vertex-to-vertex, vertex-to-plane, plane-to-plane, etc.).
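
For reference, the vertex-to-plane flavour (Garland and Heckbert's quadric error metric, if I'm reading it right) just sums squared point-to-plane distances; a minimal sketch of my reading of it:

```python
import numpy as np

# Minimal sketch of a vertex-to-plane quadric error metric: the cost of placing
# a vertex at v is the sum of squared distances from v to a set of planes
# (typically the planes of the triangles incident to the original vertex).
def plane_quadric(a, b, c, d):
    """Return K_p = p p^T for the plane ax + by + cz + d = 0 (unit normal assumed)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def quadric_error(Q, v):
    """Evaluate v^T Q v, with v given as a 3D point (homogenised internally)."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ Q @ vh)

# Toy example: a vertex whose incident faces lie in the planes z = 0 and x = 0.
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, 0)
print(quadric_error(Q, (0.0, 5.0, 0.0)))  # 0.0   -- lies on both planes
print(quadric_error(Q, (0.1, 5.0, 0.2)))  # 0.05  -- 0.1^2 + 0.2^2
```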
 
The difference (by some metric) between the abstract smooth surface and the polygonised model (or between a high-poly and a low-poly model) is the error.
It's an error, but not an error which is relevant to the Nyquist limit until you start sampling.
 
It's an error, but not an error which is relevant to the Nyquist limit until you start sampling.

That depends on your definition of sampling.
For example, 2D sampling of a function f: R×R → R can be seen as a uniform rectangulation (a regular quad mesh) of a surface in 3D, where all the 2D Nyquist results apply.
That's why I claim polygonization of smooth surfaces can be seen as non-uniform sampling.
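
Here's a quick numerical version of that claim, in 1D to keep it short (assuming a periodic band-limited profile, so trigonometric interpolation plays the role of sinc reconstruction; the 2D heightfield case works the same way separably):

```python
import numpy as np

# Sample a band-limited periodic "height profile" at N uniform points ...
N = 32
x = np.arange(N) / N                                   # uniform samples on [0, 1)
f = np.cos(2*np.pi*3*x) + 0.5*np.sin(2*np.pi*5*x)      # harmonics 3 and 5, both < N/2 (Nyquist)

# ... and reconstruct it on a much finer grid by zero-padding the spectrum
# (trigonometric interpolation, the periodic analogue of sinc reconstruction).
M = 512
F = np.fft.fft(f)
Fpad = np.zeros(M, dtype=complex)
Fpad[:N//2] = F[:N//2]
Fpad[-(N//2):] = F[-(N//2):]
f_fine = np.fft.ifft(Fpad).real * (M / N)

x_fine = np.arange(M) / M
f_true = np.cos(2*np.pi*3*x_fine) + 0.5*np.sin(2*np.pi*5*x_fine)
print(np.max(np.abs(f_fine - f_true)))   # ~1e-15: exact up to floating point
```

Push a harmonic above N/2 and the reconstruction fails in exactly the aliasing sense I mean, with no pixels in sight.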
 