Formal work on triangulation of smooth surfaces?

Discussion in 'Rendering Technology and APIs' started by betan, Jan 16, 2008.

  1. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    Specifically, I'm curious about a possible information-theoretic approach to polygonal modeling (quantization error, Nyquist, the frequency domain, aliasing, etc.; not rasterization).
    It's probably nothing useful in practice, but do you have any pointers to such work/papers/publications?

    It feels like it could be an early chapter of any confident, theory-centric 3D graphics book; of course, I doubt it is.
     
  2. codelogic

    Newcomer

    Joined:
    Oct 1, 2007
    Messages:
    54
    Likes Received:
    0
    Location:
    Albuquerque, NM
  3. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    Thanks, the CGAL library even has Python bindings.
    However, I'm not sure there is a software implementation of what I'm looking for (at least not one mentioned in this thread).
    For example, I'm interested in things like the constraints on the original smooth surface that allow perfect reconstruction from a given triangulated model; purely theoretical stuff, that is.
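    To sketch the kind of statement I mean (the 1D case, assuming the surface is the graph of a bandlimited function; my framing, not a result from any paper I've found): Shannon's sampling theorem says that if \hat{f}(\omega) = 0 for |\omega| \ge \pi/T, then uniform samples at spacing T determine f exactly via sinc interpolation,

        f(x) = \sum_{n=-\infty}^{\infty} f(nT)\,
               \operatorname{sinc}\!\Big(\frac{x - nT}{T}\Big),
        \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.

    A uniformly triangulated height field stores exactly those samples, so "perfect reconstruction" is possible precisely when the bandlimit holds; the catch is that the reconstruction filter is sinc, not the linear interpolation the triangles themselves perform.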
     
  4. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,197
    Likes Received:
    603
    How exactly do you see this being done in the modeling phase? There is no error to minimize; the error depends on the sampling (or in other words, yes, rasterization). You could pick some ad hoc set of viewpoints and sampling densities and minimize the average aliasing error for them, I guess ... but then you have an algorithm which is at once impractical and ad hoc. What's the point of anyone even spending time on that?
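    In a 2D toy setting, that scheme would look something like this (a circle standing in for the smooth surface, an inscribed n-gon for the mesh; all numbers ad hoc, which is rather the point):

        # Toy version of the ad hoc scheme: average the screen-space
        # error of a polygonization over a fixed set of viewpoints.
        import math

        def world_error(n, radius=1.0):
            # Max circle-to-inscribed-n-gon distance: the sagitta of
            # one chord, r * (1 - cos(pi/n)).
            return radius * (1.0 - math.cos(math.pi / n))

        def mean_screen_error(n, distances, res_x=1920, fov=math.radians(60)):
            # Project the world-space error to pixels at distance d:
            # pixels per world unit = res_x / (2 * d * tan(fov/2)).
            e = world_error(n)
            px = [e * res_x / (2.0 * d * math.tan(fov / 2.0)) for d in distances]
            return sum(px) / len(px)

        views = [2.0, 5.0, 10.0]   # the arbitrary viewpoint set
        for n in (8, 32, 128):
            print(n, round(mean_screen_error(n, views), 3))

    Change the viewpoint set and the "best" n changes with it.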

    You could do a search for "view-independent simplification" on Google, but by your very definition of the problem you rule out an information-theoretic approach, IMO.
     
    #4 MfA, Jan 20, 2008
    Last edited by a moderator: Jan 20, 2008
  5. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    I don't expect it to be.
    The difference (by some metric) between the abstract smooth surface and the polygonized model (or between a high-poly and a low-poly model) is the error.
    I haven't said anything about minimization, though (since in almost all cases the minimum error is achieved simply by keeping the original model). But the reconstructibility of the higher-poly model, or of the smooth surface, from the simpler model is a different thing.
    That is a different "error". What I'm talking about would exist even in a system with infinite AA; it has nothing to do with pixels or rasterization.
    Thanks for the pointer, but I don't see how my definition would rule that out?
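    To make that concrete, here is a toy computation of the error I mean (my own construction, circle vs. inscribed n-gon), measured purely geometrically, with no pixels or viewpoints anywhere:

        # One-sided Hausdorff-style distance from a smooth curve
        # (a circle) to its polygonization (inscribed regular n-gon).
        import math

        def polygonization_error(n, radius=1.0, samples_per_edge=256):
            verts = [(radius * math.cos(2 * math.pi * k / n),
                      radius * math.sin(2 * math.pi * k / n))
                     for k in range(n)]
            worst = 0.0
            for k in range(n):
                ax, ay = verts[k]
                bx, by = verts[(k + 1) % n]
                vx, vy = bx - ax, by - ay
                for i in range(samples_per_edge):
                    # Sample the arc that this edge approximates.
                    t = (k + i / samples_per_edge) * 2 * math.pi / n
                    px, py = radius * math.cos(t), radius * math.sin(t)
                    # Distance from the arc point to the segment.
                    u = ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)
                    u = max(0.0, min(1.0, u))
                    worst = max(worst, math.hypot(px - (ax + u * vx),
                                                  py - (ay + u * vy)))
            return worst  # approaches radius * (1 - cos(pi/n))

        print(polygonization_error(16))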

    BTW, I checked a couple of papers, and they seem to use simple quadric error metrics (vertex-to-vertex, vertex-to-plane, plane-to-plane, etc.).
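    For reference, the vertex-to-plane variant boils down to something like this (a sketch in the Garland–Heckbert style, not code from any of those papers):

        # Quadric error metric sketch: accumulate the planes around a
        # vertex into a 4x4 matrix Q; then for a candidate position v,
        # v^T Q v is the sum of squared point-to-plane distances.
        import numpy as np

        def plane_quadric(a, b, c, d):
            # Plane ax + by + cz + d = 0, with (a, b, c) unit length.
            p = np.array([a, b, c, d], dtype=float)
            return np.outer(p, p)

        def vertex_error(Q, v):
            vh = np.array([v[0], v[1], v[2], 1.0])  # homogenize
            return float(vh @ Q @ vh)

        # Vertex where the planes z = 0 and x = 0 meet:
        Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, 0)
        print(vertex_error(Q, (0.1, 5.0, 0.2)))  # 0.1**2 + 0.2**2 = 0.05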
     
  6. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,197
    Likes Received:
    603
    It's an error, but not one that is relevant to the Nyquist limit until you start sampling.
     
  7. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    That depends on your definition of sampling.
    For example, 2D sampling of a function f: R×R → R can be seen as a uniform rectangular tessellation of a surface in 3D, where all the 2D Nyquist machinery applies.
    That's why I claim that the polygonization of a smooth surface can be seen as nonuniform sampling.
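    A quick numerical illustration of the analogy (my own toy example): sample a sine height field at two grid spacings; below Nyquist density the mesh vertices describe a different surface entirely.

        # Triangulating the graph of f: R x R -> R on a uniform grid
        # is 2D sampling, so Nyquist applies to the mesh itself.
        # One grid row of a sine height field, sampled at two spacings;
        # freq = 3 needs spacing < 1/6 to avoid aliasing.
        import numpy as np

        freq = 3.0
        f = lambda x: np.sin(2 * np.pi * freq * x)   # vertex heights

        for spacing in (1/16, 1/4):
            xs = np.arange(0.0, 1.0 + spacing / 2, spacing)
            print(spacing, f(xs).round(2))
        # At spacing 1/4 the heights come out as a low-frequency wave:
        # the mesh "represents" a surface that isn't the original one.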
     