Polygons, voxels, SDFs... what will our geometry be made of in the future?

Discussion in 'Rendering Technology and APIs' started by eloyc, Mar 18, 2017.

  1. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    Hi,

    Maybe we could use this thread to comment on this subject, since creating a specific thread for each new piece of tech/news can disperse the main theme and make it harder to follow the latest info. There may be other "the future of graphics" threads, but I would like to focus specifically on geometry (hopefully this won't suck :p ). Plus, I'm tired of commenting in the Unlimited Detail thread... I see no enthusiasm there and maybe it's not the best place to talk about other things, even though they may be related :-D .

    Just to start this off, I'm haunted by the persistent feeling that video game graphics are hollow, a mere facade of planes disguising a void. :-( I know this is quite personal, but do you really think polygons are here to stay? Even though polygons are now the "easy" way, do you think the advantages of other approaches can outweigh that? Are "volumetric" solutions really better for handling destruction, crafting and fluids?

    Do you think some of these new techniques are worth investing in? Maybe their implementation is harder because hardware is clearly not built with them in mind, the same way we at first had no dedicated hardware for processing polygons as we do now. So maybe the flaws are not in the approaches per se, but in the hardware, too?

    If polygons are the way to go (because, you know, we've been doing this for decades now, and it keeps getting better), why are so many people experimenting with alternative solutions? And why are these people mostly isolated programmers rather than big studios (with the exception of Media Molecule, maybe)?

    They all seem like fads; nothing is truly catching on and making us all think the industry is about to change, like when polygons appeared on the scene.

    Opinions? Thank you!
     
    London-boy and corysama like this.
  2. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    Some Atomontage (voxel engine) videos:




    Voxel Quest:


    Dreams:
     
    lefantome likes this.
  3. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Is this exclusively about polygons vs. volumetric? Also, I don't think destruction is possible, or at least easy, with SDFs.
     
    eloyc likes this.
  4. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    No... I guess this is polygons vs. whatever other solutions. Are you thinking of something specific?

    If I'm not mistaken, Dreams has powerful crafting tools, including one that lets you "subtract". Couldn't that method be used for destruction?
     
  5. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    Voxels have been used for decades already in the field of medical visualization, basically because MR/CT/PET scanners provide voxels directly.
    4D CT/echo is now also pretty common, to visualize e.g. a beating heart.



     
    lefantome and eloyc like this.
  6. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    Yes, I know that voxels have been used for quite a while, but do you think they will replace polygons in the future? Why/why not?

    We have some notable examples in video games, such as Outcast and No Man's Sky, but they seem merely anecdotal.
     
  7. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,986
    Likes Received:
    847
    Location:
    Planet Earth.
    You can add Comanche: Maximum Overkill and Delta Force 2 to your list of voxel-based games.
     
    Lightman likes this.
  8. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    These games (and Outcast) are not actually voxel-based games; they are height-field ray marchers. One ray is cast per screen column (X). Tracing starts from the bottom of the screen (the last pixel row). The ray moves along the camera ray (in the 2D horizontal plane) and you sample the heightmap at each location. For each heightmap sample you calculate the projected Y coordinate (in screen space) and plot pixels whenever Y < the previous minimum. This is a bit similar to the Doom renderer. Camera roll is not supported by default, but you can implement it simply by rotating the generated image, or by filling screen space in a different order (arbitrary lines instead of vertical columns).

    I implemented similar terrain renderers for two Nokia N-Gage games (Pathway to Glory and PtG: Ikusa Islands). The N-Gage didn't have a GPU or a floating point unit, so this kind of terrain renderer was actually the most efficient option. Lately I have been wondering whether a similar terrain renderer would make sense on modern GPUs, since compute shaders allow a similar implementation. The biggest problem is that one thread per screen column is pretty low parallelism for modern GPUs: 4K threads (at 4K resolution) isn't enough to reach good occupancy. You'd want at least 64K threads (on a modern 64 compute unit GPU).
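    The per-column march sebbbi describes can be sketched in Python. This is a toy version over a 1-D height profile (the samples one column's ray visits); the function name, projection constants and use of a sample index as a stand-in for color are all mine:

```python
def render_column(heights, cam_h, horizon, scale, H):
    """March one ray across a 1-D height profile. Each sample is
    projected to a screen Y and plotted only when it rises above
    everything drawn so far (the y_prev test gives occlusion for free)."""
    column = [None] * H            # one screen column, index 0 = top row
    y_prev = H                     # start at the bottom of the screen
    for z, h in enumerate(heights, start=1):   # z = distance along the ray
        # crude perspective projection: closer/taller terrain -> smaller y
        y = max(0, int(horizon + (cam_h - h) * scale / z))
        if y < y_prev:             # visible: fill the newly exposed rows
            for yy in range(y, y_prev):
                column[yy] = z     # sample index stands in for a color
            y_prev = y
    return column
```

    A full renderer would run this once per screen X with a per-column rotated ray direction through the real heightmap.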
     
    lefantome, Laa-Yosh, eloyc and 2 others like this.
  9. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    No, voxels will not replace polygons; in fact, they are complementary.
    In medical visualization, too, polygon and voxel objects can be rendered in the same scene and can interact with each other, e.g. for depth compositing, sculpting, ...
     
    #9 Voxilla, Mar 21, 2017
    Last edited: Mar 21, 2017
  10. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Higher order surfaces, for one.
    What about shattering an object into debris, as in an explosion?
     
  11. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    Oh, you're right, I remember now. Thanks for clearing that up!

    Do you think that polygons can be replaced by any other feasible technique?

    Also, what is the advantage of using voxels or other techniques if we already have good ol' polygons?

    I'm just trying to understand: if polygons are that good, why the other methods?

    Could you please elaborate? I haven't heard of it before.

    Well, I'm quite ignorant about this but, following my earlier idea: if you are able to cut/subtract, why not cut the object into a lot of pieces and then apply physics to them?
     
  12. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    Polygons can be replaced by other techniques.
    The first Nvidia GPU (the NV1) did not use polygons but quadratic surfaces...

    With polygons a lot of things are possible, but for, e.g., volume rendering (see video above) they are not the best choice.
     
  13. eloyc

    Veteran Regular

    Joined:
    Jan 23, 2009
    Messages:
    2,007
    Likes Received:
    1,186
    Nice piece of info! As for my question, I'm not only asking whether it can be done. Obviously it can. I'm asking whether there's something with the potential to eventually replace polygons because of its advantages.
     
  14. Vil

    Vil

    Joined:
    Feb 24, 2015
    Messages:
    6
    Likes Received:
    9
    Polygons, SDFs and voxels each have pros and cons. I think the future is being able to convert geometry into the most convenient representation for the task at hand. For one example, I'm currently building a geometry editor which uses an SDF as its primary representation but generates a triangle mesh from it for rendering. It's working out really well so far.
     
    eloyc and Lightman like this.
  15. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    So you have an SDF object and you want to cut it into pieces. From what I understand (if it's possible; I'm still not sure), you'd have to evaluate the same object once for each piece and add the subtraction geometry to each evaluation. In other words, if it's possible, it's not cheap. You should read up on SDFs; they're pretty straightforward to understand.
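    The standard min/max CSG rules for SDFs make this concrete. A toy sketch (the helper names `sphere`, `subtract`, `intersect` and the half-space split are mine), where "cutting into pieces" is exactly the evaluate-once-per-piece scheme described above:

```python
import math

def sphere(cx, cy, cz, r):
    """SDF of a sphere: negative inside, positive outside."""
    return lambda x, y, z: math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) - r

def subtract(a, b):
    """CSG subtraction (a minus b): max(a, -b) keeps points inside a but outside b."""
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

def intersect(a, b):
    """CSG intersection: max(a, b)."""
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

# "Cutting into pieces": evaluate the same object once per piece,
# intersected with one cell of a partition (two half-spaces at x = 0).
ball = sphere(0, 0, 0, 1)
piece_l = intersect(ball, lambda x, y, z: x)    # keep the x < 0 half
piece_r = intersect(ball, lambda x, y, z: -x)   # keep the x > 0 half
```

    Note the cost implication: every piece carries the full original SDF plus its cutting geometry, which is why this is not cheap.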

    How do you generate the trimesh from the SDF? Is there a particular method I could look up? I've only read a little about SDFs and would find this interesting.
     
    eloyc likes this.
  16. Vil

    Vil

    Joined:
    Feb 24, 2015
    Messages:
    6
    Likes Received:
    9
    The Marching Cubes algorithm is a good place to start; it's pretty straightforward to implement in a geometry or compute shader and gives decent results. It has limitations that make it a poor fit for my particular project, though, so I'm using something developed in-house that is distantly related to the Surface Nets algorithm.

    The forum won't let me post links because I haven't made enough posts here yet, but googling those terms should give you plenty to read up on.
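    To give a feel for the Surface Nets idea, here is a minimal sketch of its first pass, under my own naming and grid conventions: sample the SDF on a uniform grid and emit one vertex per cell whose corners straddle the surface. Real implementations then relax each vertex toward the surface and connect neighbouring cells into quads:

```python
def surface_net_vertices(sdf, n, lo, hi):
    """Sample an SDF on an (n+1)^3 lattice over [lo, hi]^3 and emit one
    vertex (at the cell center) per cell containing a sign change."""
    step = (hi - lo) / n
    def s(i, j, k):                       # SDF at a lattice point
        return sdf(lo + i * step, lo + j * step, lo + k * step)
    verts = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                corners = [s(i + a, j + b, k + c)
                           for a in (0, 1) for b in (0, 1) for c in (0, 1)]
                if min(corners) < 0 <= max(corners):   # surface crosses cell
                    verts.append((lo + (i + 0.5) * step,
                                  lo + (j + 0.5) * step,
                                  lo + (k + 0.5) * step))
    return verts
```

    Because each emitted cell contains a surface crossing, every vertex already lies within half a cell diagonal of the surface even before any relaxation.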
     
    milk likes this.
  17. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    I wrote a compute shader implementation of the surface nets algorithm, with iterative refinement of vertex positions (moving along the SDF gradient). Two passes: the first generates vertices (shared between connected faces) and the second generates faces. IIRC it takes around 0.02 ms (on a high-end PC GPU) to generate a mesh for a 64x64x64 SDF (output: around 10K triangles).

    But we don't render triangles; our renderer ray traces SDFs directly. On a high-end PC, the primary ray tracing pass (1080p) takes only 0.3 ms. Secondary rays (soft shadows, AO, etc.) obviously take longer to render. 60 fps is definitely possible on current-gen consoles with an SDF ray tracer.
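    The usual way to ray trace an SDF directly is sphere tracing: the distance value is a guaranteed-safe step size, because no surface can be closer than the distance field says. A scalar Python sketch (names and constants are mine; sebbbi's renderer is of course a GPU implementation, not this):

```python
import math

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, max_t=100.0):
    """Step along the ray by the SDF value until we land on the surface
    (distance below eps) or give up (too far / too many steps)."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(*p)
        if d < eps:
            return t              # hit: distance along the ray
        t += d                    # safe step: nothing is closer than d
        if t > max_t:
            break
    return None                   # miss

unit_sphere = lambda x, y, z: math.sqrt(x * x + y * y + z * z) - 1.0
```

    Soft shadows and AO reuse the same march: the minimum distance encountered along a secondary ray gives a cheap penumbra/occlusion estimate.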
     
    dogen, milk, eloyc and 1 other person like this.
  18. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    For alternative compute-based rendering, it might be interesting to have a look at ComputeBench 2.0.
    I bumped into it recently, and it is quite impressive.
     
    eloyc likes this.
  19. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Sebbbi, could you explain this to me or post a link to some reading material? All the cursory material on SDFs I've read uses distance functions, so what exactly do you mean by 64x64x64? I thought I was missing something when it comes to signed distance fields, and this seems to be one of those things.
     
  20. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    A volume texture of 64x64x64 resolution containing an SDF. Each voxel stores the signed distance to the nearest surface. The SDF changes roughly linearly (except close to non-planar surfaces), so you can trilinearly filter the SDF value at any (non-integer) point from the volume texture.
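    What the texture hardware does for free can be spelled out in a few lines. A sketch of trilinear filtering over a nested-list "volume texture" (the function name and axis order are my choice; coordinates must stay below the last lattice index):

```python
def trilinear(volume, x, y, z):
    """volume[i][j][k] holds a signed distance at integer lattice points;
    interpolate linearly along each axis for a fractional coordinate."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    def v(a, b, c):
        return volume[i + a][j + b][k + c]
    # interpolate along z, then y, then x
    c00 = v(0, 0, 0) * (1 - fz) + v(0, 0, 1) * fz
    c01 = v(0, 1, 0) * (1 - fz) + v(0, 1, 1) * fz
    c10 = v(1, 0, 0) * (1 - fz) + v(1, 0, 1) * fz
    c11 = v(1, 1, 0) * (1 - fz) + v(1, 1, 1) * fz
    c0 = c00 * (1 - fy) + c01 * fy
    c1 = c10 * (1 - fy) + c11 * fy
    return c0 * (1 - fx) + c1 * fx
```

    Because a true SDF is nearly linear between neighbouring voxels, this filtered value is a good approximation of the real distance at sub-voxel positions, which is exactly what the ray marcher and the vertex refinement rely on.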

    You can obviously also use the surface nets algorithm to convert an analytical distance function to a mesh. But you still need to define an SDF sampling resolution, since the surface nets algorithm (like marching cubes and dual contouring) samples the SDF on a uniform grid.

    Surface net construction doesn't need a perfect SDF, but the vertex position iteration does need the gradient (direction) to be close to correct around the level set (= around the surface, where the SDF is close to zero).

    A link:
    https://0fps.net/2012/07/12/smooth-voxel-terrain-part-2/

    He presents this problem and then a hacky solution. However, if you are converting an SDF to a mesh, you don't need hacky solutions, because you can calculate the gradient simply from the SDF's partial derivatives (identical to SDF normal vector math), then follow the gradient to the surface iteratively. I used a fixed step count (8) in my algorithm. The gradient descent was practically free in my GPU surface net generator (hardware trilinear filtering is key here). If you are converting some other form of volumetric data (for example binary voxels), you obviously can't use this easy solution; there simply isn't subvoxel-quality data available. You need some filter kernel (= fancy blur) to get rid of the stair-stepping (similar to image post-process AA).
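    That gradient-following refinement might look like this in scalar Python (central differences stand in for the hardware trilinear fetch; `refine_vertex` and its parameters are my names, with the fixed step count of 8 taken from the description above):

```python
def refine_vertex(sdf, p, steps=8, h=1e-3):
    """Move a point onto the zero level set: estimate the gradient with
    central differences, then step by -distance * gradient. For a true
    SDF the gradient has unit length, so one step lands near the
    surface; the remaining iterations mop up the residual error."""
    x, y, z = p
    for _ in range(steps):
        d = sdf(x, y, z)
        gx = (sdf(x + h, y, z) - sdf(x - h, y, z)) / (2 * h)
        gy = (sdf(x, y + h, z) - sdf(x, y - h, z)) / (2 * h)
        gz = (sdf(x, y, z + h) - sdf(x, y, z - h)) / (2 * h)
        x, y, z = x - d * gx, y - d * gy, z - d * gz
    return (x, y, z)
```

    Note this only works because the gradient direction is trustworthy near the level set; with binary voxel input there is no sub-voxel signal for the differences to pick up, which is exactly the limitation described above.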
     
    #20 sebbbi, Mar 22, 2017
    Last edited: Mar 22, 2017