Polygons, voxels, SDFs... what will our geometry be made of in the future?

eloic

Hi,

Maybe we could use this thread to comment on this subject, since creating a specific thread for each new tech or piece of news tends to disperse the main theme and makes it harder to keep track of the latest info. There may be other "the future of graphics" threads, but I would like to focus specifically on geometry (hopefully this won't suck :p ). Also, I'm tired of commenting in the Unlimited Detail thread... I see no enthusiasm there, and maybe it's not the best place to talk about other things, even though they may be related :D .

Just to start this off, I'm haunted by the persistent sensation that videogame graphics are hollow, a mere facade of planes disguising a void. :( I know this is quite personal, but do you really think polygons are here to stay? Even though polygons are now the "easy" way, do you think the advantages of other approaches can outweigh that? Are "volumetric" solutions really better for handling destruction, crafting, and fluids?

Do you think some of these new techniques are worth investing in? Maybe their implementation is harder because hardware is clearly not built with them in mind, the same way we didn't at first have dedicated hardware to process polygons as we do now. So maybe the flaws lie not in the approaches per se, but in the hardware?

If polygons are the way to go (because, you know, we've been doing them for decades now, and they keep getting better), why are so many people experimenting with alternative solutions? And why are these people mainly isolated programmers rather than big studios (with the exception of Media Molecule, maybe)?

They all seem like fads; nothing is truly catching on and making us all think the industry is going to change, the way it did when polygons appeared on the scene.

Opinions? Thank you!
 
Is this exclusively about polygons vs volumetric?
No... I guess. I guess this is polygons vs whatever other solutions. Are you thinking about something specific?

Also, I don't think destruction is possible, or at least easy, with SDFs.
If I'm not mistaken, in Dreams you have powerful crafting tools, including one that lets you "subtract". Isn't that a method that could be used for destruction?
 
Voxels have been used for decades already in the field of medical visualization.
Basically because MR/CT/PET scanners provide you with the voxels directly.
4D CT/echo is also pretty common now for visualizing, e.g., a beating heart.


 
Voxels have been used for decades already in the field of medical visualization.
Basically because MR/CT/PET scanners provide you with the voxels directly.
4D CT/echo is also pretty common now for visualizing, e.g., a beating heart.


Yes, I know that voxels have been used for quite a while, but do you think that their use will replace the use of polygons in the future? Why/why not?

We have some notable examples in videogames, such as Outcast and No Man's Sky, but they seem just anecdotal.
 
You can add Comanche: Maximum Overkill and Delta Force 2 to your list of voxel-based games.
These games (and Outcast) are not actually voxel-based games. They are height-field ray marchers. One ray is cast per screen column (X). Tracing starts from the bottom of the screen (the last Y pixel). The ray moves along the camera ray (2D xy) and you sample the heightmap at each location. For each heightmap sample you calculate the projected Y coordinate (in screen space) and plot a pixel (when Y < prev). This is a bit similar to the Doom renderer. Camera roll is not supported by default, but you can simply implement it by rotating the generated image, or by filling screen space in a different order (arbitrary lines instead of vertical columns).
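To make that concrete, here's a rough CPU sketch of that kind of column-based heightfield renderer. The map size, camera parameters and projection scale below are made up for illustration; this is not the actual Comanche/Outcast code.

#include <cstdint>
#include <cmath>
#include <vector>
#include <algorithm>

// Sketch of a "voxel space" style heightfield ray marcher.
// One ray is marched per screen column (X). Nearer terrain is drawn first,
// and the topY watermark makes sure farther samples only fill pixels that
// are still uncovered (front-to-back occlusion).
const int MAP = 256, W = 320, H = 200;
std::vector<uint8_t> heightMap(MAP * MAP), colorMap(MAP * MAP), frame(W * H, 0);

uint8_t sampleHeight(float x, float y)
{
    int xi = (int)x & (MAP - 1), yi = (int)y & (MAP - 1);   // wrap around the map
    return heightMap[yi * MAP + xi];
}

void renderColumn(int screenX, float camX, float camY, float camZ, float angle)
{
    float fov = 1.0f;
    float rayAngle = angle + fov * ((screenX / (float)W) - 0.5f);
    float dx = std::cos(rayAngle), dy = std::sin(rayAngle);
    int topY = H;                                            // highest pixel drawn so far
    for (float dist = 1.0f; dist < 400.0f; dist += 1.0f)
    {
        float mx = camX + dx * dist, my = camY + dy * dist;
        float h = (float)sampleHeight(mx, my);
        // Project the terrain height at this distance to a screen row.
        int screenY = (int)((camZ - h) / dist * 120.0f + H * 0.5f);
        screenY = std::max(screenY, 0);
        for (int y = screenY; y < topY; ++y)                 // fill the newly visible span
            frame[y * W + screenX] = colorMap[((int)my & (MAP - 1)) * MAP + ((int)mx & (MAP - 1))];
        topY = std::min(topY, screenY);
        if (topY <= 0) break;                                // this column is fully covered
    }
}

int main()
{
    for (int x = 0; x < W; ++x)
        renderColumn(x, 128.0f, 128.0f, 80.0f, 0.0f);
}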

I have implemented similar terrain renderers in two Nokia N-Gage games (Pathway to Glory and PtG: Ikusa Islands). The N-Gage didn't have a GPU or a floating point unit, so this kind of terrain renderer was actually the most efficient one. Lately I have been wondering whether a similar terrain renderer would make sense on modern GPUs, since compute shaders allow a similar implementation. The biggest problem is that one thread per X column is pretty low parallelism for a modern GPU. 4K threads (at 4K resolution) isn't enough to reach good occupancy. You'd want at least 64K threads (on a modern 64 compute unit GPU).
 
Yes, I know that voxels have been used for quite a while, but do you think that their use will replace the use of polygons in the future? Why/why not?

We have some notable examples in videogames, such as Outcast and No Man's Sky, but they seem just anecdotal.

No, voxels will not replace polygons. In fact, they are complementary.
Also in medical visualization, both polygon and voxel objects can be rendered in the same scene and can interact with each other, e.g. for depth compositing, sculpting, ...
 
No... I guess. I guess this is polygons vs whatever other solutions. Are you thinking about something specific?
Higher order surfaces for one.
If I'm not mistaken, in Dreams you have powerful crafting tools, including one that lets you "subtract". Isn't that a method that could be used for destruction?
What about shattering an object into debris like an explosion?
 
These games (and Outcast) are not actually voxel-based games.
Oh, you're right, I remember now. Thanks for making it clear!

No, voxels will not replace polygons. In fact, they are complementary.
Do you think that polygons can be replaced by any other feasible technique?

Also, what is the advantage of using voxels or other techniques if we already have good ol' polygons?

I'm just trying to understand, if polygons are that good, why other methods?

Higher order surfaces for one.
Could you please elaborate? I haven't heard of it before.

What about shattering an object into debris like an explosion?
Well, I'm quite ignorant about this, but following my previous idea: if you are able to cut/subtract, why not cut the object into a lot of pieces and then apply physics to them?
 
Do you think that polygons can be replaced by any other feasible technique?

Also, what is the advantage of using voxels or other techniques if we already have good ol' polygons?

I'm just trying to understand, if polygons are that good, why other methods?

Polygons can be replaced by other techniques.
The first Nvidia GPU did not use polygons but quadratic surfaces...

With polygons a lot of things are possible, but for, e.g., volume rendering (see the video above) they are not the best choice.
 
Polygons, SDFs and voxels each have pros and cons. I think the future is being able to convert geometry into the most convenient representation for the task at hand. For one example, I'm currently building a geometry editor which uses an SDF as its primary representation but generates a triangle mesh from it for rendering. It's working out really well so far.
 
Well, I'm quite ignorant about this, but following my previous idea: if you are able to cut/subtract, why not cut the object into a lot of pieces and then apply physics to them?
So you have an SDF object, and you want to cut it into pieces. From what I understand (if it's possible; I'm still not sure), you'd have to evaluate the same object once for each piece and add the subtraction geometry to each evaluation. In other words, if it's possible, it's not cheap. You should read up on SDFs; they're pretty straightforward to understand.
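For context on the subtraction idea: on SDFs the boolean (CSG) operators are just min/max combinations, and "cutting" an object into pieces means re-evaluating the whole object against each cutter, which is where the cost comes from. A tiny sketch with made-up analytic primitives (this is not Dreams' actual method):

#include <cmath>
#include <algorithm>
#include <cstdio>

// Tiny sketch of CSG on signed distance functions.
// Union = min, intersection = max, subtraction (A minus B) = max(dA, -dB).
// Intersection/subtraction only give a conservative bound on the true
// distance in general, which is one reason stacking many cuts isn't free.

struct Vec3 { float x, y, z; };

float sdSphere(Vec3 p, float r)              // distance to a sphere at the origin
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - r;
}

float sdHalfSpace(Vec3 p, Vec3 n, float d)   // distance to a half-space (unit normal n)
{
    return p.x * n.x + p.y * n.y + p.z * n.z + d;
}

float opSubtract(float dA, float dB)  { return std::max(dA, -dB); }
float opIntersect(float dA, float dB) { return std::max(dA,  dB); }

// "Cutting" a sphere in two: each piece evaluates the original object again,
// combined with one side of the cutting plane.
float pieceLeft(Vec3 p)  { return opIntersect(sdSphere(p, 1.0f), sdHalfSpace(p, {1, 0, 0}, 0.0f)); }
float pieceRight(Vec3 p) { return opSubtract (sdSphere(p, 1.0f), sdHalfSpace(p, {1, 0, 0}, 0.0f)); }

int main()
{
    Vec3 q{0.5f, 0.0f, 0.0f};                // a point inside the right half of the sphere
    std::printf("left piece: %f, right piece: %f\n", pieceLeft(q), pieceRight(q));
}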

Polygons, SDFs and voxels each have pros and cons. I think the future is being able to convert geometry into the most convenient representation for the task at hand. For one example, I'm currently building a geometry editor which uses an SDF as its primary representation but generates a triangle mesh from it for rendering. It's working out really well so far.
How do you generate the trimesh from the SDF? Is there a particular method I could look up? I've only read a little about SDFs and would find this interesting.
 
How do you generate the trimesh from the SDF? Is there a particular method I could look up? I've only read a little about SDFs and would find this interesting.

The Marching Cubes algorithm is a good place to start; it's pretty straightforward to implement in a geometry or compute shader and gives decent results. It has limitations that mean it's not a great fit for my particular project though, so I'm using something developed in-house which is distantly related to the Surface Nets algorithm.

The forum won't let me post links because I haven't made enough posts here yet, but googling those terms should give you plenty to read up on.
 
The Marching Cubes algorithm is a good place to start; it's pretty straightforward to implement in a geometry or compute shader and gives decent results. It has limitations that mean it's not a great fit for my particular project though, so I'm using something developed in-house which is distantly related to the Surface Nets algorithm.
I wrote a compute shader implementation of the surface nets algorithm. Iterative refinement of vertex positions (move along the SDF gradient). Two passes: the first generates vertices (shared between connected faces) and the second generates faces. IIRC it takes around 0.02 ms (high-end PC GPU) to generate a mesh for a 64x64x64 SDF (output = around 10K triangles).

But we don't render triangles. Our renderer ray traces the SDFs directly. On a high-end PC, the primary ray tracing pass (1080p) takes only 0.3 ms. Secondary rays (soft shadows, AO, etc.) obviously take longer to render. 60 fps is definitely possible on current-gen consoles with an SDF ray tracer.
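For anyone wondering what "ray tracing SDFs directly" means in practice, the usual method is sphere tracing: step along the ray by the sampled distance until you land on the surface. A minimal analytic sketch (a toy one-sphere scene, not the actual renderer described above):

#include <cmath>
#include <cstdio>

// Minimal sphere-tracing sketch against an analytic SDF (a unit sphere).
// At each step the SDF value tells us the largest step we can safely take
// along the ray without crossing the surface.

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

float sceneSDF(Vec3 p)                       // unit sphere at the origin
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Returns the hit distance along the ray, or a negative value on a miss.
float sphereTrace(Vec3 origin, Vec3 dir, float maxDist)
{
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i)
    {
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 0.001f) return t;            // close enough: treat as a hit
        t += d;                              // safe step: nearest surface is d away
    }
    return -1.0f;
}

int main()
{
    float t = sphereTrace({0, 0, -3}, {0, 0, 1}, 100.0f);
    std::printf("hit at t = %f (expected ~2.0)\n", t);
}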
 
to generate a mesh for a 64x64x64 SDF (output = around 10K triangles).
Sebbbi, could you explain this to me or post a link to some reading materials? All the cursory materials on SDFs I've read use distance functions, so what exactly do you mean by 64x64x64? I thought I was missing something when it comes to signed distance fields, and this seems to be one of the things I'm missing.
 
Sebbbi, could you explain this to me or post a link to some reading materials? All the cursory materials on SDFs I've read use distance functions, so what exactly do you mean by 64x64x64? I thought I was missing something when it comes to signed distance fields, and this seems to be one of the things I'm missing.
A volume texture of 64x64x64 resolution containing an SDF. Each voxel stores the signed distance to the nearest surface. The SDF changes roughly linearly (except close to non-planar surfaces), so you can trilinearly filter the SDF value at any (non-integer) point from the volume texture.
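Roughly like this, as a hypothetical CPU-side sketch; on the GPU the volume texture's hardware trilinear filter does the blending in a single fetch:

#include <vector>
#include <cmath>
#include <algorithm>

// Sketch: a 64x64x64 grid of signed distances, sampled with manual
// trilinear filtering (the GPU equivalent is one filtered fetch from a
// volume texture).
const int N = 64;
std::vector<float> sdf(N * N * N);           // signed distance to the nearest surface per voxel

float voxel(int x, int y, int z)
{
    x = std::clamp(x, 0, N - 1);
    y = std::clamp(y, 0, N - 1);
    z = std::clamp(z, 0, N - 1);
    return sdf[(z * N + y) * N + x];
}

float sampleSDF(float x, float y, float z)   // coordinates in voxel units
{
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    float fx = x - x0, fy = y - y0, fz = z - z0;
    // Blend the 8 surrounding voxels (trilinear interpolation).
    float c00 = voxel(x0, y0,     z0    ) * (1 - fx) + voxel(x0 + 1, y0,     z0    ) * fx;
    float c10 = voxel(x0, y0 + 1, z0    ) * (1 - fx) + voxel(x0 + 1, y0 + 1, z0    ) * fx;
    float c01 = voxel(x0, y0,     z0 + 1) * (1 - fx) + voxel(x0 + 1, y0,     z0 + 1) * fx;
    float c11 = voxel(x0, y0 + 1, z0 + 1) * (1 - fx) + voxel(x0 + 1, y0 + 1, z0 + 1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}

int main()
{
    // Fill the volume with the SDF of a sphere of radius 20 at the center.
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x)
                sdf[(z * N + y) * N + x] =
                    std::sqrt(float((x - 32) * (x - 32) + (y - 32) * (y - 32) + (z - 32) * (z - 32))) - 20.0f;
    float d = sampleSDF(32.5f, 32.25f, 12.75f);  // filtered value between voxel centers
    (void)d;
}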

You can obviously also use the surface nets algorithm to convert an analytical distance function to a mesh. But you still need to define an SDF sampling resolution, since the surface nets algorithm (like marching cubes and dual contouring) samples the SDF using a uniform grid.

Surface nets construction doesn't need a perfect SDF. But the vertex position iteration requires that the gradients (directions) be close to correct around the level set (= around the surface, i.e., where the SDF is close to zero).

A link:
https://0fps.net/2012/07/12/smooth-voxel-terrain-part-2/

In Gibson’s original paper, she formulated the process of vertex placement as a type of global energy minimization and applied it to arbitrary smooth functions. Starting with some initial guess for the point on the surface (usually just the center of the box), her idea is to perturb it (using gradient descent) until it eventually hits the surface somewhere. She also adds a spring energy term to keep the surface nice and globally smooth. While this idea sounds pretty good in theory, in practice it can be a bit slow, and getting the balance between the energy terms just right is not always so easy.
He presents this problem and then a hacky solution. However, if you are converting an SDF to a mesh, you don't need hacky solutions, because you can calculate the gradient simply from the SDF's partial derivatives (identical to the SDF normal vector math). Then follow the gradient to the surface iteratively. I used a fixed step count (8) in my algorithm. The gradient descent was practically free in my GPU surface nets generator (hardware trilinear filtering is the key here). If you are converting some other form of volumetric data (for example binary voxels), then you obviously can't use this easy solution. There simply isn't subvoxel-quality data available. You need some filter kernel (= fancy blur) to get rid of the stair stepping (similar to image post-process AA).
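Here's a small sketch of that vertex refinement step, using an analytic SDF as a stand-in; in the GPU version the distance and the central-difference gradient come from trilinearly filtered volume texture fetches instead:

#include <cmath>
#include <cstdio>

// Sketch: push a surface-nets vertex onto the SDF level set by following
// the gradient. The gradient is estimated with central differences, which
// is the same math used for SDF normals.

struct Vec3 { float x, y, z; };

float sceneSDF(Vec3 p)                       // analytic stand-in for the sampled SDF (unit sphere)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

Vec3 gradient(Vec3 p)
{
    const float e = 0.01f;
    return {
        sceneSDF({p.x + e, p.y, p.z}) - sceneSDF({p.x - e, p.y, p.z}),
        sceneSDF({p.x, p.y + e, p.z}) - sceneSDF({p.x, p.y - e, p.z}),
        sceneSDF({p.x, p.y, p.z + e}) - sceneSDF({p.x, p.y, p.z - e})
    };
}

// Fixed iteration count, like the 8 steps mentioned above.
Vec3 refineVertex(Vec3 p, int steps = 8)
{
    for (int i = 0; i < steps; ++i)
    {
        float d = sceneSDF(p);
        Vec3 g = gradient(p);
        float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z) + 1e-6f;
        // Step along the normalized gradient by the signed distance,
        // moving toward the zero level set.
        p.x -= g.x / len * d;
        p.y -= g.y / len * d;
        p.z -= g.z / len * d;
    }
    return p;
}

int main()
{
    Vec3 v = refineVertex({0.4f, 0.3f, 0.2f});   // start at a cell-center guess
    std::printf("refined vertex: %f %f %f (should lie near the unit sphere)\n", v.x, v.y, v.z);
}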
 