Polygons, voxels, SDFs... what will our geometry be made of in the future?

I wrote a compute shader implementation of the surface nets algorithm, with iterative refinement of the vertex positions (move them along the SDF gradient). It runs in two passes: the first generates vertices (shared between connected faces) and the second generates faces. IIRC it takes around 0.02 ms (high-end PC GPU) to generate a mesh for a 64x64x64 SDF (output = around 10K triangles).
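
For anyone curious, the vertex refinement step boils down to something like this (a simplified CPU-side sketch of the general technique, not the actual shader; the names and iteration count are made up):

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

using Sdf = std::function<float(Vec3)>;

// Central-difference gradient of the distance field.
static Vec3 gradient(const Sdf& f, Vec3 p, float h = 0.01f) {
    return {
        (f({p.x + h, p.y, p.z}) - f({p.x - h, p.y, p.z})) / (2 * h),
        (f({p.x, p.y + h, p.z}) - f({p.x, p.y - h, p.z})) / (2 * h),
        (f({p.x, p.y, p.z + h}) - f({p.x, p.y, p.z - h})) / (2 * h),
    };
}

// Start from the cell centre and iteratively slide the vertex along the
// gradient towards the zero level set (the surface).
static Vec3 refineVertex(const Sdf& f, Vec3 cellCenter, int iterations = 4) {
    Vec3 p = cellCenter;
    for (int i = 0; i < iterations; ++i) {
        float d = f(p);                      // signed distance at the current guess
        Vec3 g = gradient(f, p);
        float len2 = g.x * g.x + g.y * g.y + g.z * g.z;
        if (len2 < 1e-8f) break;             // degenerate gradient, give up
        float inv = 1.0f / std::sqrt(len2);
        p = p - Vec3{g.x * inv, g.y * inv, g.z * inv} * d;  // step of length |d| towards the surface
    }
    return p;
}
```

The GPU version does the same thing per cell in a compute shader, with the distance values read from the volume instead of an analytic function.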

We're using an octree to store our SDF - we need the ability to have fine details in some areas, but using a fine-enough regular grid would take way too much memory. We're doing our meshing incrementally on the CPU rather than on the GPU at the moment, and only uploading the differences. I have a GPU implementation of Dual Marching Cubes which is looking promising: last time I measured, it was taking around 100 milliseconds to fully remesh an octree with about 1.4 million nodes on a GTX 1080. That's obviously still too slow for now, but I'm hoping that adding incremental remeshing support will make a big difference. There should also be some gains from improving the way I look up neighbouring nodes, so I'm cautiously optimistic.
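
To illustrate what "incremental meshing, only uploading the differences" means in practice, here is a minimal sketch assuming a flat array of leaf bricks with a dirty flag (all names, sizes and the commented-out upload call are made up for illustration):

```cpp
#include <cstdint>
#include <vector>

struct LeafNode {
    // 8^3 block of signed distances, quantised to 16 bits (illustrative layout).
    int16_t sdf[8 * 8 * 8];
    bool    dirty = false;
};

struct MeshChunk { std::vector<float> vertices; std::vector<uint32_t> indices; };

// An edit touches a small set of leaves: flag them instead of remeshing everything.
void applyEdit(std::vector<LeafNode>& leaves, const std::vector<size_t>& touched) {
    for (size_t i : touched) leaves[i].dirty = true;
}

// Each frame, remesh only the dirty leaves and upload just those chunks.
void remeshDirty(std::vector<LeafNode>& leaves, std::vector<MeshChunk>& chunks) {
    for (size_t i = 0; i < leaves.size(); ++i) {
        if (!leaves[i].dirty) continue;
        chunks[i] = MeshChunk{};            // placeholder: run surface nets / DMC on this leaf
        leaves[i].dirty = false;
        // uploadChunkToGpu(i, chunks[i]);  // only the difference crosses the bus
    }
}
```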

But we don't render triangles. Our renderer ray traces SDFs directly. On a high-end PC, the primary ray tracing pass (1080p) takes only 0.3 ms. Secondary rays (soft shadows, AO, etc.) obviously take longer to render. 60 fps is definitely possible on current-gen consoles with an SDF ray tracer.
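
For context, the core of a primary-ray pass over an SDF is sphere tracing: step along the ray by the sampled distance until you get close enough to the surface. A bare-bones textbook sketch (not the actual renderer; epsilon and iteration count are arbitrary):

```cpp
#include <functional>

struct Vec3 { float x, y, z; };
using Sdf = std::function<float(Vec3)>;

// March along the ray, stepping by the signed distance each time.
// Returns true and the hit distance tHit if the ray reaches the surface.
bool sphereTrace(const Sdf& f, Vec3 origin, Vec3 dir, float tMax, float& tHit) {
    const float epsilon = 1e-3f;   // "close enough" threshold
    float t = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i) {
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        float d = f(p);            // distance to the nearest surface
        if (d < epsilon) { tHit = t; return true; }
        t += d;                    // safe step: the sphere of radius d is empty
    }
    return false;                  // missed or ran out of iterations
}
```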

I'd love to get ray tracing working for our SDF - it'd eliminate the entire cost of remeshing! - but so far I haven't been able to make it fast enough. The main bottleneck is locating nodes in the octree. Our octree is often 13 or 14 levels deep (more in some cases), so every sample we take along a ray ends up requiring at least that many texture accesses. I'm sure my implementation was far from optimal, but I ran out of time to optimise it. Sticking to the octree will probably limit how much performance we can get, but it would be difficult for us to change this now. One thing I'd like to try, if I ever get the time, is to cache the 6 neighbour indices for each node; we could use that to avoid most of the top-down tree traversals, but populating the cache might itself be costly.
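
To make the bottleneck concrete: a top-down point lookup in a pointer-based octree looks roughly like this, and with a 13-14 level tree every SDF sample along the ray pays for that many dependent fetches (a simplified sketch, not the actual data layout):

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct OctreeNode {
    uint32_t firstChild;  // index of the first of 8 children, or kNoChild for a leaf
    uint32_t payload;     // leaf payload index (distance brick, etc.)
};

constexpr uint32_t kNoChild = 0xFFFFFFFFu;

// Descend from the root to the leaf containing point p (in [0,1)^3).
// Every level is one dependent memory/texture fetch - this is the cost
// a per-node neighbour cache would mostly avoid for coherent rays.
uint32_t findLeaf(const std::vector<OctreeNode>& nodes, Vec3 p) {
    uint32_t index = 0;                       // root
    float cx = 0.5f, cy = 0.5f, cz = 0.5f;    // current cell centre
    float half = 0.25f;
    while (nodes[index].firstChild != kNoChild) {
        uint32_t octant = (p.x >= cx ? 1u : 0u)
                        | (p.y >= cy ? 2u : 0u)
                        | (p.z >= cz ? 4u : 0u);
        index = nodes[index].firstChild + octant;   // dependent fetch per level
        cx += (p.x >= cx ? half : -half);
        cy += (p.y >= cy ? half : -half);
        cz += (p.z >= cz ? half : -half);
        half *= 0.5f;
    }
    return index;                              // leaf containing p
}
```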

I recently saw a preprint of a Eurographics paper titled "GPU Ray Tracing Using Irregular Grids", by Arsène Pérard-Gayot, Javor Kalojanov & Philipp Slusallek (not allowed to post links here yet, sorry). I'd like to try out their approach as well, at some point.
 
The forum won't let me post links because I haven't made enough posts here yet
If you post in this thread, your messages count x2, you know?

Could you please share some screenshots? :)

You need some filter kernel (= fancy blur) to get rid of the stair stepping
Unlimited Detail.
 
14 levels is pretty deep. Hierarchical sparse bitmaps (or b-trees or tries, whatever else people want to call them) are pretty good for storing SDF level sets. IIRC OpenVDB is configured to have 4-6 levels by default (http://www.openvdb.org/documentation/). Nodes have a bitmask of children and a child pointer; the child address is calculated from the mask with popcount (basically a bitfield prefix sum). A Judy array is a somewhat similar sparse data structure, but for 1D data: https://en.wikipedia.org/wiki/Judy_array. Modern fast tree structures tend to be quite flat and have cache-line-aligned node sizes with tightly packed data (as you want to use all the data you fetch into cache).
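
The mask + popcount addressing works roughly like this (a generic sketch of the technique, not OpenVDB's actual code): the node stores a bitmask of which children exist plus a pointer to a tightly packed child array, and a child's slot is the number of set bits below its index.

```cpp
#include <bit>        // std::popcount (C++20)
#include <cstdint>

struct SparseNode {
    uint64_t childMask;   // bit i set => child i exists (up to 64 children per node here)
    uint32_t firstChild;  // index of this node's first child in a packed array
};

// Returns the packed index of child slot `i`, or UINT32_MAX if that child is absent.
uint32_t childIndex(const SparseNode& node, uint32_t i) {
    uint64_t bit = 1ull << i;
    if ((node.childMask & bit) == 0) return 0xFFFFFFFFu;          // empty: nothing allocated
    uint32_t offset = std::popcount(node.childMask & (bit - 1));  // set bits below i = packed offset
    return node.firstChild + offset;
}
```

Only existing children take memory, and a single popcount instruction (cheap on both CPUs and GPUs) replaces any per-child pointer table.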

I also recommend this Remedy (Quantum Break) tech presentation (page 74+). It describes a volume data structure similar to OpenVDB, plus an interpolation method:
http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf

The easiest way to store an SDF level set on the GPU is to use 3D tiled resources. Unfortunately that's a DX 12.1 feature, only supported by the newest Nvidia (Maxwell2, Pascal) and Intel GPUs (Skylake, Kaby Lake). The page size is 64 KB, so with a 16-bit float SDF one page is 32x32x32 voxels. That's not optimal (a bit too large), but the upside is that trilinear (3D) hardware filtering works without any tricks, and the ray trace loop becomes very tight (sampler bound). You obviously still want to have some coarse mip levels for empty-space skipping, otherwise the sparse access pattern of the most detailed data will thrash the caches.
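
The page math checks out: 32x32x32 voxels x 2 bytes (fp16) = 64 KB, exactly one tile. The empty-space skipping then shapes the ray march like this (a rough sketch only; the sampler callbacks stand in for hardware trilinear fetches of a coarse, always-resident mip and the sparse fine level, and the thresholds are arbitrary):

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

// Placeholders for trilinear samples of the distance volume.
using VolumeSample = std::function<float(Vec3)>;

float traceWithEmptySpaceSkipping(const VolumeSample& sampleCoarse,
                                  const VolumeSample& sampleFine,
                                  Vec3 origin, Vec3 dir, float tMax) {
    const float epsilon     = 1e-3f;
    const float nearSurface = 0.05f;   // below this, the coarse mip is too blurry to trust
    float t = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i) {
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        float d = sampleCoarse(p);                  // cheap, cache-friendly fetch
        if (d > nearSurface) { t += d; continue; }  // skip empty space in big steps
        d = sampleFine(p);                          // only now touch the sparse fine pages
        if (d < epsilon) return t;                  // hit
        t += d;
    }
    return -1.0f;                                   // miss
}
```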

I will be discussing our tech in the future, but it's still too soon to spill the beans.
 
Yaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassssssssss! I swear that I screamed like a little girl, eyes wide open, glittering, when I updated the main page and I saw that you posted here. I knew it was the real deal, I could feel it. :cool:

I'm going to open the link. OMG, I'm so excited, hahaha! :D
 
Real-time Rendering of Animated Light Fields

We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive representation, we display the content in real-time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The cameras are placed with an optimization process so that the rendered data maximise coverage with minimum redundancy, depending on the lighting environment complexity. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for Virtual Reality applications. We detail a real-time rendering algorithm using multi-view ray casting and view dependent decompression. Compression rates of 150:1 and greater are demonstrated with quantitative analysis of image reconstruction quality and performance.

Link to publication page: http://www.disneyresearch.com/real-ti...
 
Well, that's what they say on Steam and other places.
"Next gen signed distance field (SDF) ray-tracing and morphing tech"

Plus, they talk about it a little here.
Wow, amazing! However, I heard "voxels", not SDFs.

At any rate, amazing stuff. :) I guess some of the things they do would be harder to achieve with the same effect using polygons.
 
Many distance field ray tracers encode the volumes in a 3D texture, so voxels and SDF are not mutually exclusive.
Ok, I understand.

So... I've watched more Nex Machina videos. Bearing in mind that SDFs are being used in more than one game (Dreams... Claybook!), this is proof that it is possible and it works, so... why isn't this more mainstream? Why not ditch the blockiness of polygons and use this instead? On the other hand, I noticed that it may be harder to apply materials the same way we do with polygons + textures + shaders. So far, SDFs seem to be limited to a single color and material/lighting attributes.

Ideas? Thoughts?

At any rate, I'm superexcited by all the progress made in this field. So nice to forget about polygons! :D
 

SDFs aren't inherently limited to a single colour. It's pretty simple to have a colour field in addition to the distance field, for example (if you can spare the memory for it). Texturing SDFs is possible, but it can be awkward to generate UVs for them; something like Marco Tarini's "Volume-encoded UV-maps" (a paper from SIGGRAPH 2016) might be quite useful for this.
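
For example, a co-located colour field could look like this (just a sketch of the idea; the brick size and packing are made up for illustration):

```cpp
#include <array>
#include <cstdint>

// One brick of co-located fields: the distance drives the ray march,
// the colour is only read at the hit point for shading.
struct SdfBrick {
    static constexpr int N = 8;                       // 8x8x8 voxels per brick (illustrative)
    std::array<int16_t, N * N * N>  distance;         // quantised signed distance
    std::array<uint32_t, N * N * N> colour;           // packed RGBA8, ~4 extra bytes per voxel
};

// Fetch the colour stored at voxel (x, y, z) of a brick - same addressing
// as the distance value, so the hit point found by the ray march can be
// reused directly for the colour lookup.
uint32_t colourAt(const SdfBrick& brick, int x, int y, int z) {
    return brick.colour[(z * SdfBrick::N + y) * SdfBrick::N + x];
}
```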
 