Polygons, voxels, SDFs... what will our geometry be made of in the future?

I still don't find it "great"...but then again raytracing has spoiled me /shrugs
I understand what you mean. It's just that I don't think this allows for a proper comparison, since everything in that video is very stylised.
 
There are tons of Twitter messages from casualeffects (click the link). The authors of the paper have also been very active on Twitter.
That's maybe the problem here: some researchers discuss their recent work, eventually make some promises, the hype train takes notice and... whoo! - the usual pre-release hype practice has moved over from game dev to research.
I did not read the whole paper because the motivation section failed to tell me what problem they are trying to solve. Tracing octrees is nothing new, and they did not make clear what compression advantage they achieve over something like an SVO or DAG. Model quality is not impressive to me.
The Twitter posts don't help either - they show a huge model but only at a single scale, so it's not clear what the level of detail is. Promises of AI revolutionizing everything are just that: promises.
This sounds like a nothing burger to me. Hence my asking, because I'm surely missing something or getting it all wrong.
 

It's the breadcrumbs that lead into the future. It's rare that everything changes in one instant; it's more likely that things happen brick by brick. That specific paper is one brick building towards the future. The biggest thing in that paper for me is twofold. One is the compression achieved, which can be significant. The other is that the results are usable as training material for other neural nets. If you want to do some movie-scale rendering using a bigger farm, the compression can make work distribution and memory usage much nicer.

One could, for example, wonder how limited by mass storage the Unreal 5 engine is going to be. A neural representation, even just for static objects, could be a huge game changer.

 
It's the breadcrumbs that lead into the future. It's rare that everything changes in one instant; it's more likely that things happen brick by brick.
Sure, but usually the first steps are simple, so it's clear what is happening and easy to explain. E.g. Pac-Man is less complex than modern games.
Now, I may have missed referenced works from the paper, but use of ML in geometry processing is not entirely new to me. I've followed some papers about shape matching / manipulation, segmentation, or procedural generation, for example.
This paper, however, lacks any detailed explanation. It is basically just a report of their claimed results.
One could, for example, wonder how limited by mass storage the Unreal 5 engine is going to be.
It surely is entirely limited by storage, hence my interest.
Hence also my worries, because any solution for fast rendering, lighting, etc. likely has a strong dependency on the data structure and format, which may be very custom and hard to adapt.
If ML can achieve a seriously better compression / quality ratio than anything else, and the workflow is fine too, we should know sooner rather than later. But promises and claims without backing are more distracting than helpful.
Well, we'll see at the upcoming gfx conferences. Maybe it will clear up after some time...
 

Maybe you can try asking the authors on Twitter? They seem to be tweeting a lot.
 
Quite an improvement over the previous ultra-blocky engine revision...


 
Things have recently gotten quite wild with Lin's voxel engine.




John Lin said:
This scene is composed entirely of voxels. What started as an innocent attempt at increasing the view distance evolved into a 4 month quest to achieve not only that, but rewrite a brand new voxel engine from the ground up that would feature:

- An 8x (512 cubic) detail increase with animation support, high compression rates, and per-voxel material attributes
- Fast collision detection that physics and player walking will utilize
- Full fracture and mutation of objects
- Improved path traced global illumination that features 5 bounces from the sun, atmosphere and all emissive objects
- A powerful asset pipeline that can utilize Quixel Megascans, PlantFactory/PlantCatalog's detailed vegetation, and other highly detailed polygon models after being converted and processed into voxels (all 3 seen in this video)
- Ray-traced world generation that could easily place trees, grass, flowers, and other assets (eg. crystals) in designated areas: eg. in sunlight, on cave walls, in big open areas, and so on
- A 10x speedup in rendering over the previous engine

Most of these features are showcased in the video. Of course, there are some kinks to work out, like you can see object intersections occurring or being placed in odd places, and the path traced denoiser is lacking a spatial filter, but I got too excited with how this looks that I couldn't wait any longer to share. The effective world size here is 256K^3 and the island is generated once upon startup (like Terraria for instance), which takes about a minute. Things won't look this chaotic in whatever the final version is, as it's more a feature test than anything else.

To those who prefer the charm of the old engine, there is a low-detail mode that will be toggleable. More likely than not, this level of detail will be reserved for RTX graphics cards, and the low-detail mode will be the fall back for older ones. I don't have exact spec predictions yet, because there are so many tweaks to be made like level-of-detail threshold, lighting refresh rates, and the world size itself, so general performance requirements remain to be outlined.

Stay tuned for more updates, as in the future we'll soon see that this world is dynamic and not just there to be looked at. Rigid body physics will make a return, so will fluid dynamics eventually, and really everything seen here is only the beginning. Thank you to everyone who supports this project, and doesn't leave mean comments calling out how I keep saying the next video will be the official announcement. Soon(tm) ;)

Yeah, I know, yet another John Lin post from me, but, considering the topic, there simply isn't anything else this exciting, right now.
 
I also follow him on Twitter. It's amazing how the voxel density skyrocketed in these last months, while adding more features.
 

The "it works with animated vegetation" thing is something you always want to hear. Seriously interested in how he managed this. Sure, voxels are blocky, but maybe you can trace some sort of displacement map over them, or virtualize the underlying geo to get smooth geo, and suddenly you've got a generalized solution.
 
I can't be bothered to make an account, someone tell me about Nanite :p Can the software shader dynamically traverse the hierarchy and pick a LOD per tri?
 
Can the software shader dynamically traverse the hierarchy and pick a LOD per tri?
Not per tri, but per triangle cluster. Seems to be persistent threads doing top-down BVH traversal, culling bounding boxes against the view frustum (many views supported), occlusion culling based on the previous frame's Z pyramid, LOD selection based on pixel ratio, then appending to either the HW or SW rasterizer.
It's not that much code and it's pretty cool to look at. Though it's all typical compute shader fuzz with #ifdefs, so I'm not sure if I got all this right, lacking the overall context. Did not check the API / C++ stuff.
Still waiting for someone to do a frame analysis with a gfx debugger...
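To make that pipeline concrete, here is a minimal CPU-side sketch of the kind of per-cluster loop described above. All names, the 32 px SW/HW split, and the 1 px error threshold are my own assumptions for illustration, not Epic's actual code:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    center: tuple        # bounding-sphere center in view space (+z away from camera)
    radius: float        # bounding-sphere radius
    lod_error: float     # geometric error of this cluster's LOD level (world units)

def select_clusters(clusters, fov_scale, screen_height, error_px=1.0,
                    in_frustum=lambda c: True, occluded=lambda c: False):
    """Cull each cluster, pick clusters whose projected error is below one
    pixel, and route them to a software or hardware rasterizer bin by
    projected size (hypothetical 32 px threshold)."""
    sw_bin, hw_bin = [], []
    for c in clusters:
        if not in_frustum(c) or occluded(c):
            continue                     # frustum / HiZ occlusion culling
        depth = c.center[2]
        if depth <= 0:
            continue                     # behind the camera
        # projected size and projected geometric error, in pixels
        px_size = fov_scale * screen_height * c.radius / depth
        px_error = fov_scale * screen_height * c.lod_error / depth
        if px_error > error_px:
            continue                     # too coarse: a real traversal would descend to children
        (sw_bin if px_size < 32 else hw_bin).append(c)
    return sw_bin, hw_bin
```

In the real thing this runs as persistent compute threads over a hierarchy rather than a flat list, but the per-cluster tests are the same shape.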
 

Looks like he is simply voxelising polygon geometry on the fly into an object-aligned grid for the skinned vegetation, and then using raycasting similar to Teardown, with a world-aligned grid used to trace some rays for lighting, again like Teardown. In the earlier, lower-resolution videos you can see the artifacts you would expect from conservative voxel rasterisation of the polygons on the GPU. Similarly, for the modifiable terrain it looks like level set => marching cubes (or similar polygonisation) => GPU-rasterise polygons into voxel object chunks. For the terrain, the object's grid could be split into cached nested chunks for LOD. The more recent, higher-voxel-resolution videos seem to coincide with him upgrading to a newer, much faster GPU, and he mentions an increase in expected GPU specs along with that.

If it is that, you could question whether it would simply be better to draw the polygons :p Unless you were specifically going for that low-resolution voxel look like in his earlier videos, for stylistic reasons.
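The "voxelise polygons on the fly" idea is simple enough to sketch. This is a crude scatter version (sample points across each triangle and mark the cells they land in) rather than the conservative GPU rasterisation he presumably uses; everything here is my own illustration, not his engine:

```python
import numpy as np

def voxelize_triangle(v0, v1, v2, grid, cell=1.0, samples=8):
    """Mark every grid cell touched by barycentric sample points on the
    triangle (v0, v1, v2). `grid` is a set of integer (x, y, z) cell
    indices representing the object-aligned voxel grid. Re-running this
    each frame on skinned vertices re-voxelizes animated geometry."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    for i in range(samples + 1):
        for j in range(samples + 1 - i):
            a, b = i / samples, j / samples       # barycentric coords, a + b <= 1
            p = v0 + a * (v1 - v0) + b * (v2 - v0)
            grid.add(tuple((p / cell).astype(int)))
```

A real implementation would rasterize conservatively (so thin triangles can't slip between samples) and write material attributes per voxel, but the data flow - polygons in, occupied cells out, every frame - is the same.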
 

Yeh, from a quick skim of the Unreal 5 documentation it seems to be hierarchical LOD clusters of triangles with some precomputed LOD stitching data (which is probably the most interesting bit, along with LOD selection). A compute shader culls the cluster hierarchy against a GPU software hierarchical Z buffer to minimise overdraw - of which there still seems to be a fair bit, which they specifically warn about in the documentation. Then a straightforward compute shader rasteriser for small polygons, and optionally traditional hardware rasterisation for larger ones. It's not a micropolygon renderer though, so the marketing has been a bit misleading, and despite being a nice progression it's not really cinematic-quality detail.

The LOD popping is not as bad as I thought, as maybe the temporal AA helps, but I still do notice the wobbling across everything due to it not being close to pixel level detail. To be fair, though, I am automatically looking for those artifacts, so I don't know if the average person will even notice. I guess the difficulty in supporting skinned meshes currently is more due to LOD selection problems.

Lighting is done with distance fields per object, or RTX style, or axis-aligned box cards, with a massive virtual shadow map. It's interesting that the mismatch between the lighting representation and the drawn geometry more or less works out ok (there are visible artifacts from it), but they do warn of light leaking issues in the documentation. It's all a pretty vanilla progression of progressive mesh clusters in GPU compute (a natural step from stuff we saw in the last Dragon Age game), which is fine and makes sense as it maps well across the various platforms they have to support. It does seem like you could get the same or better by re-implementing it with mesh shaders, though.

From a quick play with the demo in the editor on my PC, I am not sure if there is a setting to increase the detail, as close up on my RTX 3090 I get the image below. But it shouldn't be an inherent limitation of the technique, more the data size you decide to go with - though it was a 100 GB install of data for the demo :p

polys.jpg
 
Part of the problem with mesh shaders, I suppose, is the lack of direct support on PS5, which kind of kills that for cross-platform games this entire generation, unless two implementations are done and one in software, like Nanite, just to run on PS5.

As for sub pixel detail, something like LEADR mapping should work on Nanite as well as anything else.
 
with some precomputed LOD stitching data (which is probably the most interesting bit along with LOD selection)
Yeah. I guess writing the preprocessing tool is more work than writing the traversal / raster shaders.
I did not notice how they handle variable resolution along cluster boundaries to prevent cracks in the compute shaders. The way the LOD switches over islands of clusters hints that they maybe need only one permutation of geometry per cluster. That's the most interesting part; it seems clever, and is the main innovation here.
I still do notice the wobbling across everything due to it not being close to pixel level detail.
For me, triangles rarely get down to a size of 1 pixel either. I even started to doubt a SW rasterizer would be necessary. But maybe it's just adapting to my older GPU (Vega 56), and IMO we do not really need subpixel triangles.

Lighting is done with distance fields per objects or RTX style or axis aligned box cards
I assume the box cards are only used as a radiance cache.
Traced hit points on the surface project along the normal onto the cards, fetch cached irradiance from texels on those cards, and calculate radiance with the material.
When shading, they do the same for the point on the surface.

Because this boils down to triplanar mapping of a box, detailed / concave surface sections have some error - multiple surfaces might share the same cached irradiance texels, and the resolution of the cache might have an inconsistent area relation to the surface.
But this does not explain why I was not able to get realistic results for a Cornell Box scene, where this simple mapping should be perfect. This still puzzles me. Even if they blur radiance texels with their neighbors, the results should be better. It's just flat lighting I got.
But if the geometry has some detail and isn't flat itself, results often look photorealistic and fine.

Did not look at the code here, so I'm just guessing. But I don't think they ever trace against those cards. Their geometry should not affect the lighting directly. The main limitation I expect here is that we need to split large complex meshes into convex parts.
Oh - I just realized: if we do so, cards are a better fit. But then they also had to blend multiple cards to prevent seams from our splitting. So maybe the flatness I criticize is partially related to such blending of overlapping card boxes.
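To show what I mean by the triplanar mapping and the blending: here is a toy sketch of fetching cached irradiance from a card box. Everything here (the scalar per-face caches, the weighting) is my own guesswork about the idea, not Lumen's actual code - the point is that the blend across faces is exactly where fine detail can get averaged away:

```python
import numpy as np

def card_irradiance(point, normal, face_caches, box_min, box_max, res):
    """Project a shading point onto the axis-aligned card box, weight each
    face by how much the normal points along its axis, and blend the cached
    texels. `face_caches` maps (axis, side) -> res x res scalar arrays."""
    n = np.abs(np.asarray(normal, float))
    w = n / n.sum()                            # triplanar blend weights
    uvw = (np.asarray(point, float) - box_min) / (np.asarray(box_max, float) - box_min)
    total = 0.0
    for axis in range(3):
        u_axes = [a for a in range(3) if a != axis]
        side = 1 if normal[axis] >= 0 else 0   # +face or -face along this axis
        u = min(int(uvw[u_axes[0]] * res), res - 1)
        v = min(int(uvw[u_axes[1]] * res), res - 1)
        total += w[axis] * face_caches[(axis, side)][u, v]
    return total
```

Note that a concave surface and a surface occluding it project onto the same texel, and any blending across overlapping card boxes averages further - which would fit the flat look I complained about.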

RTX style or axis aligned box cards
I assumed they would do something similar to NV's DDGI probe grids (I guess you mean that by RTX), but no. They use surface caches, not a volume grid of probes. A surface cache is more efficient and accurate, but more complex to implement.
If I'm right, then I don't know how they do volumetric lighting for things which don't have that card cage around them. There might be a probe grid too, to support this.
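For contrast, the probe-grid alternative is just a trilinear blend of the 8 surrounding probes - trivially queryable anywhere in the volume, which is why it handles fog and card-less objects so easily. A sketch with scalar probes for simplicity (real DDGI stores directional irradiance plus visibility per probe):

```python
import numpy as np

def probe_irradiance(p, probes, origin, spacing):
    """Trilinearly interpolate a dense 3-D grid of scalar probe values at
    world position `p`. `probes[x, y, z]` is the irradiance at probe
    (origin + spacing * (x, y, z))."""
    g = (np.asarray(p, float) - origin) / spacing   # position in grid space
    i0 = np.floor(g).astype(int)                    # lower-corner probe index
    f = g - i0                                      # fractional offsets in the cell
    total = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                total += w * probes[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return total
```

The trade-off against the surface cache is visible right in the code: the query works at any point in space, but the resolution is fixed by the grid spacing rather than by surface area, and probes inside walls leak unless you add visibility weighting.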

It does seem like you could get the same or better by re-implementing it with mesh shaders, though.
I think the same. The traversal could be replaced by caching the selected LOD and allowing only one step up or down the hierarchy per frame. Though I guess they do this - it's quite an obvious optimization.
Compute raster only feels attractive to me if we splat single points. Triangles feel too divergent and cause either variable iteration counts or idle threads. (Though, as we see - it works.)
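The caching optimization I mean is tiny - per cluster, keep last frame's LOD and nudge it at most one level toward this frame's target instead of re-traversing from the root (my own sketch of the idea, not anything I found in their code):

```python
def step_lod(cached_lod, target_lod):
    """Move a cluster's cached LOD level at most one step per frame toward
    the freshly computed target level. Over a few frames this converges to
    the correct LOD without a full top-down traversal every frame, at the
    cost of briefly wrong detail after fast camera cuts."""
    if target_lod > cached_lod:
        return cached_lod + 1
    if target_lod < cached_lod:
        return cached_lod - 1
    return cached_lod
```

The one-step limit also acts as hysteresis, which should damp LOD flickering when a cluster sits right on a threshold.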

From a quick play with the demo in the editor on my PC I am not sure if there is a setting to increase the detail settings, as close up on my RTX3090 I get the below image.
Oh, seems my Vega 56 is not the cause then :) Maybe our HDDs are too slow, so detail is capped? Does not really make sense.
I did not download the demo, just a bunch of Quixel Nanite models to test it out. The detail of those assets is not 'unlimited'. I can still move close enough to give Bruce Dell some arguments :D
 
Yeah - see those seams. Also, this cache is noisy and of bad quality:
upload_2021-5-29_10-41-3.png
They must blur this like crazy for the final shading:
upload_2021-5-29_10-42-27.png
So this is where my expected AO-like detail gets lost, but due to the noise it never really existed in the first place. So the advantage of the surface cache over volume probes remains theoretical for them.
 

They have mentioned before that they blend tiling normal and albedo detail maps on top - I assume that's actually what's in the close-up screenshot I took, and their short-term solution. It gives things a very early-gen Unreal or Rage detail-texture feel up close.

I think they either conservatively set some max detail setting somewhere I couldn't find at a glance, or they had to limit the download size to not more than 100 GB, as the Samsung 980 Pro SSD I have should be more than enough. I tried some of the same Megascans assets they used in this demo (and the one before) in my own stuff over a year ago, and for at least some of them higher detail does exist in the source files. *Digs out old screenshot* for a bad example:

Base Profile Screenshot 2020.06.09 - 15.23.57.37.png

(I think my imported colours were screwed back then, hence the chocolatey look :p) Of course, if an artist scales a geometry instance up to a large enough size it will stretch out the detail anyway, but I don't think that's what we were seeing.
 