I understand what you mean. It's just that I don't think this is a case for proper comparison, since everything is very stylised in that video. I still don't find it "great"... but then again, raytracing has spoiled me /shrugs
That's maybe the problem here: some researchers discuss their recent work, eventually make some promises, the hype train takes notice and... whoo! - the usual pre-release hype practice has moved over from game dev to research. There is a ton of Twitter messages from casualeffects (click the link). The authors of the paper have also been very active on Twitter.
I did not read the whole paper, because the motivation section failed to tell me what problem they are trying to solve. Tracing octrees is nothing new, and they did not make clear what compression advantage they achieve over something like SVO or DAG. The model quality is not impressive to me.
The Twitter posts don't help either - they show a huge model, but only at a single scale, so it's not clear how much detail there actually is. Promises of AI revolutionizing everything are just that: promises.
This sounds like a nothing burger to me. Thus I ask, because I'm surely missing something or getting it all wrong.
Sure, but usually when you take some first steps, those are simple at first, so it's clear what is happening and easy to explain. E.g. Pac-Man is less complex than modern games. It's the breadcrumbs that lead into the future. It's rare that everything changes in one instant; it's more likely that things happen brick by brick.
It surely is entirely limited by storage, thus my interest. One could, for example, wonder how limited by mass storage space the Unreal 5 engine is going to be.
Now, I may have missed referenced works from the paper, but the use of ML in geometry processing is not entirely new to me. I have followed some papers about shape matching / manipulation, segmentation, or procedural generation, for example.
This paper, however, lacks any detailed explanation. It is basically just a report about their claimed results.
Thus also my worries: any solution for fast rendering, lighting etc. likely has a strong dependency on the data structure and format, which may be very custom and hard to adapt.
If ML can achieve a seriously better compression / quality ratio than anything else, and the workflow is fine too, we should know better early than late. But promises and claims without backing are more distracting than helpful.
Well, we'll see at the upcoming gfx conferences. Maybe it will clear up after some time...
John Lin said:
This scene is composed entirely of voxels. What started as an innocent attempt at increasing the view distance evolved into a 4-month quest to achieve not only that, but rewrite a brand new voxel engine from the ground up that would feature:
- An 8x (512 cubic) detail increase with animation support, high compression rates, and per-voxel material attributes
- Fast collision detection that physics and player walking will utilize
- Full fracture and mutation of objects
- Improved path traced global illumination that features 5 bounces from the sun, atmosphere and all emissive objects
- A powerful asset pipeline that can utilize Quixel Megascans, PlantFactory/PlantCatalog's detailed vegetation, and other highly detailed polygon models after being converted and processed into voxels (all 3 seen in this video)
- Ray-traced world generation that could easily place trees, grass, flowers, and other assets (eg. crystals) in designated areas: eg. in sunlight, on cave walls, in big open areas, and so on
- A 10x speedup in rendering over the previous engine
Most of these features are showcased in the video. Of course, there are some kinks to work out - you can see object intersections occurring or objects being placed in odd places, and the path traced denoiser is lacking a spatial filter - but I got so excited with how this looks that I couldn't wait any longer to share. The effective world size here is 256K^3, and the island is generated once upon startup (like Terraria, for instance), which takes about a minute. Things won't look this chaotic in whatever the final version is, as it's more a feature test than anything else.
To those who prefer the charm of the old engine, there is a low-detail mode that will be toggleable. More likely than not, this level of detail will be reserved for RTX graphics cards, and the low-detail mode will be the fallback for older ones. I don't have exact spec predictions yet, because there are so many tweaks to be made - like the level-of-detail threshold, lighting refresh rates, and the world size itself - so general performance requirements remain to be outlined.
Stay tuned for more updates; soon we'll see that this world is dynamic and not just there to be looked at. Rigid body physics will make a return, as will fluid dynamics eventually, and really everything seen here is only the beginning. Thank you to everyone who supports this project and doesn't leave mean comments calling out how I keep saying the next video will be the official announcement. Soon(tm)
I also follow him on Twitter. It's amazing how the voxel density has skyrocketed in these last months, while he keeps adding more features. Yeah, I know, yet another John Lin post from me, but, considering the topic, there simply isn't anything else this exciting right now.
"Can the software shader dynamically traverse the hierarchy and pick a LOD per tri?"

Not per tri, but per triangle cluster. Seems to be persistent threads doing top-down BVH traversal, culling bounding boxes against the view frustum (many views supported), occlusion culling based on the previous frame's Z pyramid, LOD selection based on pixel ratio, then appending to either HW or SW rasterizers.
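To make the "LOD selection based on pixel ratio" step concrete, here is a rough CPU-side sketch of one common way to do it: project a cluster's bounding sphere (and its simplification error) to screen space and accept the LOD once the error falls under a pixel. The struct, names, and error metric are my own assumptions for illustration, not the engine's actual shader code.

```cpp
#include <cmath>

struct Sphere { float x, y, z, r; };  // cluster bound in view space

// Approximate screen-space size of the bound, in pixels.
float projectedSizePx(const Sphere& s, float fovY, float viewportH) {
    float dist = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
    float d = (dist > s.r) ? dist - s.r : 1e-4f;  // clamp when camera is inside
    return viewportH * s.r / (d * std::tan(fovY * 0.5f));
}

// Accept this LOD when its simplification error projects to under ~1 pixel;
// otherwise the traversal would descend to a finer cluster.
bool errorBelowOnePixel(float geomError, Sphere bound, float fovY, float viewportH) {
    bound.r = geomError;  // project the error instead of the full bound
    return projectedSizePx(bound, fovY, viewportH) < 1.0f;
}
```

In a real persistent-threads kernel this test would run per cluster after the frustum and Z-pyramid tests, deciding between descending further or emitting the cluster to the HW/SW raster bins.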
The "it works with animated vegetation" thing is something you always want to hear. I'm severely interested in how he managed this. Sure, voxels are blocky, but maybe you can trace some sort of displacement map over them, or virtualize the underlying geo to get smooth geometry, and suddenly you've got a generalized solution.
It's not that much code, and pretty cool to look at. Though it's all typical compute shader fuzz with #ifdefs, so I'm not sure I got all this right, lacking the overall context. I did not check the API / C++ stuff.
Still waiting for someone to do a frame analysis with a gfx debugger...
"with some precomputed LOD stitching data (which is probably the most interesting bit along with LOD selection)"

Yeah. I guess building the preprocessing tool is more work than writing the traversal / raster shaders.
"I still do notice the wobbling across everything due to it not being close to pixel level detail."

For me, triangles rarely get anywhere near a size of 1 pixel either. I even started to doubt a SW rasterizer would be necessary. But maybe it's just adapting to my older GPU (Vega 56), and IMO we do not really need subpixel triangles.
"Lighting is done with distance fields per objects or RTX style or axis aligned box cards"

I assume the box cards are only used for a radiance cache.
"RTX style or axis aligned box cards"

I assumed they would do something similar to NV's DDGI probe grids (I guess that's what you mean by "RTX style"), but no. They use surface caches, not a volume grid of probes. A surface cache is more efficient and accurate, but more complex to implement.
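For contrast, the volume-probe alternative mentioned here boils down to a trilinear blend of the grid probes surrounding the shading point. A toy sketch of that lookup (scalar "irradiance" per probe, unit grid spacing, no visibility weighting; all names are mine, and real DDGI adds octahedral depth tests on top):

```cpp
#include <cmath>
#include <vector>

// v holds one scalar value per probe on an nx*ny grid layered in z,
// with unit spacing; (px, py, pz) must lie inside the grid.
float probeLookup(const std::vector<float>& v, int nx, int ny,
                  float px, float py, float pz) {
    int x0 = (int)std::floor(px), y0 = (int)std::floor(py), z0 = (int)std::floor(pz);
    float fx = px - x0, fy = py - y0, fz = pz - z0;
    auto at = [&](int x, int y, int z) { return v[(z * ny + y) * nx + x]; };
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    // Blend the 8 corner probes of the cell containing the shading point.
    float c00 = lerp(at(x0, y0, z0),         at(x0 + 1, y0, z0),         fx);
    float c10 = lerp(at(x0, y0 + 1, z0),     at(x0 + 1, y0 + 1, z0),     fx);
    float c01 = lerp(at(x0, y0, z0 + 1),     at(x0 + 1, y0, z0 + 1),     fx);
    float c11 = lerp(at(x0, y0 + 1, z0 + 1), at(x0 + 1, y0 + 1, z0 + 1), fx);
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}
```

The appeal of the grid is that this lookup is trivial and works anywhere in the volume; the surface cache trades that simplicity for storing radiance where geometry actually is.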
"It does seem like you could get the same or better with mesh shaders though, and even re-implement it with mesh shaders:"

I think the same. The traversal could be replaced with caching the selected LOD and allowing only one step up or down the hierarchy per frame. Though I guess they do this already - quite an obvious optimization.
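The caching idea above can be sketched in a few lines: remember each cluster's LOD from last frame and clamp the newly computed target to one level of movement per frame. The cache layout and names are my own assumptions, just to show the shape of it.

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>

struct LodCache {
    std::unordered_map<uint32_t, int> lastLod;  // cluster id -> LOD used last frame

    // Clamp the freshly computed target LOD so a cluster moves at most one
    // hierarchy level per frame, avoiding a full top-down re-traversal and
    // popping across several levels at once.
    int select(uint32_t clusterId, int targetLod) {
        auto it = lastLod.find(clusterId);
        int lod = (it == lastLod.end())
                      ? targetLod  // first sighting: take the target directly
                      : std::clamp(targetLod, it->second - 1, it->second + 1);
        lastLod[clusterId] = lod;
        return lod;
    }
};
```

The cost is that a cluster needs several frames to converge after a fast camera cut, which is presumably why a full traversal path would still be kept around.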
"From a quick play with the demo in the editor on my PC I am not sure if there is a setting to increase the detail settings, as close up on my RTX3090 I get the below image."

Oh, seems my Vega 56 is not the cause then. Maybe our HDDs are too slow, so detail is capped? Does not really make sense.
Yeah - see those seams. Also, this cache is noisy and of bad quality. Oh - I just realized: if we do so, cards are a better fit. But they also had to blend multiple cards to prevent seams from our splitting. So maybe the flatness I criticize is partially related to such blending of overlapping card boxes.