Polygons, voxels, SDFs... what will our geometry be made of in the future?

Discussion in 'Rendering Technology and APIs' started by eloyc, Mar 18, 2017.

  1. eloyc

    Veteran

    Joined:
    Jan 23, 2009
    Messages:
    2,551
    Likes Received:
    1,705
I understand what you mean. It's just that I don't think this makes for a proper comparison, since everything in that video is very stylised.
     
  2. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
That's maybe the problem here: some researchers discuss their recent work, eventually make some promises, the hype train takes notice and... whoo! - the usual pre-release hype practice has moved over from game dev to research.
I did not read the whole paper, because the motivation section failed to tell me what problem they are trying to solve. Tracing octrees is nothing new, and they did not make clear what compression advantage they achieve over something like SVO or DAG. Model quality is not impressive to me.
The Twitter posts don't help either - they show a huge model, but only at a single scale, so it's not clear what the detail level is. Promises of AI revolutionizing everything are just that: promises.
This sounds like a nothing burger to me. Hence I ask, because I'm certainly missing something or getting it all wrong.
     
  3. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
These are the breadcrumbs that lead into the future. It's rare that everything changes in one instant; it's more likely that things happen brick by brick. That specific paper is one brick building toward the future. The biggest thing in that paper for me is twofold. One is the compression achieved, which can be significant. The other important thing is that the results are usable as training material for other neural nets. If you want to do some movie-scale rendering on a bigger farm, the compression can make work distribution / memory usage much nicer.

One could, for example, wonder how limited by mass storage space Unreal Engine 5 is going to be. A neural representation, even just for static objects, could be a huge game changer.

     
    DegustatoR likes this.
  4. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
Sure, but usually first steps are simple, so it's clear what is happening and easy to explain. E.g. Pac-Man is less complex than modern games.
Now, I may have missed the works referenced in the paper, but the use of ML in geometry processing is not entirely new to me. I've followed some papers about shape matching / manipulation, segmentation, and procedural generation, for example.
This paper, however, lacks any detailed explanation. It is basically just a report of their claimed results.
It surely is entirely limited by storage, hence my interest.
Hence also my worries: any solution for fast rendering, lighting, etc. likely has a strong dependency on the data structure and format, which may be very custom and hard to adapt.
If ML can achieve a seriously better compression / quality ratio than anything else, and the workflow is fine too, we should know sooner rather than later. But promises and claims without backing are more distracting than helpful.
Well, we'll see at the upcoming gfx conferences. Maybe it will clear up after some time...
     
  5. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
Maybe you can try asking the authors on Twitter? They seem to be tweeting a lot.
     
  6. SlmDnk

    Regular

    Joined:
    Feb 9, 2002
    Messages:
    703
    Likes Received:
    568
    Quite an improvement over the previous ultra-blocky engine revision...



     
  7. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
It's more a question of having all these mm² and TFLOPs of matrix multipliers hanging around anyway.
     
  8. SlmDnk

    Regular

    Joined:
    Feb 9, 2002
    Messages:
    703
    Likes Received:
    568
  9. SlmDnk

    Regular

    Joined:
    Feb 9, 2002
    Messages:
    703
    Likes Received:
    568
    Things have recently gotten quite wild with Lin's voxel engine.







Yeah, I know, yet another John Lin post from me, but considering the topic, there simply isn't anything else this exciting right now.
     
    #229 SlmDnk, May 13, 2021
    Last edited: May 13, 2021
    Arwin, pharma, jlippo and 3 others like this.
  10. eloyc

    Veteran

    Joined:
    Jan 23, 2009
    Messages:
    2,551
    Likes Received:
    1,705
I also follow him on Twitter. It's amazing how the voxel density has skyrocketed these last few months, while he keeps adding more features.
     
    milk likes this.
  11. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    The "it works with animated vegetation" thing is something you always want to hear. Severely interested in how he managed this. Sure, voxels are blocky, but maybe you can trace some sort of displacement map over them or virtualize the underlying geo to get smooth geo and suddenly you've got a generalized solution.
     
    eloyc likes this.
  12. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
Cool to see implementations of different papers appearing on GitHub. Looks like Morgan McGuire has moved to Roblox.



This is the paper in question, which now has an implementation on GitHub.

     
    Scott_Arm, chris1515, Krteq and 5 others like this.
  13. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
I can't be bothered to make an account; someone tell me about Nanite :p Can the software shader dynamically traverse the hierarchy and pick a LOD per tri?
     
  14. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
Not per tri, but per triangle cluster. Seems like persistent threads do a top-down BVH traversal: culling bounding boxes against the view frustum (many views supported), occlusion culling based on the previous frame's Z pyramid, LOD selection based on pixel ratio, then appending to either the HW or SW rasterizer.
It's not that much code and pretty cool to look at. Though with all the typical compute shader fuzz of #ifdefs, I'm not sure I got all this right, lacking overall context. Did not check the API / C++ stuff.
Still waiting for someone to do a frame analysis with a gfx debugger...
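    A minimal CPU-side sketch of that traversal pattern (all names, thresholds, and the projection model here are made up for illustration, and the frustum / Z-pyramid occlusion tests are omitted - this is not Epic's code):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Cluster:
        center_z: float              # view-space depth of bounding sphere
        radius: float                # bounding-sphere radius
        children: list = field(default_factory=list)  # finer-LOD clusters

    def projected_size(c, focal=1000.0):
        # crude pinhole projection of the bounding sphere to pixels
        return focal * (2.0 * c.radius) / max(c.center_z, 1e-6)

    def select_clusters(root, pixel_budget=32.0, sw_threshold=8.0):
        """Top-down traversal: descend while a cluster is too big on
        screen, then route small clusters to the SW rasterizer list and
        larger ones to the HW rasterizer list. Frustum and occlusion
        culling would reject clusters before they reach this point."""
        sw, hw = [], []
        stack = [root]
        while stack:
            c = stack.pop()
            size = projected_size(c)
            if c.children and size > pixel_budget:
                stack.extend(c.children)   # refine to finer LOD
            elif size < sw_threshold:
                sw.append(c)               # tiny on screen -> compute raster
            else:
                hw.append(c)               # large enough -> hardware raster
        return sw, hw
    ```

    The persistent-threads part would replace the Python stack with a GPU work queue, but the refine-or-emit decision per cluster is the same.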
     
    milk, Jensen Krage, BRiT and 2 others like this.
  15. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
Looks like he is simply voxelising polygon geometry on the fly into an object-aligned grid for the skinned vegetation, and then using raycasting similar to Teardown, with a world-aligned grid used to trace some rays for lighting, again like Teardown. From the earlier lower-resolution videos you can see the artifacts you would expect from conservative voxel rasterisation of the polygons on the GPU. Similarly, for the modifiable terrain it looks like level set => marching cubes or similar polygonisation => GPU-rasterise polygons into voxel object chunks. For the terrain, the object's grid could be split into cached nested chunks for LOD. The more recent higher-voxel-resolution videos seem to coincide with him upgrading to a newer, much faster GPU, and he mentions an increase in expected GPU specs along with that.

If that's the case, you could question whether it would simply be better to draw the polygons :p Unless you were specifically going for that low-resolution voxel look of his earlier videos, for stylistic reasons.
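    The raycasting step can be sketched with the classic Amanatides-Woo grid DDA. A toy version (occupancy as a plain Python set of cells - not whatever Lin actually stores) that returns the first occupied voxel along a ray:

    ```python
    import math

    def raycast_grid(occupied, origin, direction, max_steps=64):
        """March a ray through a uniform voxel grid, visiting cells in
        front-to-back order; return the first occupied integer cell or
        None. `occupied` is a set of (x, y, z) tuples."""
        cell = [int(math.floor(c)) for c in origin]
        step = [1 if d > 0 else -1 for d in direction]

        def t_to_boundary(p, d, s):
            # ray parameter t at which the next cell boundary is crossed
            if d == 0:
                return math.inf
            boundary = math.floor(p) + (1 if s > 0 else 0)
            return (boundary - p) / d

        t_max = [t_to_boundary(p, d, s)
                 for p, d, s in zip(origin, direction, step)]
        t_delta = [abs(1.0 / d) if d != 0 else math.inf for d in direction]

        for _ in range(max_steps):
            if tuple(cell) in occupied:
                return tuple(cell)
            axis = t_max.index(min(t_max))   # cross the nearest boundary
            cell[axis] += step[axis]
            t_max[axis] += t_delta[axis]
        return None
    ```

    A real engine would run this per pixel in a compute shader over a bricked or hierarchical grid rather than a hash set, but the stepping logic is the same.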
     
    Ext3h and Frenetic Pony like this.
  16. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
Yeah, from a quick skim of the Unreal 5 documentation it seems to be hierarchical LOD clusters of triangles with some precomputed LOD stitching data (which is probably the most interesting bit, along with LOD selection). A compute shader culls the cluster hierarchy against a GPU software hierarchical Z buffer to minimise overdraw - of which there still seems to be a fair bit, which they specifically warn about in the documentation. Then a straightforward compute-shader rasteriser for small polygons, and optionally traditional hardware rasterisation for larger ones. It's not a micropolygon renderer though, so the marketing has been a bit misleading, and despite being a nice progression it's not really cinematic-quality detail.
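    The hierarchical Z test can be sketched like this (a toy max-depth pyramid where larger z means farther; real engines typically use reversed-Z with a min pyramid, and the mip-selection heuristic here is a made-up simplification):

    ```python
    def build_hzb(depth, levels):
        """Build a hierarchical Z pyramid from a square depth buffer
        (list of lists) by max-reduction, so each coarse texel stores
        the farthest depth in its footprint (conservative occluder)."""
        pyramid = [depth]
        for _ in range(levels):
            prev = pyramid[-1]
            n = len(prev) // 2
            pyramid.append([[max(prev[2*y][2*x], prev[2*y][2*x+1],
                                 prev[2*y+1][2*x], prev[2*y+1][2*x+1])
                             for x in range(n)] for y in range(n)])
        return pyramid

    def occluded(pyramid, rect, nearest_z):
        """rect = (x0, y0, x1, y1), inclusive texel coords at mip 0.
        Pick the mip where the rect spans at most ~2 texels per axis,
        then compare the cluster's nearest depth against the farthest
        occluder depth stored there."""
        x0, y0, x1, y1 = rect
        mip = 0
        while ((x1 >> mip) - (x0 >> mip) > 1
               or (y1 >> mip) - (y0 >> mip) > 1) and mip < len(pyramid) - 1:
            mip += 1
        level = pyramid[mip]
        zmax = max(level[y][x]
                   for y in range(y0 >> mip, (y1 >> mip) + 1)
                   for x in range(x0 >> mip, (x1 >> mip) + 1))
        return nearest_z > zmax   # strictly behind every stored occluder
    ```

    Because the reduction keeps the farthest depth, the test can only ever cull clusters that are truly hidden, at the cost of some overdraw from false negatives.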

The LOD popping is not as bad as I thought, maybe because the temporal AA helps, but I still notice the wobbling across everything, due to it not being close to pixel-level detail. To be fair, though, I am actively looking for those artifacts, so I don't know if the average person will even notice. I guess the current difficulty in supporting skinned meshes is more due to LOD selection problems.

Lighting is done with distance fields per object, or RTX style, or axis-aligned box cards, with a massive virtual shadow map. It's interesting that the mismatch between the lighting representation and the drawn geometry more or less works out OK (there are visible artifacts from it), but they do warn of light-leaking issues in the documentation. It's all a pretty vanilla progression of progressive mesh clusters in GPU compute (a natural step from stuff we saw in the last Dragon Age game), which is fine and makes sense, as it maps well across the various platforms they have to support. It does seem like you could get the same or better with mesh shaders, though, and even re-implement it with mesh shaders:


From a quick play with the demo in the editor on my PC, I am not sure if there is a setting to increase the detail, as close up on my RTX 3090 I get the image below. But it shouldn't be an inherent limitation of the technique, more the data size you decide to go with - though it was a 100 GB install of data for the demo :p

    polys.jpg
     
    iroboto, JoeJ and Frenetic Pony like this.
  17. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
Part of the problem with mesh shaders, I suppose, is the lack of direct support on PS5, which kind of kills that option for cross-platform games this entire generation, unless two implementations are done and one, in software like Nanite's, is there just to run on PS5.

As for sub-pixel detail, something like LEADR mapping should work on Nanite as well as on anything else.
     
  18. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
Yeah. I guess writing the preprocessing tool is more work than writing the traversal / raster shaders.
I did not see how they handle variable resolution along cluster boundaries to prevent cracks in the compute shaders. The way the LOD switches over islands of clusters hints that they may need only one permutation of geometry per cluster. That is the most interesting part; it seems clever and is the main innovation here.
For me, triangles rarely go down to a size of 1 pixel either. I even started to doubt a SW rasterizer would be necessary. But maybe it's just adapting to my older GPU (Vega 56), and IMO we do not really need subpixel triangles.

I assume the box cards are only used for a radiance cache:
traced hit points on the surface are projected along the normal onto the cards, cached irradiance is fetched from the texels on those cards, and radiance is calculated with the material.
When shading, they do the same for the point on the surface.

Because this boils down to triplanar mapping of a box, detailed / concave surface sections have some error - multiple surfaces might share the same cached irradiance texels, and the resolution of the cache might have an inconsistent area relation to the surface.
But this does not explain why I was not able to get realistic results for a Cornell box scene, where this simple mapping should be perfect. This still puzzles me. Even if they blur radiance texels with their neighbors, the results should be better; all I got was flat lighting.
But if the geometry has some detail and isn't flat itself, the results often look photorealistic and fine.

I did not look at the code here, so I'm just guessing. But I don't think they ever trace against those cards; their geometry should not affect the lighting directly. The main limitation I expect here is that we need to split large complex meshes into convex parts.
Oh - I just realized: if we do so, cards are a better fit. But then they'd also have to blend multiple cards to prevent seams from our splitting. So maybe the flatness I criticize is partially related to such blending of overlapping card boxes.
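    The projection described above is essentially a dominant-axis (triplanar) lookup. A toy version, under a hypothetical layout I'm assuming for illustration (unit box [0,1]^3, one NxN irradiance card per face, nearest-texel fetch):

    ```python
    def card_lookup(point, normal, cards):
        """Pick the box card facing the surface normal and project the
        point onto it along the dominant axis. `cards` maps face names
        ('+x', '-y', ...) to NxN grids of cached irradiance. Concave
        geometry sharing one card is exactly where the shared-texel
        errors mentioned above come from."""
        axis = max(range(3), key=lambda i: abs(normal[i]))  # dominant axis
        face = ('+' if normal[axis] > 0 else '-') + 'xyz'[axis]
        u, v = [point[i] for i in range(3) if i != axis]    # drop that axis
        tex = cards[face]
        n = len(tex)
        return tex[min(int(v * n), n - 1)][min(int(u * n), n - 1)]
    ```

    Blending several overlapping card boxes would average the results of multiple such lookups, which would indeed flatten out detail the way described above.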

I assumed they would do something similar to NV's DDGI probe grids (I guess that's what you mean by RTX style), but no: they use surface caches, not a volume grid of probes. A surface cache is more efficient and accurate, but more complex to implement.
If I'm right, then I don't know how they do volumetric lighting for things which don't have that card cage around them. There might be a probe grid too, to support this.

I think the same. The traversal could be replaced by caching the selected LOD and allowing only one step up or down the hierarchy per frame. Though I guess they already do this - it's quite an obvious optimization.
Compute raster only feels attractive to me if we splat single points. Triangles feel too divergent and cause either variable iteration counts or idle threads. (Though, as we see - it works.)
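    The incremental LOD scheme suggested above might look like this (hypothetical helper names; the point is only that each cluster moves at most one level per frame instead of re-traversing the whole hierarchy):

    ```python
    def step_lod(cached_lod, target_lod):
        """Move the cached per-cluster LOD at most one level toward the
        freshly computed target. Caps per-frame work and damps popping."""
        if target_lod > cached_lod:
            return cached_lod + 1
        if target_lod < cached_lod:
            return cached_lod - 1
        return cached_lod

    def converge(cached_lod, target_lod, frames):
        # after enough frames the cached LOD settles on the target
        for _ in range(frames):
            cached_lod = step_lod(cached_lod, target_lod)
        return cached_lod
    ```

    The trade-off is latency: a cluster whose screen size changes abruptly takes several frames to reach the right LOD, so a camera cut would still need a full traversal.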

Oh, seems my Vega 56 is not the cause then :) Maybe our HDDs are too slow, so detail is capped? Does not really make sense.
I did not download the demo, just a bunch of Quixel Nanite models to test it out. The detail of those assets is not 'unlimited'; I can still move close enough to give Bruce Dell some arguments :D
     
    NovelYork and Warrick like this.
  19. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
Yeah - see those seams. Also, this cache is noisy and of bad quality:
    upload_2021-5-29_10-41-3.png
    They must blur this like crazy for the final shading:
    upload_2021-5-29_10-42-27.png
So this is where my expected AO-like detail gets lost - though due to the noise it never really existed in the first place. So the advantage of a surface cache over volume probes remains theoretical for them.
     
    milk likes this.
  20. Warrick

    Newcomer

    Joined:
    Jun 5, 2003
    Messages:
    33
    Likes Received:
    37
    Location:
    Hong Kong
They have mentioned before blending tiling normal and albedo detail maps in on top - I assume that's actually what we see in the close-up screenshot I took, and their short-term solution. It gives a very early-gen Unreal or Rage detail-texture feel up close.

I think they either conservatively set some max detail setting somewhere I couldn't find at a glance, or they had to limit the download size to no more than 100 GB, as the Samsung 980 Pro SSD I have should be more than enough. I tried some of the same Megascans assets they used in this demo and the previous one in my own stuff over a year ago, and for some of them at least, higher detail does exist in the source files. *Digs out old screenshot* for a bad example:

    Base Profile Screenshot 2020.06.09 - 15.23.57.37.png

(I think my imported colours were screwed back then, hence the chocolatey look :p) Of course, if an artist scales a geometry instance up to a large enough size it will stretch out the detail anyway, but I don't think that's what we were seeing.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.