Direct3D Mesh Shaders

Discussion in 'Rendering Technology and APIs' started by DmitryKo, Jul 1, 2019.

  1. Frenetic Pony

    Frenetic Pony Regular

Lots of potential differences, but Epic hasn't said what it's doing. So whatever the difference is, it's generally speculation right now.
     
  2. Dampf

    Dampf Regular

WOW! They turned this into an MMO with next gen graphics!

    Incredible!
     
  3. jlippo

    jlippo Veteran

    Really depends on how you have coded the LoD and loading.

There should be nothing that prevents you from having meshlet-based LoD, which starts from different 'static' LoDs, or something different like having an SDF and evaluating it into objects at runtime.
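For illustration, meshlet-based LoD selection could be sketched like this. A minimal sketch only: the per-LoD error values, the `fov_scale` projection factor, and the one-pixel threshold are hypothetical example numbers, not taken from any shipping engine.

```python
# Hypothetical meshlet LoD selection: each cluster stores a world-space
# geometric error per LoD (LoD 0 = finest; errors grow with coarseness).
# Pick the coarsest LoD whose projected error stays under a pixel threshold.

def select_lod(lod_errors, distance, fov_scale, threshold_px=1.0):
    """lod_errors: world-space error per LoD, increasing with index.
    fov_scale: projection factor converting (world error / distance)
    into pixels. Returns the index of the coarsest acceptable LoD."""
    chosen = 0
    for lod, err in enumerate(lod_errors):
        screen_err = err * fov_scale / max(distance, 1e-6)
        if screen_err <= threshold_px:
            chosen = lod        # this coarser LoD is still sub-pixel
        else:
            break               # errors only grow from here on
    return chosen

# A cluster 100 units away: LoD 1's 0.04-unit error projects to 0.4 px,
# LoD 2's 0.16-unit error to 1.6 px, so LoD 1 is chosen.
lod = select_lod([0.01, 0.04, 0.16, 0.64], distance=100.0, fov_scale=1000.0)
```

The same test would run per meshlet cluster in an amplification/task shader, which is what makes the "start from different 'static' LoDs" idea cheap.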
     
    Last edited: May 25, 2021
  4. JoeJ

    JoeJ Veteran

    ... until you realize your ideas break RT support ;)
     
  5. jlippo

    jlippo Veteran

    RT does sound like it always needs special consideration at the moment.

BTW, how bad are custom intersections at the moment?
Perhaps one can RT the SDF directly. ;)
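Raymarching an SDF directly is simple to sketch; inside a DXR intersection shader the loop below would end in ReportHit() instead of returning a distance. The single-sphere scene and all constants here are made-up example values.

```python
# Minimal sphere tracing of an SDF, roughly what a custom (procedural
# primitive) intersection shader would do. Scene: one sphere at z = 5.

def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 - radius

def sphere_trace(origin, direction, sdf, t_max=100.0, eps=1e-4, max_steps=256):
    """March along the ray; each step advances by the SDF value, which is
    a safe lower bound on the distance to the nearest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:
            return t      # hit: the distance a hit shader would receive
        t += d
        if t > t_max:
            break
    return None           # miss

# A ray down +z from the origin hits the sphere's front face at t = 4.
hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere)
```

The per-ray step count is exactly the divergence concern raised below: neighboring rays can take very different numbers of iterations.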
     
  6. JoeJ

    JoeJ Veteran

Then you don't have LOD anymore. LOD means having multiple levels of detail. Your proposal may boil down to a single switch between two different methods, where RT HW works down to the 'coarse' meshlets, then custom solutions refine detail further.
This has two problems: 1. Missing a hierarchy of detail levels: how do you reduce detail below the fixed meshlet representation? 2. Switching between different methods of intersecting geometry within a ray adds too much divergence to be efficient.

So there are limitations, and those limits force you into bad compromises and only partial solutions.
I'm considering such options at the moment, but it seems better to wait for more flexibility... :/
     
    jlippo likes this.
  7. Ethatron

    Ethatron Regular Subscriber

    It's possible to evaluate the TLAS-leaf analytically, if you don't have or like triangles. I wonder when NURBS are (finally) coming back ... Mr. Gouraud's ideas are out of favor, aren't they?
     
  8. JoeJ

    JoeJ Veteran

I guess you'd have a hard time convincing artists to work with NURBS. Whenever this topic comes up, everybody seems to hate them. For good reasons, i think:
They can only model human-made stuff efficiently. Cars, guns, architecture. Not good for natural stuff like terrain or foliage. Subdiv modeling is a better compromise from the artist's view, and less of an 'engineering task' while working.
They can turn straight lines into curves, but they can't add topological detail, only displacement. Also, reducing detail of the control mesh is notoriously difficult to automate.

I don't think they'll come to games. Maybe for some military / sci-fi shooter setting, but it's likely not worth developing a complicated realtime NURBS engine just for that?
I also don't believe in realtime subdiv anymore, although with mesh shaders the recursion problem could be solved efficiently now, i guess.

I believe in 'import any crap at high detail, and convert it into whatever engine-specific optimized geometry we need', though this misses the easy compression options of parametric surfaces.
Epic is on a good track with exactly that promise. But while you no longer need to make manual LODs or bake normals, you will now spend even more time generating very high detail in the first place. Doesn't really reduce costs, i think.
What i want is to generate high detail mostly procedurally in the engine editor, not in external tools, which break a nondestructive workflow and can't shape things in relation to the game world.
So i'm currently working on this, as the volumetric geometry processing tools i made to generate the GI surfel hierarchy easily support things like CSG, geometry blending, or particle-based modeling. I hope it's worth the extra time, but it looks promising so far.
Output is a quad mesh, like from the remesher in ZBrush for example. That's an interesting geometry format because it supports seamless texturing and easy subdivision if needed, and instancing / blending of texture quads should help with compression.
     
  9. Ethatron

    Ethatron Regular Subscriber

    You only need to prove them wrong. ;)
I believe that if we pursue parameterizable surface representations, in the long term we will gravitate towards hybrids (very great paper!). You should not think, as a purist, that everything has to be made from one thing alone. Instead there will probably be layers of different expressions, suitable for each level of detail. Say, vector displacements on top of these hybrids, or procedural layers (basically all kinds of free analytic formulas). Parameterization is, AFAICS, the only way out of the data explosion and towards massive scalability (the gap between low end and high end is growing).
     
    Rodéric and JoeJ like this.
  10. Frenetic Pony

    Frenetic Pony Regular

Sounds complicated as hell and unwieldy. Besides, there are only two representations that matter: whether a solid continuous surface is likely to be wider than the sampled pixel (thus the sample frustum), or smaller. The first just needs some compact contiguous representation; SDFs are great here because they can just be mipmapped, and that's cool. The second representation might be a pre-computed statistical mapping to a volumetric representation of sub-pixel geometry. Then you can do whatever with that: raymarch and gather, raymarch and use Russian roulette for hits, etc.
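The "SDFs can just be mipmapped" part can be illustrated trivially. Since the stored values are exact distances, keeping every other grid point yields a coarser level that is still a valid distance field; a 1D sketch (a real engine would do this per 3D brick):

```python
def downsample_sdf(fine):
    """Build the next mip level of a distance grid by keeping every
    other sample. The retained values are still exact distances, so
    sphere tracing against the coarse level remains conservative
    (the grid spacing simply doubles)."""
    return fine[::2]

# 1D distance field around a point at index 8, then its next mip level.
fine = [abs(i - 8) * 1.0 for i in range(17)]
coarse = downsample_sdf(fine)
```

Fancier filters (averaging per 2x2x2 block) only approximate a distance field, which is why the plain subsampling view is a useful baseline.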

The question mostly becomes one of memory representation and compaction. But displacement and texture maps still seem popular with artists and for general workflow, and since you can reference the same material again and again, that's one way to do memory compaction. Maybe map all of a model's curvature that can be stored in 2D and bake it into displacement maps, leaving the base model lower entropy while having virtually lossless compression.
     
  11. Ethatron

    Ethatron Regular Subscriber

Things won't evolve towards simplicity. A 1.6 million line engine is complicated as hell² and very unwieldy. To me that's not an argument. The focus starts to shift towards reliability and consistency, not speed and hacks upon hacks upon hacks (preferably of the garbage sampling type). Angelo observes that we piled up so many approximations on top of one another that we lost the ability to understand where we're standing in regard to some "optimality". Look at what became of simple curved surface definitions: normal mapping, tangent space. It's so wrong and complicated and unwieldy to get precise results; there's a severe lack of control. It's not clear what happens when you skin a triangulated mesh with a semi-implicit tangent space. And all the linearities in the operations aren't helping either. The loss of correctness / controllability is fairly severe here.

    I don't understand that, can you explain? To me analytical methods within the pixel's cone do matter.

    I'm not a fan of point-sampling, signal quantization, piecewise linearity or brute force.

Right, that's what's going to offer scalability, and implicit continuous LODability. Regular displacement mapping won't cut it though, IMO; it should be vector-based. Sculpting is picking up these ideas / features, and they are the information source. Why throw it all down the discretization drain? In one way or another I think this is what Nanite is doing. From the looks of it, they should have a parametric surface representation to produce their continuous LOD, plus this and that on top.
     
  12. JoeJ

    JoeJ Veteran

Ok, i take it back. Ofc. nurbs can do plant branches and leaves, flowers etc. well. :)
Interesting paper. Personally i lack experience with nurbs modeling or CAD in general. Tried it using Rhino, but somehow found it too hard to grasp.
For my own modeling tools i started with bezier patches, but switched to Catmull-Clark later. For characters that's just more convenient and offers more freedom. For cars, nurbs would do better i guess.

Now, my argument is that parametric surfaces are fine for increasing detail, but they can't reduce it below the control mesh. But ofc. we can just switch to usual mesh models and reduce those for this purpose.

With procedural generation in mind, any surface representation sucks. It lacks a definition of volume, which is likely what we simulate; the surface is only a result of this process.
A surface is also hard to edit because adjacency is complicated to manage, filtering it is hard because the edge / patch layout is no longer ideal after changing the surface, and computers have a hard time optimizing this because they lack an understanding of shapes at all frequencies.
That's where particles make sense. Easy to simulate, then extract local curvature to get a detailed surface. Easy to change: just repeat the surface meshing process.

    It's useful because CSG or blending becomes easy, which is very hard using surface representation alone. Quality is restricted ofc., but can be increased using higher resolutions.
    But i'm not a fan of using SDF at runtime. Static, a lot of memory, and brute force. Although it's attractive to have volumetric shells on the surface, e.g. to add diffuse detail we see in the woods. Cases where displacement mapping won't suffice.
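The "CSG or blending becomes easy" point can be sketched with the well-known polynomial smooth minimum over two SDFs (the sphere positions and blend width k below are arbitrary example values):

```python
def sdf_sphere(p, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    return sum((p[i] - center[i]) ** 2 for i in range(3)) ** 0.5 - radius

def smin(a, b, k):
    # Polynomial smooth minimum: equals min(a, b) when |a - b| >= k,
    # otherwise dips below it, rounding the crease between the shapes.
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def blended_pair(p):
    # Smooth union of two unit spheres 1.5 units apart; a hard union
    # (plain min) would leave a sharp crease at the midplane x = 0.
    a = sdf_sphere(p, (-0.75, 0.0, 0.0), 1.0)
    b = sdf_sphere(p, (0.75, 0.0, 0.0), 1.0)
    return smin(a, b, 0.5)
```

Doing the same blend on two meshes means robust boolean surgery on the triangle soup, which is exactly the hard part the volumetric route avoids.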

Maybe it can help, but wouldn't this offload compression work to the artist, who then has to care about good parametrizations?
Thinking of nurbs, the engineering workflow really breaks the desired freedom and creativity. Though if modeling tools are good and give results faster, artists might be fine with it. (I'm not up to date with current tools here.)
I would not be optimistic about making good nurbs parameterizations automatically. Quad remeshing is already difficult, but nurbs seems too hard. Maybe if we had a format which allows less perfect definitions than something like nurbs.
     
    Frenetic Pony likes this.