No DX12 Software is Suitable for Benchmarking *spawn*

Discussion in 'Architecture and Products' started by trinibwoy, Jun 3, 2016.

  1. chris1515

    Legend Regular

    Joined:
    Jul 24, 2005
    Messages:
    6,114
    Likes Received:
    6,398
    Location:
    Barcelona Spain
    Nanite skips the mesh shader/primitive shader path for the majority of the Nanite workload, where triangles are pixel sized. Only triangles bigger than a pixel take the mesh shader/primitive shader path. Non-Nanite geometry probably uses mesh shaders, apart from hair rendering, which probably uses compute like Frostbite's solution.



    We are beginning to see special-case software rasterization solutions because current HW rasterization is inefficient with pixel or sub-pixel triangles (Nanite, or hair rendering in Frostbite). If they succeed in doing what they want and Nanite evolves to take over more and more of the rendering (animated characters, for example), it will shift more and more toward compute shading. They think the only thing that will be impossible to do with Nanite is vegetation.
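
    A minimal sketch of that size-based path split, in illustrative C++ with hypothetical names and threshold (not the UE5 API):

```cpp
// Illustrative: pick a raster path per triangle cluster from its projected
// size, in the spirit of what Epic describes for Nanite. All names and the
// threshold here are made up for the sketch.
#include <algorithm>

struct Vec2 { float x, y; };

enum class RasterPath { SoftwareCompute, HardwareMeshShader };

// Clusters whose triangles average around a pixel or less go to a
// compute-shader rasterizer; bigger triangles keep the mesh/primitive
// shader hardware path.
RasterPath pickRasterPath(Vec2 screenMin, Vec2 screenMax, int triangleCount) {
    float areaPx = (screenMax.x - screenMin.x) * (screenMax.y - screenMin.y);
    float pixelsPerTriangle =
        areaPx / static_cast<float>(std::max(triangleCount, 1));
    const float kThresholdPx = 2.0f;  // hypothetical tuning constant
    return pixelsPerTriangle < kThresholdPx ? RasterPath::SoftwareCompute
                                            : RasterPath::HardwareMeshShader;
}
```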

     
    #1421 chris1515, Feb 15, 2021
    Last edited: Feb 15, 2021
    DavidGraham and pjbliverpool like this.
  2. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,546
    Likes Received:
    2,890
    Location:
    Guess...
    Chris beat me to it but this is from the Digital Foundry article:

    https://www.eurogamer.net/articles/...eal-engine-5-playstation-5-tech-demo-analysis
     
  3. chris1515

    Legend Regular

    Joined:
    Jul 24, 2005
    Messages:
    6,114
    Likes Received:
    6,398
    Location:
    Barcelona Spain
    And they think they can evolve Nanite to be used for animated characters, and that the only thing where it will be impossible to use is vegetation. And we see with FIFA 21 that the same software rasterizing of lines for hair is a great solution for LOD0/1.
     
    pjbliverpool likes this.
  4. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    742
    Likes Received:
    439
    I think market fragmentation could be an issue with bringing mesh shaders to PC games.
     
  5. Dampf

    Newcomer

    Joined:
    Nov 21, 2020
    Messages:
    67
    Likes Received:
    143
    Keep in mind all those tweets and interviews were made in the context of the PS5 demo. So specifically on PS5, software rasterization might be more efficient.

    For PC and Xbox, that could be different, using mesh shading. Epic would be foolish not to use such a groundbreaking technology found in modern GPUs to its full potential in their next-gen engine.
     
    pharma and pjbliverpool like this.
  6. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    They are using primitive shaders and will use mesh shaders. It is possible that there will be h/w improvements in future GPUs which will help with handling small triangles too.

    And as for Lumen, it would be really weird for UE5 not to support RT when UE4 already does. It is possible, though, that they will use a compute based solution for GI instead of a h/w RT one.

    We should remember that a key aspect of UE is wide compatibility, which means that they will compromise and fall back from h/w solutions where possible.
     
    PSman1700, pjbliverpool and BRiT like this.
  7. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    233
    Likes Received:
    387
    I'm sure they will support both, because Nvidia is too big of a player to ignore. Nvidia and Epic have always worked together on integrating hardware support, so I'm not expecting that to change.
     
  8. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    Why Nvidia? RT and mesh shaders are supported by MS, Sony, AMD and Intel too. PS5 is the only one here which may not support mesh shaders fully and may use simpler primitive shaders instead, but I don't think that difference will mean much.
     
  9. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    233
    Likes Received:
    387
    I was referring to your "compute based GI vs hardware RT" comment. Nvidia will continue to work with Epic Games to ensure that their RT hardware is supported. They aren't going to drop hardware based RT support in favor of compute based GI/RT only, because, as I said, Nvidia is too big and has too much invested in it.
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    Compute based GI can be a better overall fit on this generation of h/w, with RT being used for other parts of rendering (shadows, reflections, etc.). Whether Lumen will be able to take advantage of RT h/w, or whether Epic will add a separate RT based GI solution for high-end PC GPUs, will be answered in time. I see nothing wrong with Lumen using compute only for now, and I doubt that Nv will see anything wrong either, considering that they have a pretty hefty advantage in compute at the moment.
     
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,207
    Likes Received:
    1,780
    Location:
    New York
    Yes, it’s useless as a comparison of old vs new. At best it’s a benchmark of mesh shader performance on competing architectures, but who knows how optimized it is for each arch.

    RT GI is also compute based. A better distinction might be RT vs volume based GI.
     
  12. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    233
    Likes Received:
    387
    Like I said, I think they will support both. Lumen has its own caveats at the moment which may not be suitable for many developers, depending on the kind of game/visuals they want to make.
     
  13. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    522
    Likes Received:
    759
    Modern GPUs already cull subpixel triangles at a faster pace than triangle setup. However, the HW has to be conservative, so it culls pretty late in the pipeline.
    Mesh and compute shaders still allow culling even faster, by rejecting a meshlet (a bunch of triangles) at once and early, instead of per triangle and late in HW.
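
    For illustration, here is roughly what a meshlet-level cone cull looks like; one dot product rejects the whole bundle before any per-triangle work. The struct and test follow the usual cluster-cone idea (as in meshoptimizer, for example); the names are made up:

```cpp
// Illustrative: cull a whole meshlet with one conservative cone test instead
// of per-triangle backface tests late in the hardware pipeline.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Meshlet {
    Vec3  center;      // bounding sphere center
    float radius;      // bounding sphere radius
    Vec3  coneAxis;    // average facing direction of the triangles (unit)
    float coneCutoff;  // cosine of the cone half-angle
};

// True when every triangle in the meshlet must face away from 'eye', so the
// whole bundle can be dropped at once.
bool coneCulled(const Meshlet& m, Vec3 eye) {
    Vec3 v = { m.center.x - eye.x, m.center.y - eye.y, m.center.z - eye.z };
    float dist = std::sqrt(dot(v, v));
    // Widen the test by the bounding radius: the cluster is not a point,
    // so the cull has to stay conservative.
    return dot(m.coneAxis, v) >= m.coneCutoff * dist + m.radius;
}
```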
     
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    RT GI here means that it's using the RT h/w in the GPU and thus performs per-pixel ray tracing. Most compute GI also uses various forms of tracing, just with much less precision.

    I think so too, and I also assume that Lumen is built for compatibility first for now, so that it can run on h/w without RT capabilities. Eventually they'll either upgrade it or add a separate h/w RT based GI solution.
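
    For a concrete picture of "tracing with much less precision": Lumen's software path traces signed distance fields, and a minimal sphere-tracing loop looks roughly like the sketch below (illustrative C++, toy scene, made-up constants, not engine code):

```cpp
// Illustrative sphere tracing against a signed distance field, the kind of
// lower-precision compute tracing discussed above.
#include <cmath>

struct Vec3 { float x, y, z; };

// Toy scene: signed distance to a unit sphere at the origin.
static float sceneSdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// March along the ray by the distance-field value. Precision is bounded by
// the hit epsilon and the iteration budget, unlike per-triangle
// intersection on RT hardware.
bool sphereTrace(Vec3 o, Vec3 d, float maxT, float& hitT) {
    float t = 0.0f;
    for (int i = 0; i < 64 && t < maxT; ++i) {       // small fixed budget
        Vec3 p = { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };
        float dist = sceneSdf(p);
        if (dist < 1e-3f) { hitT = t; return true; } // "close enough" hit
        t += dist;                                   // safe step size
    }
    return false;
}
```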
     
    Remij likes this.
  15. chris1515

    Legend Regular

    Joined:
    Jul 24, 2005
    Messages:
    6,114
    Likes Received:
    6,398
    Location:
    Barcelona Spain
    HW rasterizers aren't efficient for pixel or sub-pixel sized triangles. I saw a patent on the AMD side for a rasterizer that is efficient for micropolygons, but it is currently not used in any released AMD GPU.

    https://patents.google.com/patent/US10062206B2/en
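
    Part of the inefficiency is that pixel shading runs on 2x2 quads, so a one-pixel triangle still pays for four invocations (plus per-triangle setup). A compute-shader rasterizer can instead test coverage per pixel with edge functions; a minimal CPU-side sketch (illustrative only, assumes counter-clockwise winding):

```cpp
// Illustrative per-pixel edge-function rasterizer of the kind a compute
// shader can run for pixel-sized triangles, paying roughly one test per
// covered pixel instead of a full 2x2 quad.
#include <algorithm>
#include <cstdio>

struct P { float x, y; };

// Edge function: positive when c lies to the left of the directed edge a->b.
static float edge(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

void rasterizeTiny(P v0, P v1, P v2) {
    int minX = (int)std::min({v0.x, v1.x, v2.x});
    int maxX = (int)std::max({v0.x, v1.x, v2.x});
    int minY = (int)std::min({v0.y, v1.y, v2.y});
    int maxY = (int)std::max({v0.y, v1.y, v2.y});
    // For a pixel-sized triangle this bounding box covers about one pixel.
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            P c = { x + 0.5f, y + 0.5f };  // sample at the pixel center
            if (edge(v0, v1, c) >= 0 && edge(v1, v2, c) >= 0 &&
                edge(v2, v0, c) >= 0)
                std::printf("covered pixel (%d, %d)\n", x, y);
        }
}
```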
     
    #1435 chris1515, Feb 15, 2021
    Last edited: Feb 15, 2021
    iroboto likes this.
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,286
    Likes Received:
    1,551
    Location:
    London
    Why do you use the term "simpler" and what do you mean by "simpler"?
     
  17. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,618
    Location:
    msk.ru/spb.ru
    From what we know of primitive shaders, they seem to tackle only one part of what mesh shaders do. Thus they are "simpler".
     
    PSman1700 likes this.
  18. Digidi

    Regular Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    392
    Likes Received:
    222
    Golden times for a polygon nerd like me. 16 billion triangles! :lol:

     
    BRiT likes this.
  19. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    742
    Likes Received:
    439


    Interesting results here. Looks like Nvidia still has a relatively poor driver for low-level APIs. It’s a reversal of the DX11 situation we saw with AMD in the past.
     
  20. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    309
    Likes Received:
    348
    I guess primitive shaders might be 'simpler' in the sense that they and mesh shaders share a common subset of functionality, but beyond that, primitive shaders have the exclusive capability of mapping both the traditional geometry pipeline and the next generation geometry pipeline onto them, so there are arguably fewer hardware implementation redundancies on AMD HW. On NV HW, you straight up have two different geometry pipelines implemented in hardware ...
     
    iroboto and BRiT like this.