AMD Radeon RDNA2 Navi (RX 6800, 6800 XT, 6900 XT) [2020-10-28]

Discussion in 'Architecture and Products' started by BRiT, Oct 28, 2020.

  1. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,272
    Likes Received:
    1,529
    Location:
    London
    Unknown Soldier, Lightman and BRiT like this.
  2. SimBy

    Regular Newcomer

    Joined:
    Jun 21, 2008
    Messages:
    700
    Likes Received:
    391
    Yes but those changes don't require new hardware support.

    https://devblogs.microsoft.com/directx/dxr-1-1/

    Support
    None of these features specifically require new hardware. Existing DXR Tier 1.0 capable devices can support Tier 1.1 if the GPU vendor implements driver support.

    Reach out to GPU vendors for their timelines for hardware and drivers.
     
    Alexko and Unknown Soldier like this.
  3. Lurkmass

    Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    228
    Likes Received:
    226
    Until Intel ships a driver implementation, I don't think this is worth debating, and I don't imagine most people particularly care how Intel HW works anyway ...

    If more games do start using the proprietary ray tracing extension, then it's pretty much a sign that Vulkan will end as a failed project, since Khronos doesn't exactly have a good record of foresight with respect to resolving implementation-specific details ...
     
  4. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,057
    Likes Received:
    1,241
    Maybe we would have such a problem if there were really new features, but so far there's no sign of that.
    To me the API differences seem minor. Neither supports traversal shaders, streaming BVH or custom BVH builds.
    Also, in case one approach works much faster on NV but the other is faster on AMD, two code paths might be necessary.
    I don't think Turing will lack support anytime soon - worst case is probably some missing optimizations for either architecture.
     
    pharma likes this.
  5. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    3,640
    Likes Received:
    1,184
    Well, if MS has ratified DXR 1.0, then any update after that is likely 1.1 or 1.01. I can't see them making it DXR 2.0.

    Ugh! I missed reading a few of the posts above about 1.1.
     
    #605 Unknown Soldier, Nov 7, 2020
    Last edited: Nov 7, 2020
  6. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    3,640
    Likes Received:
    1,184
    Nvidia has always done their own thing as far as I remember. Back in the DX9 days, when the AMD 9700/9800 was kicking the crap out of the GeForce 5700, I remember Nvidia stalling on the DX9.0, 9.0a and 9.0b implementations and only really pushing 9.0c with developers when the GeForce 6x00 series came out, because that series was again competitive with AMD.

    When AMD came out with Mantle and Mantle was in beta, I don't remember any Nvidia card supporting Mantle; however, when Mantle became Vulkan, Nvidia then had support for the API.

    So them having their own API for RT is not surprising, but it is proprietary, and AMD cards won't be able to support any software that uses the Nvidia API. AMD and Intel could make their own APIs, but as you mentioned, it would then open up a can of worms, because not only would Vulkan be affected, DX Ultimate (DXR) could also potentially be affected. MS would not be happy.

    Vulkan would be less affected, I would think; they tend to just amend their roadmap with new features and move on.
     
    w0lfram, NightAntilli and BRiT like this.
  7. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,272
    Likes Received:
    1,529
    Location:
    London
    Why are traversal shaders required? Doesn't inline ray tracing do the same?

    Inline ray tracing effectively means using uber shaders (though specialised for inline ray tracing, not necessarily for all pixel shading). I remember that uber shaders were once considered evil. I'm not sure if they still leave a bad taste in the mouth for most developers.

    What does that mean?

    What customisations are missing from the current BVH build capabilities? Are you thinking of something that's neither defined as a primitive nor as a triangle-mesh based acceleration structure?

    It sounds like you have ideas for DXR 2.0. Have you seen discussion of these ideas?
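    For concreteness, inline ray tracing amounts to the shader stepping through candidate hits itself in a loop, rather than handing control to separate hit shaders. A toy Python sketch of that closest-hit loop (illustrative pseudocode only, not real HLSL/RayQuery code; all names here are made up):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a unit-direction ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def trace_inline(origin, direction, spheres):
    """Inline-style tracing: this one function iterates every candidate
    and keeps the closest hit itself, the way a RayQuery object is
    stepped inside a single shader."""
    closest = None
    for center, radius in spheres:
        t = ray_sphere(origin, direction, center, radius)
        if t is not None and (closest is None or t < closest):
            closest = t
    return closest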
     
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,159
    Likes Received:
    1,674
    Location:
    New York
    I guess I don’t understand why inline would be uniquely helpful to AMD and not anyone else. Is it a case of them having really good inline performance or really bad dynamic linking performance?
     
  9. Lurkmass

    Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    228
    Likes Received:
    226
    If we're going to let driver implementations dictate standards, then what even is the point of having a cooperative industry consortium in general? I think both D3D12 and Vulkan would be screwed in the same way regardless, since they'd de facto become anti-portable ...

    Corporations should operate on the principle of reciprocity when developing technical standards. You don't usually see Nvidia going out of its way to emulate AMD-specific functionality out of good faith, now do you? AMD refusing to implement a proprietary vendor extension is an example of the reality that vendors have a multilateral relationship with the industry. Demanding that they do otherwise would be solely one-sided, since it only benefits Nvidia ...
     
    ethernity and Unknown Soldier like this.
  10. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,159
    Likes Received:
    1,674
    Location:
    New York
    I don’t think AMD has any business supporting Nvidia’s extensions, and vice versa. That’s a recipe for chaos. Yes, it would be ideal if the early Vulkan RT games ran on AMD's hardware, but the reality is that Nvidia had hardware and software on the market before an industry standard existed. So either the devs patch in standard API support or the feature remains Nvidia-exclusive.

    I also don’t see how it benefits Nvidia if AMD supports a dead extension in a handful of games.
     
    xpea likes this.
  11. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    4,115
    Likes Received:
    3,239
    Intel stated they can use Nvidia's Vulkan RT extension if they want, so is it really proprietary? Apparently it might be more a factor of architecture and performance.
     
    PSman1700 likes this.
  12. pTmdfx

    Regular Newcomer

    Joined:
    May 27, 2014
    Messages:
    379
    Likes Received:
    338
    It is proprietary in control and is designed for their own hardware roadmap. But since it is merely an API contract, any vendor can still implement it. Pretty much like how CUDA is proprietary, yet we still have tools like AMD's HIP that more or less implement it.

    You could argue that if other vendors implement it to the point which a critical mass is reached, Nvidia could be forced not to mess it up for others, or risk dissatisfaction from ISVs.
     
    Man from Atlantis likes this.
  13. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,057
    Likes Received:
    1,241
    No, easy to confuse, but there's a big difference.
    Inline tracing means tracing from any shader stage, including compute. This has some advantages, e.g. parallel processing of results. But it also rules out potential future HW features like reordering, which shuffles rays for better coherency.
    ImgTec's RT GPU already has such a coherency engine, which suggests inline tracing will probably become secondary at some point. DXR 1.0 ray generation shaders could support such reordering with very few changes, so 1.1 feels more like a temporary need.
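    As a rough illustration of what such a reordering stage might do, here is a toy Python sketch that sorts rays into coarse direction buckets so neighbouring rays traverse similar parts of the scene. All names are made up; real hardware reordering works at a completely different level:

```python
def direction_key(direction, bits=4):
    """Quantize a unit direction into a coarse integer cell key."""
    q = 1 << bits
    return tuple(min(q - 1, int((d * 0.5 + 0.5) * q)) for d in direction)

def reorder_rays(rays):
    """Group rays (origin, direction) whose directions fall in the same
    cell, so adjacent rays tend to walk similar parts of the BVH."""
    return sorted(rays, key=lambda ray: direction_key(ray[1]))
```

    The point is only that coherency is recovered by sorting before traversal, not by changing the traversal itself.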

    Traversal shaders mean programmable traversal, which would allow things like stochastic LOD. It's listed as an optional future DXR feature, so I guessed there might be HW support in RDNA2 (or Ampere).
    Maybe there is such support (AMD's TMU patent would have allowed it), but it turned out too costly to be practical yet. IDK, just speculating; maybe Intel has it then...

    To me, using thousands of shader permutations and switching them for just a few pixels is just as evil. The main downside of an uber shader is that it wastes registers which go unused most of the time but cause a constant drop in occupancy.
    That's a problem, but we can mix the two approaches to get the best of both, until GPUs become flexible enough for things like dynamic register (or memory) allocation, if that ever happens.

    Yeah, I have some ideas. But it's difficult to get there on PC because of multiple vendors / different GPU generations.
    It's all about the BVH, but each vendor is free to figure out what works best for them, and the programmer doesn't even know what they use.
    Still, we have these problems, some of which are technically solvable right now on any HW:

    Streaming BVH: We have huge open worlds, but we can't stream the BVH for them from disk. Instead we have to stream geometry, build the BVH on the CPU and upload it. That's expensive and wasteful, and we have to do it during gameplay.
    One solution would be to generate the BVH during install / startup, save it to disk, and stream it next time. That way each vendor can keep its own format and future HW has no problem.
    That's doable, and I wonder why we don't have it already.
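    The install-time caching idea can be sketched in a few lines. This toy Python snippet just flattens fictional BVH nodes to a blob and back, standing in for whatever vendor-specific format a driver would actually write; the node layout here is entirely invented:

```python
import struct

# A toy "built BVH": flat list of nodes, each
# (min_x, min_y, min_z, max_x, max_y, max_z, left_child, right_child),
# with -1 marking an empty child slot.
NODE_FMT = "<6f2i"

def serialize_bvh(nodes):
    """Flatten built nodes into a blob that could be written to disk at
    install time and streamed back later instead of rebuilding."""
    blob = struct.pack("<I", len(nodes))
    for node in nodes:
        blob += struct.pack(NODE_FMT, *node)
    return blob

def deserialize_bvh(blob):
    """Reload the cached blob into the same node list."""
    (count,) = struct.unpack_from("<I", blob, 0)
    size = struct.calcsize(NODE_FMT)
    return [struct.unpack_from(NODE_FMT, blob, 4 + i * size)
            for i in range(count)]
```

    The hard part in practice is not the serialization but versioning: the cache must be keyed to the exact GPU and driver that produced it.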

    Custom BVH: We might already have a hierarchy for our game worlds, and we could use it to build the BVH faster / better than the driver, which knows nothing but the triangles.
    But to do this we would need an entire API for BVH building, and it's difficult to design one that works for every data structure that might come up in future HW. Vendors might prefer to keep theirs secret, and the feature probably wouldn't be widely used yet.
    It would be very interesting to extend this by combining progressive mesh ideas with a progressive BVH to support dynamic geometry and LOD. Currently, RT is pretty static.

    BVH access beyond RT: It would be cool if we could use the BVH for other things like region or proximity queries, e.g. for physics. Currently the only way to access the BVH is to shoot rays at it. Even if there is no HW for region queries, the BVH data is there, but sadly it's black-boxed.
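    As a toy illustration of such a region query (plain Python over a hypothetical dict-based BVH, nothing like the opaque driver format):

```python
def overlaps(a, b):
    """Axis-aligned box overlap test; a box is a (min, max) corner pair."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def region_query(node, region, out):
    """Collect every leaf primitive whose box overlaps `region` -- the
    kind of proximity query a physics system would want from the BVH,
    with no rays involved."""
    if not overlaps(node["box"], region):
        return
    if "prim" in node:          # leaf
        out.append(node["prim"])
    else:                       # internal node
        for child in node["children"]:
            region_query(child, region, out)
```

    The traversal is the same tree walk ray tracing uses; only the intersection test at each node differs, which is why exposing the data structure would make this cheap.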
     
    T2098, Ext3h, PSman1700 and 1 other person like this.
  14. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    631
    Likes Received:
    323
    Eh, open worlds don't actually seem as big a problem as stated. Insomniac has already said they cover the entire map in Spider-Man: Miles Morales, including tracing through the whole thing. Besides, as distance increases, the importance of tracing triangles decreases, if you trace triangles at all. Either way, other structures become more and more viable, and potentially faster, especially if you're doing things other than sharp reflections.

    I'm not sure I see DXR, or triangles for that matter, as particularly forward-looking or scalable tech. UE5 already has its new static geometry representation that only software-rasterizes triangles at the end stage, for whatever reason it even needs to do that. And judging by their stated performance at 1440p, they're already not as efficient as they could be. Implicit surface representations seem even more efficient; here's a fairly good recreation of the UE5 demo running in realtime on the PS4 thanks to Dreams:


    Even the lighting is similar. And considering the lighting in UE5 is represented by a separate data structure, while in Dreams it's unified, I'd say Dreams wins here as well: less memory, less code complexity. Sebbi is already making his own toy distance field renderer and already has a million instances of a million-poly model running at 10 fps, on an older Intel integrated GPU. With distance fields being so utterly tracing-friendly, the only thing missing from these sorts of renderers is animation, which triangle tracing has problems with anyway. And animation is already somewhat supported (see The Last of Us Part II's "capsule man" reflection primitives, etc.).
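    For reference, the tracing-friendliness comes from sphere tracing: march along the ray by exactly the distance the field guarantees to be empty. A minimal Python sketch (illustrative only, with a single analytic sphere standing in for a real scene field):

```python
import math

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere surface."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_t=100.0):
    """Step along the ray by the field value at the current point; the
    field guarantees that much space is empty, so the march converges on
    the surface without any explicit geometry."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: within eps of the surface
        t += d
        if t > max_t:
            break
    return None               # miss
```

    No BVH or triangle intersection is needed; the field itself is the acceleration structure, which is the property being praised above.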

    To bring it back around, I'm not sure AMD or Nvidia should actually concentrate that much on triangle tracing. In comparison it's just slow and has inherent inefficiencies.
     
    w0lfram likes this.
  15. chris1515

    Legend Regular

    Joined:
    Jul 24, 2005
    Messages:
    5,981
    Likes Received:
    6,102
    Location:
    Barcelona Spain
    PS5 does not use DXR. Its API probably has fewer limitations. MS talked about custom BVH on Xbox Series.
     
    pharma and Unknown Soldier like this.
  16. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    9,598
    Likes Received:
    3,715
    Location:
    Finland
    Turing supports DXR 1.1. RDNA2 goes beyond DXR 1.1, and that's exposed to Xbox devs, but there are no details on what's there that standard DXR doesn't cover.
     
    BRiT likes this.
  17. andermans

    Newcomer

    Joined:
    Sep 11, 2020
    Messages:
    24
    Likes Received:
    38
    I think inline raytracing is just fine when you only have a ray depth of 1, or if you don't branch out and there is good convergence among the rays. Once you do more bounces and branch out, the non-inline path is likely better, as it allows rebalancing shader work to achieve better convergence.
     
    w0lfram likes this.
  18. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,866
    Likes Received:
    6,807
    Having compliance doesn't mean the hardware to make it performant is present.
    AFAIK any DX12_0 GPU is capable of running DXR through compute (the initial "pure" compute path was developed for Volta).
     
  19. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,057
    Likes Received:
    1,241
    Depends on the scene. For a city scene, everything is discontinuous; you can just remove a house or car in the distance (but still need to build BVH as stuff comes in). For a nature scene that's harder, because one mountain blends continuously into the next.
    Did they state whether they stream or build the BVH for their models? On PS5, streaming BVH should be no problem, I guess.

    I don't think implicit surface rendering is efficient, because of code divergence. If you build your scene from SDF primitives like spheres, cubes, etc., you end up with branches handling many different primitives.
    I recently added only basic SDF primitives to my editor, and there are already 10-20 of them. Then there are different blending and CSG modes that amplify this divergence further, not to mention the various procedural materials you might use as well.
    And finally you have to search for the surface in the SDF.
    So I think all this is very interesting for new options in content creation and compression, but for rendering you might want to convert to a uniform surface representation.
    AFAIK Dreams does this too: they use SDFs to model and transfer the scene, but then convert the resulting surface to point clouds - the origin in SDF primitives no longer matters. I assume they don't even keep a low-res SDF for the whole scene at runtime.
    If that's true, Dreams is not very different from other renderers: rasterizing triangles and splatting points can be pretty much the same process. The main difference seems to be content creation.

    I don't have a problem with triangles being the major primitive, but the assumption that we can use static meshes for everything sucks. It makes little sense with a perspective projection - we need LOD.
    But if we added support for dynamic geometry to HW RT, plus some other things like reordering, the complexity would become very high on all ends, and it's difficult to find a HW solution that works for everything.
    The simplest solution would be stochastic LOD, so we could use discrete LODs. That's not cache-efficient, but it could be solved with a bit of reordering.
    So I'm still hoping for traversal shaders, or even better, an implementation of stochastic LOD in HW...
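    The stochastic LOD idea can be sketched simply: derive a continuous LOD from distance, then randomly round it per ray, so averaging many rays gives a smooth transition between discrete levels. A toy Python illustration (the distance-to-LOD mapping here is an arbitrary assumption, not anything from DXR):

```python
import math
import random

def stochastic_lod(distance, base_distance=10.0, rng=random):
    """Map distance to a continuous LOD, then randomly round it: each ray
    picks the lower or upper discrete level with probability equal to the
    fractional part, so the blend emerges statistically instead of via
    geomorphing or traversal-time interpolation."""
    continuous = max(0.0, math.log2(max(distance, base_distance) / base_distance))
    lower = int(continuous)
    frac = continuous - lower
    return lower + (1 if rng.random() < frac else 0)
```

    A traversal shader would apply exactly this kind of per-ray decision while walking the BVH, which is why the two features are mentioned together.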
     
    PSman1700 and pharma like this.
  20. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,272
    Likes Received:
    1,529
    Location:
    London
    It also means that the shader can generate rays in a loop, so a ray result can be tested to decide on another iteration, or new rays can be spawned.

    At about 1 trillion rays/s, how much sorting, and over what proportion of the rays, is "shuffling" going to be meaningful? Seems to me the cache hierarchy is the right place to worry about coherence...
     
    ethernity and BRiT like this.
Loading...

Share This Page

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.
Loading...