AMD: RDNA 3 Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by Jawed, Oct 28, 2020.

Tags:
  1. pTmdfx

    Regular

    Joined:
    May 27, 2014
    Messages:
    416
    Likes Received:
    379
    It is also a comparison rigged in favour of monoliths, once one considers that 3D packaging solutions can break the reticle limit, and that the economies of scale (for foundries) are strong once the first movers prove themselves and the flywheel gets going. Everyone across the spectrum wants cheap(er) 3D KGDs.

    Though Nvidia has large enough margins and a tight grip on key markets to swallow it for at least a generation, I suppose. :)

    (PVC is exclusive to the ICU, ahem, Intel Cinematic Universe. Let it be.)
     
    #1121 pTmdfx, Apr 11, 2022
    Last edited: Apr 11, 2022
    Lightman likes this.
  2. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    I solved this problem!

    [image attachment]
    It's actually quite a simple solution, I just needed a glass of Cabernet Sauvignon to lubricate the neurons...

    It amuses me that the red dots in the 2x GCD configuration are reminiscent of the stochastic arrangement of sample points in 4xMSAA of old :cool2:
     
    hoom, tsa1, Krteq and 6 others like this.
  3. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    Why do I feel like RDNA 3 Navi 31 is gonna be the next Radeon 9700/Pro (R300)? Maybe because it's been exactly two decades? Maybe I'm excited by the chiplet approach. Maybe I'm excited about the potential for an ultra high-end HBM3 halo product sometime in 2023. Maybe a lot of things.
     
    Lightman likes this.
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Was tessellation really that bad? It didn’t feel like a decade before AMD caught up. The funny thing with tessellation is that it’s hard to tell when games are using it and how much of it. It’s rarely available as an option in graphics settings. So is it omnipresent or is nobody using it?

    AMD is overdue for a win. They have a lot of ground to make up though. I just hope they come within striking distance on RT so they won’t have to keep promoting half baked implementations in AMD sponsored games.
     
  5. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,426
    Likes Received:
    909
    Tessellation performance never amounted to much outside of a tiny selection of GameWorks titles, which didn't run well on Nvidia hardware of the time either.
     
    #1125 techuse, Apr 12, 2022
    Last edited: Apr 12, 2022
    Krteq likes this.
  6. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    What's the matter with IHVs lobbying their own ideal implementations of ray traced effects to ISVs?



    Even Intel employees in the above thread tell every graphics programmer out there to avoid inline RT like the plague, which is somewhat ironic given that customizing graphics code is the nature of their profession ...

    No IHV really likes the idea of having to spend more die space just to implement multiple redundant hardware paths to get optimal performance in all of the different APIs ...
     
    trinibwoy, Silent_Buddha and BRiT like this.
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    That’s a fun thread . Although the poll isn’t really about inline vs callable shading. It’s about doing hit shading in the RT pipeline or in a separate compute shader after recording the hits. I’ve seen signs of the latter while profiling some apps.

    Intel’s motivation is obvious. Their sorting hardware will go to waste when doing inline RT and they lose an advantage over the competition. Both Nvidia and Microsoft recommend using callable hit shaders in the general case which also favors Intel.

    In the end it’s performance that matters. If developers want the freedom to write ubershaders and optimize for coherency themselves they have to prove they can make it fast on some/all hardware. If the result is that it’s slow on all hardware just to even the playing field that’s not a win for us.

    Perhaps but that’s not applicable to AMD is it? No implementation of RT is fast on their current hardware so they don’t really get to have an opinion…yet.
     
  8. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    Software rasterisation in UE5 is a nice wake up call. When you go really big you can do something magical.

    Just because you write an inline RT shader it doesn't mean you can't write stuff to memory.
     
  9. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    I think if APIs (geometry shaders, tessellation, etc.) don't fit a developer's usage patterns or requirements, then they'll fall out of use in spite of possible performance benefits. Developers could opt to use pixel shaders over compute shaders for the lighting pass in deferred renderers to take advantage of optimal tiling access patterns, render target compression, or the ability to use hardware VRS, but reality shows a different trend. Nanite doesn't use mesh shading to render its micropolygon meshes and in fact uses compute shaders, despite not having the rasterizer at its disposal. Just because current usage of APIs isn't optimal now doesn't mean that graphics programmers won't invent other ways in the future to show us otherwise ...

    It's too early to say which API will win out in terms of popularity among developers ...
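    The point about Nanite rasterizing micropolygons in compute can be sketched in a few lines. The following is a deliberately simplified, hypothetical software rasterizer (not Epic's actual code): a compute-shader-style path that tests pixel centers against triangle edge functions, touching exactly the pixels a triangle covers.

```python
# Minimal software rasterizer sketch (illustrative, not Nanite's actual code).
# A compute-shader-style raster path: per-pixel edge tests, no HW 2x2 quads.

def edge(ax, ay, bx, by, px, py):
    """Signed area term for edge a->b at point p; >= 0 means p is inside."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the set of pixel coords whose centers fall inside `tri`
    (vertices as (x, y) tuples, wound consistently)."""
    (ax, ay), (bx, by), (cx, cy) = tri
    x0, x1 = int(min(ax, bx, cx)), int(max(ax, bx, cx)) + 1
    y0, y1 = int(min(ay, by, cy)), int(max(ay, by, cy)) + 1
    covered = set()
    for y in range(max(y0, 0), min(y1, height)):
        for x in range(max(x0, 0), min(x1, width)):
            px, py = x + 0.5, y + 0.5  # sample at pixel center
            if (edge(ax, ay, bx, by, px, py) >= 0 and
                edge(bx, by, cx, cy, px, py) >= 0 and
                edge(cx, cy, ax, ay, px, py) >= 0):
                covered.add((x, y))
    return covered

# A "micropolygon": a triangle covering roughly one pixel. A HW rasterizer
# would still shade a full 2x2 quad for it; here we touch only the pixel hit.
print(rasterize(((2.0, 2.0), (3.0, 2.0), (2.0, 3.0)), 8, 8))  # -> {(2, 2)}
```

    A real compute rasterizer would also do depth with atomics and pack visibility into a 64-bit buffer, but the core idea above is why pixel-sized triangles can bypass the fixed-function path entirely.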
     
  10. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    There's no trouble for AMD; GPUs were scarce (availability problems, mining, etc.), so they could catch up to Intel and Nvidia with dedicated hw acceleration for ray tracing with RDNA3+. It's the consoles that missed the boat completely.
     
  11. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    It doesn't?
     
    PSman1700 likes this.
  12. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    It may use it to render meshes with larger triangle sizes (basically the same purpose primitive shaders serve on PS5 with Nanite), but its meshes with smaller triangle sizes are firmly software rasterized with compute shaders ...

    Epic Games are also working on a "programmable raster" feature which allows Nanite to support even more types of content, and chances are it's even more compute shaders ...

    There are all sorts of reasons APIs become infeasible over time outside of just performance. Content? Awkward usage patterns? Other requirements? Etc ...

    It's just too early to predict exactly how the future is going to look, because developers only have the faintest idea of what they're going to do, and we know even less. Therefore, it's too early for any one IHV to cater their hardware designs to a specific API ...
     
  13. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    I think it's a bit like the pixel/vertex/hw T&L days. Some things are better done in (dedicated) hw, like decompression or media encoding (Apple ProRes, AV1, etc.), until 'normal' hw becomes fast enough. For now I think things can be done in hybrid, but hw will keep its advantages.
     
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    So it does then? I feel that the s/w rasterization portion here doesn't have much to do with how the triangles themselves are handled / rendered, and the latter can be done with mesh shaders, especially as mesh shaders are just a relatively minor optimization of compute shaders in relation to geometry processing.
     
    pharma and PSman1700 like this.
  15. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    They could very well use mesh shading to render these dense micropolys, but Epic Games doesn't do this because they found this type of content is not a good fit for the HW rasterizer, so they use compute shaders instead ...

    How can one absolutely know what developers are going to do with ray tracing in the future and how that'll affect each IHV's HW implementation of ray tracing?
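    A quick back-of-envelope model shows why pixel-sized triangles are a poor fit for the HW rasterizer. Pixel shaders run in 2x2 quads, so a triangle covering very few pixels still pays for whole quads. The function below is a crude illustrative model (snapping a triangle's bounding box out to whole quads), not any vendor's exact behavior.

```python
# Back-of-envelope: why micropolygons waste HW rasterizer throughput.
# Pixel shaders execute in 2x2 quads, so tiny triangles still pay for
# full quads. (Crude illustrative model, not any vendor's exact behavior.)

import math

def quad_efficiency(covered_pixels, tri_width, tri_height):
    """Covered pixels divided by pixel-shader invocations, assuming the
    triangle's bounding box is snapped out to whole 2x2 quads."""
    quads_x = math.ceil(tri_width / 2)
    quads_y = math.ceil(tri_height / 2)
    invocations = quads_x * quads_y * 4
    return covered_pixels / invocations

# A pixel-sized "micropolygon": 1 covered pixel still launches a full quad.
print(quad_efficiency(1, 1, 1))         # -> 0.25 (75% of invocations wasted)
# A large triangle covering half its bounding box amortizes the granularity.
print(quad_efficiency(5000, 100, 100))  # -> 0.5
```

    A compute-shader raster path sidesteps this entirely, which is one plausible reading of why Nanite's dense geometry avoids the fixed-function rasterizer.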
     
  16. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,242
    Likes Received:
    3,405
    But can you use mesh shaders with a s/w rasterizer? I don't see why it wouldn't be possible. No idea if the current UE5 build is using them in any way though.

    That's a different question and I feel that it's not entirely up to what developers are going to do but also up to what GPU h/w vendors will allow them to do. It's not a decision of s/w developers on how the GPU h/w will evolve.
    Key point here is that RT h/w must provide a) performance and b) flexibility. Both of these are needed for RT to evolve, and looking at how DXR 1.0 did over the past 3 years it is pretty obvious that without it there would be about zero ray tracing in modern games - despite even the old DX11 h/w being flexible enough to allow it. Which proves that flexibility on its own isn't enough here.

    So saying that developers will do something which the h/w doesn't expect them to do is nice and all, and may even be true in some cases (likely very limited ones, like the Dreams renderer or CryEngine's RT), but there's also a huge chance that it won't actually happen en masse and most developers will opt to use what IHVs are proposing.

    Note that UE5 itself didn't use RT h/w at first and there was even that argument that "RT h/w is dead!" at that point. Turns out that it's not and UE5 is using it in its release form.
     
    #1136 DegustatoR, Apr 12, 2022
    Last edited: Apr 12, 2022
    xpea, pharma and PSman1700 like this.
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Sure and if you’re planning to just write parameters to memory doing it inline seems like the natural fit. And for multi bounce RT you can do a sort pass in compute after each trace pass. Pure software coherency solution. But that requires writing and managing a lot of state yourself.

    The point isn’t that inline is inherently bad. The point is that developers need to prove that they can beat the IHVs at the optimization game.

    One interesting thing to note in this debate is that it doesn’t make any difference to traversal or intersection speed. So AMD needs to improve there no matter what.
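    The "write hits to memory, sort in compute, then shade" approach described above can be sketched abstractly. The following is a hypothetical wavefront-style loop in which hit records are recorded to a buffer, sorted by material ID so shading runs coherently, and then shaded one batch per material. All names here (the hit-record layout, the fake material assignment) are illustrative, not any engine's actual scheme.

```python
# Wavefront-style sketch of "record hits, sort, then shade" (illustrative).
# Instead of shading each hit inline inside the trace loop, hit records are
# written out, sorted by material ID for coherency, and shaded in batches.

from itertools import groupby

def trace(rays):
    """Stand-in for a trace pass: one hit record per ray. Material ID is
    faked from the ray index to produce an incoherent mix of materials."""
    return [{"ray": i, "material": i % 3, "t": 1.0 + i} for i in rays]

def shade_batch(material, hits):
    """Stand-in for one coherent shading dispatch: all hits share a material."""
    return [(h["ray"], material, h["t"] * 0.5) for h in hits]

def wavefront_shade(rays):
    hits = trace(rays)                                 # 1. trace pass -> hit buffer
    hits.sort(key=lambda h: h["material"])             # 2. sort pass (compute)
    results = []
    for mat, group in groupby(hits, key=lambda h: h["material"]):
        results.extend(shade_batch(mat, list(group)))  # 3. one dispatch per material
    return results

out = wavefront_shade(range(6))
# Shading now runs in 3 coherent batches (materials 0, 1, 2) instead of
# ping-ponging between materials ray by ray.
print([m for _, m, _ in out])  # -> [0, 0, 1, 1, 2, 2]
```

    For multiple bounces, the shade step would also emit the next generation of rays and the loop would repeat; the cost is exactly the state management the post mentions — the hit buffers, the sort, and the dispatch bookkeeping all live in your own code.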
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Can you write out the results of a mesh shader to memory? I thought it was bound to the hw rasterizer just like vertex shaders are.

    https://microsoft.github.io/DirectX-Specs/d3d/MeshShader.html#streamout
     
    DegustatoR likes this.
  19. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,118
    Likes Received:
    3,092
    Thing is, NV/Intel GPUs can do both.
     
  20. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
    I think I have an idea of what you're asking, and the answer is no, because mesh shaders are still tied to the graphics pipeline: their output is fixed entirely for HW rasterizer consumption ...

    It'd be a different story if mesh shaders were truly part of the compute pipeline, exactly like compute shaders are ...
     
    DegustatoR likes this.