AMD Radeon RDNA2 Navi (RX 6700 XT, RX 6800, 6800 XT, 6900 XT) [2020-10-28, 2021-03-03]

Discussion in 'Architecture and Products' started by BRiT, Oct 28, 2020.

  1. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,200
    Likes Received:
    1,748
    Location:
    New York
This is a really good point. Is it also running at medium on PS5?
     
  2. Dampf

    Newcomer

    Joined:
    Nov 21, 2020
    Messages:
    65
    Likes Received:
    138
    Yes, the settings are the same.

Either Ubisoft didn't bother with PS5-specific optimizations, or the PS5 also reserves a good deal of the 16 GB for the OS.
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,193
    Likes Received:
    1,560
    Location:
    msk.ru/spb.ru
    Scaling is about equal until the RT pipeline on RDNA2 becomes a limiting factor for performance. The same pipeline becomes a limiting factor on Ampere under a considerably higher load. Nothing new or going against what was already shown for their comparative performance.
     
    pharma and PSman1700 like this.
  4. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    18,768
    Likes Received:
    21,044
It's not publicly known what the PS5 system reservations are.
     
  5. Dampf

    Newcomer

    Joined:
    Nov 21, 2020
    Messages:
    65
    Likes Received:
    138
    Yes, I was speculating.
     
    PSman1700 likes this.
  6. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    4,520
    Likes Received:
    2,074
We can say almost for certain that there won't be 13.5GB available to the GPU though. 2.5GB for the OS? At least, but I'm sure it's more. After that there's more to consider too (the CPU for one). For the PS4, of its 8GB about 3GB went to VRAM for most games, not much more. That's not even half the total RAM.

Anywhere between 8 and 10GB for VRAM will probably be the most common allocation, with 10 to 12GB in some cases.
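A rough sketch of the budget arithmetic in the post above; all figures are the post's own speculation, not official numbers:

```python
# Back-of-envelope PS5 memory split using the speculative figures above.
# None of these numbers are official; they're guesses from the post.
TOTAL_RAM_GB = 16.0
OS_RESERVE_GB = 2.5       # "2.5GB for the OS? At least"
CPU_GAME_DATA_GB = 3.5    # assumed CPU-side game data (illustrative)

vram_budget_gb = TOTAL_RAM_GB - OS_RESERVE_GB - CPU_GAME_DATA_GB
print(f"Plausible VRAM budget: {vram_budget_gb:.1f} GB")  # 10.0 GB, inside the 8-12GB range above
```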
     
  7. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
You people are such downers. Killers of research, enthusiasm and curiosity. It doesn't even register that this is the closest we've come to getting a 'cut-off' point for RDNA2... Dismissal when it is not necessary (and even stagnating) is way too common.

If we can find out the details regarding the differences between the 'good' and 'high' settings for WoW Shadowlands, we'd have a much better idea of where the current RDNA2 cards reach their RT efficiency limit. And by details I mean things like the number of rays (per pixel), the number of bounces, etc. From there, we can figure out whether it's an RA problem, a CU problem, a bandwidth problem, or something else...

    But nah... Never mind. The results are in line with everything else, so, nothing new can be gained here... :roll:
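The kind of estimate that methodology enables can be sketched as follows; all parameter values are illustrative assumptions, not WoW Shadowlands' actual settings:

```python
# Back-of-envelope ray throughput implied by a given RT setting.
# The parameter values below are illustrative, not WoW's real settings.
def rays_per_second(width, height, rays_per_pixel, bounces, fps):
    # Simple model: each bounce adds one secondary ray per primary ray.
    rays_per_frame = width * height * rays_per_pixel * (1 + bounces)
    return rays_per_frame * fps

# e.g. 1440p, 1 ray per pixel, 1 bounce, at a 60 fps target
total = rays_per_second(2560, 1440, 1, 1, 60)
print(f"{total / 1e9:.2f} Grays/s")  # 0.44 Grays/s
```

Comparing a figure like this across the 'good' and 'high' settings would show whether the performance cliff tracks ray count, bounce depth, or something else entirely.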
     
    ToTTenTranz likes this.
  8. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,193
    Likes Received:
    1,560
    Location:
    msk.ru/spb.ru
    It not "RDNA2 RT efficiency limit", it's the balance of workloads with their respective bottlenecks in this one game. A different game which doesn't even have anything but RT in its renderer would be a much better way of finding out anything of the sorts than a 20 years old MMO title.
     
    PSman1700 likes this.
  9. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
    Again with the downer comment... At this point you're just arguing for the sake of arguing.

    Exactly because the game is so old, it puts very little strain on the rest of the GPU. And it's not like the game structurally changed with the different RT settings. The game is overall exactly the same, and the only variable is the RT setting, meaning it is perfect for testing the RT scaling, especially because there is a large performance drop-off after a very specific point, which we have not seen in any other game yet. Additionally, a game using only RT will obviously be well beyond the optimal settings if hybrid rendering is already proving too heavy for RDNA2.

    But forget it. You know better and I know nothing.
     
    no-X and ToTTenTranz like this.
  10. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    719
    Likes Received:
    420
That's the only new 8 GB GPU that is a bit faster than a PS5, though. I understood his posts to mean the lack of VRAM will keep the GPU from maintaining its current performance/IQ relative to the consoles. Buying a GPU that costs more than an entire console should come with different expectations than just "matching console performance in most games".
     
    ToTTenTranz likes this.
  11. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    52
    Likes Received:
    52
The huge performance disparities between 720p and 4K in WoW might be related to a substantial difference in how CPU-bottlenecked the AMD and NVIDIA drivers are. Yesterday I was testing DXMD in both DX11 and DX12 with my new 5900X, and thanks to Vega driver overhead I was only getting about 200 fps in both at 800x600. If I used basically any comparable GeForce card, it'd be above 300 fps in DX11 and 200-ish in DX12 (thanks to how the game and drivers are coded). Considering that the CPU requirements go up for RTRT, it's not hard to imagine that the CPU plays some role in that.
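The fps numbers quoted above translate into per-frame CPU cost like this (a rough, single-bottleneck view using the post's own figures):

```python
# Turn CPU-bound fps figures into per-frame driver cost.
# 200 fps (Radeon, DX11) vs 300 fps (GeForce, DX11) are the post's numbers.
def frame_ms(fps):
    return 1000.0 / fps

radeon_ms = frame_ms(200)    # 5.00 ms per frame
geforce_ms = frame_ms(300)   # ~3.33 ms per frame
print(f"Extra CPU cost per frame: {radeon_ms - geforce_ms:.2f} ms")  # 1.67 ms
```

That ~1.67 ms per frame of extra CPU-side work is invisible at 4K where the GPU is the bottleneck, but dominates at 720p, which fits the disparity described.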
     
  12. HLJ

    HLJ
    Regular Newcomer

    Joined:
    Aug 26, 2020
    Messages:
    352
    Likes Received:
    589
    Was that not only the case in DX11?
    NVIDIA having a multithreaded driver, while AMD did not?
    Performance, Methods, and Practices of DirectX* 11 Multithreaded... (intel.com)

I hope that is not also the case for DX12... because that would just be silly.

    On a side note...I have already seen posters on other forums say that the 12GB 3060 should make 10GB 3080 owners feel cheated /sigh
     
    PSman1700 and pharma like this.
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,286
    Likes Received:
    1,551
    Location:
    London
  14. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    292
    Likes Received:
    340
    AMD emulated deferred contexts on D3D11 ...

The problem with executing command lists on deferred contexts is that the rendering state gets reset every time a command list is executed, so some developers found out that on AMD HW they were paying the cost of switching their render targets even though they were using the exact same render target as the previous command list ...

D3D12 fixed this flaw by requiring command lists to specify a pipeline state object so that rendering states can be statically defined, which is why D3D12/Vulkan are 'stateless' by design: those APIs make the cost of changing rendering states explicit ... (statelessness also complicates shader recompilations)

    Here's a cost of changing rendering states on Nvidia HW for comparison:

[image: table of rendering-state change costs on Nvidia HW]

I suspect part of the reason why D3D11 and deferred contexts might be the superior abstraction model compared to D3D12 for Nvidia HW is that changing rendering state on their HW is very cheap ...
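The state-reset cost described above can be modeled with a toy counter; the costs are made-up units and this is a deliberate oversimplification of what real drivers do:

```python
# Toy model of the cost difference described above: D3D11 resets rendering
# state between command lists, so a driver that naively re-applies state pays
# for a render-target bind even when the target hasn't changed. With explicit
# state (D3D12-style PSOs), you only pay when the target actually changes.
BIND_RT_COST = 10  # pretend cost units for (re)binding a render target

def d3d11_style(command_list_targets):
    """State resets between command lists -> every list rebinds its RT."""
    return sum(BIND_RT_COST for _ in command_list_targets)

def d3d12_style(command_list_targets):
    """Explicit state: only pay when the render target actually changes."""
    cost, current_rt = 0, None
    for rt in command_list_targets:
        if rt != current_rt:
            cost += BIND_RT_COST
            current_rt = rt
    return cost

lists = ["rt0", "rt0", "rt0", "rt1"]  # same target three times, then a switch
print(d3d11_style(lists), d3d12_style(lists))  # 40 20
```

In this toy example the redundant rebinds double the cost; on HW where a state change is nearly free (as suggested for Nvidia above), the two columns would converge.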
     
  15. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,545
    Likes Received:
    2,888
    Location:
    Guess...
I generally agree with your 25% estimation in rasterization (although there will be significant variations game to game, obviously), but we should remember that this will translate to more like 60% in games with RT, depending on the RT load, and RT is likely to be used in a large proportion of multiplatform titles this generation.

    I read it as more absolute than that; "8GB will be insufficient to maintain parity". But in either case I really just wanted to highlight that the VRAM in isolation won't necessarily prevent parity, or greater than parity as it hasn't in previous generations.

I don't deny that the RAM in combination with other factors, such as platform optimisations and waning driver support, may eventually result in specifically the 3070 8GB performing worse than the consoles in many titles, at least where RT isn't involved, but if that does happen then I'd expect it to be a good 3-4 years away at least. My feeling at the moment is that it would be less down to its VRAM allocation than the other factors. And that goes doubly so for the 3080 with its 10GB. Performance comparisons at console settings of the 3070 8GB vs 16GB in a few years would be interesting. But alas, I doubt anyone will actually do that.

    And I imagine anyone buying that GPU rightly does have very different expectations to merely matching console performance in most games. It seems able to comfortably exceed console performance in all games right now, even before RT and DLSS are considered and I expect that to continue for several more years yet, especially as RT becomes more prevalent.

That said, I would temper my expectations around the longevity of such a GPU at this point in the consoles' lifespan. If you want to keep it beyond the launch of Nvidia's 5xxx generation, then you have to be prepared to accept an experience that has the potential to fall below what the consoles can offer in some titles. Nevertheless, even if it does, I'd expect it'll still be more than enough to see out the whole console generation with a highly comparable experience even in the later years.
     
    DegustatoR and PSman1700 like this.
  16. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    4,520
    Likes Received:
    2,074
The 3070, even with its 8GB of VRAM, is not just 'a bit' faster than a PS5. I'd rather have the 3070 with 'just' 8GB that has twice the performance over one that has about 10 to 12GB with half the performance. See this post for the TF clarification: https://forum.beyond3d.com/posts/2188138/
Current games tell us little to nothing. The much-praised UE5 demo is going to favour compute throughput, the direction we've been moving in for over 10 years.

True, that 'paltry' 3070 will continue to do so (before RT) for the whole generation, I guess, especially a higher-VRAM one. Looking at benchmarks now, it already outperforms the PS5 in all titles, by quite a lot. And that's before games start to use engines like UE5.
     
  17. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,045
    Likes Received:
    7,005
    No, hitting VRAM capacity and forcing the GPU to swap critical data with system memory results in lost cycles, but it doesn't immediately kill performance.
    I don't think having 20% more VRAM is ever going to make a 12GB 3060 run faster than a 10GB 3080, considering the latter has over twice the resources for everything else.

You're trying to pin on me this dishonest hyperbole you invented to diminish my previous statements. However, in the particular case of the 3060 I was very clear about which card it might surpass in the long term, and it isn't the 3080 or the 3070:

    I'm not going to look for a nonsensical comparison to pursue an argument that you made up.
Yes, the 4GB 5 TFLOPs GTX 980 from 2014 is always faster than the 1.3 TFLOPs XBOne from 2013 when rendering at 1080p and below. What's the point here?
I wrote that the 4GB dGPUs didn't age well from 2015 onwards, and ever since then you've been trying to attribute to me the completely fabricated and nonsensical claim that the 2013 consoles perform better than a PC with a GTX 980.

    I'm not going to play that game.


    Failing to comply with the RT-performance-is-all-that-matterz narrative is going to get you harassed to no end in this forum.
    Just a friendly warning.


    1 - All non-OS RAM is shared and the proportion is dynamic, yes.

2 - The GPU takes the bulk of the RAM available for the game. Feel free to visit the gamedev presentations, specifically the post-mortem analyses, to confirm.

    3 - In the SeriesX, the GPU also has access to the slower 3.5GB GDDR6 available for games. It's wrong to think of the 10GB as a hard limit for VRAM.

4 - If you're aiming for parity between a PC equipped with a ~100W desktop CPU + at least 16GB DDR4 RAM + a 300W, 10GB VRAM, 30 TFLOPs RTX 3080 and a ~220W 10.28 TFLOPs console, then you already support my argument.
A high-end PC with a high-end RTX 3080 will (or should) always be expected to render at higher resolutions with higher-resolution textures, higher LODs, higher everything, which will obviously consume more VRAM. So in the long term those 10GB are bound to become a bottleneck for the dGPU.
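Point 3 above (the Series X split pools) can be sketched with a toy allocator. The pool sizes match the publicly stated Series X memory layout (10GB at 560 GB/s, plus 3.5GB of the slower 336 GB/s pool available to games), but the placement logic is purely illustrative:

```python
# Toy allocator over the Series X split memory pools: GPU-visible memory
# isn't capped at the 10GB fast pool, because overflow can land in the
# slower game-accessible pool. Pool sizes per public specs; logic is a sketch.
FAST_POOL_GB = 10.0   # 560 GB/s "GPU optimal" memory
SLOW_POOL_GB = 3.5    # 336 GB/s memory still available to games

def place_allocations(requests_gb):
    """Greedy placement: bandwidth-hungry data goes into the fast pool first."""
    fast_used = slow_used = 0.0
    placement = []
    for size in requests_gb:
        if fast_used + size <= FAST_POOL_GB:
            fast_used += size
            placement.append("fast")
        elif slow_used + size <= SLOW_POOL_GB:
            slow_used += size
            placement.append("slow")
        else:
            placement.append("unplaced")
    return placement

# 12GB of GPU allocations still fit, just not all in the fast pool
print(place_allocations([6.0, 3.0, 2.0, 1.0]))  # ['fast', 'fast', 'slow', 'fast']
```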
     
  18. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,193
    Likes Received:
    1,560
    Location:
    msk.ru/spb.ru
So what you're saying is that 8GB and 10GB of VRAM will in fact be enough to run games from the new console gen just fine? I'm confused.

Let's simplify it then: you're expecting the 3060 to perform better than the 3060 Ti at "higher resolutions" "in the long run"?
    What are "higher resolutions" and how do these two cards perform there right now?
    What is "the long run" and when will this happen?
     
    HLJ and PSman1700 like this.
  19. PSman1700

    Veteran Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    4,520
    Likes Received:
    2,074
What does a system equipped with a 3080 have to do with parity with a PS5? A 3070 comfortably outperforms the PS5 in everything, and that's before RT and reconstruction. Why anyone would want a 3080 for parity is beyond me.

Someone wanting parity and then some would most likely go for a 3060 or a 6700 XT, the latter being the closer comparison since it's on the same architecture.
Anyway, I highly doubt anyone buying a 3070 or higher-class GPU is seriously going to neglect ray tracing forever while PS5 owners enable it, or live with subpar ray tracing performance in their games.

    Only by playstation fans, yes.
     
    #2219 PSman1700, Jan 15, 2021
    Last edited: Jan 15, 2021
  20. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    921
    Likes Received:
    356
    That is end-user documentation, and has nothing to do with the way drivers are designed.
    All drivers support deferred contexts.

While DX11 drivers can be implemented in any way someone wants, AFAIU no IHV is allowed to multi-thread the internals of DX12, so that developers have predictable and manageable behaviour. There is one exception, which is shader compilation, where Microsoft introduced a flag in the API which specifically allows the driver to engage in multi-threaded pipeline compilation, if and only if the developer permits it.
     
    Krteq likes this.