AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Discussion in 'Architecture and Products' started by BRiT, Oct 28, 2020.

  1. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    13,878
    Likes Received:
    4,724
    The bulk of the video is that the 6800 XTs are as fast as the 30x0-series cards, but RT performance isn't as good and there's no DLSS.

    I think AMD really needed to have a DLSS option at launch, even if it was just a demo of it in one game.

    He seems to be more upbeat about the 6800 non-XT.
     
  2. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,401
    Likes Received:
    1,845
    Location:
    France
    I like the fact that he was "torn" about how to review cards since one has an upscale solution and the other doesn't. It was interesting having his opinion about that.
     
  3. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,969
    Likes Received:
    963
    Location:
    Torquay, UK
    The clock is as real as it gets. It's no different from CPU turbo, where on lighter (less parallel) loads the clock can hit higher frequencies thanks to power and temperature headroom. My 6800XT can boost to 2720MHz in certain simpler tasks, like some scenes from older 3DMarks or Unigine Heaven, but will limit itself to 2350MHz in really heavy scenes in other engines that use a lot of complex shaders and transparencies. In my case I'm hitting the power limit: by undervolting from 1.15V to 1.075V I'm moving that bottom clock up to almost 2400MHz. On the other hand, the same drop also limits my max clock in light scenes to about 2600MHz.

    In real games like Assetto Corsa, Doom Eternal or Far Cry 4, I see my card averaging closer to the upper bound of my overclock (2450-2650MHz) than to the worst-case 2350MHz.

    Once BIOSes with higher power limits and voltages are out, I'm sure my card will reach higher average clocks at the cost of power, but I will water-cool it when my EK block arrives, ready for a 400W heat source ;)
     
    Kej, PSman1700, tsa1 and 5 others like this.
  4. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,236
    Likes Received:
    4,259
    Location:
    Guess...
    I found the animated graphic about Infinity Cache + VRAM being equivalent to 1664GB/s of bandwidth interesting. I assume that came from AMD themselves, so I wonder how it's calculated. Taking the ~2TB/s peak IC bandwidth times the 4K hit rate of 58%, plus the 512GB/s of VRAM, comes close.
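    A quick sketch of that arithmetic, assuming the "~2TB/s" on AMD's slide is 1.94GHz times 1024 bytes moved per clock (an assumption on my part, since the slide only says "1024bit"):

```python
# Sketch of where the 1664 GB/s "effective bandwidth" figure may come from.
# Assumptions: peak Infinity Cache bandwidth of ~1986 GB/s (1.94 GHz boost,
# 1024 bytes moved per clock), a 58% hit rate at 4K, and 512 GB/s of GDDR6.
ic_peak_gbps = 1.94 * 1024        # ~1986.6 GB/s, the "~2TB/s" on AMD's slide
hit_rate_4k = 0.58                # AMD's quoted average 4K hit rate
vram_gbps = 512                   # 256-bit GDDR6 at 16 Gbps

effective = ic_peak_gbps * hit_rate_4k + vram_gbps
print(round(effective))           # 1664, matching the animated graphic
```

    With those assumptions it lands on 1664 almost exactly, which suggests the marketing number really is hit-rate-weighted cache bandwidth plus raw VRAM bandwidth.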
     
  5. So the 1664GB/s is a worst-case scenario, since resolutions lower than 4K should have a higher hit rate.
     
  6. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
    Has anyone tried to make benchmarks to test what would happen with Infinity Cache if a workload tried to utilize the full 16GB of RAM? This could simulate future games with potentially larger asset sizes and more memory accesses per frame. Maybe something like Blender could be usable for this purpose (same scene with lower/higher quality assets).
     
    Man from Atlantis likes this.
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Unless you go over 4K ;)
    And even at 4K it's an average across what AMD tested it with, not the worst or best case.
     
  8. I'd imagine the Infinity Cache is used for pixel/compute shaders that operate mostly on a per-pixel basis, so hit rates depend directly on the number of pixels being rendered.
     
  9. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    89
    Likes Received:
    97
    I truly hope the clocks are real and it's just a memory bottleneck that limits the performance, but here are my thoughts:

    It's hard to see the clock uplift without actually measuring performance and comparing one result to another. For example, I can force my Vega to run at 1700-ish clocks with the Liquid Edition SPPT (mostly for those 1.25V max), but the performance (as measured by the Firestrike graphics score) is actually lower, and my card is definitely not thermal throttling; it's very far from the hotspot tjmax. What happens, I guess, is that the GPU spends more time in the P6 state rather than the P7 state due to more micro-instabilities, which ultimately lowers the actual clocks without the monitoring software noticing (we can't see this without special measuring tools like a very good oscilloscope, as the frequency is updated 10,000 times a second).
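    That effect, where extra time spent in a lower p-state drags the real average clock below what monitoring reports, can be sketched as a residency-weighted average. The clocks and residency shares below are invented for illustration, not measured values:

```python
# Sketch: effective clock as a residency-weighted average of p-states.
# All numbers here are hypothetical; real residencies need hardware-level
# measurement, since software polling only samples the requested clock.
def effective_clock_mhz(residency):
    """residency maps state name -> (clock in MHz, fraction of time)."""
    return sum(mhz * share for mhz, share in residency.values())

stock  = {"P7": (1600, 0.90), "P6": (1500, 0.10)}  # stable at a lower P7
pushed = {"P7": (1700, 0.40), "P6": (1500, 0.60)}  # instability demotes often

print(effective_clock_mhz(stock))   # 1590.0
print(effective_clock_mhz(pushed))  # 1580.0
```

    So a profile that reports a higher P7 clock can still deliver a lower effective clock, and lower benchmark scores, if micro-instability pushes residency toward P6.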
     
  10. tsa1

    Newcomer

    Joined:
    Oct 8, 2020
    Messages:
    89
    Likes Received:
    97
    Yeah, I wish reviewers actually measured real clocks on all GPUs; otherwise we get situations where a thermally throttling reference Vega 56 (which actually runs at 1.25GHz) is compared to a 2GHz Pascal GPU that routinely goes 100-250MHz above its rated max boost clock. It's a shame that virtually no one is really interested anymore in either architectural performance or the hardware engineering stuff.
     
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    There’s probably no money in the deep-dive tech stuff like we got in the early 2000s. Ad revenue determines what content we see, and there's nobody doing articles as a hobby anymore like young Anand back in the day.
     
    Man from Atlantis likes this.
  12. T2098

    Newcomer

    Joined:
    Jun 15, 2020
    Messages:
    55
    Likes Received:
    115

    That was explored a bit earlier in the thread here:
    https://forum.beyond3d.com/posts/2168954/
     
    pjbliverpool likes this.
  13. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,236
    Likes Received:
    4,259
    Location:
    Guess...
    Thanks, I'd forgotten about that. It does seem to add up perfectly, but then it also conflicts with the other AMD slides showing the IC providing ~2TB/s of bandwidth from a 1024-bit interface at 1.94GHz.

    But then we know the IC can be overclocked automatically as needed, so perhaps 1.94GHz is the max OC while the base speed is half the GPU game clock, i.e. 1.125GHz.
     
  14. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    1.94 GHz is the max. boost value. The standard IC clock is 1.4 GHz.

    The worst-case scenario is 512 GByte/s (plus a minuscule amount from the IC) when data sets massively exceed the 128 MByte (as with Dagger Hashimoto). For gaming it depends on how close the processing elements get to the power limit and what priority the IC is given in that case.
    Assuming the core is running at the ASIC's power limit and the IC is not throttled, it should run at least at 1.43 TByte/s; × 0.58 = 831.5 GB/s; + 512 GByte/s = ~1.34 TByte/s effective transfer rate in 4K gaming while at the power limit.
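    The same back-of-envelope check as earlier in the thread, this time at the 1.4 GHz base IC clock (again assuming 1024 bytes per clock in aggregate, which is my assumption rather than a confirmed spec):

```python
# Effective-bandwidth estimate at the standard 1.4 GHz Infinity Cache clock.
# Assumption: 1024 bytes per clock aggregate across the cache slices.
ic_base_gbps = 1.40 * 1024               # 1433.6 GB/s, i.e. ~1.43 TB/s
ic_contrib = ic_base_gbps * 0.58         # ~831.5 GB/s after the 4K hit rate
effective = ic_contrib + 512             # ~1343.5 GB/s, i.e. ~1.34 TB/s
print(round(ic_contrib, 1), round(effective, 1))
```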
     
    Lightman, Silent_Buddha and Jawed like this.
  15. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    I think the mistake you're making is assuming that AMD was stupid enough to keep using the Vega architecture's approach to power/clocking.

    AMD has done the work to catch up with, seemingly, Maxwell in terms of per-ALU and per-fixed-function-unit power usage. We can argue the details, but AMD has finally arrived at a competitive position, something RDNA clearly wasn't, with the 5700 XT using as much power as the 2080 Ti for substantially less performance.
     
  16. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,511
    Likes Received:
    24,411
  17. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    T2098 likes this.
  18. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,669
    Watching now!
     
  19. gamervivek

    Regular

    Joined:
    Sep 13, 2008
    Messages:
    805
    Likes Received:
    320
    Location:
    india
    Review of the 5700XT vs the 6800XT, both overclocked, with the latter doing 2.6GHz and above almost all the time at 1440p, except for Division 2:

     
  20. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,990
    Likes Received:
    3,529
    Location:
    Winfield, IN USA
    If nothing else, I am intrigued and enthused as hell about the new AMD cards. I watched JayzTwoCents reviewing the XFX 6800 XT MERC 319 yesterday and was really impressed with the third-party cards.

    I hope price/availability becomes a bit better all around though, still too rich for my blood and I ain't got as many friends at AMD anymore. :(
     
    PSman1700 and Lightman like this.