AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Have any sites done esports-style benches yet? 1080p, low settings, pushing max frames? I have a feeling these cards will kill it for CS players etc.

I know it's not 1080p low, but at my 1440p Highest settings I'm getting up to 800+ FPS in CS:GO competitive matches :D (in very specific places).
I will run a bench in a bit, but I fully expect my CPU to be the limiting factor...
 

I suppose we're at a point where CS:GO is probably CPU-limited on any high-end GPU, but maybe not. Valorant might show more of a difference. Not sure.
 
Puget Systems Professional Application Reviews:

Adobe After Effects - AMD Radeon RX 6800 (XT) Performance

Written on December 1, 2020 by Matt Bach

Autodesk 3ds Max & Maya: AMD Radeon 6800 & 6800XT
Written on December 1, 2020 by Kelly Shipman

Unreal Engine: AMD Radeon 6800 & 6800XT
Written on December 1, 2020 by Kelly Shipman

Adobe Photoshop - AMD Radeon RX 6800 (XT) Performance
Written on December 1, 2020 by Matt Bach

Edit: Updated as additional reviews become available:

Adobe Premiere Pro - AMD Radeon RX 6800 (XT) Performance
Written on December 1, 2020 by Matt Bach

DaVinci Resolve Studio - AMD Radeon RX 6800 (XT) Performance
Written on December 2, 2020 by Matt Bach

Agisoft Metashape 1.6.5 - AMD Radeon RX 6800 (XT) Performance
Written on December 2, 2020 by William George

AMD Radeon RX 6800 (XT) Review Roundup
Written on December 3, 2020 by Matt Bach
 


Yes, clearly the GPU is only really hitting its 300W limit when traversing through smoke in the official FPS Benchmark scene in CS:GO; everywhere else the CPU is holding it back to varying degrees.
I've run a few benchmarks just now, with my GPU set to 2712MHz max @ 1.12V and memory at 1100MHz Fast Timings. Obviously power was set to +15% ;)
CPU: Ryzen 5900X with PBO set to +250MHz (boosts single-thread to 5100MHz, but not in CS:GO) and 3800MHz CL18-17-16 1T RAM (4x8GB DIMMs)

1. 2560x1440, Highest details, no AA = 565.47 FPS average (highs in the 990s, lows in the 140s)
2. 1920x1080, Highest details, 4xAA = 638.40 FPS average (highs in the 1080s, lows in the 140s)
3. 1920x1080, Lowest details, no AA = 701.52 FPS average (highs in the 1120s, lows in the 180s)

[Image: CS:GO 1080p Lowest settings frame-rate graph]
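
For context on those numbers: frame time is just 1000/FPS, and converting the run 3 figures makes it obvious why the lows matter far more than the average on a high-refresh monitor. A quick sketch (my own toy code; the numbers are copied from the results above, the bottleneck interpretation is per the post):

```cpp
// Quick sketch: converting the reported FPS figures from run 3 into frame
// times (frame time in ms = 1000 / FPS). Numbers are copied from the post.
#include <cstdio>

int main() {
    const double avg_fps  = 701.52;  // run average
    const double high_fps = 1120.0;  // peaks in quiet spots
    const double low_fps  = 180.0;   // dips (per the post, smoke pushes the
                                     // GPU to its 300W limit; the rest of
                                     // the run is CPU-capped)
    std::printf("avg : %7.2f FPS -> %.2f ms/frame\n", avg_fps,  1000.0 / avg_fps);
    std::printf("high: %7.2f FPS -> %.2f ms/frame\n", high_fps, 1000.0 / high_fps);
    std::printf("low : %7.2f FPS -> %.2f ms/frame\n", low_fps,  1000.0 / low_fps);
    // ~0.9 ms at the peaks vs ~5.6 ms in the dips: that spread is what you
    // feel in-game, not the 700 FPS average.
    return 0;
}
```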
 
I'm going to be perfectly honest: I haven't had the time lately to keep as up to date as I'd like with Navi2 (and plenty of other things...). The little I've seen so far suggests that it's a rasterisation monster but lacking in RT and software features (DLSS).

DLSS aside, has anyone done an in-depth analysis of Navi2's RT yet? To me the Navi route has always seemed a good trade-off: being able to grow RT on-die with more CUs instead of dedicated RT cores, functionality that should be improved "in-CU" in successive generations. As it stands, I would be interested to know whether there are architecture-specific pros and cons between AMD's and Nvidia's RT solutions.
 
AMD's solution seems to integrate more organically into the CU and scales naturally with the number of CUs. It is not as black-boxy as Nvidia's solution, but the trade-off is that the SIMDs have to handle BVH traversal, which is included in Nvidia's black box.

Nvidia, on the other hand, seems to have invested more resources into its RT cores, including a dedicated path to the L1. Being a semi-independent part of the SM, Nvidia could choose to scale them up (in numbers) independently from the rest of the intra-SM resources more easily than AMD could in its current form, I guess.
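
To make that split concrete, here's a toy C++ sketch (my own illustration, not vendor code; the leaf stores a placeholder hit distance instead of real triangle tests). The box test is the part a Ray Accelerator does in fixed function on RDNA2, while the loop around it runs as ordinary shader code on the SIMDs; on Turing/Ampere the whole loop sits inside the RT core:

```cpp
// Toy sketch, not vendor code: an iterative BVH traversal loop of the kind
// that runs as ordinary shader code on RDNA2's SIMDs. The per-node box test
// stands in for what the Ray Accelerator in each CU does in fixed function
// (ray/box and ray/triangle intersection); the loop itself (stack handling,
// picking the next node) is the software half. On Turing/Ampere this entire
// loop lives inside the RT core's black box.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Ray  { float o[3], d[3], t_max; };
struct AABB { float lo[3], hi[3]; };
struct Node {               // simplified 2-wide BVH node
    AABB  bounds;
    int   left, right;      // child indices; -1 marks a leaf
    float leaf_t;           // placeholder hit distance instead of triangles
};

// Slab test: the ray/box intersection RDNA2 accelerates in hardware.
static bool intersect_box(const Ray& r, const AABB& b) {
    float t0 = 0.0f, t1 = r.t_max;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.d[a];
        float tn = (b.lo[a] - r.o[a]) * inv;
        float tf = (b.hi[a] - r.o[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// The traversal loop itself: this control flow occupies SIMD lanes on RDNA2.
static bool trace(const Ray& ray, const std::vector<Node>& nodes, float& best_t) {
    bool hit = false;
    best_t = ray.t_max;
    int stack[64], sp = 0;  // short traversal stack kept in registers/LDS
    stack[sp++] = 0;        // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!intersect_box(ray, n.bounds)) continue;  // HW-assisted on RDNA2
        if (n.left < 0) {   // leaf: placeholder for the triangle tests
            if (n.leaf_t >= 0.0f && n.leaf_t < best_t) { best_t = n.leaf_t; hit = true; }
        } else {            // inner node: push both children and keep looping
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return hit;
}

int main() {
    // Root box with two leaf children; only the first lies on the ray's path.
    std::vector<Node> nodes = {
        { {{0,0,0},{4,4,4}},  1,  2, -1.0f },
        { {{0,0,0},{2,2,2}}, -1, -1,  1.5f },
        { {{2,2,2},{4,4,4}}, -1, -1,  3.0f },
    };
    Ray r{ {0.5f, 0.5f, -1.0f}, {0.0f, 0.0f, 1.0f}, 100.0f };
    float t;
    std::printf("hit=%d t=%.2f\n", trace(r, nodes, t), t);
    return 0;
}
```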
 

That doesn't bode all that well for lower/mid-end parts then, as they have fewer CUs.
 
That's really no different from a GeForce right now. There are also lower- and mid-end parts that have a lower number of SMs and thus a lower RT-core count (apart from the GTX line having to use shader-based BVH traversal and intersection checks).
 

Yes, exactly. A GeForce isn't affected by performance segment any differently than AMD's GPUs are: lower-end usually means fewer CUs/cores, which automatically means both lower rasterization and lower RT performance; they scale accordingly.
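
As a back-of-the-envelope illustration (my own first-order model, ignoring memory bandwidth, Infinity Cache and everything else; the 40 CU part is hypothetical and the clocks are rounded game clocks): with one Ray Accelerator per CU on RDNA2, throughput in this model scales with CU count times clock, so raster and RT shrink together:

```cpp
// Back-of-the-envelope sketch (my own first-order model; it ignores memory
// bandwidth, Infinity Cache, etc.): RDNA2 has one Ray Accelerator per CU, so
// if throughput scales roughly with CU count x clock, a cut-down part loses
// raster and RT performance in the same proportion.
#include <cstdio>

struct Part { const char* name; int cus; double clock_ghz; };

int main() {
    const Part parts[] = {
        { "80 CU flagship (6900 XT-like)", 80, 2.0 },  // real CU counts,
        { "60 CU part (6800-like)",        60, 1.8 },  // rounded game clocks
        { "40 CU midrange (hypothetical)", 40, 2.0 },
    };
    const double base = parts[0].cus * parts[0].clock_ghz;
    for (const Part& p : parts) {
        // The same scale factor applies to rasterization and RT in this model.
        double rel = (p.cus * p.clock_ghz) / base;
        std::printf("%-30s -> ~%3.0f%% of flagship throughput\n",
                    p.name, rel * 100.0);
    }
    return 0;
}
```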
 
Yes, absolute RT performance in most current titles seems to favor GeForces. I guess we'll see how many future console ports will have a more Radeon-friendly RT implementation/level of RT, and whether there's also a point in BVH complexity where Nvidia's black box starts to fall off a cliff and Radeon scales more gracefully.
 
Ampere will always perform better in RT, and to some extent even in rasterization. Where it gets interesting is RDNA3 and later; AMD is slowly getting back on its horse again. Console RT will mostly be minimal, as was pointed out a while ago.
 