AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

What do you mean? Ray tracing uses compute.

In Rys's video you can see at 7:30 that the VS shader stage is active. But according to 0x22h, normally everything that goes through RDNA2 should not have the VS stage active. Compare the picture at the top with the video at 7:30: in the picture you don't see a VS shader but a PrimS shader, while in the video you don't see a PrimS shader but a VS shader.
 
What do you mean? Ray tracing uses compute.

He's looking at the profiler view: in one picture the wavefront occupancy shows PrimS wavefronts, while in the video for DXR it only shows VS and GS wavefronts, so he's speculating that primitive shaders are not working with DXR.

Edit: The thing is, in the video he has a DispatchRays event selected, which is not going to have any vertex or primitive shaders. So the wavefront occupancy chart may not show them because there aren't any; the selected event is 100% compute shader wavefronts. If it were a vertex shader event, maybe it would update the graph and show PrimS wavefronts. I don't have enough experience with the profiler to know its behaviour.
 
In Rys's video you can see at 7:30 that the VS shader stage is active. But according to 0x22h, normally everything that goes through RDNA2 should not have the VS stage active. Compare the picture at the top with the video at 7:30: in the picture you don't see a VS shader but a PrimS shader, while in the video you don't see a PrimS shader but a VS shader.
That version of RGP isn't showing PrimS. Maybe it was dropped to only show the API shaders.
 
FWIW, primitive shaders versus the legacy pipeline is a driver decision; they work perfectly fine on Navi 10. With the smaller GPU you were often just not geometry-limited enough to get any benefit out of them. On Linux the drivers do like to use primitive shaders, but in most cases they just didn't enable the shader-based culling, relying on the existing HW fixed-function culling instead.

Another thing is that while mesh shaders remove most of the geometry pipeline in most APIs, the hardware implementation in RDNA2 uses primitive shaders. As such there isn't really any hardware limitation that you're skipping. What it does enable is a programming model where the application gets more room to do smart things with, e.g., vertex data encoding.

Furthermore, the other part of the D3D mesh shader feature is amplification shaders (task shaders in Nvidia parlance), which really enable high-level culling or other adjustments to the amount of geometry (the pipeline is: amplification shader -> mesh shader -> pixel shader). This is where you really lose the ability to integrate neatly with a legacy pipeline. Not quite sure how these are implemented in RDNA 2, though; it looks like a plain old compute shader running in a compute ring that writes to a ring buffer triggering mesh shader dispatches.
 
Too conspiracy-y for me. I'm sure at worst game devs will spend more time optimizing for one architecture than another, but I doubt they'd deliberately tank performance at the whim of Nvidia or AMD.
Well... Maybe... But there is something to it. We do have the history of x64 tessellation.

Take The Witcher 3, for example: there is hardly any difference between x8 tessellation and x64 tessellation, and x16 is pretty much identical. Yet HairWorks was "optimized" for x64 tessellation. AMD was weaker at tessellation at the time, so their performance dropped a lot, even though such huge performance drops were not necessary on either vendor's cards. Would it be too conspiracy-y to say that this was done on purpose, especially since there is no visual difference between x16 and x64 and nVidia had an advantage over AMD in this specific tech...?
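
To put some rough numbers on why x64 is so much heavier than x16 even with no visual payoff, here's a quick back-of-the-envelope sketch (my own illustration, not from the post above; it assumes two idealized cases, an isoline domain that scales roughly linearly with the factor and a triangle domain that scales roughly with the square of the edge factor):

# Rough, idealized geometry amplification at different tessellation factors.
# Assumption: isoline output ~ linear in the factor, triangle output ~ quadratic.
base = 8
for factor in (8, 16, 64):
    iso = factor / base          # ~linear scaling vs the x8 baseline
    tri = (factor / base) ** 2   # ~quadratic scaling vs the x8 baseline
    print(f"x{factor:>2}: ~{iso:.0f}x (isoline) / ~{tri:.0f}x (triangle) the primitives of x8")

Either way, that's several times the geometry for an image that looks the same.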

nVidia is always in these controversies. What's funny is that one of Buildzoid's first comments about the 6800 series was him wondering whether nVidia would figure out a way to overtax or overflow the Infinity Cache so that performance on these 6800 cards crashes.

I like all the open standards AMD goes for, and how they publish a lot of stuff for free for developers. Ultimately it does seem like a disadvantage for them in certain cases, like how TressFX was hijacked and rebranded as PureHair for example.
 
Puget Systems Professional Application Reviews:

Adobe After Effects - AMD Radeon RX 6800 (XT) Performance

Written on December 1, 2020 by Matt Bach

Autodesk 3ds Max & Maya: AMD Radeon 6800 & 6800XT
Written on December 1, 2020 by Kelly Shipman

Unreal Engine: AMD Radeon 6800 & 6800XT
Written on December 1, 2020 by Kelly Shipman

Adobe Photoshop - AMD Radeon RX 6800 (XT) Performance
Written on December 1, 2020 by Matt Bach

Edit: Updated as additional reviews become available:

Adobe Premiere Pro - AMD Radeon RX 6800 (XT) Performance
Written on December 1, 2020 by Matt Bach

DaVinci Resolve Studio - AMD Radeon RX 6800 (XT) Performance
Written on December 2, 2020 by Matt Bach
Does AMD have another category of GPUs for prosumers that hasn't been released yet?
 
Well... Maybe... But there is something to it. We do have the history of x64 tessellation.

Take The Witcher 3, for example: there is hardly any difference between x8 tessellation and x64 tessellation, and x16 is pretty much identical. Yet HairWorks was "optimized" for x64 tessellation. AMD was weaker at tessellation at the time, so their performance dropped a lot, even though such huge performance drops were not necessary on either vendor's cards. Would it be too conspiracy-y to say that this was done on purpose, especially since there is no visual difference between x16 and x64 and nVidia had an advantage over AMD in this specific tech...?

nVidia is always in these controversies. What's funny is that one of Buildzoid's first comments about the 6800 series was him wondering whether nVidia would figure out a way to overtax or overflow the Infinity Cache so that performance on these 6800 cards crashes.

I like all the open standards AMD goes for, and how they publish a lot of stuff for free for developers. Ultimately it does seem like a disadvantage for them in certain cases, like how TressFX was hijacked and rebranded as PureHair for example.
Gameworks titles are continually decreasing in number so at least we are getting less and less of Nvidia’s shenanigans.
 
Gameworks titles are continually decreasing in number so at least we are getting less and less of Nvidia’s shenanigans.
They are currently integrated with Unreal Engine, and it seems developers use Flow, WaveWorks, Blast, Volumetric Lighting, FaceWorks and a few others that are currently open source and run on AMD hardware (and likely Intel). Unless there is a perceived value in not open-sourcing (like Radeon Rays), I imagine most will transition to GitHub over the next few years.
 
They are currently integrated with Unreal Engine, and it seems developers use Flow, WaveWorks, Blast, Volumetric Lighting, FaceWorks and a few others that are currently open source and run on AMD hardware (and likely Intel). Unless there is a perceived value in not open-sourcing (like Radeon Rays), I imagine most will transition to GitHub over the next few years.

Most of these aren't used at all, at least in anything you've actually heard of. They're shiny and all, but apparently when you try to actually use most of them the whole thing becomes impractical and devs are better off with other solutions, or UE already does it better anyway.

Not that AMD is getting off this train either. Not only are Godfall's raytraced shadows an AMD thing, but they're adding other such stuff. Far Cry 6 has already announced a partnership with AMD for raytraced reflections, and while I appreciate the neat realtime denoising and reprojection slides AMD put out (which Watch Dogs Legion could really use, etc.), it still wouldn't surprise me if it didn't run the best on Nvidia or some such.

I really wish game dev companies would treat their developers better and pay them more. It seems like a large number of high-end graphics devs have left for Nvidia or possibly AMD, if they haven't left for Epic or Unity already. At least at the last two they'll actually produce games, and not stuff like "RTX GI", which can end up like the above: seemingly a good idea until you actually look into it and realize you can't just plug some highly integrated, complex solution into your own custom engine without a good amount of work, work that could instead go into making your own similar solution that works better for your title anyway. (Sorry, RTXGI guys.)
 
Most of these aren't used at all, at least in anything you've actually heard of. They're shiny and all, but apparently when you try to actually use most of them the whole thing becomes impractical and devs are better off with other solutions, or UE already does it better anyway.

Not that AMD is getting off this train either. Not only are Godfall's raytraced shadows an AMD thing, but they're adding other such stuff. Far Cry 6 has already announced a partnership with AMD for raytraced reflections, and while I appreciate the neat realtime denoising and reprojection slides AMD put out (which Watch Dogs Legion could really use, etc.), it still wouldn't surprise me if it didn't run the best on Nvidia or some such.

I really wish game dev companies would treat their developers better and pay them more. It seems like a large number of high-end graphics devs have left for Nvidia or possibly AMD, if they haven't left for Epic or Unity already. At least at the last two they'll actually produce games, and not stuff like "RTX GI", which can end up like the above: seemingly a good idea until you actually look into it and realize you can't just plug some highly integrated, complex solution into your own custom engine without a good amount of work, work that could instead go into making your own similar solution that works better for your title anyway. (Sorry, RTXGI guys.)
Ya, GameWorks thankfully is mostly relegated to budgetware that few people will ever play. AMD tech typically fares fine on Nvidia cards. TressFX (which required one patch that came very shortly after launch) and PureHair, for example, ran fine on Nvidia GPUs.
 
I really wish game dev companies would treat their developers better and pay them more. It seems like a large number of high-end graphics devs have left for Nvidia or possibly AMD, if they haven't left for Epic or Unity already. At least at the last two they'll actually produce games, and not stuff like "RTX GI", which can end up like the above: seemingly a good idea until you actually look into it and realize you can't just plug some highly integrated, complex solution into your own custom engine without a good amount of work, work that could instead go into making your own similar solution that works better for your title anyway. (Sorry, RTXGI guys.)
MS or Sony is going to buy them all so there is that. :yes:
 
Does AMD have another category of GPUs for prosumers that hasn't been released yet?
There are no Radeon Pro cards based on Navi2x.

Regardless, those results stem from unoptimized drivers, particularly considering that the 6800 XT comes in barely 30% above the 5700 XT in most cases.
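
For context, a rough back-of-the-envelope from the published reference specs (my own arithmetic, not Puget's data) shows why ~30% looks driver-limited: on paper the 6800 XT has more than double the 5700 XT's peak FP32 throughput.

# Peak FP32 throughput from reference boost clocks (assumed reference figures):
# 5700 XT: 2560 SPs @ ~1.905 GHz; 6800 XT: 4608 SPs @ ~2.25 GHz; 2 FLOPs per SP per clock.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

rx5700xt = tflops(2560, 1.905)  # ~9.8 TF
rx6800xt = tflops(4608, 2.25)   # ~20.7 TF
print(f"5700 XT ~{rx5700xt:.1f} TF, 6800 XT ~{rx6800xt:.1f} TF "
      f"(+{(rx6800xt / rx5700xt - 1) * 100:.0f}% on paper vs ~30% measured)")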
 
AMD Radeon RX 6900 XT Geekbench OpenCL score leaks
The Radeon RX 6900 XT has been discovered in the Geekbench OpenCL benchmark database. This is the full-fat Navi 21 XTX model featuring 80 Compute Units (5120 Stream Processors). Just like the other two Navi 21-based models, the RX 6900 XT comes with 16GB GDDR6 memory across a 256-bit bus. The card is also equipped with 128MB of Infinity Cache.

The RX 6900 XT features similar frequencies to the RX 6800 XT, with a boost clock of 2250 MHz. This clock is easily achievable, and most partner cards (which are expected) should exceed this frequency. The model also has the same 300W power limit as the RX 6800 XT, which might be interesting for a future apples-to-apples comparison.

With a score of 169779 OpenCL points, the graphics card is 12% faster than the RX 6800 XT and 35% faster than the RX 6800 non-XT. Interestingly, the RX 6900 XT features 11% more Stream Processors than the RX 6800 XT and 33% more than the RX 6800 non-XT, so the scaling is almost linear, as expected. The graphics card would still lose against the NVIDIA GeForce RTX 3080 and RTX 3090, which are respectively 4% and 19% faster than the RX 6900 XT.
AMD Radeon RX 6900 XT Geekbench OpenCL score leaks - VideoCardz.com
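
A quick sanity check of the scaling claim, using only the numbers quoted in the article (my own arithmetic):

# SP counts: 6900 XT = 5120, 6800 XT = 4608, 6800 = 3840.
# Reported: 6900 XT scores 169779, quoted as +12% vs the 6800 XT and +35% vs the 6800.
sp = {"6900 XT": 5120, "6800 XT": 4608, "6800": 3840}
score_6900xt = 169779
for card, uplift in (("6800 XT", 0.12), ("6800", 0.35)):
    sp_gap = sp["6900 XT"] / sp[card] - 1
    implied = score_6900xt / (1 + uplift)
    print(f"vs {card}: +{sp_gap * 100:.0f}% SPs for +{uplift * 100:.0f}% score "
          f"(implied {card} score ~{implied:.0f})")

So roughly +11% SPs for +12% score and +33% SPs for +35% score, which is the "almost linear" scaling the article describes.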
 
Yes, hopefully AMD also allows a slightly higher power limit for OC, as that 15% is holding this chip back quite badly...

We had a 6900 XT example here before, running at 3 GHz: essentially a 30+ TF full-fat RDNA2 GPU with 128 MB of Infinity Cache. Sounds like a damn impressive GPU to me. Yes, its ray tracing isn't what Turing and Ampere do, but at those speeds and with that number of CUs it could be good enough for many. The 6800 might be so-so in RT, but a 6900 XT at those speeds?
RDNA3 is where things start to get even more interesting, I think. NV is going to have to work if they want to stay ahead next year.
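
The 30 TF figure above is just peak-FP32 arithmetic; a quick check, assuming the full 80 CUs actually hold 3.0 GHz (a hypothetical clock, not a shipping spec):

# Peak FP32 for a full Navi 21 at a hypothetical sustained 3.0 GHz:
# 80 CUs * 64 SPs per CU * 2 FLOPs per SP per clock * 3.0 GHz.
cus, sps_per_cu, flops_per_clock, clock_ghz = 80, 64, 2, 3.0
peak_tflops = cus * sps_per_cu * flops_per_clock * clock_ghz / 1000.0
print(f"~{peak_tflops:.1f} TFLOPS FP32")  # ~30.7 TFLOPS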
 