AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Weird question perhaps, but would you gals and gents know whether or not RDNA is better suited to Apple's Metal API thanks to the Infinity Cache and other architecture enhancements? My mind is drawing a connection between Infinity Cache and the likes of eDRAM for tiled rendering, which Metal is geared towards. Just a spurious thought.
 
Weird question perhaps, but would you gals and gents know whether or not RDNA is better suited to Apple's Metal API thanks to the Infinity Cache and other architecture enhancements? My mind is drawing a connection between Infinity Cache and the likes of eDRAM for tiled rendering, which Metal is geared towards. Just a spurious thought.
Immediate mode GPU is still immediate mode in Metal. TBDR support at API level is a reflection of the graphics pipeline implementation in the hardware, not the other way round.

So no. You still won’t get stuff like memoryless render targets and imageblocks (abstractions for the TBDR tile memory) when using RDNA2 on Metal, even if RDNA2 is supported by the macOS AMD GPU driver.

TBDR tile memory also isn’t a transparent hardware cache like Infinity Cache. It is a tile-private scratchpad memory, which is an alien concept to an immediate mode pipeline.
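To make the "memoryless render target" point concrete, here is a minimal metal-cpp sketch (the resolution, pixel format, and error handling are illustrative assumptions) of requesting a depth attachment that lives only in on-chip tile memory. This is exactly the abstraction an immediate-mode GPU such as RDNA2 cannot back, because there is no tile memory for the attachment to live in:

```cpp
// metal-cpp single-translation-unit setup (definitions go in exactly one .cpp file)
#define NS_PRIVATE_IMPLEMENTATION
#define MTL_PRIVATE_IMPLEMENTATION
#include <Foundation/Foundation.hpp>
#include <Metal/Metal.hpp>

int main()
{
    NS::AutoreleasePool* pool = NS::AutoreleasePool::alloc()->init();
    MTL::Device* device = MTL::CreateSystemDefaultDevice();

    // Memoryless storage only exists on TBDR (Apple-family) GPUs, where the
    // attachment stays in tile memory and never gets a DRAM allocation.
    if (!device || !device->supportsFamily(MTL::GPUFamilyApple2))
        return 0; // immediate-mode GPUs (e.g. AMD under Metal) can't use this path

    MTL::TextureDescriptor* desc = MTL::TextureDescriptor::texture2DDescriptor(
        MTL::PixelFormatDepth32Float, 1920, 1080, false);
    desc->setStorageMode(MTL::StorageModeMemoryless); // tile memory only, no DRAM backing
    desc->setUsage(MTL::TextureUsageRenderTarget);

    MTL::Texture* depth = device->newTexture(desc);

    MTL::RenderPassDescriptor* pass = MTL::RenderPassDescriptor::alloc()->init();
    pass->depthAttachment()->setTexture(depth);
    pass->depthAttachment()->setLoadAction(MTL::LoadActionClear);
    pass->depthAttachment()->setStoreAction(MTL::StoreActionDontCare); // contents must not be stored

    // ... encode the render pass as usual ...

    pass->release();
    depth->release();
    device->release();
    pool->release();
    return 0;
}
```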
 
How exactly does Nvidia tank performance on AMD GPUs?

The blame needs to go on the developers of the games.
NVIDIA will offer development support to developers, often working in the developers' offices to "optimize" for NVIDIA hardware. What effect their "optimizations" have on AMD hardware could vary, I would guess.

Nvidia has a lot more money to offer developer support than AMD does.

I'm more of a lurker, but I figured I'd chime in on raytracing performance on AMD cards, specifically Minecraft, which is horrible on RDNA2. Nvidia has spent a lot of time/money developing and reworking the render pipeline in Minecraft to work well with RTX cards:

It's a good listen since it is an interview with 4 Nvidia developers who are working full time on Minecraft RTX, especially if you listen to it from the perspective of "does it just work?", or whether optimizations are required for a specific hardware architecture. You'll find that, even though it's path traced, there are certain things that needed to be done, and still need to be done, to make it work best with RTX hardware. Presumably the same would be true for RDNA2, but since the optimizations are being done by Nvidia staff on Nvidia hardware, I think it's obvious why performance stinks on RDNA2.

Once it's out of beta, and perhaps even has a console release, I wonder if the console RDNA optimizations could be ported over to PC, and improve the performance.
 
How exactly does Nvidia tank performance on AMD GPUs?

The blame needs to go on the developers of the games.
As posted above, you cannot force someone to believe in plausible deniability when we actually know it cannot plausibly be denied.
:D
 
How exactly does Nvidia tank performance on AMD GPUs?
Same way AMD tanks performance on Nv GPUs - by providing 3rd party developers with solutions to problems which are best suited to their h/w.

The blame needs to go on the developers of the games.
In a perfect world of unlimited time and money budgets? Sure.

The point was more about the laughable idea that Nv needs something like a post on GPUOpen to find out the weaknesses of competitors' products though.
 
Same way AMD tanks performance on Nv GPUs - by providing 3rd party developers with solutions to problems which are best suited to their h/w.

First-hand experience? Or examples, please...

My experience as an AAA dev is the opposite: AMD provides code that works well on NV and AMD, while NV provides code that works well on NV and is slow on AMD. (And changes could be made to improve AMD perf with little to no impact on NV perf...)
 
First-hand experience? Or examples, please...

My experience as an AAA dev is the opposite: AMD provides code that works well on NV and AMD, while NV provides code that works well on NV and is slow on AMD. (And changes could be made to improve AMD perf with little to no impact on NV perf...)
Okay. What I've heard from AAA devs is the opposite of what you're saying.
And as for examples - where's the D3D11 renderer in Valhalla? Which D3D12 exclusive features does this game use?
Now could you give me an example of the opposite sort?
 
First-hand experience? Or examples, please...

My experience as an AAA dev is the opposite: AMD provides code that works well on NV and AMD, while NV provides code that works well on NV and is slow on AMD. (And changes could be made to improve AMD perf with little to no impact on NV perf...)
Usually, outlier results that buck the trend make you wonder if there is something else going on.
I do want to highlight that AMD Radeon cards are doing very well in Devil May Cry 5. Noteworthy examples are the RX 570 beating the GTX 1060 6 GB (something even the RX 580 can't do most of the time). Another highlight for AMD players is that the Radeon VII matches RTX 2080 performance at 4K, whereas usually AMD's latest card is almost 20% behind. Last but not least, first generation Vega 64 is winning the fight against the RTX 2060 and GTX 1080.
Devil May Cry 5 Benchmark Performance Analysis | TechPowerUp
 
Isn't RDNA 3 planned for 2022?
AMD says the years in their roadmaps are inclusive (so it could be 2022), but so far every single architecture under the current roadmap style (both CPU and GPU) has released as if it were exclusive (which would suggest 2021).
Also, Wang or some other Radeon bigwig said (or even promised?) they'd deliver new products every year, be it a new architecture, a tweaked architecture, or a new process.
 
Somewhere around the end of next year, the Exynos with RDNA should be ready for industrialization for the next-gen Galaxy in the following spring, although it's unclear whether Samsung wants the latest and greatest RDNA at that point in time.
But for a mobile form factor, they had better have more than the 50% perf/watt improvement they are aiming for; 5nm will help. So most likely RDNA3 should be ready by then, probably launching in early 2022.
 
First-hand experience? Or examples, please...

My experience as an AAA dev is the opposite: AMD provides code that works well on NV and AMD, while NV provides code that works well on NV and is slow on AMD. (And changes could be made to improve AMD perf with little to no impact on NV perf...)
It's worse than that. Nvidia code only runs well on their latest GPU. Their own previous GPUs run poorly too. Just see Pascal in the majority of titles since Turing released.
 
The point was more about the laughable idea that Nv needs something like a post on GPUOpen to find out the weaknesses of competitors' products though.
They definitely don't need it. But it can make it a lot easier. I do hope that this is not the case this time, although as someone mentioned previously, Minecraft RTX says quite a bit. And it's actually interesting that we already saw path traced Minecraft running on an Xbox Series X, but somehow the 6800 cards perform abysmally in comparison...
Hopefully the newer APIs will finally get on their feet on PC, so that more optimizations from RDNA2 in the consoles are translated to RDNA on the PC.
 
Okay. What I've heard from AAA devs is the opposite of what you're saying.
And as for examples - where's the D3D11 renderer in Valhalla? Which D3D12 exclusive features does this game use?
Now could you give me an example of the opposite sort?

What on earth are you even talking about? Valhalla has pretty much no vendor code anywhere. The only partnership was for marketing with AMD CPUs.
 
My experience as an AAA dev is the opposite: AMD provides code that works well on NV and AMD, while NV provides code that works well on NV and is slow on AMD.
Could you elaborate on this a little bit more?
What are you doing as an AAA dev?
What AMD code was working well on NV and what NV code was slow on AMD?

Valhalla has pretty much no vendor code anywhere.
Vendor code is not required at all.
In the Pascal time frame, it was enough to just port code from consoles without aligning structured buffers to tank Pascal's performance by 5-10% in DX12 games - https://developer.nvidia.com/pc-gpu-performance-hot-spots
By aligning these structures, or better, replacing them with constant buffers, one could easily get 5-10% overall performance gains out of Pascal, with zero visual impact of course.
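To make that concrete, here is a rough sketch (the LightData struct is hypothetical, purely for illustration) of what that guidance means on the C++ side: keep structured-buffer strides at multiples of 16 bytes, and when data moves into constant buffers, respect D3D12's 256-byte placement alignment:

```cpp
#include <d3d12.h>
#include <cstddef>

// Hypothetical element type mirrored by an HLSL StructuredBuffer<LightData>.
// Explicit padding keeps the stride a multiple of 16 bytes, the alignment
// NVIDIA's DX12 "hot spots" guidance asks for on Pascal-class GPUs.
struct LightData
{
    float position[3];
    float radius;       // pads position out to 16 bytes
    float color[3];
    float intensity;    // pads color out to 16 bytes
};
static_assert(sizeof(LightData) % 16 == 0,
              "keep structured-buffer strides 16-byte aligned");

// Constant buffer views must be placed on 256-byte boundaries
// (D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT), so per-draw constant
// blocks are typically sub-allocated with a rounding helper like this one.
constexpr size_t AlignConstantBuffer(size_t sizeInBytes)
{
    constexpr size_t a = D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT; // 256
    return (sizeInBytes + a - 1) & ~(a - 1);
}
```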
Now, it's enough to just port code from consoles without optimizing constants and descriptors to tank performance on every GPU without CPU-writable video memory support, either in hardware or via driver profiles.
SAM allows writing more descriptors into video memory, and the gain from SAM is a good indicator of how poorly devs have optimized constant and descriptor usage on PC.
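For context, "CPU-writable video memory" roughly corresponds to a D3D12 custom heap like the sketch below. This is illustrative only: whether the runtime accepts this heap combination and how much VRAM is CPU-visible depends on the adapter, the driver, and whether resizable BAR / SAM is enabled, so real code needs a fallback to a plain upload heap.

```cpp
#include <d3d12.h>

// A custom heap the CPU can write (write-combined) that the driver is asked to
// place in the GPU-local pool (L1). With resizable BAR / SAM the VRAM aperture
// is CPU-visible, so constants written here skip the extra copy out of a
// system-memory upload heap.
inline D3D12_HEAP_PROPERTIES CpuVisibleVramHeapProps()
{
    D3D12_HEAP_PROPERTIES props = {};
    props.Type                 = D3D12_HEAP_TYPE_CUSTOM;
    props.CPUPageProperty      = D3D12_CPU_PAGE_PROPERTY_WRITE_COMBINE;
    props.MemoryPoolPreference = D3D12_MEMORY_POOL_L1; // device-local VRAM
    return props;
}

// Fallback for adapters/drivers where CPU-visible VRAM isn't available:
// a plain upload heap in system memory (the pre-SAM status quo).
inline D3D12_HEAP_PROPERTIES SystemMemoryUploadHeapProps()
{
    D3D12_HEAP_PROPERTIES props = {};
    props.Type = D3D12_HEAP_TYPE_UPLOAD;
    return props;
}
```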
Do you know why, all of a sudden, recent AMD-aligned titles all have integrated benchmarks (all were pushed by AMD in recent reviews), all benefit from SAM, and all are performance outliers (the RX 5700 XT is 8-10% slower than the 2070 Super when you test something other than AMD titles), sometimes to a ridiculous extent where the RX 5700 XT is capable of competing with the much faster RTX 2080 Ti in those titles with pretty much no vendor code anywhere? :rolleyes:
Doesn't it look suspicious to you?
 
Well, Valhalla clearly doesn't run as well as it could on Nvidia hardware; the 5700 XT reaching 2080 Ti performance is very suspicious, and Dirt 5 as well as Godfall underperform too. I have never seen a 5700 XT perform below a regular 2060 (without DX12U features, DLSS and RT of course) in Nvidia titles, which would be the equivalent of this behaviour for AMD on the Nvidia-optimized side...
 