AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

I didn't count DLSS, because AMD is working on its own equivalent, which I assume is not going to take more than a year to implement.
They already said that it will work on all GPUs, so unless it gets deliberately hobbled again to run especially badly on everything but Radeons, all GPUs should get similar improvements from it.

As for RT, well, despite being hardware accelerated on the 2060Ti, the performance drop on it is still too big.
Yeah, well, it's less than on the cards which this thread is about.
 
I didn't count DLSS, because AMD is working on its own equivalent, which I assume is not going to take more than a year to implement.
You are correct that Mesh Shaders can give a performance boost, but they are still not likely to be mandatory. And it's questionable whether mesh shaders alone will really give an RTX 2060(S) card the boost it needs to match or surpass a 5700(XT) in games.

As for RT, well, despite being hardware accelerated on the 2060Ti, the performance drop on it is still too big. Even with DLSS you barely get framerates that are considered playable.

More features don't necessarily mean a card ages better. They certainly can, but there comes a point where a feature hampers performance too much anyway, which I think will happen with RT and the 2060 cards. The 5700 cards being naturally faster, with performance that stands out in some recent games, makes me believe that even missing some features, their performance will stay acceptable for longer than the RTX 2060 cards'. And inevitably, the 5700 cards will see some benefit from the consoles as well, although obviously the 6000 series cards even more so.

I could be wrong. But we will see.

We don't even know how Super Resolution will look and perform; chances are it won't be on par with DLSS, since it won't use machine learning accelerated by tensor cores, sadly. My hopes are on Microsoft to deliver an ML-based reconstruction technique that works on next-gen consoles and modern GPUs. Unfortunately, the 5700XT would not benefit from it, as it doesn't support INT8 calculations.
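To put the INT8 point in perspective: ML-based reconstruction leans heavily on quantized 8-bit math, and GPUs with a fast INT8 (DP4a-style) path chew through it far quicker. A toy sketch of the operation in question, purely illustrative and not any vendor's actual kernel:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: a "dp4a"-style operation, the building block many
// quantized (INT8) neural-network inference kernels are built from.
// Hardware with fast INT8 support executes each 4-element group in one
// instruction; without it, the same math runs at FP16/FP32 rates instead.
int32_t dp4a(int32_t packed_a, int32_t packed_b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t a = static_cast<int8_t>((packed_a >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((packed_b >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main() {
    // Two vectors of four int8 values each, packed into 32-bit words.
    int32_t a = (1 & 0xFF) | ((2 & 0xFF) << 8) | ((3 & 0xFF) << 16) | ((4 & 0xFF) << 24);
    int32_t b = (5 & 0xFF) | ((6 & 0xFF) << 8) | ((7 & 0xFF) << 16) | ((8 & 0xFF) << 24);
    printf("%d\n", dp4a(a, b, 0)); // 1*5 + 2*6 + 3*7 + 4*8 = 70
    return 0;
}
```

A card without that fast path has to do the same work on its general FP units, which is a big part of why tensor-core or at least DP4a-capable hardware is assumed for this kind of technique.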

Nah, the 2060 is fine for Raytracing. For example, Control runs at 1440p60 thanks to DLSS with RT set to medium, without any issues, and it looks a lot better thanks to the reflections. Because DLSS delivers more performance than moderate RT settings cost, it runs faster than a 5700XT without RT.

How is it going to age? We already know the 2060 Super is a little faster in RT than a Series X without DLSS, and the 2060 is not so far off that; it is certainly much faster than the Series S as well. So you'd be more than fine for the whole generation at next-gen console level graphics, which will look fantastic enough for most people. The 5700XT can't even compete with the Series S then, as the Series S has full support for DX12 Ultimate and DirectStorage. As I said, if you set Raytracing to moderate levels, the 2060 will do just fine. And I'm not even counting DirectML reconstruction / DLSS.
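To illustrate the frame-time argument with completely made-up numbers (not actual Control benchmarks): if DLSS gives back more frame time than moderate RT costs, the combination ends up faster than native rendering without RT.

```cpp
#include <cstdio>

// Illustrative numbers only (not benchmarks): the "DLSS gives back more than
// moderate RT costs" argument in frame-time form. Start from a hypothetical
// 60 fps baseline, apply an assumed RT cost and an assumed DLSS speedup.
int main() {
    const double baseMs   = 16.7; // hypothetical native-res frame time (~60 fps)
    const double rtCost   = 1.35; // assumed: medium RT makes frames ~35% slower
    const double dlssGain = 0.65; // assumed: DLSS cuts frame time to ~65%

    double rtMs     = baseMs * rtCost;
    double rtDlssMs = rtMs * dlssGain;

    printf("Native:         %.1f ms (%.0f fps)\n", baseMs, 1000.0 / baseMs);
    printf("RT medium:      %.1f ms (%.0f fps)\n", rtMs, 1000.0 / rtMs);
    printf("RT medium+DLSS: %.1f ms (%.0f fps)\n", rtDlssMs, 1000.0 / rtDlssMs);
    return 0;
}
```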

Yes, it absolutely does mean that. Hardware features are extremely important. If Sampler Feedback is used, even a 2060 has a lot more effective VRAM than a 5700XT. You forgot that most DX12U features massively increase performance and visual fidelity at the same time.
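For what it's worth, this is roughly how a PC game discovers those capabilities at runtime. A quick, untested sketch (it assumes you already have an ID3D12Device) of querying the DX12 Ultimate feature tiers, which RDNA1 cards simply report as unsupported:

```cpp
#include <d3d12.h>
#include <cstdio>

// Rough sketch: query DX12 Ultimate feature tiers on an already-created
// ID3D12Device. A GPU that reports NOT_SUPPORTED for these never gets the
// corresponding render paths (e.g. Sampler Feedback Streaming).
void ReportDX12UltimateSupport(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7));

    printf("Raytracing:       %s\n",
           opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1 ? "Tier 1.1+" : "below DX12U");
    printf("Variable shading: %s\n",
           opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2 ? "Tier 2" : "below DX12U");
    printf("Mesh shaders:     %s\n",
           opts7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1 ? "Tier 1" : "not supported");
    printf("Sampler feedback: %s\n",
           opts7.SamplerFeedbackTier != D3D12_SAMPLER_FEEDBACK_TIER_NOT_SUPPORTED ? "supported" : "not supported");
}
```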

Low end RDNA2 cards will benefit from that as well. I assume an RX 6300 will easily beat the 5700XT in next generation titles.
 
And it's actually interesting that we already saw path traced Minecraft running on an Xbox Series X, but somehow the 6800 cards perform abysmally in comparison...
Hopefully the newer APIs will finally get on their feet on PC, so that more optimizations from RDNA2 in the consoles are translated to RDNA on the PC.
I'm not sure how a research branch of Minecraft with raytracing, made to run on the Xbox Series X, would have any effect on 6800 performance for end users/reviewers (until it was merged/released).

But yes, I would assume that once console versions get raytracing optimizations, those games/engines could offer that raytracing configuration/path to desktop RDNA2 users.
 
AMD says the years in their roadmaps are inclusive (so it could be 2022), but so far every single architecture under the current roadmap style (both CPU and GPU) has released as if it were exclusive (which would suggest 2021).
Also, Wang or some other Radeon bigwig said (or even promised?) that they'd deliver new products every year, be it a new architecture, a tweaked architecture, or a new process.
That's why I expect them to launch an RDNA2 refresh in 2021. They promised a new product every year, not a new architecture every year (well, even Navi 23 by itself would fulfill the promise…)

Launching RDNA3 in 2021 would mean that they reduced their development cycle to 12 months. It was longer even in the VLIW-5 era (~15 months), and they switched to 18 months later. RDNA to RDNA 2 was 16 months, but RDNA was considered late, so maybe the original plan was to switch back to 18 months (after the mess which began after the departure of Eric Demers).

According to Rick Bergman, AMD plans to boost the energy efficiency of RDNA 3 in a similar way to RDNA 2. I cannot imagine how they could do it in 12 months or less.
 
That's why I expect them to launch an RDNA2 refresh in 2021. They promised a new product every year, not a new architecture every year (well, even Navi 23 by itself would fulfill the promise…)

Launching RDNA3 in 2021 would mean that they reduced their development cycle to 12 months.

Or they had more concurrent development teams active?
 
Isn't the entire game CPU limited?
Yes, RT actually makes the game even more CPU limited, but heavy RT scenes with lots of reflections quickly become GPU bound.

CPU starting to become more of a bottleneck? Or RT acceleration not getting the same gains as doing everything in software?
They tested the game at High settings, which means a relatively low amount of RT, while also depriving RTX cards of their hardware acceleration prowess (which only works if you select the "Can it run Crysis?" setting). The whole test is meaningless GPU-wise; it only shows that there was a dramatic improvement to the game CPU-wise.
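Side note on how a "CPU limited" claim gets established in the first place: compare how long the CPU takes to prepare a frame against how long the GPU takes to render it. A minimal sketch of that logic; the two helper functions here are placeholders standing in for real engine code and GPU timestamp queries, not an actual API:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Placeholder stand-ins (hypothetical, not a real engine API): in a real game
// SubmitFrame() would build and submit one frame on the CPU, and
// GetGpuFrameTimeMs() would read GPU timestamp queries. Here they just
// simulate some numbers so the sketch runs.
void SubmitFrame() { std::this_thread::sleep_for(std::chrono::milliseconds(12)); }
double GetGpuFrameTimeMs() { return 8.0; }

int main() {
    auto t0 = std::chrono::high_resolution_clock::now();
    SubmitFrame(); // CPU-side cost of preparing one frame
    auto t1 = std::chrono::high_resolution_clock::now();

    double cpuMs = std::chrono::duration<double, std::milli>(t1 - t0).count();
    double gpuMs = GetGpuFrameTimeMs();

    // If the CPU takes longer to prepare a frame than the GPU takes to render
    // it, a faster GPU (or hardware RT acceleration) barely moves the framerate.
    printf("CPU %.2f ms, GPU %.2f ms -> %s bound\n",
           cpuMs, gpuMs, cpuMs > gpuMs ? "CPU" : "GPU");
    return 0;
}
```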
 
That's why I expect them to launch an RDNA2 refresh in 2021. They promised a new product every year, not a new architecture every year (well, even Navi 23 by itself would fulfill the promise…)

I'm almost sure they confirmed a refresh for next year already. Either way the same leaks that proved out the specs for this year mentioned codenames for next year as well.

A 6nm refresh with higher GDDR6 speeds and higher clocks, as the cards can obviously do but are limited by yields, seems perfectly in line.
 
I'm almost sure they confirmed a refresh for next year already. Either way the same leaks that proved out the specs for this year mentioned codenames for next year as well.

A 6nm refresh with higher GDDR6 speeds and higher clocks, as the cards can obviously do but are limited by yields, seems perfectly in line.

And hopefully chuck another 64MB of L3 on the side of the chip, otherwise the problem at 4K will be exacerbated.
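Rough illustration of why cache size matters more at 4K, with completely made-up hit rates and bandwidth figures (not AMD's numbers): effective bandwidth is a blend of cache bandwidth on hits and GDDR6 bandwidth on misses, and the hit rate drops as the working set grows with resolution.

```cpp
#include <cstdio>

// Purely illustrative numbers: a simple model of why a larger on-die cache
// matters more at 4K. Effective bandwidth blends cache bandwidth (hits) with
// GDDR6 bandwidth (misses); hit rate falls as the working set grows.
double EffectiveBandwidth(double hitRate, double cacheGBs, double vramGBs) {
    return hitRate * cacheGBs + (1.0 - hitRate) * vramGBs;
}

int main() {
    const double cacheGBs = 1600.0; // assumed on-die cache bandwidth, GB/s
    const double vramGBs  = 512.0;  // assumed GDDR6 bandwidth, GB/s

    // Assumed hit rates: higher at 1440p, lower at 4K because the frame's
    // working set is ~2.25x larger; a bigger cache would pull 4K back up.
    printf("1440p (70%% hits): %.0f GB/s\n", EffectiveBandwidth(0.70, cacheGBs, vramGBs));
    printf("4K    (55%% hits): %.0f GB/s\n", EffectiveBandwidth(0.55, cacheGBs, vramGBs));
    return 0;
}
```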
 
Probably a second team is working on RDNA3. They could have started development ASAP with Samsung money while RDNA2 was being developed with Sony's money.
According to Korean rumors, Samsung should be producing an SoC with RDNA3 by the end of next year, so there is probably a second team.
Also, the collaboration with the CPU teams should help with physical design and process-related stuff.
 
A larger bus could do the trick too, couldn't it?

It could, but it would eat into the power budget they need to achieve higher clocks (and incur extra miscellaneous costs such as PCB redesigns).

TSMC has touted the 18% density improvement of N6 over N7 using the same design rules, but has remained silent on the degree of power improvement.
 
Things really must have changed. I remember vividly how AMD told everyone that Hawaii's 512-bit bus was more power efficient and more space saving than Tahiti's 384-bit bus, because they could trade the relatively high clocks against a wider interface (power) and have much smaller drivers because they didn't need the clocks to go that high (area). I still like the idea of a large memory bus. It gives you more fine-grained accesses and it scales more easily to larger memory capacity.
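The underlying arithmetic is simple enough; a quick back-of-the-envelope (the configurations are just examples to show the width-vs-clock trade, not an efficiency comparison):

```cpp
#include <cstdio>

// The wide-and-slow vs narrow-and-fast tradeoff in one line of arithmetic:
// bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate in Gbps.
double BandwidthGBs(int busBits, double gbpsPerPin) {
    return busBits / 8.0 * gbpsPerPin;
}

int main() {
    // Hawaii-style: wide bus, relatively low memory clock.
    printf("512-bit @  5.0 Gbps: %.0f GB/s\n", BandwidthGBs(512, 5.0));
    // Tahiti (GHz Edition)-style: narrower bus pushed to higher clocks.
    printf("384-bit @  6.0 Gbps: %.0f GB/s\n", BandwidthGBs(384, 6.0));
    // Navi 21: narrower still, leaning on Infinity Cache instead.
    printf("256-bit @ 16.0 Gbps: %.0f GB/s\n", BandwidthGBs(256, 16.0));
    return 0;
}
```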
 
AMD's plans for RDNA3 are probably to release by the end of 2021, although they don't want to commit to that publicly due to possible delays.

[attached image: AMD GPU architecture roadmap slide]


Anandtech swears the last years in these slides are always inclusive (what AMD told them at least), though the same slides that had Zen3 and RDNA2 at the end also showed 2021 at the right of the x axis.

It could be that RDNA3 is mostly RDNA2 at 5nm with some smaller improvements.


Things really must have changed. I remember vividly how AMD told everyone that Hawaii's 512-bit bus was more power efficient and more space saving than Tahiti's 384-bit bus, because they could trade the relatively high clocks against a wider interface (power) and have much smaller drivers because they didn't need the clocks to go that high (area).

Both could be true, though. Hawaii's wide and slow memory controller may indeed be more power efficient than Tahiti's. Truth be told, AMD isn't using super fast GDDR6 this time around either.
 
Are there results for Navi 21 with memory overclocking while core overclocking is not used?
Memory overclocking is highly constrained, and the most I have seen is around 100-150 MHz more. Overclocking beyond that can lead to worse results due to errors and crashes.
I have not seen memory only overclocking.
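For a sense of scale, assuming the stock 16 Gbps GDDR6 corresponds to the 2000 MHz memory clock shown in overclocking tools (an assumption on my part), a 100-150 MHz bump works out to roughly 5-7.5% more raw bandwidth:

```cpp
#include <cstdio>

// Back-of-the-envelope: extra bandwidth from a ~100-150 MHz memory overclock
// on Navi 21, assuming stock 16 Gbps GDDR6 corresponds to a 2000 MHz memory
// clock and a 256-bit bus.
int main() {
    const double busBytes   = 256.0 / 8.0; // 256-bit bus -> 32 bytes per transfer
    const double stockMHz   = 2000.0;      // assumed stock memory clock
    const double gbpsPerMHz = 16.0 / stockMHz;

    for (double oc : {0.0, 100.0, 150.0}) {
        double gbps = (stockMHz + oc) * gbpsPerMHz;
        printf("+%3.0f MHz: %.1f Gbps per pin, %.0f GB/s (+%.1f%%)\n",
               oc, gbps, busBytes * gbps, oc / stockMHz * 100.0);
    }
    return 0;
}
```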
 