AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

Really, the 6900 XT is nowhere in the charts. You're much better off with a custom 6800 XT with an OC, and much cheaper once the MSRP gets real.

So 128MB of IC is not suitable for 4K gaming...

I think it's quite the feat that the 128MB of IC works so well on day 0, considering many tech reviewers/analysts seemed so skeptical of its use when these cards were announced. Also, frametime consistency on Navi 21 cards is unparalleled, which translates to a better gaming experience.
There's probably a bit of headroom left in driver and game optimization, too.

Regardless, 4K is absolute overkill in 99.99% of cases, and AMD needs to put out their new upscaler ASAP so that people can play at 1440-1800p + upscaling.

Maybe with RDNA3 at 5nm we'll see 192/256MB of IC in the next top-end GPU, and that could reach a significantly higher hit rate. That slide from AMD with the cache/resolution curves shows that the 4K curve isn't anywhere near flatlining at the maximum 150MB they have on the scale.
Though I guess eventually AMD will also need to use faster VRAM, of course.
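As a rough back-of-envelope (every number below is an illustrative assumption, not an AMD figure), here's one way to see why a fixed-size cache covers less and less of a frame as resolution goes up, and why a larger cache could plausibly push the 4K hit rate higher:

```python
# Rough back-of-envelope sketch (all numbers are illustrative assumptions, not
# AMD figures): a fixed-size cache covers a shrinking share of the frame's
# render-target working set as resolution goes up, which is one intuition for
# why the 4K hit-rate curve hasn't flattened yet at 128 MB.

BYTES_PER_PIXEL = 4   # assume RGBA8-class render targets
TARGETS = 6           # assume a G-buffer, depth, and a couple of intermediates

def working_set_mb(width, height):
    return width * height * TARGETS * BYTES_PER_PIXEL / 2**20

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    ws = working_set_mb(w, h)
    coverage = {cache: min(1.0, cache / ws) for cache in (128, 192, 256)}
    summary = ", ".join(f"{c} MB -> ~{v:.0%}" for c, v in coverage.items())
    print(f"{name}: ~{ws:.0f} MB of render targets; cache coverage: {summary}")
```

Real hit rates obviously also depend on textures, geometry and access patterns, so this only illustrates the resolution scaling, not the actual curves on AMD's slide.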
 
Mostly according to reviews testing something like 5-8 games. The review at ComputerBase, which tested 17 games, shows the same RX 6900 XT vs. RTX 3090 performance delta (7%) at both 1440p and 4K.
That's because once you get outside the typical modern benchmarking suite, Navi 21 tends to show weaker performance at 1440p. That probably has something to do with driver optimizations for effective use of the IC. I wouldn't be surprised if Navi 21 performance at 4K generally improves over time.
 
Don't think so. Competitive gamers mostly don't care about graphical fidelity enough to buy such expensive cards anyway.

Competitive gamers have been running overclocked 2080 Tis with game settings on low at 1080p. They buy whatever can push the most frames. I think benchmarks might show AMD pulling ahead there.
 
I'm almost sure they already confirmed a refresh for next year. Either way, the same leaks that proved out the specs for this year mentioned codenames for next year as well.

A 6nm refresh with higher GDDR6 speeds and higher clocks, which the silicon can obviously do but is currently limited by yields, seems perfectly in line.

They did say they wanted to do yearly refreshes, but it's not clear whether that means a Zen 2 XT-style refresh or something else. Given that they apparently have four GPUs in the RDNA2 stack (N21, N22, N23 and N24), and the entire stack will only have launched by Q2'21, I have a hard time seeing RDNA3 by Q4. If they truly are targeting a 50% perf/W improvement again, it's a bigger effort than Zen 3 vs Zen 2, and that took 16 months with a much smaller die. Even Nvidia has taken ~2 years between Pascal, Turing and Ampere. I would still bet on RDNA3 only in 2022.

Samsung is using RDNA2, not RDNA3 (of course this could change by release, but that's the official story so far).
RDNA2 wasn't developed with "Sony's money" any more than it was developed with "Microsoft's money"; it would have been developed regardless of either console manufacturer, and most of what Sony and MS pay is only now starting to roll in as they actually buy the chips from AMD.
Considering that each architecture is really a multi-year project, of course they have overlapping development by different teams.

Is it actually official that Samsung is using RDNA2 and not 3? I haven't read that anywhere. If I remember correctly, the rumour was that they're using RDNA3.

Regarding R&D, it is my understanding that MS & Sony pay a significant part of the R&D up front/during development in exchange for lower royalties later, and this is the route AMD has been taking with the consoles. So saying that RDNA2 was partly funded by MS/Sony is for the most part correct.
I think it's quite the feat that the 128MB of IC works so well on day 0, considering many tech reviewers/analysts seemed so skeptical of its use when these cards were announced. Also, frametime consistency on Navi 21 cards is unparalleled, which translates to a better gaming experience.
There's probably a bit of headroom left in driver and game optimization, too.

Regardless, 4K is absolute overkill in 99.99% of cases, and AMD needs to put out their new upscaler ASAP so that people can play at 1440-1800p + upscaling.

Maybe with RDNA3 at 5nm we'll see 192/256MB of IC in the next top-end GPU, and that could reach a significantly higher hit rate. That slide from AMD with the cache/resolution curves shows that the 4K curve isn't anywhere near flatlining at the maximum 150MB they have on the scale.
Though I guess eventually AMD will also need to use faster VRAM, of course.

Tbh, 4K isn't really a popular gaming resolution for PCs (maybe for consoles, as far more people have 4K TVs than 4K monitors). I just checked the latest Steam survey for November 2020, and 4K is a whopping 2.25% of the install base. The vast majority of gamers are on 1080p and 1440p, and this is unlikely to change much in the near future (especially for mobile, which is mostly 1080p and a sizeable chunk of the gaming market today). I don't see a significant ROI for AMD in increasing the IC as much as you suggest, especially early in the 5nm process lifecycle.
 
I think it's quite the feat that the 128MB of IC works so well on day 0, considering many tech reviewers/analysts seemed so skeptical of its use when these cards were announced. Also, frametime consistency on Navi 21 cards is unparalleled, which translates to a better gaming experience.
There's probably a bit of headroom left in driver and game optimization, too.
It's a cache, so conventionally it should work about as well now as it can work in general.
At least for now, the RDNA2 documentation treats it as architecturally invisible, and at the driver level the publicly visible changes are mostly limited to a handful of flags related to not using the cache, most of which don't seem to be performance-critical. Some, like making the metadata for things like HiZ and compression bypass the cache, seem like they would cost performance, but that's speculation on my part.
Flagging specific pages to skip the cache is something the virtual memory entries for Sienna Cichlid have as an option, but hunting down resources to exclude seems like approaching the problem from a less efficient direction. It's not clear yet whether anything but the driver can make this change, or whether there are that many opportunities to use it.

Perhaps we'll find out that some of the offenders haven't been filtered by the drivers yet, or that there's a better way to massage access patterns and cache use.
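For what it's worth, the per-page idea above could be sketched like this (a toy model only; the constants and layout are made up and do not reflect the actual amdgpu page-table format):

```python
# Toy model of per-page cache-bypass flagging (purely illustrative; this is NOT
# the real amdgpu page-table layout or API, just the concept described above).

PAGE_MASK = ~0xFFF        # assume 4 KiB pages
NOALLOC_BIT = 1 << 58     # hypothetical "don't allocate in the last-level cache" bit

def make_pte(phys_addr: int, bypass_cache: bool = False) -> int:
    """Build a fake page-table entry, optionally marking it to skip the cache."""
    pte = phys_addr & PAGE_MASK
    if bypass_cache:
        # Streaming or low-reuse resources could be mapped this way so they
        # don't evict data that actually benefits from staying resident.
        pte |= NOALLOC_BIT
    return pte

# Example: map one page normally and one flagged to bypass the cache.
print(hex(make_pte(0x1_0000_0000)))
print(hex(make_pte(0x1_0000_1000, bypass_cache=True)))
```

The open question is still the one above: who besides the driver gets to set such a bit, and for how many resources it would actually be worth doing.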

Maybe with RDNA3 at 5nm we'll see 192/256MB of IC in the next top-end GPU, and that could reach a significantly higher hit rate. That slide from AMD with the cache/resolution curves shows that the 4K curve isn't anywhere near flatlining at the maximum 150MB they have on the scale.
That graph seems to leave some context out, such as why there are so many points on it (three resolutions, more than two colors for 1440p, dots and x's). There are endnotes mentioned for it, which I've seen indicate that part of the extrapolation is based on CU count. How exactly that maps to what's on display is unclear to me.
4K still seems to be in the "steep" part of its curve that the others had prior to leveling off. So far the pattern has been that a higher resolution levels off before meeting the other curves, so the fact that 4K is still in its steepest rise is somewhat unpromising. 1440p has a more pronounced flattening at the far end, and I wonder whether there's a later point where it might start to rise slightly again like the HD curve does.

Competitive gamers have been running overclocked 2080 Tis with game settings on low at 1080p. They buy whatever can push the most frames. I think benchmarks might show AMD pulling ahead there.
I wonder if there are some ancient games, or aggressively modded older games, that could be tweaked to fit their graphics contexts wholly on-die. It'd bottleneck on something else far earlier, but it would be a somewhat funny data point for a game that fit in the VRAM of ATI 9800-era hardware to achieve a practical minimum in memory bandwidth consumption.
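A quick sanity check (all figures below are rough assumptions about a circa-2003 title, not measurements) suggests the whole graphics working set of such a game could indeed fit in 128MB:

```python
# Back-of-envelope check (all figures are rough assumptions about a circa-2003
# title, not measurements): would its whole graphics working set fit in 128 MB?

MB = 2**20

width, height = 1024, 768
color_buffers = 2                      # double-buffered, 32-bit color
framebuffer = width * height * 4 * color_buffers
depth = width * height * 4             # 24-bit depth + 8-bit stencil
textures = 80 * MB                     # assumed resident texture set
geometry = 10 * MB                     # assumed vertex/index buffers

total = framebuffer + depth + textures + geometry
print(f"Estimated working set: {total / MB:.1f} MB "
      f"({'fits' if total <= 128 * MB else 'does not fit'} in 128 MB)")
```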
 
I wonder if there are some ancient games, or aggressively modded older games, that could be tweaked to fit their graphics contexts wholly on-die. It'd bottleneck on something else far earlier, but it would be a somewhat funny data point for a game that fit in the VRAM of ATI 9800-era hardware to achieve a practical minimum in memory bandwidth consumption.

CS 1.6, old RTS games, and Quake 3 (maybe) still see competitive play. It would definitely be an interesting experiment.
 
They did say they wanted to do yearly refreshes, but it's not clear whether that means a Zen 2 XT-style refresh or something else. Given that they apparently have four GPUs in the RDNA2 stack (N21, N22, N23 and N24), and the entire stack will only have launched by Q2'21, I have a hard time seeing RDNA3 by Q4. If they truly are targeting a 50% perf/W improvement again
They could pull the same staggered launch, or launch only a new halo product above Navi 21 without dropping it.
 
They are leaving a conspicuous gap between the 80 CU and 40 CU dies.

There is the 60 CU salvage part, of course. And they're apparently clocking the 40 CU die ~10-15% higher, while giving it 75% of the memory bandwidth of the 80 CU die. That should put it within ~20% of the 60 CU part.
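A crude scaling proxy (the clock figures are assumptions for illustration, not confirmed specs) puts that in the right ballpark:

```python
# Quick scaling proxy (clock figures are assumptions for illustration, not
# confirmed specs): compare a 40 CU part clocked ~12.5% higher against a 60 CU
# part on the same architecture, plus their memory-bus ratio.

def compute_proxy(cus, clock_ghz):
    # Very crude throughput proxy: CUs x clock, ignoring IPC, bandwidth and cache.
    return cus * clock_ghz

cu60 = compute_proxy(60, 2.0)           # assume ~2.0 GHz game clock
cu40 = compute_proxy(40, 2.0 * 1.125)   # assume ~12.5% higher clock

print(f"40 CU / 60 CU compute proxy: {cu40 / cu60:.0%}")      # ~75%
print(f"192-bit / 256-bit bandwidth ratio: {192 / 256:.0%}")  # 75%
```

On those assumptions the 40 CU part lands at roughly 75% of the 60 CU part's raw compute and bandwidth, which is in the same ballpark as the "within ~20%" estimate above.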
 
Is it actually official that Samsung is using RDNA2 and not 3? I haven't read that anywhere. If I remember correctly, the rumour was that they're using RDNA3.
Had to double check: the official wording is "custom IP based on RDNA architecture" with no numbers mentioned. But I'm pretty sure RDNA2 was mentioned somewhere when this came out.
https://www.amd.com/en/press-releas...ce-strategic-partnership-ultra-low-power-high
https://news.samsung.com/global/amd...-power-high-performance-graphics-technologies
 
Competitive gamers have been running overclocked 2080 Tis with game settings on low at 1080p. They buy whatever can push the most frames. I think benchmarks might show AMD pulling ahead there.
If that were the case, the AMD marketing team would have based the whole launch theme on that fact, since presenting the cards as 4K game changers would be silly and incompetent. Oh wait...
 
Another take on the 6900 XT, scalping, Cyberpunk, ray tracing, ... I really like PCWorld's Full Nerd podcast. I hope others like it as well.

 