AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

We had 512- and 448-bit buses at one point, years ago. Why is it out of the question now? Price? It wasn't a massive problem before...

As data transfer rates get higher, signal integrity becomes harder to maintain. A wider bus compounds the issue, because you have to deal not only with signal integrity but also with data integrity (keeping everything synchronized so all the data arrives when it should). Microsoft talked about this a bit in their Hot Chips talk for the Xbox Series X.

So PCB layout becomes a nightmare, because all the traces have to be length-matched to very tight tolerances, while the high transfer rate also means you want each trace to be as short as physically possible.

Thus it's a cost + data integrity issue.
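To put rough numbers on it (back-of-the-envelope, not from the Microsoft talk): at 16 Gbps per pin one bit time is about 62.5 ps, and signals on FR4 travel roughly 15 cm/ns, so a 1 mm length mismatch between traces already costs ~7 ps of skew, i.e. about a tenth of the bit time. Multiply that matching problem by 512 data lines plus clocks and strobes, all of which you'd also like to keep short, and it's clear why wide, fast buses are unattractive.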

Regards,
SB
 
We had 512- and 448-bit buses at one point, years ago. Why is it out of the question now? Price? It wasn't a massive problem before...
HBM could do the job I suppose. 24Gbps GDDR6X and a 512-bit bus doesn't even get to 66% more bandwidth.
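Quick math (assuming the baseline is a 3090's 19.5 Gbps GDDR6X on 384 bits, ~936 GB/s): 24 Gbps × 512 / 8 = 1536 GB/s, which is only about 64% more. A single HBM2E stack is already in the ~410-460 GB/s range over a 1024-bit interface, so a few stacks get you past that without the GDDR routing headaches.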

Anyway, I think by the end of 2022 all card reviews are going to be about ray tracing by default. Raw bandwidth may be much less of a factor in ray tracing performance than in conventional games. So maybe NVidia cards won't really be affected.

We don't seem to have any good "bandwidth sensitivity" measurements for ray tracing as far as I can tell.

We don't even seem to have XSX/PS5 versus say 6600XT ray tracing performance comparisons to see how "bandwidth-sensitive" RDNA is when doing ray tracing. Ah well.
 
Also, there's a difference in how AMD and Nvidia GPUs underclock: Radeons typically drop their core frequency quite aggressively (it falls to almost 2D clocks if GPU load is lower than 70% or 80%), while GeForces typically stay at almost maximum clocks until load drops to 40% or so (people who own an NV GPU can correct me on that).

RDNA2 chips downvolt very aggressively, even under heavy benchmark loads. See around the 13-minute mark here:


 
What kind of architectural differences explain why RDNA2's power consumption drops so low compared to Ampere when performance is limited? The Infinity Cache eliminating memory accesses?

Most of it is probably just Navi 21 being pushed further beyond its optimal frequency range than GA102 is at default clocks.
The RDNA2 GPUs in the consoles also get incredible efficiency (~200W at the wall for a 3.5-3.7GHz 8-core Zen2 plus a 10-12 TFLOPs GPU plus GDDR6 plus SSD etc.), and Navi 23 / 6600 XT gets the prize for the most power-efficient GPU on the market for ETH hashing despite the narrow 128-bit bus.
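(For reference on the 10-12 TFLOPs figure: the PS5 is 36 CUs × 64 lanes × 2 FLOPs × 2.23 GHz ≈ 10.3 TFLOPs, and the Series X is 52 CUs × 64 × 2 × 1.825 GHz ≈ 12.2 TFLOPs.)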
 
RDNA2 chips downvolt very aggressively, even under heavy benchmark loads.
Now this is an interesting point. My 3080 Ti benefited greatly from a lot of effort put into a well-validated undervolt. I wasn't specifically out looking for power efficiency per se; instead I was trying to reduce the heat load of all the equipment I have stuffed into a tiny Silverstone PS07B uATX case while attempting to avoid forfeiting performance. Turns out, I can easily get away with a 12% drop in voltage while achieving incredibly similar maximum clocks. It's actually too bad the voltage/frequency curve manipulation stops at "only" 750 mV on this platform.
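Back-of-the-envelope, that checks out: dynamic power scales roughly with f·V², so holding clocks and shaving 12% off the voltage gives ~0.88² ≈ 0.77, i.e. something like a 20-25% cut in dynamic power (plus whatever leakage you save) for basically the same performance.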
 
I am starting to see the flaws of RDNA2's cache strategy: while it works great for top-dog GPUs, it scales down miserably to low-end GPUs, making them fit only for specific resolutions. The RX 6600 is suitable only for 1080p gaming; going to 1440p makes it fall behind its RDNA1/Turing/Ampere competitors, ray tracing is absolutely trash on the 6600, and 4K is a no-go. These GPUs will probably be a disaster in terms of future-proofing your purchase, as they only work within a narrow band of performance. It's also why I think the consoles didn't go that route and opted for regular old methods instead.

Also, compared to their Ampere competitors they are not terribly power-efficient, considering their 7nm origin and narrow memory bus.
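The mechanism, roughly speaking (ballpark reasoning, not measurements): effective bandwidth ≈ hit rate × cache bandwidth + (1 − hit rate) × DRAM bandwidth, and the hit rate falls as the working set grows with resolution. AMD's own figures put the 128 MB cache at only ~58% hits at 4K, so a 32 MB cache at 1440p and above is going to miss a lot and fall back on the narrow 128-bit GDDR6 bus.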
 
I am starting to see the flaws of RDNA2's cache strategy
The main flaw with lower-end RDNA2 parts is that they cost way too much.
The 6600 has the same MSRP as the 3060 while being worse than the latter in almost every way.
If not for the current market situation (thanks, crypto), most RDNA2 parts would be DOA at their prices.
 
Your proof that resolution scaling is IC-related, is…?
Navi 23's target is 1080p. It's by far the most power-efficient GPU for 1080p. It has 28% better power efficiency than GA106:
https://www.computerbase.de/2021-10...3/#abschnitt_energieeffizienz_in_fps_pro_watt
Has to be 40% over Ampere to be convincing, or what? :)
Funny how it becomes less efficient compared to the competition as you increase resolution.
Will be interesting to see next gen numbers when they are on the same node and likely using the same foundry.
 
It's by far the most power-efficient GPU for 1080p.
That "by far" happens only until you've turned on RT:

[attached screenshot: ray tracing power-efficiency chart]


At which point it is actually worse than the 3060 even in power efficiency.
 
The main flaw with lower-end RDNA2 parts is that they cost way too much.
The 6600 has the same MSRP as the 3060 while being worse than the latter in almost every way.
If not for the current market situation (thanks, crypto), most RDNA2 parts would be DOA at their prices.

That's the thing: they wouldn't normally price these cards this way. This is in response to the shortages. Don't worry, as the shortages dry up and the 7x00 parts come out or the 40x0 GeForce series hits, suddenly the prices will go back to what they should be. Then Nvidia and AMD will make a huge deal about it. Although if the shortages last into 2022, we may have to wait for the series after.
 
I am starting to see the flaws of RDNA2's cache strategy: while it works great for top-dog GPUs, it scales down miserably to low-end GPUs, making them fit only for specific resolutions. The RX 6600 is suitable only for 1080p gaming; going to 1440p makes it fall behind its RDNA1/Turing/Ampere competitors, ray tracing is absolutely trash on the 6600, and 4K is a no-go. These GPUs will probably be a disaster in terms of future-proofing your purchase, as they only work within a narrow band of performance. It's also why I think the consoles didn't go that route and opted for regular old methods instead.

Also, compared to their Ampere competitors they are not terribly power-efficient, considering their 7nm origin and narrow memory bus.

This is some of the most terrible logic.
How does its 1080p performance today indicate that in the future it's going to have, relatively speaking, worse 1080p performance than its direct competitors?
How does its 1440p performance today indicate that its future 1080p performance will be worse than its current-day direct competitors?
Got anything to back up the notion that in the future memory use per FLOP/pixel will increase in ways that won't cache well?
Are you insinuating that in the future games will have lower requirements, and thus we will start increasing resolution, and thus over time make its relative performance worse?

That "by far" happens only until you've turned on RT:

[attached screenshot: ray tracing power-efficiency chart]


At which point it is actually worse than the 3060 even in power efficiency.

So what you're trying to say is that it's more power-efficient in 99.99**% of the games available on the market?


Why do you two come into every AMD thread and post the same boring diatribe, over and over and over?
COVID's almost over, get outside, do something productive and help improve the SNR of this forum like 3-fold**...


** number pulled out of thin air
 
Why do you two come into every AMD thread and post the same boring diatribe, over and over and over?
It's seemingly their job, the Nvidia Defense Force. There's no improving the place, unfortunately for others, but it's much better with them on ignore. You can tell when they're at it every time though, lol.
 