AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

I'm still not sure why you feel the need to cut your reply into dozens of single sentences. It's distracting at best; you can just reply without the quote button at all because we're right here having the conversation together :) Notice you know I'm talking to you because our replies are so closely knit, right?

Back to real life: If winner-take-all were true, AMD would have been long gone after the decade (or more) they lost to Intel, from the Core series finally rescuing Intel from its sorry P4 heat, power and performance woes, until AMD finally got back into a competitive game with the Ryzen line. Hell, they'd be gone for as long as they haven't been fully equivalent to NV on the graphics front either.

Like many things, such as the feelings- and confirmation-bias-fueled arguments in the NV Ampere thread about GPU mining taking "100% of the GPU market", real life has a lot more nuance to it. The data we have suggests winners aren't taking all; instead, people have their own definitions of "winning." For many, AMD makes more sense on a price-vs-performance metric rather than an absolute-performance-only stance. The RDNA2 parts are selling just fine, as many as they can make, and I suspect RDNA3 will have similar results.

Even if RDNA3 is only "ok" in a pure dGPU world, the underlying technology fuels the company's iGPU components and their contract wins in the console space. Their definition of winning doesn't really have to line up with yours to still result in a lot of profit coming in.
 
If winner-take-all was true, AMD would have been long gone after the decade
They basically were.
What happened with AMD is nothing short of a small-time miracle coupled to a chain of Intel fuckups.
For many, AMD makes more sense to them as a price vs performance metric
Well too bad, AMD will smash that perception much the same way they did in CPUs.
Even if RDNA3 is only "ok" in a pure dGPU world, the underlying technology fuels the company's iGPU components and their contract wins in the console space
Okay is never enough.
And they know it.
They're fighting Qualcomm and Apple in phones by proxy; nothing is ever enough against at least the latter.
 
So, is it RDNA2 that will be in Samsung's next mobiles?
Yeah.
#2 then #3 and so on and so forth.
This year it's an e1080 replacement with RDNA, and I think a small WoA part.
AMD puts out IP and SS integrates it.
It'll be like that for quite some time to come since SS wants differentiation in a world where their custom ARM cores flopped.
 
you can just reply without the quote button at all because we're right here having the conversation together :) Notice you know I'm talking to you because our replies are so closely knit, right?
Please no; the reply/quote should generally be used if you're directly replying to a specific comment in a post.
 
That's sad for AMD. I was expecting RDNA2 to finally beat Nvidia in power efficiency this round, but no, it falls short. A full node advantage with the world's best foundry was not enough.

You might want to read reviews again. And the nodes aren't that far apart.
 
You might want to read reviews again. And the nodes aren't that far apart.
Samsung 8N is more or less only a half-node above 10 nm. The jump to 7 nm seems much more pronounced, matching TSMC's 7 nm and Intel's 10 nm in terms of transistor density:
https://semiwiki.com/semiconductor-...undry/7442-samsung-10nm-8nm-and-7nm-at-vlsit/

But AMD also chose to invest in a high(er)-frequency design compared to Nvidia (+25-ish %). That's going to cost some, and they are competing against a much larger die, so they need to clock their chips aggressively, which leads to a worse spot on the perf/power curve. There are probably points on that curve where you can see AMD being better than Nvidia, but much of it also depends on the metrics of your choice.
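The trade-off above can be made concrete with the classic dynamic-power relation P ≈ C·V²·f, together with the assumption that voltage has to rise with frequency near the top of the V/f curve. All numbers below are hypothetical, chosen only to illustrate why a +25% clock target costs disproportionately more power:

```python
# Illustrative sketch: why a +25% clock target costs more than +25% power.
# Assumes dynamic power P ~ C * V^2 * f, and that voltage must rise
# to sustain higher clocks near the top of the V/f curve.
# All operating points are made up for illustration, not measured GPU data.

def dynamic_power(freq_ghz, volts, cap=1.0):
    """Relative dynamic power, P = C * V^2 * f (arbitrary units)."""
    return cap * volts**2 * freq_ghz

base_f, base_v = 1.9, 0.90          # hypothetical baseline operating point
fast_f = base_f * 1.25              # +25% clock target
fast_v = base_v * 1.15              # assume ~15% more voltage to get there

p_base = dynamic_power(base_f, base_v)
p_fast = dynamic_power(fast_f, fast_v)

print(f"clock: +{fast_f / base_f - 1:.0%}, power: +{p_fast / p_base - 1:.0%}")
# With these assumed numbers: +25% clock at +15% voltage lands around
# +65% dynamic power, i.e. a worse spot on the perf/power curve.
```

The superlinear blow-up comes entirely from the V² term: any voltage bump needed to hold the higher clock gets squared, which is why running the same silicon lower on the curve looks so much more efficient per watt.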
 
Dudebro one-liner stream of consciousness posts are awesome. Just as good as last time.

Well, some common sense and balance was needed, really. It's amusing seeing the resident grass-eating adepts not yet realizing they've reached audience saturation.
 
Which doesn't have better perf/watt than Ampere, doesn't equal it in RT and doesn't seem to present any threat to Nv GPUs whatsoever anywhere at all.
That's the harsh reality of current day at least.
Now please continue with the fairy tales.

Tell us how you really feel.

lol
 
But AMD also chose to invest in a high(er) frequency design compared to Nvidia (+25-ish %), that's gonna cost some and they are competing against a much larger die, so they need to agressively clock their chips which leads to a worse spot on the perf/power curve. There are probably points in that curve, where you can see AMD being better than Nvidia, but much of it also depends on the metrics of your choice.

It's more that they chose a high-clock design in order to keep the die size small enough to be economically viable on a high-cost process. The perf/power curve is a result of the design, not only of the process. AMD managed to keep power consumption quite low considering the high clocks reached; the only part that seems pushed hard to me is N22, because they needed to achieve a certain level of performance with only 40 CUs. Ampere cards are pushed close to the edge too, as we easily see OC parts with power limits well above the FE models but only modest performance increases.
 
It's more that they chose a high-clock design in order to keep the die size low to be economically viable on a high-cost process. The perf/power curve is a result of the design, not only of the process. AMD managed to keep power consumption quite low considering the high clocks reached; the only part that seems pushed hard to me is N22, because they needed to achieve a certain level of performance with only 40 CUs. Ampere cards are pushed close to the edge too, as we easily see OC parts with power limits well above the FE models but only modest performance increases.
Yeah, that's probably the proper line of reasoning for Navi 2x, which indeed seems pretty efficient given the clock speeds it operates at. And I agree, N22 is almost as close to the edge as Ampere seems to be. Whereas the higher-end Amperes seem to suffer additionally from the G6X power profile, as can be guessed from the +70 watts going from the 3070 to the 3070 Ti for only a very small increase in compute.
 
It's more that they chose a high-clock design in order to keep the die size small enough to be economically viable on a high-cost process. The perf/power curve is a result of the design, not only of the process. AMD managed to keep power consumption quite low considering the high clocks reached; the only part that seems pushed hard to me is N22, because they needed to achieve a certain level of performance with only 40 CUs. Ampere cards are pushed close to the edge too, as we easily see OC parts with power limits well above the FE models but only modest performance increases.
Navi2x clocks are quite insane, as high as many server CPUs.
If the N5P performance and efficiency gains advertised by TSMC can be utilized well, Navi3x will get a decent perf boost from clocks alone before adding anything on top of N2x. Then add V-Cache. RX7700 VXT, the SKU name sounds formidable already.
Base clocks of 2.5GHz on mid range would be bonkers.
 
Navi2x clocks are quite insane, as high as many server CPUs.
If N5P perf and efficiency as advertised by TSMC can be utilized well, Navi3x will get a decent perf boost just from clocks alone before adding anything on top of N2x. Then add V-Cache. RX7700 VXT, the SKU name sounds formidable already.
Base clocks of 2.5GHz on mid range would be bonkers.
The mid-range RX 6700 already has a base clock of 2321 MHz, with a game clock of 2424 MHz and boost above 2500 MHz. So what you say will be almost a given if nothing else changes except for the node.
 
Whereas the higher end Amperes seem to suffer additionally from G6X power profile as can be guessed from the +70 watt going from 3070 to 3070 Ti for only very small increases in compute.
Likely has nothing to do with G6X. The 3070 Ti seems to run at quite a bit higher voltages while achieving the same clocks, which to me suggests they've tried to salvage GA104 chips that wouldn't run properly at voltages typical for other GA104-based products.
 
Likely has nothing to do with G6X. The 3070 Ti seems to run at quite a bit higher voltages while achieving the same clocks, which to me suggests they've tried to salvage GA104 chips that wouldn't run properly at voltages typical for other GA104-based products.
If it were an outlier product, I'd be inclined to agree. But looking at the bigger picture, we have 3 out of 4 G6X products at or above 320 watts, which in itself is a new dimension in power consumption, and the fourth G6X SKU is just a tiny bit below 300 watts. 70 extra watts for the GPU alone seems a bit much just to bin a few more chips, especially since those chips could also be sold directly in CMP cards. Then we have heat problems with the memory on G6X cards while mining ETH. That the RTX A6000 only sports G6 without the X, and has a 50-watt lower TDP than the RTX 3090 while being the full GA102 config, probably also has binning reasons.
 
There was speculation that Navi 21 would make it into laptops, since it only uses a 256-bit bus. That should have worked out much better than half the chip trying to catch up to a bigger Nvidia part, and also given them the outright performance lead in the laptop space.

The cache has ballooned AMD's chip sizes, but otherwise it's basically RX 480 vs. GTX 1080 in laptops, a situation almost inconceivable when Pascal dropped with 2 GHz clocks. RDNA2 has been wildly successful for AMD in that regard: after a long time they have the clockspeed lead and a part competing at the top instead of just struggling against the x04 chip. Though some of that is also down to Nvidia's regression in clocks with Ampere.

It's very reminiscent of the Maxwell magic from Nvidia, when the chips were really efficient and had a lot of clockspeed headroom that was later exploited in Pascal. The RDNA2 chips can overclock close to 3 GHz with enough voltage, but the stock Navi 21 cards usually run below 2.3 GHz. So RDNA3 has a lot of free performance on the table even if it's just an RDNA2 iteration.
 
Likely has nothing to do with G6X. 1070Ti seems to run on quite a bit higher voltages while achieving the same clocks which to me suggests that they've tried to salvage GA104 chips which wouldn't run properly on voltages typical for other GA104 based products.
It has something to do with it; I'm not sure exactly how much, but it's more than nothing.
GDDR6X uses only slightly less power per transferred bit than GDDR6, but it runs at higher speeds and ends up consuming notably more in practice.
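The "less energy per bit, more power overall" point is just arithmetic: interface power is roughly energy-per-bit times data rate. Using the approximate energy-per-bit figures Micron has cited publicly (~7.5 pJ/bit for GDDR6, ~7.25 pJ/bit for GDDR6X) and a hypothetical 256-bit card at typical signaling rates, treat everything below as a ballpark sketch rather than measured numbers:

```python
# Back-of-the-envelope: memory interface power = pJ/bit * data rate.
# Energy-per-bit values are the approximate figures Micron has cited
# publicly (~7.5 pJ/bit GDDR6, ~7.25 pJ/bit GDDR6X); ballpark only.

def interface_power_w(pj_per_bit, gbps_per_pin, bus_width_bits):
    """Interface power in watts.

    pJ/bit * Gbit/s gives mW per pin; multiply by pin count, convert to W.
    """
    return pj_per_bit * gbps_per_pin * bus_width_bits / 1000.0

g6  = interface_power_w(7.5,  14, 256)   # e.g. 14 Gbps GDDR6, 256-bit bus
g6x = interface_power_w(7.25, 19, 256)   # e.g. 19 Gbps GDDR6X, 256-bit bus

print(f"GDDR6: {g6:.1f} W, GDDR6X: {g6x:.1f} W")
# ~26.9 W vs ~35.3 W: slightly lower pJ/bit, but the much higher
# data rate means notably more power consumed in practice.
```

So even with a small per-bit efficiency edge, running the signaling roughly 35% faster pushes total interface power up by around 30% in this sketch, which is the poster's point.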
 