AMD: RDNA 3 Speculation, Rumours and Discussion

2.7x a 6900xt across a general performance summary? Very hard to believe in 2022 or even 2023.

Yeah, it's nonsense. Apple delivers 30% more GPU performance with their SoCs, but AMD will just increase performance by 2.7x. I don't know why these rumors get so much attention every time. Voltage scaling has come to an end. But AMD will deliver what nobody else could.

RDNA2 is only 30% more efficient with 70% more transistors than RDNA1. RDNA2 is an inefficient architecture for calculations. But RDNA3 delivers nearly 3x more compute performance within the same power envelope.
 
AMD will just increase performance by 2.7x.
Yes!
Unironically.
But AMD will deliver what nobody else could.
Yes!
Unironically.
No one else has quite the ballsack to tinker with really funky 3DIC setups.
RDNA2 is only 30% more efficient with 70% more transistors than RDNA1
?
N23 is only 2 days away.
But RDNA3 delivers nearly 3x more compute performance within the same power envelope.
A bit less ISO power but yes!
Real world™ FP32@mm^2 is a big-big gfx11 gimmick.
 
Well, AMD needs to sort out ray tracing performance. Less than a 2x gain over Navi 21 will be seen as a major fail, and consoles are not a useful benchmark for ray tracing performance.
I disagree. The bulk of the development effort is going into consoles and RDNA2's raytracing performance.
On multiplatform titles the software sales numbers don't lie, and the great majority of sales will be coming from Xbox + Playstation. If that's where the money is coming from then that's where the dev money will be thrown at.

Consoles are by far the most useful benchmark for ray tracing performance.

I'm also not really counting on the (IMO inevitable) mid-gens applying a great level of effort into more relative RT performance-per-TFLOP.
So really the best bet for the majority of games will be to offer a proportional performance boost of X times PS5/SeriesX.


Also, leet console dev ray tracing voodoo is going to make Navi 31 look worse anyway, so the pressure is on.
I'm not sure I get this... You're suggesting that low-level optimizations for raytracing on RDNA2 consoles aren't going to benefit RDNA3 PC GPUs?
Isn't AMD providing ISA compatibility between RDNA3 and RDNA2?
 
That's not how it works; they need to have cheap options to make even the majority happy. If there are only expensive options, no matter how fast, it's not going to make everyone, or even the majority, happy.

Well, as a gamer I would like it to be that way, but realistically a 5nm wafer will cost 50% or more over a 7nm wafer, and we are looking at 650+mm^2 of 5nm silicon, plus the I/O and cache (probably on 6nm). The MCM packaging will not be cheap either. If a 6800XT has an MSRP of $650, there is no chance of an N31 being sold for less than $1200-1400, or even higher, even keeping margins low. Same for N32. N33 will be a "cheaper" option, but we are still looking at a 6nm 400+mm^2 GPU... that is, in the $500+ range. The "budget" option would be an N34, and there we will probably have something in the N23 price range... which is already $299-349.
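Purely as a back-of-the-envelope illustration of the silicon cost side (the wafer prices below are my own guesses, not known figures, and this ignores yield, packaging and memory):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross die count per wafer, ignoring defects and scribe lines."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Assumed wafer prices (USD) -- placeholders, roughly "+50% for 5nm".
wafer_cost = {"N5": 17000, "N6/N7": 11000}

candidates = {
    "~650mm^2 5nm GCD": (650, "N5"),
    "~440mm^2 6nm N33": (440, "N6/N7"),
}
for name, (area, node) in candidates.items():
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} gross dies/wafer, ~${wafer_cost[node] / n:.0f} of silicon per die")
```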
 
N33 will be a "cheaper" option, but we are still looking at a 6nm 400+mm^2 GPU... that is, in the $500+ range.
Ehh, ballpark ~440mm^2 but it's also less mem than N22.
Feasible for 450 bucks.
The <$350 segment is likely to be completely dead with no products; AMD including an IGP on all future Zen4 CPUs likely covers the "cheap" segment builds.
Raphael's iGP is a GT1-tier config for office boxes.
But yeah, sub $300 is dying out at a rapid pace.

Chopped N23 is already $299.
 
Yeah, it's nonsense. Apple delivers 30% more GPU performance with their SoCs, but AMD will just increase performance by 2.7x. I don't know why these rumors get so much attention every time. Voltage scaling has come to an end. But AMD will deliver what nobody else could.

RDNA2 is only 30% more efficient with 70% more transistors than RDNA1. RDNA2 is an inefficient architecture for calculations. But RDNA3 delivers nearly 3x more compute performance within the same power envelope.
A 500W+ part with 50% better performance per watt wouldn't be out of the question. It would just be like the old "X2" parts from AMD. Think about the 3870 vs 4870 X2, but with better scaling.
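Quick napkin math on how a part like that would multiply out (the ~300 W Navi 21 baseline is my assumption, not an official figure):

```python
# Napkin math only: power bump times perf/W gain, relative to a ~300 W Navi 21.
base_power_w = 300
new_power_w = 500
perf_per_watt_gain = 1.5   # "50% better performance per watt"

relative_perf = (new_power_w / base_power_w) * perf_per_watt_gain
print(f"~{relative_perf:.1f}x a 6900 XT")   # ~2.5x
```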
 
That's not how it works; they need to have cheap options to make even the majority happy. If there are only expensive options, no matter how fast, it's not going to make everyone, or even the majority, happy.
And on top of that, they need volume.
People will only be "happy" if they can actually get their hands on the GPUs, preferably without being scalped into oblivion.


The <$350 segment is likely to be completely dead with no products; AMD including an IGP on all future Zen4 CPUs likely covers the "cheap" segment builds.
Unless post-Rembrandt APUs are able to offer close to PS5-level performance, what I take from here is that a large chunk of the PC gaming market will be driven to consoles.

Covid isn't going to last forever, and people won't be stuck at home forever, willing to spend an increasingly large portion of their income on a discrete GPU. Much less so when post-covid inflation hits and their disposable income gets reduced.
I get the feeling that both AMD and Nvidia are biting off more than they can chew in this relentless quest for higher ASPs and record margins QoQ out of the same market they've been serving for 20 years.


Chopped N23 is already $299.
But just like N22 before it, the N23 release MSRP is severely affected by the current era of IC shortage + mining craze + scalping.
I have a hard time believing AMD planned for N23 to release at $300-380, when they originally laid out their plans for the chip back in 2018-2019.
 
I disagree. The bulk of the development effort is going into consoles and RDNA2's raytracing performance.
On multiplatform titles the software sales numbers don't lie, and the great majority of sales will be coming from Xbox + Playstation. If that's where the money is coming from then that's where the dev money will be thrown at.

Consoles are by far the most useful benchmark for ray tracing performance.

I'm also not really counting on the (IMO inevitable) mid-gens applying a great level of effort into more relative RT performance-per-TFLOP.
So really the best bet for the majority of games will be to offer a proportional performance boost of X times PS5/SeriesX.

That's not what we're actually seeing, though. PC versions seem to always have the most robust/performant/high-fidelity RT support. Consoles are probably the worst way to benchmark ray tracing, as they offer the lowest-performance RT of any hardware available today for the gaming market.
Most money is probably made from Steam/PC as well.

Unless post-Rembrandt APUs are able to offer close to PS5-level performance, what I take from here is that a large chunk of the PC gaming market will be driven to consoles.

Seems it's the other way around.
 
A 500W+ part
But it's not.
with 50% better performance per watt
Baby shit.
Think about the 3870 vs 4870 X2, but with better scaling.
Same niche yes radically different things also yes.
that a large chunk of the PC gaming market will be driven to consoles.
Ta-da!
Lisa wins either way.
I get the feeling that both AMD and Nvidia are biting off more than they can chew in this relentless quest for higher ASPs and record margins QoQ out of the same market they've been serving for 20 years.
AMD margin expansion is strictly driven by revenue share gains in laptops and datacenter.
NV margins are the same?
Or even lower.
I have a hard time believing AMD planned for N23 to release at $300-380, when they originally laid out their plans for the chip back in 2018-2019.
Ehhh it could've been $329 tops instead of say $349.
Most money is probably made from Steam/PC as well.
Nope.
Consult $ATVI ERs or idk
 
I disagree. The bulk of the development effort is going into consoles and RDNA2's raytracing performance.
On multiplatform titles the software sales numbers don't lie, and the great majority of sales will be coming from Xbox + Playstation. If that's where the money is coming from then that's where the dev money will be thrown at.

Consoles are by far the most useful benchmark for ray tracing performance.

I'm also not really counting on the (IMO inevitable) mid-gens applying a great level of effort into more relative RT performance-per-TFLOP.
So really the best bet for the majority of games will be to offer a proportional performance boost of X times PS5/SeriesX.

I'm not sure I get this... You're suggesting that low-level optimizations for raytracing on RDNA2 consoles aren't going to benefit RDNA3 PC GPUs?
Isn't AMD providing ISA compatibility between RDNA3 and RDNA2?
I'm suggesting two things in terms of ray tracing in games:
  1. fuzzy (low res, low update rate) but amazingly comprehensive console ray tracing with low-level voodoo
  2. comprehensive brute force pristine PC ray tracing
NVidia won't be standing still, and AMD needs a >2x uplift to avoid looking bad versus NVidia and whatever wizardry the consoles have.
 
What I can imagine is something like Fury Nano with extra good silicon running at the peak efficiency clocks, that'd be interesting to see.

Long before the release of the M1, we knew that the CPU performance of Apple's A-series processors overtook their x86 counterparts (when normalised for core count and power consumption). Anandtech has a series of articles documenting this.
I don't get the ARM craze; it's like suddenly everyone decided to believe Apple's marketing team (remember their "stellar" GPU presentation) and some toy tests from an ARM fan at Anandtech. Pretty sure that if it were so good, the big boys would already be doing it (and it seems Keller's K12.3 didn't quite pan out, so the magic ARM performance and efficiency wasn't really there). Well, it's good for what it is, but it seems too wide to scale to a proper HEDT/enterprise level.

Ehh, ballpark ~440mm^2 but it's also less mem than N22.
Feasible for 450 bucks.
Kinda meh that ATi again abandons the mid-range market; hopefully the potentially successful RDNA3 won't be followed by R600 2.0...
 
What I can imagine is something like Fury Nano with extra good silicon running at the peak efficiency clocks, that'd be interesting to see.
Cool but probably too niche these days.
Those small mITX-friendly Pascal designs also died out.
I don't get the ARM craze
People aren't actually stroking ARM, they're stroking Apple. Which indeed makes pretty cool h/w.
ARM shills actually are just an off-breed type of semis mutt that used to shill, say, POWER eons ago.
Kinda meh that ATi again abandons the mid-range market
It's no ATi anymore, but AMD.
The new, spoopy kind of.
Lisa wants her >50% margins and she's gonna get them.

Also the midrange is just shifting up due to climbing semis costs and all.
Best value recent GPU on the market (3060ti) is $400.
hopefully the potentially successful RDNA3 won't be followed by R600 2.0...
Nah.
RDNA4 is a pretty fast follow-up either way.
 
That's only talking FLOPS, but in games it's a little different: if the capabilities of those SMs are similar to Ampere's, then in rasterization there is not much difference between an Ampere SM and an RDNA2 CU (with a slight advantage for Ampere so far), looking at what a 3080 and a 6800 XT can do with 68 SMs vs 72 CUs (in ray tracing it's different, but next gen is unknown, so this is a great X). So if we go by ALU count and what those ALUs do for actual performance, you'll need 144 Ampere SMs to match/slightly beat the equivalent of 160 CUs / 10240 FP32 units (that being probably N32, while N31 seems to be going for 60 WGPs / 15360 shader units). Incidentally, the rumors about Lovelace seem to point at exactly that count on 5nm.

But then there is the issue of feeding all those ALUs - that is, bandwidth. AMD is throwing tons of cache at the problem. It remains to be seen whether Nvidia will do the same for Lovelace. They can go again for a chip near the reticle limit, but if AMD delivers what N31 seems to be, I have a hard time seeing it compete on sheer numbers. Of course, there will be arch improvements, but that goes both ways.
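To put rough numbers behind that SM-vs-CU comparison (the lane counts are the public figures; the parity ratio is just the 3080 vs 6800 XT observation above, nothing more):

```python
# Rough lane-count arithmetic behind the 144 SM vs 160 CU comparison.
# Clock and bandwidth differences are ignored here.
ampere_fp32_per_sm = 128   # 64 dedicated FP32 + 64 shared FP32/INT32 lanes
rdna2_fp32_per_cu = 64

print("144 Ampere SMs:", 144 * ampere_fp32_per_sm, "FP32 lanes")   # 18432
print("160 RDNA2 CUs: ", 160 * rdna2_fp32_per_cu, "FP32 lanes")    # 10240

sm_per_cu = 68 / 72   # observed rasterization parity, 3080 (68 SM) vs 6800 XT (72 CU)
print(f"~{sm_per_cu:.2f} SMs per CU at roughly equal game performance")
```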

The default assumption is that an RDNA 3 WGP is at least as fast as 2x RDNA 2 WGPs in flops-dependent workloads. How games will scale, though, is a different matter. Ampere doubled flops and L1 bandwidth per SM, but that didn't result in 2x gaming performance. The 46SM 3070 is only 30% faster than the 46SM 2080.

RDNA 2 scaled very well with clock speed vs RDNA 1. Comparing the similar 40CU configs of the 6700xt and 5700xt, there was a 35% improvement on paper due to higher clocks, and actual results in games were pretty close to that number. This is a great result, especially considering the lower off-chip bandwidth on the 6700xt. Scaling up within RDNA 2 didn't quite hit the same mark. Comparing the 40CU 6700xt and 80CU 6900xt there was a 75% improvement on paper but only 50% in actual gaming numbers. This leads me to believe the 6700xt is benefiting from higher clocks on its fixed function hardware or the 6900xt is hitting a bandwidth wall. As mentioned earlier in the thread it's going to be interesting to see how AMD feeds such a beast.
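Quick sanity check of those "on paper" figures using the public boost clocks (game clocks differ a bit, so treat this as ballpark only):

```python
# Peak FP32 from CU count and boost clock: 64 lanes/CU, 2 ops per clock (FMA).
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

cards = {
    "RX 5700 XT": peak_tflops(40, 1.905),
    "RX 6700 XT": peak_tflops(40, 2.581),
    "RX 6900 XT": peak_tflops(80, 2.250),
}
for name, tf in cards.items():
    print(f"{name}: {tf:.1f} TFLOPS")

print("6700 XT over 5700 XT: +{:.0%}".format(cards["RX 6700 XT"] / cards["RX 5700 XT"] - 1))  # ~+35%
print("6900 XT over 6700 XT: +{:.0%}".format(cards["RX 6900 XT"] / cards["RX 6700 XT"] - 1))  # ~+74%
```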
 
This leads me to believe the 6700xt is benefiting from higher clocks on its fixed function hardware or the 6900xt is hitting a bandwidth wall. As mentioned earlier in the thread it's going to be interesting to see how AMD feeds such a beast.
I think it's possible to check this now with the N21 XTXH SKUs, which apparently have a memclock limit of 2450 MHz instead of 2150 MHz (although it seems that either the memory chips themselves or the IMC can't do much more than 2170 MHz or so).
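For reference, what that memclock headroom would mean for raw bandwidth, assuming the stock 256-bit Navi 21 bus (GDDR6 data rate is 8x the listed memory clock):

```python
# Raw GDDR6 bandwidth from memory clock, assuming a standard 256-bit Navi 21 board.
bus_width_bits = 256

def bandwidth_gbs(memclk_mhz):
    data_rate_gbps = memclk_mhz * 8 / 1000        # per-pin data rate
    return data_rate_gbps * bus_width_bits / 8    # GB/s

for clk in (2150, 2450):
    print(f"{clk} MHz -> {bandwidth_gbs(clk):.0f} GB/s")
print(f"headroom: +{bandwidth_gbs(2450) / bandwidth_gbs(2150) - 1:.0%}")   # ~+14%
```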
 
I'm suggesting two things in terms of ray tracing in games:
  1. fuzzy (low res, low update rate) but amazingly comprehensive console ray tracing with low-level voodoo
  2. comprehensive brute force pristine PC ray tracing
NVidia won't be standing still, and AMD needs a >2x uplift to avoid looking bad versus NVidia and whatever wizardry the consoles have.

I don't expect anyone to stand still.
I'm fully expecting Nvidia to use their greater developer influence to push for #2 as hard as they can, because that's where they have an architectural advantage, and AMD to focus on "console-multipliers" in the expectation that #1 is widely adopted.
Not much different from Nvidia pushing for more geometry/tessellation in PC games during the Kepler + Maxwell + Pascal eras, while AMD iterated relatively little from GCN1 to GCN4 because the optimization for both consoles was on their side.

I'm aware that AMD "lost" with their strategy, but I don't think it was the strategy's fault. The HD 7970 eventually did leapfrog the GTX 680 in multiplatform game performance, despite the latter having a massive advantage in geometry performance.
It's just that AMD's execution on chip performance (clocks) and release dates was pretty terrible compared to Nvidia's. They failed to do >1GHz on TSMC 28nm and then with Globalfoundries' 14nm they screwed up clock performance pretty badly, at least compared to Nvidia+TSMC.


Comparing the 40CU 6700xt and 80CU 6900xt there was a 75% improvement on paper but only 50% in actual gaming numbers. This leads me to believe the 6700xt is benefiting from higher clocks on its fixed function hardware or the 6900xt is hitting a bandwidth wall.

Probably both but more of the latter? The 6700 XT clocks ~12% higher than the 6900 XT on average. The VRAM bandwidth-per-WGP and LLC-amount-per-WGP (and probably the LLC bandwidth too) are all 50% higher on Navi 22 vs. Navi 21.
OTOH, it doesn't look like Navi 22 is losing all that much from halving the number of Shader Engines, which might be an indicator why Navi 3x is reducing the SEs in general (or increasing the WGPs per SE).
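Putting numbers on the per-WGP comparison (public Navi 21 / Navi 22 specs, with 16 Gbps GDDR6 assumed on both):

```python
# Infinity Cache and VRAM bandwidth per WGP for Navi 21 vs Navi 22.
chips = {
    #          (WGPs, Infinity Cache MB, bus width bits)
    "Navi 21": (40, 128, 256),
    "Navi 22": (20, 96, 192),
}
for name, (wgps, llc_mb, bus_bits) in chips.items():
    vram_gbs = bus_bits / 8 * 16   # GB/s at 16 Gbps per pin
    print(f"{name}: {llc_mb / wgps:.1f} MB LLC/WGP, {vram_gbs / wgps:.1f} GB/s VRAM/WGP")
# Navi 22 comes out 1.5x higher on both, i.e. the "50% higher" per-WGP figure.
```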
 