AMD RDNA4 Architecture Speculation

Sigh, waiting on Nvidia's prices means it's the same old song and dance again. If they were going to price these parts aggressively, they wouldn't need Nvidia's prices to slot in behind.
 
Well they'd better be looking to price the 9070XT at $399, $450 max. Honestly I don't see how anyone with a 6800XT or higher could even consider it. That's a huge chunk of their previous buyers with no upgrade path without going Nvidia.
 
That's very logical indeed. However, reviewers and buyers should also apply the same logic to DLSS vs FSR comparisons. A 4080 with DLSS Performance should be compared to a 7900XTX running FSR at native resolution or in FSR Quality mode.


What would be the upgrade path for a 7900XTX owner?

Waiting for RDNA5.

Upgrading a high-end GPU every generation makes very little sense.

For consumers/gamers, there are two sane strategies to keep a computer at a relatively good performance level:
1) buy high-end components (or a totally new high-end computer) rarely
2) buy cheaper components more often

But for those who have a huge amount of extra money and want to buy a super-high-end gaming machine during 2025 or the beginning of 2026, Nvidia is the only option. That is a very small market, though.

For professional use, where all the extra power translates into a productivity increase, the situation is of course different.
 
Correction. Make that $400.
We don't know the comparative performance yet.
The 9070 will probably be below the 5070, but not by much.
The 9070 XT will probably be at about 5070 level.
Considering that there is feature parity now and they have 4GB more VRAM, $400 seems a bit too low.
I'd say $400 for the 9070 and $500 for the XT are the best bets.
This of course could mean that their margins will suffer if their plan was to price these higher.
 
Sigh, waiting on Nvidia's prices means it's the same old song and dance again. If they were going to price these parts aggressively, they wouldn't need Nvidia's prices to slot in behind.
That's how I'm interpreting this as well. If they were confident in having decided on some 'aggressive' pricing, they wouldn't have held off. They probably got word of Nvidia's 50 series pricing, and then realized they were not nearly as well positioned as they thought they'd be.

It's telling that in the interviews they did after the event, all their talk about the importance of good pricing and learning lessons was framed relative to the competition. That's probably obvious from a business standpoint, but from a consumer standpoint it's still a giveaway: "We're not really trying to provide great standalone value, only to make it seem like good value compared against our super-high-priced competition." That goes against what consumers mean when we talk about aggressive pricing.
 
That's how I'm interpreting this as well. If they were confident in having decided on some 'aggressive' pricing, they wouldn't have held off. They probably got word of Nvidia's 50 series pricing, and then realized they were not nearly as well positioned as they thought they'd be.
EVGA complained they were getting prices from NVIDIA practically when Jensen was already on stage. What makes you think AMD gets the prices early?
 
You guys see the latest from CES?

RX 9070s with triple-slot coolers and either 2x 8-pin power or 12V-2x6. 375W.

I fear this is going to be Vega 2.0.
 
RX 9070s with triple-slot coolers and either 2x 8-pin power or 12V-2x6. 375W.

Who is stating 375W? 2x 8-pin is pretty standard for mid-range these days; that by itself doesn't mean high power use at all.

Regarding pricing: since the 7800XT was $499 RRP, I don't see why a next-gen 70-class card shouldn't be cheaper than that. If they were planning another $499 RRP, I don't know what they were thinking.
 
AMD seem to be always one step behind at the moment.
They will introduce tensor cores so they can do FSR4 the way NVIDIA does DLSS3 or Intel does XeSS 2.
But NVIDIA has since introduced DLSS4 (4x frame generation) and Reflex 2.
Meaning those are the targets AMD has to chase next generation,
while NVIDIA prepares to move on to the next thing with the 60 series.

Going to be hard for AMD to get out of that "hole".
And Intel will have to not make any failures in order to cling to the bottom of the market.
It has always been like that. The market leader (especially with a market share near 90%) defines the features that game studios implement, simply due to the huge user base. If the underdog comes up with something different and new, it has to be on the scale of R300 vs NV25 to get it pushed through, and that's a huge risk which probably no one is willing to take any more.

Even if nVidia lost market share on the scale of 10-15%, it would have almost no effect on the market situation. AMD, on the other hand, can't afford to lose much more, so Intel's actions are a ten-times-bigger threat to AMD's GPU division than nVidia's, just because the market situation is what it is.
 
Either we are in for a big surprise from AMD or their power efficiency went backwards so much they discovered time travel.
Reference is 2x8-pin.
To quote TPU, "The card calls for three 8-pin PCIe power connectors. We've only seen one other custom RX 9070 XT come with three connectors, and that is the XFX RX 9070 XT Merc 319 Black. ... Most other cards, including the PowerColor Red Devil, come with just two 8-pin connectors (375 W)"
Yes, there are cards (more than the two they had seen at that point) with 3x 8-pin, but there's nothing new in AIBs adding extra power connectors. There's even one with 12V-2x6, but that doesn't mean the card is drawing 600W.
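For reference, the 375W and 600W figures quoted above are just delivery ceilings for those connector layouts per the PCIe CEM spec (150W per 8-pin, 75W from the x16 slot, up to 600W for 12V-2x6), not measured board power. A quick sketch of the arithmetic:

```python
# Power-delivery ceilings for the connector configurations discussed above.
# Ratings per the PCIe CEM spec; actual draw is typically well below these.
PCIE_SLOT_W = 75       # x16 slot
EIGHT_PIN_W = 150      # per 8-pin PCIe connector
TWELVE_V_2X6_W = 600   # 12V-2x6 maximum rating

def max_board_power(num_8pin=0, has_12v_2x6=False):
    """Maximum power deliverable to the card, in watts (connector ceiling)."""
    total = PCIE_SLOT_W + num_8pin * EIGHT_PIN_W
    if has_12v_2x6:
        total += TWELVE_V_2X6_W
    return total

print(max_board_power(num_8pin=2))   # 2x 8-pin + slot -> 375 W
print(max_board_power(num_8pin=3))   # 3x 8-pin + slot -> 525 W
```

So a 2x 8-pin card tops out at 375W of delivery headroom, which is where that number comes from; it says nothing about the card's actual TBP.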
 
That would be strange.

Navi 32, with 60 CUs, had a total die area of 346mm², and that obviously included four 6nm MCDs totaling about 146mm².

Navi 48 is supposed to be 64 CUs, but there should be die savings from moving to 4nm and putting the memory bus and L3 Infinity Cache on the main 4nm die as well (scaling here isn't great, but it's still something).

Perhaps 64 CUs is wrong and they actually upped the CU count a fair bit instead of going narrow with faster clockspeeds? Or maybe RDNA4 CUs are a really sizeable chunk wider to accommodate better RT/AI? I don't know, but that doesn't strike me as the sort of architectural or area-efficiency gain they'd have liked, especially if the aim is to price this GPU more aggressively.
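The back-of-envelope arithmetic behind the post above can be sketched as follows. The 346mm²/146mm² split is from the post; the node-scaling factors are illustrative assumptions, not confirmed figures, reflecting the usual pattern that logic shrinks on N4 while SRAM and analog (memory PHY, Infinity Cache) barely do:

```python
# Rough die-area estimate for a hypothetical monolithic 64 CU Navi 48,
# starting from Navi 32 numbers. Scaling factors are assumptions.
N32_TOTAL_MM2 = 346   # Navi 32 total (GCD + 4 MCDs), from the post above
N32_MCD_MM2   = 146   # approx. combined area of the four 6nm MCDs
N32_CU        = 60
N48_CU        = 64

gcd_mm2 = N32_TOTAL_MM2 - N32_MCD_MM2   # ~200 mm² of 5nm compute die

# Illustrative assumptions: logic gains ~15% from N5->N4, SRAM/analog ~5%.
LOGIC_SCALING = 0.85
SRAM_ANALOG_SCALING = 0.95

est_n48_mm2 = (gcd_mm2 * (N48_CU / N32_CU) * LOGIC_SCALING
               + N32_MCD_MM2 * SRAM_ANALOG_SCALING)
print(f"Estimated monolithic {N48_CU} CU die: ~{est_n48_mm2:.0f} mm²")
```

Under those assumptions a 64 CU monolithic die lands well under Navi 32's combined footprint, which is why a larger-than-expected Navi 48 would point at either more CUs or much fatter CUs.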
 