AMD RDNA3 Specifications Discussion Thread

I think the only interesting (could actually be good) thing is "new ray box sorting and traversal" - but that's probably purely a software thing enabled by memory hierarchy/usage and extra VGPRs.

For me the shocker is the 355W power consumption coupled with 2.3/2.5GHz clocks. These are 7nm numbers, WTF.
 
So from these slides, how do raster and RT compare to the 4090 and 3090 Ti?
Going by the percentage increases in their comparisons, it looks like the 4090 is roughly 1.1x faster on average in raster (give or take, depending on the game), and the 7900XTX lands at roughly 3090-level RT in heavier RT games like Dying Light 2 and Cyberpunk 2077. But these are company marketing slides, so I wouldn't be surprised to see -10% in the real world.
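To put a number on that haircut (both inputs are just the rough figures from above, not benchmarks), a quick sketch:

```python
# Back-of-envelope: how a -10% real-world haircut on AMD's slide-derived
# numbers would move the implied 4090 gap. Both inputs are rough
# assumptions from the post above, not measured results.
gap_from_slides = 1.10   # 4090 ~1.1x faster, taking the slides at face value
haircut = 0.90           # assume the XTX lands 10% under its marketing numbers

real_gap = gap_from_slides / haircut
print(f"4090 lead if the slides hold:  ~{gap_from_slides:.2f}x")
print(f"4090 lead with -10% haircut:   ~{real_gap:.2f}x")   # ~1.22x
```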
 
Probably Radeon Anti-Lag, which has been available longer than any comparable NVIDIA tech, though for whatever reason nobody paid attention until Reflex (which wasn't NVIDIA's first step into latency reduction either).

Is that actually a Reflex competitor, or just a NULL (NVIDIA Ultra Low Latency, available to all DX9/10/11 games) competitor? Reflex and NULL (which is basically just the old max pre-rendered frames setting plus the power setting forced to max) are very different; that's why Reflex requires developer support.

How many games support Radeon Anti-Lag?
 
As previously said, their compute chiplet is power limited.
And no matter what, shuffling bits off die (even through Infinity Fabric) is going to cost more in terms of pJ/bit transferred than keeping the memory controllers and cache on the same die as compute. All that burned power is now power you're not able to spend clocking up your WGPs/CUs.
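For a sense of scale, here's a rough calculation; the ~0.9 pJ/bit and ~5.3 TB/s figures are AMD's own claims for the Infinity Fanout links as I recall them, so treat both as assumptions:

```python
# Power burned just moving data between the GCD and the MCDs.
# Inputs are AMD's claimed figures for Navi31's fanout links (assumed
# here, not independently verified).
pj_per_bit = 0.9            # claimed energy per bit over the fanout links
peak_bandwidth_tb_s = 5.3   # claimed peak GCD <-> MCD bandwidth, TB/s

bits_per_second = peak_bandwidth_tb_s * 1e12 * 8
interconnect_watts = bits_per_second * pj_per_bit * 1e-12   # pJ -> J

print(f"Interconnect power at peak: ~{interconnect_watts:.0f} W")   # ~38 W
# Call it ~38W out of a 355W board budget that a monolithic die would
# mostly get back, i.e. power unavailable for clocking the WGPs.
```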

Chiplets are never the more power-efficient way to do things; you go chiplet for cost, yield, or because you're at the reticle limit and there's no other way to do it.

At least for the Ryzen chiplet SKUs, that extra power burned seems to be worse at low/partial load than at full load; even with very light loads all those die-to-die SERDES links still have to stay lit, and they consume a healthy chunk of the total power draw at low loads.
 
Really? The 7900XT is almost 40% more expensive than its direct predecessor (the 6800XT) for what appears to be roughly the same generational gain, without actually fixing the already-broken RT performance. If this were still the RDNA2 generation, that would be interesting, but for a new GPU generation where that level of performance increase is expected, I just see this as a massive price hike. Sure, the 7900XTX gives the expected generational price/performance jump over the 6900XT, but those products are always horrible value anyway.
The 7900XT, just like the 16GB 4080, is terrible. The 7900XTX is the far better buy for a mere $100 more, whereas the 4090 mollywhops the 4080, albeit at a $400 price premium.
 

Indeed, 2.3/2.5GHz at 355W is rough. Even Vega did better than this: 200-300MHz higher clocks than Polaris despite being on the same 14nm node.
 
Kinda feel bad for them....

They seem to have made great progress in every area except RT.

Pricing for the XTX seems "reasonable." XT probably not the best value.

Will have to wait for real-world RT benchmarks, I suppose. I had thought the 1.5x RT performance uplift would be multiplicative (i.e., stacking on top of the raster gains), but that did not appear to be the case.
 
AMD have stagnated on clocks with RDNA3 despite a new node, to the point that NVIDIA have again taken the lead.
One has to wonder what impact the MCDs have on that equation.
Moving the memory controllers out to the MCDs makes the GCD smaller, which should let it clock higher.
But are there power inefficiencies inherent to the MCD approach?

Now I wonder about that originally rumored 8SE, 15-16k shader chip.
Seeing the 355W TDP of the 6SE part though... a bigger chip would definitely blow the power budget, landing somewhere around ~500W at least.
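Naive linear scaling backs that up (the 8SE part is only a rumor, and linear power scaling is a simplification, so this is just a ballpark):

```python
# Ballpark TDP for the rumored 8SE chip, scaling linearly from the
# 6SE Navi31. Rumored config; linear scaling ignores uncore/memory power.
navi31_tdp_w = 355
navi31_se = 6
rumored_se = 8

scaled_tdp = navi31_tdp_w * rumored_se / navi31_se
print(f"Linearly scaled TDP: ~{scaled_tdp:.0f} W")   # ~473 W
# Add any clock headroom on top and ~500W looks about right.
```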
 
So, will the 200mm^2 6nm Navi33 chip still be on par with a 6800 or better?
Going by these lowish numbers, there isn't much room. Assuming ~1.5x over Navi21 is roughly the average for the 7900XTX:
7900XTX ~1.5x Navi21
7900XT ~1.35x Navi21
7800XTX ~1.2x Navi21
7800XT ~1.05x Navi21
7700XTX <Navi21
Best case would seem to put the highest Navi33 SKU somewhere between the 6800 and 6800XT, and since that's a pretty big gap, likely closer to the 6800.

Edit: wasn't Navi33 rumored to be ~300mm^2?
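To make that gap concrete, a quick sketch; the RDNA3 multipliers are the guesses from the ladder above, and the 6800/6800XT positions vs full Navi21 are my own ballpark assumptions:

```python
# Where the speculated stack lands relative to Navi21 (6900XT = 1.00).
# RDNA3 multipliers are guesses from the post above; the 6800/6800XT
# baselines are rough assumptions, not benchmark data.
ladder = {
    "7900XTX": 1.50,
    "7900XT":  1.35,
    "7800XTX": 1.20,
    "7800XT":  1.05,
    "7700XTX": 0.95,   # "< Navi21"; exact value assumed
}
band_6800, band_6800xt = 0.78, 0.88   # assumed raster vs a 6900XT

for sku, mult in ladder.items():
    print(f"{sku:8s} ~{mult:.2f}x Navi21")
print(f"6800-6800XT band: {band_6800:.2f}-{band_6800xt:.2f}x Navi21")
# The top Navi33 SKU has to slot under the 7700XTX, which puts it in
# (or under) that band, most plausibly toward the 6800 end.
```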
 
Credit to some of the leakers for taking it on the chin at least.


The clock speeds are not the only detriment, but they're a big one. At the same price point, an extra ~500MHz would still leave RT lackluster, but it would also mean an easy win in raster over the 4090 and at least Ampere-level RT. That would change the value proposition substantially.
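Quick math on the ~500MHz scenario (assuming raster scales roughly linearly with clocks, which is optimistic, and reusing the rough ~1.1x slide-derived gap from earlier):

```python
# What an extra ~500MHz would do to the raster gap, assuming near-linear
# scaling with clock. Inputs reuse earlier rough figures; all assumptions.
xtx_game_clock_ghz = 2.3     # AMD's quoted game clock
bump_ghz = 0.5
gap_today = 1.10             # 4090 ~1.1x faster per the slide math above

uplift = (xtx_game_clock_ghz + bump_ghz) / xtx_game_clock_ghz   # ~1.22x
new_gap = gap_today / uplift
print(f"Clock-driven uplift:  ~{uplift:.2f}x")
print(f"4090 vs bumped XTX:   ~{new_gap:.2f}x")  # ~0.90x => XTX ahead
```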
 