AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Status
Not open for further replies.
Well, looking at the Xbox Series X's 315W PSU: the supply is nominally 255W (the main 12V output) + 60W (other rails).
So with the 12V rail rated at about 255W and actual operating consumption at about 140W, multiplying by the DC-DC conversion efficiency (usually about 80%) gets us to about 120W for the GPU. That is 33% lower than the 5700 XT and slightly higher than the 5700. Performance, meanwhile, is around 30% higher than the 5700 XT, for a perf/W ratio about 66% better than the 5700 XT's, which in turn supports AMD's claimed 50% increase in energy efficiency.
I don't believe that's how performance per watt is calculated. What you're describing is just increasing the CU count and decreasing frequency to get more processing power for less wattage.
A performance-per-watt improvement should be a shift of the whole power curve, if I understand it correctly, not just moving down the existing curve.
 
I never remembered the 50% performance-per-watt claim, but looking it up, yep, there it is, all official.

If we go off that, and total power for a 5700 XT is 270 watts, well then: 15 teraflops for 270 watts; crank it up to 350W at max and it's just under 20 teraflops. Against what the 30-series has shown so far under ideal conditions, that would put it over a 3080 but just under a 3090. How the two compare under less ideal conditions is unknown: how does Navi handle raytracing, and is Ampere really this bad in some titles, or is that just drivers that aren't ready? We'll have to wait and see. But if they hit that 50%, here are a few cards we could see, from 350 watts max going down:
6900 XT: 24/16GB? RAM, 19.5 teraflops, $1000-$800
6800 XT: 21/14GB? RAM, 16.5 teraflops, $600
6700 XT: 16GB RAM, 12.5 teraflops, $500-450
6700: 8/16GB RAM, 10 teraflops, $400-350
This last, lowest card should land somewhere around a 2070 Super or higher; we'll see.
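That napkin math can be reproduced in a few lines. This is only a sketch of the post's own assumptions (270W total for a 5700 XT, linear TFLOPS-per-watt scaling), not confirmed specs:

```python
# Napkin math from the post above. The 270 W total board power and the
# linear perf-per-watt scaling are the post's own assumptions, not specs.

XT_POWER_W = 270.0        # assumed total power for a 5700 XT
XT_TFLOPS = 9.75          # 5700 XT FP32 throughput
PERF_PER_WATT_GAIN = 1.5  # AMD's claimed +50% perf/W for RDNA2

def rdna2_tflops(power_w: float) -> float:
    """Projected RDNA2 TFLOPS at a given board power, scaling linearly."""
    return power_w * (XT_TFLOPS / XT_POWER_W) * PERF_PER_WATT_GAIN

print(round(rdna2_tflops(270), 1))  # ~14.6 TF, the "15 teraflops" figure
print(round(rdna2_tflops(350), 1))  # ~19.0 TF, "just under 20"
```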

That's assuming a lot of things, as usual. But one high-end die with two SKUs and one middle die with two seems reasonable. Though, as pointed out, AMD could've done better:

Well, looking at the Xbox Series X's 315W PSU: the supply is nominally 255W (the main 12V output) + 60W (other rails).
So with the 12V rail rated at about 255W and actual operating consumption at about 140W, multiplying by the DC-DC conversion efficiency (usually about 80%) gets us to about 120W for the GPU. That is 33% lower than the 5700 XT and slightly higher than the 5700. Performance, meanwhile, is around 30% higher than the 5700 XT, for a perf/W ratio about 66% better than the 5700 XT's, which in turn supports AMD's claimed 50% increase in energy efficiency.

I'd like to point out that the Series X is only rated at about 22% faster than a 5700 XT. But that power draw is significantly lower than what AnandTech measured for the 5700 XT's actual draw. While the Series X is clocked below a 5700 XT (30% more CUs for ~20% more ideal performance), it's still a huge gap; Nvidia better hope RDNA2 isn't that efficient.
 
My turn for some napkin math.

There was a picture indicating the Xbox Series X PSU could deliver up to ~315W, as someone mentioned above. No console is going to risk running near the PSU's maximum capability, so count on 250W max for sustained usage.
Subtract the power the CPU, SSD, and other I/O components need, and 150W is a reasonable estimate for the GPU.
Additionally, assuming AMD delivers on their 50% power-consumption decrease for the same performance, we can say:

52/40=1.3 (the XSX has 30% more CUs than the 5700XT)
1.3*225W = 293W
50% power consumption is 293*0.5=146W

They run at around the same clocks, give or take a few hundred MHz, so despite power consumption not scaling linearly, that should still be a viable calculation for ballpark figures.
Using the CU ratio of Big Navi to the XSX (80/52) with the XSX's 150W consumption nets you approximately 230W. Assuming it can reach 2.2GHz, multiplying by 2.2/1.8 (PS5 frequency divided by XSX frequency) nets you around 282W. Some loss of efficiency is expected at the higher clocks, so around 300W seems reasonable. It might be 2GHz instead of 2.2GHz, depending on how well RDNA2 scales with power.
Back-calculating the power consumption per CU gives 3.75W/CU. Multiply that by 1.5 (a 50% increase in power consumption) and, across 40 CUs, you get the 5700 XT's power consumption; it all balances out. So the 80 CU Navi should draw around 300W.

And this card should be at least 5% faster than the RTX 3080.
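As a sanity check, the CU-and-clock scaling in that estimate can be written out. The 150W XSX GPU budget and the 2.2GHz Big Navi clock are the post's assumptions:

```python
# The CU-count and clock scaling from the estimate above. The 150 W XSX
# GPU budget and the 2.2 GHz Big Navi clock are assumptions from the post.

XSX_CUS, XSX_GPU_W = 52, 150.0
BIG_NAVI_CUS = 80

# Scale power linearly with CU count at the same clock...
power_80cu = XSX_GPU_W * BIG_NAVI_CUS / XSX_CUS     # ~230 W
# ...then linearly with clock, using the post's 2.2/1.8 ratio
power_2p2ghz = power_80cu * 2.2 / 1.8               # ~282 W

print(round(power_80cu), round(power_2p2ghz))       # 231 282
```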
 
"calculations" are fun

XBX GPU: 130-140w; 52CU; 1825MHz
5700XT GPU: 180w (only the rated TDP for the GPU); 40CU; 1905MHz

52/40 x 180w = 234w, so maybe 220w due to the lower frequency.

220/130= +69% better efficiency
220/140= +57% better efficiency

The +50% seems to be real

BigNavi with 80CU + 2250MHz

80/52 x (2250/1825)² x 130w = 304w (only for the GPU, we will see how much power the RAM will need)

2250 / 1905 x (80/40) = 2,36x 5700XT

2080 Ti = ~50% faster than 5700XT
2.36/1.5 = 1.57

So I expect BigNavi to be between 55% and 60% faster than a 2080 Ti, without taking into account the IPC improvements of RDNA2. If AMD's power target is lower (275W?), then the performance would be lower too, of course.
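Those figures check out arithmetically; here is the same calculation as a sketch, with the post's inputs (130-140W XSX GPU power, 180W GPU-only 5700 XT TDP) as the stated assumptions:

```python
# Reproduces the figures above. Inputs (130-140 W XSX GPU power, 180 W
# GPU-only TDP for the 5700 XT) are the post's assumptions.

XT_GPU_W, XT_CUS = 180.0, 40
XSX_CUS, XSX_MHZ = 52, 1825

# Hypothetical RDNA1 power at the XSX's CU count: 234 W, call it ~220 W
# after accounting for the slightly lower clock.
rdna1_52cu_w = XSX_CUS / XT_CUS * XT_GPU_W
for xsx_w in (130.0, 140.0):
    print(f"{220 / xsx_w - 1:+.0%} better efficiency")  # +69%, +57%

# Big Navi: scale by CU count and by clock squared (power rises roughly
# quadratically with frequency in this range).
big_navi_w = 80 / XSX_CUS * (2250 / XSX_MHZ) ** 2 * 130
print(round(big_navi_w))  # ~304 W for the GPU alone
```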
 
I never remembered the 50% performance-per-watt claim, but looking it up, yep, there it is, all official.

If we go off that, and total power for a 5700 XT is 270 watts, well then: 15 teraflops for 270 watts; crank it up to 350W at max and it's just under 20 teraflops.
I think it depends on which direction you take the 50%. If you're getting 50% better (or more), then it's 7.5 + 50%, which is ~11TF.

If you go the other way, 50% of 15TF is 7.5TF, but going from 7.5 to 15 is a 100% improvement.

So a 50% improvement should land at ~11TF.
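The direction of the percentage does matter, as a trivial check with the post's own 7.5/15 TF figures shows:

```python
# The two readings of "50%" from the post, using its 7.5 / 15 TF figures.

base_tf = 7.5

improved = base_tf * 1.5        # a genuine +50% improvement on 7.5 TF
doubled_ratio = 15 / base_tf    # reaching 15 TF from 7.5 TF

print(improved)       # 11.25 -> the "~11TF" in the post
print(doubled_ratio)  # 2.0   -> a +100% improvement, not +50%
```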
 
I don't believe that's how performance per watt is calculated. What you're describing is just increasing the CU count and decreasing frequency to get more processing power for less wattage.
A performance-per-watt improvement should be a shift of the whole power curve, if I understand it correctly, not just moving down the existing curve.
https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-xt-rx-5700-review/15
The 5700 XT most often runs a bit above its game clock on average, right around the 1825MHz mark that the Series X GPU runs at, so the difference in frequency is minimal. The CU count is higher, that's true, but the Series X seems to show an even greater than 50% perf/W increase over the 5700 XT, if I'm not mistaken.
 
AMD claimed higher IPC for the "Vega NCU". Most reviewers ended up trying to quantify it through normalised performance of the complete GPU system, in which the CUs are just one among many other kinds of cogs.
Vega CUs already satisfied that claim through rapid packed math. It's marketing, after all.

This time they claimed explicitly 50% performance per watt improvements over RDNA 1, not just a specific metric (IPC) of a specific component (CU).
To be fair, the footnote regarding this in their most recent corp. presentation still says it's based on AMD internal estimates. Make of that what you will.
 
To be fair, the footnote regarding this in their most recent corp. presentation still says it's based on AMD internal estimates. Make of that what you will.
Just to make it really fair, that's a disclaimer they always use on unreleased products.
 
Clearly a miscalculation on my part.

Well glad you finally acknowledged it.
Which is also of unknown and variable power; depending on the workload it can vary a lot: Doom 2016 draws 120W, Gears 4 draws 170W.

Also, consoles have a fixed-frame-rate mentality; this on its own reduces power consumption, as opposed to desktop chips that have to push all of their parts to maximum utilization to achieve the highest fps.

And desktop GPU power consumption is also variable depending on workload. We have a maximum wattage, per you, so anything below it is obviously a plus. Either way, the discussion was about the XSX since it is also RDNA2. The XSX's 52 CU GPU consumes under ~150W at 1.825GHz (similar to Navi 10 desktop clocks). The PS5 GPU boosts up to 2.2GHz at presumably similar or lower power levels. These are all on the same or similar process nodes. Yet you seem to think there's zero relevance.
AMD told you exactly what to expect from RDNA2, which is RDNA1 + 50% in perf/watt.

Meaning you should expect e.g. 2x Navi 10 performance if you increase power by 33% (1.33 * 1.5 = 2).
On the full Navi 10 you got 225W, so on Big Navi N21 you'd have 300W TBP for twice the performance. 2x Navi 10 would put it about 30% above the 2080 Ti, meaning it'll go against the RTX 3080.

So either AMD lied about power efficiency or Big Navi - if its TBP is set to 300W - will trade blows against the RTX 3080.
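The identity behind that is just performance = (perf/W) × power, so the two scale factors multiply. The 300W Big Navi TBP is this post's assumption; 225W is the Navi 10 TBP:

```python
# performance = (perf/W) x power, so the gains multiply. The 300 W Big
# Navi TBP is the post's assumption; 225 W is the Navi 10 (5700 XT) TBP.

effic_gain = 1.5        # AMD's claimed RDNA2 perf/W improvement
power_gain = 300 / 225  # +33% board power

perf_gain = effic_gain * power_gain
print(f"{perf_gain:.2f}x Navi 10 performance")  # 2.00x
```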

It could be a bit higher than 30%, even; the latest TechPowerUp 4K results have the 2080 Ti at 143% of the 5700 XT. But doubling the CUs wouldn't scale linearly, of course (or at all, per some...), so I think the 30% figure sounds about right. If the Azor tweet was about a reveal, we'll know soon!
Except RDNA2 also has whatever units they're using for RTRT, which I presume need to be powered too, somehow. The final equation may end up being RDNA x2 x66% (raster) + X (RTRT) => 300W in ray-traced games. I wonder if that's why a certain IHV is so concerned with super-precise power measurements all of a sudden...

RDNA2's RT hardware is shared with the TMUs; they won't be functional at the same time, and it adds minimal extra silicon, as stated already. I wouldn't rule it out, of course, but it's highly unlikely to consume extra power compared to pure raster performance. Said IHV is probably concerned with its own power measurements.
 
If Nvidia was particularly concerned about AMD beating them in perf/watt, why would they make available PCAT right now?
Someone on the marketing team got overexcited and thought it was a good idea.
C'mon it's the woodscrews company.

Oh wait they also did a video about fans and cooling like 2 weeks ago.
 
Nah, it's more than a slip-up of sending out random packages from NVLogistics. They had separate slides in their tech-day briefings, a separate NDA on the tools, and they sent the kits out to their NDA partners.
 
For the full Navi 10 (5700XT) they used the full Vega 10 (Vega 64) as reference, so for Navi 21 they're most probably using the full Navi 10 SKU as reference.
The RDNA marketing material's perf/W comparison had a very convoluted explanation of what they tested.
https://forum.beyond3d.com/threads/...nd-discussion-2019.61042/page-46#post-2073559
Slides from the above link- AMD Next Horizon: Gaming PDF

Isn't FidelityFX a DLSS alternative? And there's also DirectML, which i doubt RDNA2 wouldn't support.
AMD's Radeon Image Sharpening (RIS) was arguably better than DLSS 1.0, and there was talk of AMD using RadeonML to update it and offer a solution comparable to DLSS 2.0, but I haven't heard anything further about it.
I would expect something would be mentioned leading up to launch, whether it is ready at launch or shortly after, I think it is agreed they need something and the consoles point towards that being the case.
 