I didn't remember the 50% performance-per-watt claim, but looking it up, yep, there it is, all official.
If we go off that, and total power for a 5700 XT is 270 watts, then: 15 teraflops for 270 watts, and cranked up to 350 watts at the top end, just under 20 teraflops. Under ideal conditions (i.e. what's been shown so far) that would put it over a 3080 but just under a 3090. How the two compare under less ideal conditions is unknown: how does Navi handle raytracing, and is Ampere really this bad in some titles, or is that just unfinished drivers? We'll have to wait and see.

But if they hit that 50%, here are a few cards we could see, starting at 350 watts and working down:
- 6900 XT: 24/16GB? RAM, 19.5 teraflops, $1000-800
- 6800 XT: 21/14GB? RAM, 16.5 teraflops, $600
- 6700 XT: 16GB RAM, 12.5 teraflops, $500-450
- 6700: 8/16GB RAM, 10 teraflops, $400-350

That last, lowest card should land somewhere around a 2070 Super or higher; we'll see.
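The napkin math above can be sketched out. A minimal sketch, assuming the post's 270W total-draw figure and linear scaling of throughput with power (both assumptions, not official numbers):

```python
# Napkin math for RDNA2 throughput under AMD's "+50% perf/watt" claim.
# The 270W total-draw figure and linear power scaling are this post's
# assumptions, not official specs.
rx5700xt_tflops = 9.75      # FP32 TFLOPS of the RX 5700 XT (spec sheet)
rx5700xt_power = 270        # W, the post's total-draw estimate

# +50% perf/watt at the same power budget:
tflops_at_270w = rx5700xt_tflops * 1.5              # ~14.6, "15 teraflops"

# Scale linearly up to a hypothetical 350W halo card:
tflops_at_350w = tflops_at_270w * 350 / rx5700xt_power   # ~19, "just under 20"
```

That ~19 TFLOPS figure is where the "over a 3080, under a 3090" placement comes from, since it sits between their paper FP32 numbers.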
That's assuming a lot of things, as usual. But one high-end die with two SKUs and one middle die with two seems reasonable. Still, as pointed out, AMD could've done better:
Well, looking at the Xbox Series X's 315W PSU, the supply is nominally 255W on the main 12V output plus 60W for everything else.
So with that rail rated at about 255W and actual operating consumption at about 140W, multiplied by a typical DC-DC efficiency of about 80%, we get to about 120W for the GPU. That's 33% lower than a 5700 XT and slightly higher than a 5700. Performance, meanwhile, is around 30% higher than a 5700 XT, which works out to an energy-efficiency ratio some 66% higher than the 5700 XT's, and that in turn supports AMD's claimed 50% efficiency increase.
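A rough sketch of that power-budget arithmetic, using the figures from the post. The 180W GPU-only figure for the 5700 XT is an assumption on my part (it's the kind of GPU-only number a review-site measurement gives, as opposed to total board power), chosen because it makes the "33% lower" comparison line up:

```python
# Series X power-budget arithmetic from the post; estimates, not specs.
psu_12v_main = 255            # W, nominal 12V main output of the 315W PSU
operating_draw = 140          # W, estimated actual operating consumption
dcdc_efficiency = 0.80        # assumed DC-DC conversion efficiency
gpu_power = operating_draw * dcdc_efficiency   # ~112W, "about 120W" rounded

# Assumed GPU-only draw for the 5700 XT (assumption, not board power):
rx5700xt_gpu_power = 180      # W
reduction = 1 - 120 / rx5700xt_gpu_power       # ~0.33, i.e. "33% lower"
```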
I'd like to point out that the Series X is only rated at about 22% faster than a 5700 XT, but that power draw is significantly lower than what AnandTech measured for the 5700 XT's actual draw. And while the Series X is clearly downclocked relative to a 5700 XT (30% more CUs for roughly 20% more ideal performance), it's still a huge gap. Nvidia had better hope RDNA2 isn't that efficient.
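The "30% more CUs for ~20% more ideal performance" point can be sanity-checked against the paper specs (52 CUs at 1825MHz for the Series X, 40 CUs at up to 1905MHz boost for the 5700 XT). On those numbers the gap comes out closer to 25%; the post's 22% presumably uses a slightly different 5700 XT clock:

```python
# Paper-spec check: FP32 TFLOPS = CUs * 64 shaders * 2 FLOPs/cycle * clock.
seriesx_tflops = 52 * 64 * 2 * 1.825e9 / 1e12     # ~12.15 (52 CUs @ 1825MHz)
rx5700xt_tflops = 40 * 64 * 2 * 1.905e9 / 1e12    # ~9.75  (40 CUs @ 1905MHz)

cu_ratio = 52 / 40                                # 1.30, 30% more CUs
perf_ratio = seriesx_tflops / rx5700xt_tflops     # ~1.25 on boost clocks
```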