AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)

If it were an outlier product, I'd be inclined to agree. But looking at the bigger picture, we have 3 out of 4 G6X products at or above 320 watts, which is in itself a new dimension in power consumption, and the fourth G6X SKU is only a tiny bit below 300 watts. An extra 70 watts for the GPU alone seems a bit much just to bin a few more chips - especially since those chips could also be sold directly as CMPs. Then we have the heat problems on the memory of G6X cards while mining ETH. That the RTX A6000 sports only G6 without the X, and has a 50-watt lower TDP than the RTX 3090 despite being the full GA102 config, probably also comes down to binning.

Adding to that: HWiNFO has a sensor readout for GeForce RTX 30-series cards called "GPU FBVDD Input Power", where FB probably stands for frame buffer, i.e. graphics memory.
On an RTX 3090 in our lab, this goes to roughly 120 watts during ETH mining, 80 watts during FurMark (and the AIDA64 memcopy test), and 41 watts during an ALU throughput test (likewise most AIDA64 ALU tests).
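For anyone wanting to reproduce this: HWiNFO can log its sensors to CSV, and averaging a column over a run is trivial. A minimal sketch - the exact header name ("GPU FBVDD Input Power [W]") is an assumption and may differ by HWiNFO version:

```python
import csv

# Average one sensor column from a HWiNFO CSV log captured while the
# workload (ETH mining, FurMark, ...) was running.
COLUMN = "GPU FBVDD Input Power [W]"  # header name is an assumption

def average_sensor(path: str, column: str = COLUMN) -> float:
    values = []
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            cell = (row.get(column) or "").strip()
            try:
                values.append(float(cell))
            except ValueError:
                pass  # skip blanks and non-numeric summary rows
    if not values:
        raise ValueError(f"no samples found for column {column!r}")
    return sum(values) / len(values)

if __name__ == "__main__":
    print(f"mean FBVDD input power: {average_sensor('hwinfo_log.csv'):.1f} W")
```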
 
120W for the GDDR6X?
Is that really possible?
 
Taking those graphs at face value, voltage certainly looks like a contributing factor, since the cards are running at the same nominal frequency.
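Back of the envelope on why voltage matters so much at a fixed clock: dynamic power scales roughly with C·V²·f, so at the same frequency the power ratio collapses to (V1/V2)². With made-up voltages:

```python
# Rough dynamic-power scaling: P_dyn ~ C * V^2 * f.
# At equal frequency the ratio reduces to (V1 / V2)^2.
# Voltages are illustrative placeholders, not measured values.
v_better_bin = 0.90   # volts, hypothetical lower-voltage sample
v_worse_bin = 1.00    # volts, hypothetical worse bin
ratio = (v_worse_bin / v_better_bin) ** 2
print(f"+{v_worse_bin / v_better_bin - 1:.0%} voltage at the same clock "
      f"-> ~{ratio - 1:.0%} more dynamic power")
# -> +11% voltage gives ~23% more dynamic power, before leakage differences
```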
 
It's unlikely that this is the G6X devices alone. Maybe it's grouped with the memory controllers inside the GPU - Nvidia did mention that one of Ampere's features was a "dedicated memory system rail".
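A quick sanity check points the same way: divide the observed rail power by the card's peak bandwidth and you get an energy per bit roughly double what's usually quoted for the G6X devices themselves. The numbers below use the RTX 3090's public bandwidth spec; the ~7 pJ/bit device figure is a commonly cited ballpark, not a measurement:

```python
# Sanity check: if the FBVDD rail fed the GDDR6X packages alone,
# what energy per bit would 120 W at peak bandwidth imply?
rail_power_w = 120.0               # observed during ETH mining
bandwidth_bytes_s = 936e9          # RTX 3090: 384-bit bus @ 19.5 Gbps
bits_per_second = bandwidth_bytes_s * 8

pj_per_bit = rail_power_w / bits_per_second * 1e12
print(f"implied energy: {pj_per_bit:.1f} pJ/bit")  # ~16 pJ/bit
# Roughly double the ~7 pJ/bit ballpark for the DRAM devices alone,
# consistent with the rail also covering the controllers/PHY on the GPU.
```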
 
They tested one 3070 versus one 3070 Ti?

How do they control for sample variation?
How many samples do you think reviewers get? Especially at the moment? How many would be considered enough to average out sample variation?
 
How many samples do you think reviewers get? Especially at the moment?
Hardware Unboxed (Techspot) said in a recent video that they are constantly turning down video cards for review...

How many would be considered enough to average out sample variation?
I'll leave the answer to that question for the statisticians.
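(For what it's worth, the textbook estimate for pinning a mean within ±E is n = (z·σ/E)²; plug in an invented card-to-card spread and it's already more cards than any reviewer sees:)

```python
import math

# Textbook sample size for estimating a mean: n = (z * sigma / E)^2.
# sigma and E below are invented - nobody publishes the real
# card-to-card power spread for these SKUs.
z_95 = 1.96        # two-sided 95% confidence
sigma_w = 10.0     # assumed std dev of board power across samples, W
margin_w = 5.0     # want the mean pinned within +/- 5 W

n = math.ceil((z_95 * sigma_w / margin_w) ** 2)
print(f"~{n} cards needed for +/-{margin_w:.0f} W at 95% confidence")
# -> ~16 cards of a single SKU
```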

Possibly the only way to get a decent answer is to put the same watercooling loop onto each card in turn to approximately normalise GPU, VRM and memory cooling. But PCB design variation is going to make that tough. Any other ideas?

Does HWiNFO read "silicon quality" on Ampere and RDNA 2 (it used to on older GPUs, didn't it)? Does that meaningfully correspond with voltage/power/clocks?
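If it does, checking whether that readout means anything is just a correlation across however many cards you can get hold of; a minimal sketch with placeholder readings:

```python
from statistics import correlation  # Python 3.10+

# Does a reported "silicon quality" % track sustained clocks?
# All values are placeholders for illustration only.
quality_pct = [68.0, 72.0, 75.0, 81.0, 77.0]              # per-card readout
sustained_mhz = [1905.0, 1920.0, 1950.0, 1980.0, 1935.0]  # same workload

r = correlation(quality_pct, sustained_mhz)
print(f"Pearson r = {r:.2f}")  # near 1.0 would mean the readout is meaningful
```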
 
Didn’t silicon quality end with Maxwell on Nvidia’s side?
 
AMD Radeon Pro W6800 review - AEC Magazine
June 18, 2021
This beast of a card is the first pro GPU from AMD with hardware-based ray tracing built in. With a whopping 32 GB of on board memory it’s designed for the most demanding architectural visualisation workflows.
...
For now, in more mainstream viz workflows, AMD faces very stiff competition from Nvidia. The 16 GB Nvidia RTX A4000, for example, generally offers a little less performance than the Radeon Pro W6800 but costs half as much. Meanwhile, the 24 GB Nvidia RTX A5000 offers parity on price, but has a clear performance lead in some workflows and better software compatibility.

One can’t help but wonder if AMD has missed a trick by not pricing the Radeon Pro W6800 more aggressively to make it more competitive in workflows where large memory capacity is less important. Or perhaps there’s room for a Radeon Pro W6700?
 
I've given up on this generation.

I decided to give Red Team a punt this time rather than Green Team, as they are playing silly buggers.

But I can't find either, so I'm going to wait for the RX 7800 XT and RTX 4080 to see what looks good at that time. In the meantime I'm going to keep my money; they seem to have enough of other people's to keep them going.
 