Power consumption for GPUs

Igor'sLAB posted some significant findings on power consumption values for AMD and NVIDIA GPUs, as the two vendors evidently differ quite substantially in how they do their measurements. Igor used special equipment (shunt measurements with an oscilloscope) to measure the cards' power consumption externally and compare those values against the software-reported values.

He found that NVIDIA's software measurements match the hardware measurements, because NVIDIA does the monitoring on the board at the respective 12-volt rails, i.e. at the external PCIe power connectors (Aux) and the motherboard slot (PEG), before the respective consumers are supplied. In other words, NVIDIA measures the power for the whole card (TBP). So if a card like the 3090Ti reads 450W via software monitoring, it reads that same value with hardware monitoring.

AMD doesn't do that: they only expose software measurements of TGP (GPU and memory), not of the whole card (TBP). TBP matters especially once the card heats up and the voltage converters start burning power; the power for the fans is also included in TBP, not TGP.

In the end, software monitoring for AMD cards doesn't match hardware monitoring. If a card like the 6950XT reads 330W via software, in reality it actually consumes 430W, a roughly 30% increase in power consumption. A 6750XT ends up with a 20% increase, from 225W to 270W, and the same goes for the 6650XT.

[Image: Power-Draw.png]
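To put the gap in numbers, here is a minimal Python sketch that recomputes the percentage increases quoted above from the software-reported and externally measured values; the figures are just the ones mentioned in this post, not Igor's full dataset.

```python
# Software-reported vs externally measured power draw, in watts.
# Values are the ones quoted in the post above, not a full reproduction
# of Igor's measurements.
readings = {
    "RX 6950 XT": (330, 430),
    "RX 6750 XT": (225, 270),
}

for card, (software_w, measured_w) in readings.items():
    extra_w = measured_w - software_w
    increase_pct = 100 * extra_w / software_w
    print(f"{card}: software {software_w} W, measured {measured_w} W "
          f"(+{extra_w} W, ~{increase_pct:.0f}% higher)")
```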


The implications of this are significant: AMD needs to expose true measurements of total power consumption and not rely on marketing numbers. This matters for the upcoming 7900XTX launch, because if the situation remains the same, then AMD's 355W number doesn't reflect the whole truth.


 
Which review sites go off what the software reports? I'd hope none. Especially when NV seems to hand out PCAT devices.

Seems like more of an issue for end users who do rely on the software numbers. Even more so when they try to do their own comparisons, e.g. posting their undervolt results.
 
Which review sites go off what the software reports? I'd hope none. Especially when NV seems to hand out PCAT devices.
Many reviewers just measure total system power consumption. Faster GPUs make the CPU do more work, which increases the total power consumption of the system, and those reviewers then attribute the whole increase to the faster GPU alone.

Example: here is Hardware Unboxed during the 6950XT review, measuring only total system power.

 
Beating a dead horse? This has been well known since Vega or even some series before it.

Or is this some kind of "but what about..." thread trying to find power-related issues for another IHV?

Is measuring total system power with different GPUs on the same system "irrelevant" now, or why is that a bad thing?
 
Beating a dead horse? This has been well known since Vega or even some series before it.
I didn't know that, and many people don't. We watch people play on YouTube with software monitoring on screen and think that is the real value for AMD GPUs.

Is measuring total system power with different GPUs on the same system "irrelevant" now, or why is that a bad thing?
It's not irrelevant of course, but it's not a number you can infer a GPU's power consumption from. It just means that GPU X plus CPU X use more power than GPU Y plus CPU Y. That's it.

For example, from the HU review above, the 6900XT has a 300W TDP and uses 444W of total system power, while the 6950XT has a TDP of 335W yet its total system power is 528W. That's a difference of 84W, far more than the 35W difference between the 6900XT's and 6950XT's ratings.
 
It's not irrelevant of course, but it's not a number you can infer a GPU's power consumption from. It just means that GPU X plus CPU X use more power than GPU Y plus CPU Y. That's it.

For example, from the HU review above, the 6900XT has a 300W TDP and uses 444W of total system power, while the 6950XT has a TDP of 335W yet its total system power is 528W. That's a difference of 84W, far more than the 35W difference between the 6900XT's and 6950XT's ratings.

What do you mean? That's total SYSTEM power, not a graphics card's TBP
 
What do you mean? That's total SYSTEM power, not a graphics card's TBP
My point exactly. Total system power doesn't tell you much about the power rating of different GPUs. Does the 84W difference between the 6900XT and 6950XT come from the GPU alone, or from the CPU (doing more fps) and the GPU together?

Another interesting point: the difference between the 3090Ti's total system power (572W) and the 6950XT's (528W) is just 44W. It should be 115W based on the official numbers for each card (450W for the 3090Ti, 335W for the 6950XT). Does that 44W difference come from the 3090Ti alone, from the CPU doing more work for the 3090Ti, or from both CPU and GPU?
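To spell out why those system-level deltas can't be pinned on the GPU alone, here is a small sketch using only the numbers quoted in this thread (HU total system power and the official board ratings); the CPU/GPU split behind the "unexplained" figure is exactly what total system power can't tell you.

```python
# Total system power (from the HU review cited above) vs the official board
# power ratings. All values in watts, taken from the posts in this thread.
cards = {
    "RX 6900 XT": {"system": 444, "rated": 300},
    "RX 6950 XT": {"system": 528, "rated": 335},
    "RTX 3090 Ti": {"system": 572, "rated": 450},
}

pairs = [("RX 6950 XT", "RX 6900 XT"), ("RTX 3090 Ti", "RX 6950 XT")]
for a, b in pairs:
    sys_delta = cards[a]["system"] - cards[b]["system"]
    rated_delta = cards[a]["rated"] - cards[b]["rated"]
    # The leftover is some unknown mix of the GPU drawing above/below its
    # rating and the CPU working harder for the faster card.
    print(f"{a} vs {b}: system delta {sys_delta} W, rated delta {rated_delta} W, "
          f"unexplained {sys_delta - rated_delta:+} W")
```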
 
Granted, I didn't test a 6950XT, but this isn't really what I determined on all the other AMD cards I tested. It wouldn't surprise me if it's just something that's vendor-card specific. Not to mention the card he covers is beyond the AMD spec, so that 335W TGP doesn't even apply.

Compare the reference card boosting to 2310MHz: https://www.techpowerup.com/review/amd-radeon-rx-6950-xt-reference-design/35.html
To the MSI card boosting to 2450MHz: https://www.techpowerup.com/review/msi-radeon-rx-6950-xt-gaming-x-trio/36.html

What did you expect?
 
Isn't nVidia doing some scheduling work in software that AMD does in hardware (not sure if this changed in RDNA3)? If so, the nVidia-provided PCAT wouldn't account for some work done on the CPU (beyond the CPU working harder at higher frame rates).

I don't understand why system power draw isn't useful, except maybe when trying to suss out idle/desktop power draw. Total system power draw at vsynced, game-limited, and equal frame rates would make for real-world efficiency numbers just as interesting as the card defaults, IMO.
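For what that comparison could look like, here is a hypothetical sketch of a frame-rate-capped efficiency calculation; the cap and wattage numbers are made up for illustration, not measurements from any review.

```python
# Hypothetical iso-framerate efficiency comparison: cap both systems at the
# same frame rate and compare energy per frame. All numbers are illustrative.
CAP_FPS = 120  # assumed vsync/frame cap

total_system_power_w = {
    "System A": 410,  # made-up total system power at the cap
    "System B": 455,  # made-up
}

for name, watts in total_system_power_w.items():
    joules_per_frame = watts / CAP_FPS
    print(f"{name}: {watts} W at {CAP_FPS} fps -> {joules_per_frame:.2f} J/frame")
```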
 
Saying total system power draw isn't useful because the CPU might be pushed harder on a faster card is an oxymoron. Shouldn't you, by the same logic, call all benchmarks irrelevant because the faster card is pushing the CPU harder for extra performance, too?
 
Isn't nVidia doing some scheduling work in software that AMD does in hardware (not sure if this changed in RDNA3)?
That's an urban myth to explain the overhead difference between AMD and NVIDIA in DX12. This overhead has nothing to do with scheduling (which is similar across Vega, Turing, Ampere, RDNA1, RDNA2 and Ada).

Saying total system power draw isn't useful
It's useful, but it's not really useful if you are trying to isolate power consumption for a specific GPU, especially in comparison to other faster or slower GPUs. If you are content with using total system numbers, then you should explain these anomalies.
the difference between the 3090Ti's total system power (572W) and the 6950XT's (528W) is just 44W. It should be 115W based on the official numbers for each card (450W for the 3090Ti, 335W for the 6950XT). Does that 44W difference come from the 3090Ti alone, from the CPU doing more work for the 3090Ti, or from both CPU and GPU?
Does the 84W difference between the 6900XT and 6950XT come from the GPU alone, or from the CPU (doing more fps) and the GPU together?
 
I didn’t know about this but I get my power numbers from TPU and I think they measure it properly using hardware probes at the socket.
 
My point exactly. Total system power doesn't tell you much about the power rating of different GPUs. Does the 84W difference between the 6900XT and 6950XT come from the GPU alone, or from the CPU (doing more fps) and the GPU together?
From the CPU and GPU, or a really good vs. bad sample, or something.
TPU tests GPU-only (whole card) power consumption; the difference between the 6900 and 6950 is under 40W in both maximum and gaming loads in their tests.
 
When did Nvidia change their power specs?
Going back a decade...
Kepler had the GTX680 listed at 195W and the GTX780 at 250W; before that, the GTX580 was a 244W TDP.
Back with Maxwell they listed the GTX980 as a 165W TDP, though the card was pulling about the same power as known 200-250W cards.
They did the same with Pascal but were a bit closer: the GTX1080 FE was listed as 180W but sat in the same range as 200-250W cards.
Turing was about the same: the RTX2080 FE was rated at a 225W TDP but pulled ~60W more than the GTX1080 FE, sitting alongside cards pulling 250W+.

Edit: Historically, it seems fairly common practice that when there is a power-efficiency push from marketing, they redo their power ratings/specs.
Both AMD and Nvidia have done this before.
 
That doesn't sound right to me. Things generally got better around Fermi because it was important for getting Tesla GPUs into datacenters. It was a big deal not to draw more power than it should, which means actually knowing how much power it's drawing. Another way to see it: you don't see cards drawing more than their TDP starting with Fermi. TPU has been measuring actual GPU power draw since Fermi:

GTX280 TDP=236W TPUAvg=249W https://www.techpowerup.com/review/point-of-view-geforce-gtx-280/23.html
GTX480 TDP=250W TPUAvg=223W https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/30.html
GTX680 TDP=195W TPUAvg=166W https://www.techpowerup.com/review/nvidia-geforce-gtx-680/25.html
GTX780 TDP=250W TPUAvg=199W https://www.techpowerup.com/review/nvidia-geforce-gtx-780/24.html
GTX980 TDP=165W TPUAvg=156W https://www.techpowerup.com/review/nvidia-geforce-gtx-980/24.html
GTX1080 TDP=180W TPUAvg=166W https://www.techpowerup.com/review/nvidia-geforce-gtx-1080/24.html
RTX2080 TDP=215W TPUAvg=215W https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-founders-edition/31.html
RTX3080 TDP=320W TPUAvg=302W https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/31.html
RTX4090 TDP=450W TPUGaming=346W https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-founders-edition/39.html
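As a quick sanity check on that list, here is a small sketch computing the measured-to-TDP ratio for each entry, using only the figures above (note the last entry compares a gaming average against the 450W limit, so it's the least apples-to-apples):

```python
# Measured average draw vs rated TDP, using the TPU figures listed above (watts).
cards = [
    ("GTX 280",  236, 249),
    ("GTX 480",  250, 223),
    ("GTX 680",  195, 166),
    ("GTX 780",  250, 199),
    ("GTX 980",  165, 156),
    ("GTX 1080", 180, 166),
    ("RTX 2080", 215, 215),
    ("RTX 3080", 320, 302),
    ("RTX 4090", 450, 346),
]

for name, tdp_w, measured_w in cards:
    ratio = measured_w / tdp_w
    flag = "over" if ratio > 1 else "at or under"
    print(f"{name}: {measured_w}/{tdp_w} W = {ratio:.2f} ({flag} rated TDP)")
```

Only the pre-Fermi GTX 280 comes out over its rated TDP, which is the point being made above.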
 
Ahhh... that may be part of it.
Before the GTX980, TPU used Crysis 2 for their gaming power consumption numbers.
For the GTX980 review, their gaming power consumption testing used Metro: Last Light, and there are some interesting differences.

Still, the changes around that time made it impossible to compare anything apples to apples, and other reviews from that period showcased that perfectly.
 
It could be that the interposer losses are bigger than expected. You can see that a lot of people have cool-running cards.

If you compare these two tests, the thermal budget for both is equal. The hotspots of the 4090 TUF and the 7900 XTX TUF run at nearly the same temperature. The 4090 is a little bit cooler, but its fans also spin 15% faster, so I think this is a tie.

AMD should give the card more power. Give it 500W and water cooling and this card will be a beast.

Also interesting is the power scaling. I took the 4080 Strix and the 7900 XTX TUF from TechPowerUp. I guessed the power limits of the 7900 XTX, but if AMD says the power limit is 355W and the total limit is 15% higher, then I think it will not exceed 420W.

If you check it out, AMD scales perfectly! I have never seen this before. Scaling only breaks down when the power limit is not enough.

[Image: Powerlimit.jpg]
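For reference, the arithmetic behind the 420W estimate above, assuming the 355W board power and a 15% uplift as stated:

```python
# If the board power limit is 355 W and the maximum limit is 15% above it,
# the ceiling works out to roughly 408 W, comfortably under 420 W.
board_power_w = 355
uplift = 0.15

max_power_w = board_power_w * (1 + uplift)
print(f"{board_power_w} W + {uplift:.0%} = {max_power_w:.0f} W")  # -> 408 W
```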
 
Also interesting: AMD is always at the power limit while gaming. The 4080 has some headroom to its maximum power.

[Attached images]
 