Power consumption for GPUs

The difference comes from the fact that Nvidia measures (and limits) power consumption before VRM losses occur, while AMD does it after VRM conversion. There's no 'lying', misinforming and so on, just different ways of doing the measurement (Buildzoid did a video on how the Texas Instruments INA chip that Nvidia uses to measure power consumption works).
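As a minimal sketch of why the two measurement points disagree (the 300 W and 90 % figures below are illustrative assumptions, not vendor specs): a reading taken after the VRMs misses the conversion losses that a pre-VRM reading includes.

```python
# Illustrative sketch: the same card read at two different points.
# All numbers are assumptions, not vendor specifications.

def chip_side_power(board_input_w: float, vrm_efficiency: float) -> float:
    """Power left after VRM conversion losses (the 'after VRM' reading)."""
    return board_input_w * vrm_efficiency

board_power = 300.0   # W drawn at the connectors + PCIe slot (pre-VRM reading)
efficiency = 0.90     # assumed VRM efficiency; real values vary with load

print(chip_side_power(board_power, efficiency))  # roughly 270 W, i.e. a ~30 W gap
```

With those assumed numbers, the post-VRM figure comes out about 30 W lower than the board input, which is the kind of gap the thread is arguing about.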
 
Does this mean that most of the power figures of AMD cards I've seen in reviews over the last few years are incorrect? Or were most reviewers aware of this and accounted for it?
 
Reviewers actually measure things. Most use the "from the plug" method, i.e. whole-computer consumption (excluding the monitor). Those who measure actual card power draw use specialized tools (sitting between the card and the PCIe slot, and on the power plugs). Software measurement differences are thus irrelevant for them.
YouTubers like the ones @DavidGraham mentioned are another story, though.
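As a hedged sketch of what those specialized tools report (the rail names and wattages below are invented example values): the hardware measurement simply sums the PCIe-slot draw and each auxiliary connector, so it captures the whole card regardless of what the driver's telemetry says.

```python
# Illustrative only: per-rail readings an interposer/clamp setup might report.

def card_power(rails_w: dict[str, float]) -> float:
    """Total board power = PCIe slot + every auxiliary power connector."""
    return sum(rails_w.values())

# Example values, not measurements from any real card.
readings = {"pcie_slot": 55.0, "8pin_1": 140.0, "8pin_2": 150.0}
print(card_power(readings))  # 345.0
```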
 
I see. Where are the good GPU reviews at these days? Ever since Scott Wasson and Ryan Shrout moved on I've been having a tough time finding high quality reviews that I know I can trust.
 
Everyone has their favourites, I suppose. TPU and Tom's (at least sometimes) measure just the card's power draw rather than the whole rig from the plug. TPU also has easy-to-read "overall results" per resolution (and even per some features), and ComputerBase has a similar easy-to-use overall performance graph, but that site & its articles are in German. Obviously the by far best reviews, by leaps and bounds, in a different league altogether, totally impartially considered, are at io-tech.fi, but they're in Finnish, so good luck with that.
 
That's not an actual 464 W; that's the reading of the software overlay, and those don't read AMD GPUs accurately. You need to add 30 W to 50 W on top of them to account for the full consumption of the card.
Do you have actual evidence for this for the RDNA3 cards, or are you just pushing your usual agenda?
 
That's not consistent with Igor's findings.


I don't see anything there that states it's tracking the die only? To the contrary, this would imply the opposite:

By the way, we can see very nicely that almost the full TBP is utilized starting from WQHD (in contrast to the RTX 4080), and even the torture loop barely increases it. AMD's new implementation of telemetry for the entire card has a very limiting effect here.
 
To the contrary, this would imply the opposite

This doesn't:
I had already discussed this in several articles and AMD also communicated it this way. The matching link to the article is available again at the bottom of this page.

Then he links again to the article discussing that software doesn't read the entire consumption of AMD cards.

In short, he is implying that the status quo hasn't changed. AMD also seems to have indicated as much.
 
The original German version of the article is pretty clear about this. The new telemetry option is for whole-card power consumption; that's why it's "limiting" the effect a stress test like FurMark can have.

Maybe it helps for a better understanding to read the corresponding paragraph in his summary: "AMD has completely turned telemetry on its head, which has worked out well in most respects. The fact that you can finally read out a TBP that is reasonably accurate, even if it is only a good estimate, is a big step forward. NVIDIA has long relied on real monitoring of the rails via shunts, while AMD now at least uses the summation of all values from the DCR and some mathematics, which is also possible."
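To illustrate the quoted "summation of all values from the DCR and some mathematics" (all the voltages, voltage drops and resistances below are invented examples, not values from any real card): DCR sensing infers a rail's current from the voltage drop across the inductor's DC resistance, and the per-rail V·I products are then summed into an estimated total board power.

```python
# Hedged sketch of DCR-based power estimation; every number here is an
# invented example value, not a measurement from any real card.

def rail_power_w(v_out: float, v_drop_mv: float, dcr_mohm: float) -> float:
    """Estimate one rail's power: I = V_drop / DCR, then P = V_out * I."""
    current_a = v_drop_mv / dcr_mohm   # mV / mOhm cancels to amps
    return v_out * current_a

# (output voltage in V, drop across inductor DCR in mV, DCR in mOhm) per rail
rails = [
    (1.00, 50.0, 0.25),   # hypothetical core rail: 200 A
    (1.35, 3.0, 0.30),    # hypothetical memory rail: 10 A
]
estimated_tbp = sum(rail_power_w(*r) for r in rails)
print(estimated_tbp)  # about 213.5 W for these made-up rails
```

That it's arithmetic on sensed voltage drops rather than a direct shunt measurement is exactly why Igor calls the result "only a good estimate".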
 
The new telemetry option is for whole-card power consumption; that's why it's "limiting" the effect a stress test like FurMark can have.
Oh, I see.
while AMD now at least uses the summation of all values from the DCR and some mathematics, which is also possible
Hmmm, review articles already give overclocked power figures that are higher than those in the YouTube videos with software overlays, which means these overlays are probably still not accurate enough.
 
The old "TGP" value is still there and is lower than the TBP. You have to update your readout software (to make use of the new "sensor") AND your overlay config (to actually display the new value instead of TGP) in order to see the actually correct results. Otherwise, you'll still get the old TGP, which has the issues mentioned here.
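As a toy illustration of that configuration pitfall (the sensor labels below are hypothetical; real monitoring tools name their entries differently): an overlay still pointed at the old chip-only entry keeps under-reporting even after the software update has added the whole-card value.

```python
# Hypothetical sensor labels and wattages; real readout tools differ.
sensors = {
    "GPU Chip Power (old TGP)": 280.0,  # chip-side value, misses board/VRM losses
    "Total Board Power (new)": 330.0,   # whole-card value exposed by the update
}

def overlay_reading(sensors: dict[str, float], configured_key: str) -> float:
    """Return whichever entry the overlay config happens to point at."""
    return sensors[configured_key]

print(overlay_reading(sensors, "GPU Chip Power (old TGP)"))  # 280.0 (misleading)
print(overlay_reading(sensors, "Total Board Power (new)"))   # 330.0 (correct)
```

The point: updating the software only adds the new entry; the overlay config still has to be switched over to it.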
 
You have to update your readout software (to make use of the new "sensor") AND your overlay config (to actually display the new value instead of TGP) in order to see the actually correct results.
Sigh, which gets us right back to where we started: these YouTube videos are not reliable for power consumption figures, because the person making the video has to actually use the latest version of the software and has to manually configure it to show readings from the new, semi-accurate sensor. Otherwise, his readings will be false.
 