What's NVIDIA's definition for it? Is it in any way comparable to TDP?
TDP implies a certain average power draw over longer periods of time. i.e., you could have a TDP of 5 W but still draw a maximum of 20 W for 1/10th of a second, as long as your energy usage over that second stays at or under 5 Joules. Would that EDP be 20 W in that case?
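The arithmetic behind that example can be sketched as follows (a minimal illustration using the numbers above; the 1 W "rest of the window" draw is an assumption added to make the window complete, not a figure from the comment):

```python
# How a 20 W burst can still fit inside a 5 W TDP:
# TDP constrains the *average* power over a window, not the peak.

def average_power(samples):
    """samples: list of (power_watts, duration_seconds) tuples."""
    total_energy = sum(p * t for p, t in samples)  # joules
    total_time = sum(t for _, t in samples)        # seconds
    return total_energy / total_time               # watts

# 20 W for 0.1 s (2 J), then an assumed 1 W for the remaining 0.9 s (0.9 J)
window = [(20.0, 0.1), (1.0, 0.9)]
print(average_power(window))  # 2.9 W average -- well under the 5 W TDP
```

So a chip can burst far above its TDP as long as the energy over the averaging window stays within budget; EDP would name that short-term electrical ceiling.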
Seems Denver isn't exactly power efficient. Also 26W EDP on the N9.
For all intents and purposes that's what it is. I don't know why they want to call it differently. Probably because they see it as an on-the-fly adjustable metric, or maybe because they model for power directly instead of "thermals".
Nvidia's nomenclature for a dynamic TDP (Electrical Design Power).
"What makes you so sure?..."
...so as long as power consumption is less than 2x higher (which it almost surely is)
What makes you so sure?...
This conversation will be more informed after you publish your article.
I'm expecting Nexus 9 battery life to be pretty poor - but nowhere near the ~45 minutes that would result if we burned through its 6700 mAh battery at 26 W for the processor alone.
Indeed, data centers typically don't power on all the machines at once (that would cause both power and software issues).
I'm thinking that if you had a million PCs, well synchronized, that came out of idle to a big power draw (CPU, GPU) at the same time, that would mess with the power grid.
Not all machines in a data center are doing the same job. Imagine some are dedicated to providing IP addresses to other machines; if you don't start these first you'll have what I called "software" issues (as opposed to HW issues due to power delivery).
Off topic, but what kinds of software issues would those be? I'm unfamiliar with data center software, but still curious. Is it a problem of having so many pieces of hardware start up at once that driver initialization becomes problematic, or something?
AnandTech has a graph of iPad Air A7 power consumption during a power-hungry test. So in this case, would ~8 W (peak platform power minus idle power) be close to the EDP, and ~7 W be close to the TDP?
What might help untangle this Gordian knot would be a table of the theoretical maximum power consumption of competing SoCs, as I'm sure that, e.g., the A8X should also have a quite high TDP, EDP, whatever.
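The estimate proposed above can be written down explicitly (a sketch; the peak and idle values below are illustrative placeholders, not exact readings from the article's graph):

```python
# Approximating the SoC's short-term power ceiling (EDP-like number)
# from platform-level measurements, as suggested in the comment:
# subtract idle platform draw from peak platform draw.

peak_platform_w = 11.0  # assumed peak platform draw during the test
idle_platform_w = 3.0   # assumed idle platform draw

edp_estimate = peak_platform_w - idle_platform_w
print(f"EDP estimate: ~{edp_estimate:.0f} W")  # ~8 W with these assumed inputs
```

The same subtraction with a sustained (rather than peak) platform figure would give the TDP-like number; the caveat is that display, memory, and rail-conversion losses are all bundled into "platform" power, so this only bounds the SoC's draw.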