NVIDIA Tegra Architecture

What's NVIDIA's definition for it? Is it in any way comparable to TDP?

TDP implies a certain average power draw over a longer period of time, e.g. you could have a TDP of 5 W but still draw a maximum of 20 W for 1/10th of a second, as long as your energy use over that second stays at or under 5 J. Would the EDP be 20 W in that case?
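To put rough numbers on that averaging idea (purely illustrative figures based on the 5 W / 20 W example above, nothing from NVIDIA's specs), a quick sketch:

[code]
# Sketch: a bursty power trace can exceed the TDP figure instantaneously
# while still respecting it as an average. All numbers are made up.

TDP_W = 5.0        # claimed sustainable average power
DT_S = 0.1         # sample interval in seconds

# 1 second of samples: 20 W for 0.1 s, then near-idle for the rest.
trace_w = [20.0] + [0.3] * 9

peak_w = max(trace_w)                        # the EDP-like figure
energy_j = sum(p * DT_S for p in trace_w)    # energy used over the window
avg_w = energy_j / (DT_S * len(trace_w))     # average power over the window

print(f"peak = {peak_w:.1f} W (what the power delivery has to survive)")
print(f"avg  = {avg_w:.2f} W over 1 s (what the thermal/TDP budget sees)")
print(f"within TDP: {avg_w <= TDP_W}")
[/code]

The peak comes out at 20 W while the 1-second average stays around 2.3 W, comfortably under the 5 W budget, which is exactly the distinction being asked about.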
 
For all intents and purposes, that's what it is. I don't know why they want to call it something different. Probably because they see it as an on-the-fly adjustable metric, or maybe because they model power directly instead of "thermals".
 
I don't know anything about Nvidia's definition of EDP, but generally a power delivery system has to be sized for transients, while a thermal system is designed for average power. Average power is also the metric that matters for battery life.

So perhaps Nvidia is making a distinction between EDP and TDP because both matter in different contexts.

This discussion sadly reminds me of when Charlie Demerjian claimed Tegra K1 dissipates 40 W (a figure that was clearly taken out of context). It's a little worrisome that we're still debating fundamental facts.
 
Nvidia's nomenclature for dynamic TDP (electrical design power).

Are you sure? The name suggests something closer to the maximum value that the power delivery system must be able to handle, for example during transient power spikes. That's quite different from TDP.

Besides, it does seem like a huge value for a thin, fanless tablet.

Edit: whoops, it seems RecessionCone beat me to it.
 
We've already had this discussion a few months ago on the A15 version; no, it has nothing to do with power delivery. It's a similar metric to what ARM uses for their Intelligent Power Allocator, which again has nothing to do with power delivery itself.
 
Power efficiency is a measure of both performance and power consumption. To accurately gauge the power efficiency of the CPU, one would need to measure both performance and the power consumed at the voltage rails while running a CPU-heavy application. In an application such as Geekbench 3, the single-threaded performance of Denver in the Nexus 9 is nearly 2x that of the R3 Cortex-A15 in the Shield Tablet, so as long as its power consumption is less than 2x higher (which it almost surely is), Denver's power efficiency is higher than that of the R3 Cortex-A15 in this CPU-heavy application.
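As a back-of-the-envelope sketch of that argument (the ~2x performance figure is from the post above; the power numbers are placeholders I made up, not measurements):

[code]
# Sketch of the perf/W argument: if performance is ~2x and power draw is
# anything less than 2x, perf per watt comes out ahead. Power values are
# placeholders, not measured data.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

a15_score, a15_watts = 1.0, 1.0      # normalized R3 Cortex-A15 baseline
denver_score = 2.0 * a15_score       # ~2x single-threaded Geekbench 3 score
denver_watts = 1.6 * a15_watts       # assumed: less than 2x the power

print(perf_per_watt(denver_score, denver_watts) > perf_per_watt(a15_score, a15_watts))
# True for any denver_watts < 2.0 * a15_watts
[/code]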
 
This conversation will be more informed after you publish your article.
I'm expecting Nexus 9 battery life to be pretty poor - but nowhere near the ~45 minutes that would result if we burned through its 6700 mAh battery at 26 W for the processor alone.

What might help untangle this Gordian knot would be a table of theoretical maximum power consumption for competing SoCs, as I'm sure that the A8X, for example, also has quite a high TDP, EDP, whatever.

Now back on topic: if NV/Google have trimmed the Nexus 9 to the levels I'd expect based on the first T-Rex results, it might very well last 2.5 hours or more under 3D. Nebu isn't implying that it would last only 45 minutes. While this is the second time we're having the same debate, I think we can say it has been established that the 26 W value stands as a theoretical peak.
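For what it's worth, the arithmetic behind those two scenarios looks like this (the 6700 mAh and 26 W figures are from the posts above; the ~3.8 V nominal cell voltage, the neglect of conversion losses, and the 10 W sustained-load figure are my assumptions, so treat the minutes as ballpark only):

[code]
# Sketch: runtime if the whole battery were burned at a given power draw.
# Nominal cell voltage and the sustained-load figure are assumptions.

CAPACITY_MAH = 6700
NOMINAL_V = 3.8                                # assumed Li-ion nominal voltage
energy_wh = CAPACITY_MAH / 1000 * NOMINAL_V    # ~25.5 Wh

scenarios = {
    "continuous 26 W (theoretical peak)": 26.0,
    "~10 W sustained 3D load (placeholder)": 10.0,
}

for label, watts in scenarios.items():
    hours = energy_wh / watts
    print(f"{label}: {hours:.2f} h ({hours * 60:.0f} min)")
[/code]

At 26 W continuously the pack is gone in under an hour, which lands in the same ballpark as Nebu's ~45-minute worst case once losses are accounted for, while a sustained load around 10 W lines up with the 2.5-hour estimate.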
 
Power circuitry isn't necessarily trivial when there are brutal voltage and power-draw transitions, and it becomes a problem if you're driven to make it as cheap and as small as possible.
I know a guy on a forum with an AM2-socket mobo where the power circuitry seems to have been under-engineered (a low-end Asus with very small VRMs), and it crashes when Cool'n'Quiet is enabled.

I'm thinking that if you had a million well-synchronized PCs that all jumped from idle to a big power draw (CPU, GPU) at the same time, it would mess with the power grid.
 
Indeed, data centers typically don't power on all the machines at once (that would cause both power and software issues) :)
 
Off topic, but what kinds of software issues would those be? I'm unfamiliar with data center software, but still curious. Is it a problem of having so many pieces of hardware start up at once that driver initialization becomes problematic, or something like that?
 
Not all machines in a data center are doing the same job. Imagine some are dedicated to providing IP addresses to other machines; if you don't start those first, you'll have what I called "software" issues (as opposed to HW issues due to power delivery).
 
Also: most software has never been tested under these kinds of stress conditions. Say, 10,000 machines asking for an IP address at the same time. It will probably work fine, but why take the risk?
 
In theory every machine and piece of software should be "well behaved" and smartly wait for the storage to be up, the database to be up, etc., but there may be horrible things going on: not-quite-working failover mechanisms getting triggered, moral equivalents of "keyboard not found - press F1 to continue". We probably can't even imagine it. Lots of stuff barely working, hanging from pieces of string held together with reused duct tape (e.g. a server program with a memory leak that's killed and relaunched daily, weekly, or whenever it reaches 40 GB of memory use). It's a mystery that computers work at all.
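Just to illustrate the staggering idea (a hypothetical script, not something any real data center necessarily runs; the host name and port below are made up): each node sleeps for a random delay and then polls its dependency before starting its own services, which spreads out both the inrush and the "everyone asks at once" burst.

[code]
# Hypothetical sketch: stagger node startup with random jitter and wait for
# a dependency (e.g. storage or an address service) before doing real work.
import random
import socket
import time

def wait_for_dependency(host: str, port: int, timeout_s: float = 300.0) -> bool:
    """Poll a TCP port until it answers or we give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(5.0 + random.uniform(0.0, 5.0))   # jittered retry
    return False

def start_node() -> None:
    time.sleep(random.uniform(0.0, 60.0))                # spread the herd out
    # "deps.example.internal":8080 is a made-up health-check endpoint.
    if wait_for_dependency("deps.example.internal", 8080):
        print("dependency up, starting local services")
    else:
        print("dependency never came up, bailing out")

if __name__ == "__main__":
    start_node()
[/code]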
 
What might help untangle this Gordian knot would be a table of theoretical maximum power consumption for competing SoCs, as I'm sure that the A8X, for example, also has quite a high TDP, EDP, whatever.
AnandTech has a graph of iPad Air A7 power consumption during a power-hungry test. So in this case, would ~8 W (peak platform power – idle power) be close to the EDP and ~7 W be close to the TDP?

[Image: maxpower2.png - AnandTech's iPad Air (A7) power consumption graph]
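If you wanted to pull those two numbers out of such a trace programmatically, it might look roughly like this (the samples below are invented stand-ins for the AnandTech data, and subtracting idle to isolate the SoC is my assumption):

[code]
# Sketch: estimate an "EDP-like" (peak) and a "TDP-like" (sustained average)
# figure from a platform power trace. Sample values are invented placeholders.

idle_w = 1.5
trace_w = [9.5, 8.7, 8.5, 8.4, 8.3, 8.2, 8.2, 8.2]    # platform power under load

edp_like = max(trace_w) - idle_w                      # peak minus idle    (~8 W)
tdp_like = sum(trace_w) / len(trace_w) - idle_w       # average minus idle (~7 W)

print(f"EDP-like (peak - idle):    {edp_like:.1f} W")
print(f"TDP-like (average - idle): {tdp_like:.1f} W")
[/code]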
 