NVIDIA Tegra Architecture

They are absolutely meaningless to any discussion of the hardware when that power consumption can simply be altered by a one-line shell command in software.
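For what it's worth, the "one-line shell command" point is barely an exaggeration: on a typical Linux-based SoC stack, a power or frequency cap is just a value behind a sysfs node. A minimal sketch of doing the same from C, assuming a hypothetical writable node (the path below is invented for illustration, not the actual Tegra one; the shell equivalent would be something like echo 5000 > /sys/kernel/sysedp/budget_mw):

Code:
#include <stdio.h>

int main(void)
{
	/* Hypothetical sysfs node for the SoC power budget, in milliwatts. */
	const char *node = "/sys/kernel/sysedp/budget_mw";
	FILE *f = fopen(node, "w");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%d\n", 5000); /* cap the budget at 5 W */
	fclose(f);
	return 0;
}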

I'm done arguing if fanboys aren't even willing to recognize Nvidia's own source code.
Those max EDP numbers don't mean anything. Those tables were most probably used for debugging purposes, to compare actual performance with DVFS against virtually uncapped
max EDP performance. That doesn't mean that actual power consumption will ever reach the max EDP limits in the tables.
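For reference, the CPU-side tables being argued about look something like this: a sketch in the spirit of arch/arm/mach-tegra/edp.c, with field names written from memory and values purely illustrative, not copied from the tree:

Code:
struct tegra_edp_limits {
	int temperature;             /* threshold in degrees C */
	unsigned int freq_limits[4]; /* max CPU kHz for 1..4 online cores */
};

/* Illustrative rows: hotter silicon gets a lower cap, and more online
 * cores share a smaller per-core budget. */
static const struct tegra_edp_limits example_edp_table[] = {
	{ 23, { 2295000, 2269500, 2244000, 2218500 } },
	{ 60, { 2269500, 2244000, 2218500, 2193000 } },
	{ 85, { 2244000, 2218500, 2193000, 2167500 } },
};

Note that a table like this caps frequency as a function of temperature and core count; by itself it says nothing about what power a shipping device will actually draw.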
 
Ok, but the point is that the actual power consumed by the application processor + memory in any real-world TK1-powered product is nowhere near 20-30w, and in the majority of cases should be well below 5w, even with GPU-intensive applications such as GFXBench. That is all I am really trying to say.
 
Nebuchadnezzar said:
And you'd be a fool to believe that they are. Qualcomm has been doing the same, and recently Samsung and others too.
And now I have you. If none of the modern measurements are real, then why does Nvidia's matter so much. By your own admission, even if one could obtain a "real" measurement of the K1, there would be nothing to compare it to. I hear the 801 is secretly 90W TDP. Shhh.... don't tell anyone.

Now, what I'm wondering is why a person who believes there are no "real" measurements of TDP on modern chips seems so fixated on one particular chip from one particular company.

Hint: I didn't use a question mark.
 
Those tables were most probably used for debugging purposes
Please read the code instead of assuming what it does.
Ok, but the point is that the actual power consumed by the application processor + memory in any real-world TK1-powered product is nowhere near 20-30w, and in the majority of cases should be well below 5w, even with GPU-intensive applications such as GFXBench. That is all I am really trying to say.
And what I'm trying to say is that this chip *is* capable of Charlie's doomsday 30W prediction, while everybody else was dismissing it as heresy. Great for Nvidia if they have the dynamic range to be able to cap this to smaller values, but then what we're discussing nowadays is no longer hardware power consumption but who has the best power management algorithms.

And now I have you.
:rolleyes:

The difference is Qualcomm's 8-9W "unadulterated" TDP vs Nvidia's 20W+ figure. If you can't figure out the discrepancy there then I'm sorry.
 
Nebuchadnezzar said:
And what I'm trying to say is that this chip *is* capable of Charlie's doomsday 30W prediction,
And one could push the Haswell in the Surface Pro 3 beyond 75W if so inclined. The horror.
 
And what I'm trying to say is that this chip *is* capable of Charlie's doomsday 30W prediction

And a GTX 780 Ti with a normal peak consumption of 250w is capable of 350w consumption with a modified BIOS and a ridiculously high overvolt. And a BMW M series car with an electronically limited top speed of 125mph is capable of 190mph with new racing tires, aero, and an unrestricted rev limiter. And so on and so forth.

At the end of the day, no consumer TK1-powered product will get even close to 20-30w for application processor + mem. power consumption, so for the author to even suggest such a thing is highly misleading at best.
 
Please read the code instead of assuming what it does.
The code won't tell you the purpose of the EDP numbers in the tables. Those tables are used to uncap DVFS for debugging purposes; the limiter values aren't real numbers measured on real devices, just empirical figures used to uncap performance under DVFS. You won't find any tablet on the market whose power subsystem can handle more than 10 W for anything beyond very short periods.
 
Only recently have chip designers come up with more advanced power modelling, or actual limiting mechanisms that consider power limits across the various SoC elements (a shared power budget, with frequency limits on the CPU or GPU side depending on load).

Most if not all SoCs before that did not care about such things, and the only real limitation was a simple per-block temperature throttling mechanism. You only have to go back one or two SoC generations to find these "left unrestricted" SoCs.
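A minimal sketch of the difference, with all names and numbers hypothetical:

Code:
/* Old scheme: each block throttles on its own temperature sensor. */
unsigned int legacy_cap_khz(int temp_c, unsigned int max_khz)
{
	return (temp_c > 90) ? max_khz / 2 : max_khz; /* crude step throttle */
}

/* Newer scheme: CPU and GPU share one electrical budget; whatever the
 * CPU is drawing is subtracted before the GPU cap is chosen. */
unsigned int gpu_cap_khz(unsigned int budget_mw, unsigned int cpu_mw)
{
	unsigned int headroom = (budget_mw > cpu_mw) ? budget_mw - cpu_mw : 0;
	return headroom * 10; /* pretend ~10 kHz of GPU clock per mW of headroom */
}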


And you'd be a fool to believe that they are. Qualcomm has been doing the same, and recently Samsung and others too. "TDP" has become a meaningless characterization of a chip's power consumption because it will never represent what the hardware is capable of. Qualcomm openly admits this by saying that the S800 targeted 5-6W in tablets versus 3.5W in phones.

Well, TDP is precisely what it says on the tin: thermal design power, i.e. what the cooling system needs to be capable of dissipating. It can't be used to assess the peak power consumption of a chip, but that's not what it's supposed to do.
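Putting numbers on that (all illustrative, not K1 data), the textbook steady-state relation is:

\[
  T_j = T_a + P \cdot \theta_{JA}
  \quad\Rightarrow\quad
  P_{\max} = \frac{T_{j,\max} - T_a}{\theta_{JA}} = \frac{95 - 25}{10} = 7\ \mathrm{W}
\]

i.e. with a 95°C junction limit, 25°C ambient, and an assumed 10°C/W junction-to-ambient thermal resistance, the cooling solution is good for about 7W sustained. That is the question a TDP figure answers; it says nothing about instantaneous peak draw.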
 
I think it has been said many times before that any theoretical maximum power consumption is completely irrelevant in a power-constrained environment.

It may well be that one chip can do 20W and another only 8W. Who cares? What matters are the power vs. performance curves. Without those you can prove any position.

In fact, if the perf/W stays in check, you could very well argue that the ability to operate at 20W is a feature instead of a liability: it simply opens up use cases beyond just phones and tablets. For Nvidia, that's probably a more important consideration than for Qualcomm, but I don't expect that you-know-who gets this kind of subtlety.
 
And a GTX 780 Ti with a normal peak consumption of 250w is capable of 350w consumption with a modified BIOS and a ridiculously high overvolt. And a BMW M series car with an electronically limited top speed of 125mph is capable of 190mph with new racing tires, aero, and an unrestricted rev limiter. And so on and so forth.

At the end of the day, no consumer TK1-powered product will get even close to 20-30w for application processor + mem. power consumption, so for the author to even suggest such a thing is highly misleading at best.


This. The max power consumption the chip is capable of in an unrestricted environment is meaningless when discussing its inclusion in a tablet. The only numbers I would be interested in are the ones AMS posted... actual measured power draw in the shipping product, or, as in this case, a non-optimised Jetson dev board. The Shield Tablet should consume much less than 7w, I'm guessing. It doesn't matter whether it's through software optimisations or not; actual performance is what counts. Period.

As for the performance per watt, I think this is pretty outstanding, actually; this is the real Nvidia graphics we have all been waiting for. Just a shame Android sucks up valuable CPU cycles, making unoptimised Android games heavily CPU-bound even on A15s.
Just imagine how efficient this chip would be running on, say, Windows Phone 9 / RT running DX12... pretty amazing, I'm guessing.
Maybe Android L with its new garbage collector and ART will help?
 
And what I'm trying to say is that this chip *is* capable of Charlie's doomsday 30W prediction, while everybody else was dismissing it as heresy.

Sure, if you overvolt and overclock it.

That also goes for every SoC in existence.

Now back in the REAL world: the Tegra K1 in the Shield Tablet uses 7 watts (or less) running a graphics-intensive test flat out. Heck, even your grossly inflated number of 10 watts is still way below the priest's number (he also stated that the K1 could never be in a tablet).

http://forum.beyond3d.com/showpost.php?p=1863301&postcount=2687
 
This is a strange discussion.

TDP is irrelevant. It's *supposed* to be a THERMAL parameter - therefore the TDP of the SoC in isolation, without considering software and temperature throttling, is of questionable value. For hardware, it makes more sense to talk of "peak power" and "real-world power" as opposed to TDP, in my opinion.

And higher peak power isn't necessarily worse. For example, more specialised logic (even just a video decoder versus doing it on the CPU!) means higher peak consumption but potentially lower average consumption for a given workload (assuming the right dynamic-static power trade-off). And the notion of "unrestricted" is equally bizarre, as every chip is necessarily restricted by its power circuitry, among other things.

On the other hand, it's clear that 7W is *not* K1's "peak power". It's the "real-world power" for a stressful workload but GFXBench is clearly not a power virus. If you can get 7W in GFXBench on the Shield Tablet, I can easily imagine 25W+ for a true power virus (CPU+GPU). Again, I'm not sure why anyone should care unless you're designing the power circuitry...
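For the avoidance of doubt about what "power virus" means here: the CPU half is just something like the sketch below (pin a busy loop on every core; a real virus would drive the GPU flat out at the same time). Entirely illustrative:

Code:
#include <pthread.h>
#include <unistd.h>

/* Spin a floating-point busy loop forever to defeat idle states. */
static void *burn(void *arg)
{
	volatile double x = 1.000001;
	for (;;)
		x *= 1.000001;
	return arg; /* unreachable */
}

int main(void)
{
	long cores = sysconf(_SC_NPROCESSORS_ONLN);
	for (long i = 1; i < cores; i++) {
		pthread_t t;
		pthread_create(&t, NULL, burn, NULL);
	}
	burn(NULL); /* occupy the main thread's core too */
}

(Build with -pthread; run at your own thermal risk.)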
 
I've got some trouble interpreting the table correctly (fwiw you can browse it online here: https://android.googlesource.com/kernel/tegra.git/+/android-tegra-3.10/arch/arm/mach-tegra/ though it's slow).
pthrot "denotes the amount of power that is reduced when CPU and GPU are fully throttled (100%)" according to the commit log. Whatever. And the rest doesn't seem easy to interpret neither. I guess though the highest entries aren't actually used? After all it seems like to reach the highest cpu clock (rather highest cpu power budget actually), you'd only need the "14000mW" entry, and for the highest gpu clock (and I don't think those devices actually ever reach that?) you'd still only need the "23000mW" entry and that would still leave room for some cpu load. There seems to be plenty of other data too in other files ("edp" stands for electrical design point, sounds more like something needed for power regulator rather than sustained power) which I can't interpret.
In any case what's important is the actual perf/w which these numbers don't tell.
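My guess at how rows like "14000mW"/"23000mW" would be consumed, purely hypothetical (names and values invented to match the shape described above):

Code:
struct core_edp_entry {
	unsigned int budget_mw;   /* budget this row assumes is available */
	unsigned int gpu_cap_khz; /* max GPU clock allowed at that budget */
};

static const struct core_edp_entry example_rows[] = {
	{  8000, 396000 },
	{ 14000, 612000 },
	{ 23000, 852000 },
};

/* Pick the highest GPU cap whose budget still fits after CPU draw. */
unsigned int pick_gpu_cap(unsigned int available_mw)
{
	unsigned int cap = 0;
	for (unsigned int i = 0; i < sizeof(example_rows) / sizeof(example_rows[0]); i++)
		if (example_rows[i].budget_mw <= available_mw)
			cap = example_rows[i].gpu_cap_khz;
	return cap; /* 0 if even the smallest row doesn't fit */
}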
 
The difference is Qualcomm's 8-9W "unadulterated" TDP vs Nvidia's 20W+ figure. If you can't figure out the discrepancy there then I'm sorry.
Still, the SHIELD Tablet DESTROYS anything QC has to offer in the tablet form factor (more than 2 times the performance of the 805 within the same power envelope, i.e. a bit less than 10W). Even better, it does it without any long-term throttling, as your test shows (more than 110 iterations of GFXBench before slightly going down). So the end user will never see this 30W thing, as it doesn't correlate with real-world usage...
 
The point was never that the user would see it. The point is that the SoC is capable of it, which is an interesting piece of factual data for everyone to understand.

What then followed was a bunch of folks rushing to NVIDIA's defence, twisting good facts to fit an all too predictable narrative. Nebu knows full well that the device won't consume that power, or generate it as heat through switching, in practice, yet a bunch of you rushed to say that doesn't matter over and over again.

Yes, it does matter. It tells you a bunch of interesting things about how K1 was designed, how it needs to be deployed, and where the limits are in the form factors it's suitable for (many of them non-obvious and certainly not limited to tablets).

That 10W is even a thing for fully embedded ARM-based SoCs is incredibly interesting. In the rush to wave away 20W a lot of interesting discussion about 10W got lost. Where's the discussion about what K1 is clearly capable of at less than nominal voltage on 28HPM at competitive power? Where's the discussion about what kind of products could be possible with K1 with active cooling if 20W was doable?

Drowned in a sea of bullshit because people feel obliged to defend the Shield Tablet and the K1 config therein. Enough; that's not what Beyond3D is about or for.
 
That 10W is even a thing for fully embedded ARM-based SoCs is incredibly interesting. In the rush to wave away 20W a lot of interesting discussion about 10W got lost. Where's the discussion about what K1 is clearly capable of at less than nominal voltage on 28HPM at competitive power? Where's the discussion about what kind of products could be possible with K1 with active cooling if 20W was doable?
That's a very good question. Have the Jetson TK1 boards, which are actively cooled, been benchmarked? Note they might require some kernel hacks to prevent overly conservative power/clock settings.
 
Drowned in a sea of bullshit because people feel obliged to defend the Shield Tablet and the K1 config therein. Enough; that's not what Beyond3D is about or for.
And when someone gives half the information about a product, putting it in a very bad light without talking about the pros, what is that?
If he had said that the TK1 can go up to 30W but at real-world power levels (8~9W) it destroys the other SoCs in performance (thus offering a better perf/W ratio), then nobody would have reacted.
If you don't want to have people on your case, start by being objective in the first place...
 
In Nebu's post about the EDP driver (in the "Discussion of Tegra's roadmap" thread) he was very clear about the peak draw and how it was governed, and the fact that the current SKUs were limited to less by the driver and the thermal budget.

His posts didn't need to concern themselves with perf/watt either. Stop picking holes in what he posted because you couldn't interpret it properly.
 