NVIDIA Tegra Architecture

What is this comment based on? At 1.5W they're probably already vastly dominated by dynamic power over static; I don't think they'd have such strong performance vs. their competitors otherwise. From there, getting more performance means increasing clock speed (a linear factor in power), which necessitates increasing voltage (a roughly quadratic factor), resulting in superlinear power growth. Not sublinear.
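
A minimal sketch of that scaling argument, using the textbook dynamic-power relation P ≈ C·V²·f and assuming, purely for illustration, that voltage has to track frequency roughly linearly:

```python
# Rough sketch of the dynamic-power argument above (illustrative numbers only).
# Dynamic power scales roughly as P ~ C * V^2 * f; if voltage has to rise
# roughly in step with frequency, power grows ~cubically with clock speed.

def relative_dynamic_power(clock_scale, voltage_scale=None):
    """Dynamic power relative to baseline for a given clock scaling.

    If voltage_scale is not given, assume voltage tracks frequency linearly
    (a simplification; real DVFS curves are stepped and process-dependent).
    """
    if voltage_scale is None:
        voltage_scale = clock_scale
    return clock_scale * voltage_scale ** 2

for f in (1.0, 1.2, 1.5, 2.0):
    print(f"{f:.1f}x clock -> ~{relative_dynamic_power(f):.2f}x dynamic power")
# 1.5x clock -> ~3.38x power, 2.0x clock -> ~8x power: clearly superlinear, not sublinear.
```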

At least when moving from 1.51w to 1.74w, the GPU perf. per watt increases slightly (~ 6%) according to this slide: http://www.pcper.com/image/view/35636?return=node/59260 .

To reach the 2.6x performance improvement, all NVIDIA would need is another 6% increase in GPU perf. per watt from 1.74w to 2.00w, and again from 2.00w to 2.31w. If GPU perf. per watt were then maintained at that level from 2.31w to 4.00w, TK1 would reach slightly less than a 2x GPU performance improvement over the A7 in the iPhone 5s with both in the 2.6-2.7w range, and close to a 2.6x GPU performance improvement in the 3.9-4.0w range (which is just a bit more than 1w higher power consumption in comparison).
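
For what it's worth, here's a back-of-the-envelope version of that compounding; a rough sketch only, where the ~11 fps at ~1.74w is read off the slide above and the ~10.5 fps A7 reference point is an assumption, not a measurement:

```python
# Rough sketch of the compounding described above. Assumed inputs:
#   TK1 GPU: ~11 fps at ~1.74w (from the pcper slide, rounded numbers)
#   A7 (iPhone 5s) GPU: ~10.5 fps at ~2.6w (assumed reference point)
tk1_fps, tk1_watts = 11.0, 1.74
a7_fps = 10.5  # assumption, for illustration only

perf_per_watt = tk1_fps / tk1_watts

# two more ~6% perf/w steps: 1.74w -> 2.00w -> 2.31w
for watts in (2.00, 2.31):
    perf_per_watt *= 1.06
    print(f"{watts:.2f}w: ~{perf_per_watt * watts:.1f} fps")

# hold perf/w flat from 2.31w up to ~4w
for watts in (2.65, 3.95):
    fps = perf_per_watt * watts
    print(f"{watts:.2f}w: ~{fps:.1f} fps (~{fps / a7_fps:.1f}x the assumed A7 figure)")
```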

As long as power efficiency doesn't drop dramatically when moving beyond 1.74w GPU power consumption, NVIDIA's performance claims should hold true for a thin tablet form factor. We'll see.
 
At least when moving from 1.51w to 1.74w, the GPU perf. per watt increases slightly (~ 6%) according to this slide: http://www.pcper.com/image/view/35636?return=node/59260 .

Those benchmark numbers of 9 and 11 FPS are much less precise than the percentage you're coming up with. They're almost certainly rounded to whole numbers. What if that 9 was rounded from 9.49 and that 11 was rounded from 10.51? That'd give you a very different perf/W scaling. The margin of error is huge here.
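
To put a number on that margin of error, a quick sketch; the wattages are the ones quoted in this thread, and the fps bounds are just rounding extremes, not real data:

```python
# How much the ~6% perf/w figure depends on rounding of the 9 and 11 fps numbers.
low_w, high_w = 1.51, 1.74

def perf_per_watt_ratio(fps_low, fps_high):
    return (fps_high / high_w) / (fps_low / low_w)

nominal = perf_per_watt_ratio(9.0, 11.0)     # ~1.06x
worst   = perf_per_watt_ratio(9.49, 10.51)   # rounding flatters the lower point
best    = perf_per_watt_ratio(8.50, 11.49)   # rounding flatters the higher point
print(f"nominal ~{nominal:.2f}x, but anywhere from ~{worst:.2f}x to ~{best:.2f}x fits the rounded data")
# i.e. anything from a slight perf/w regression to a ~17% gain is consistent with 9 and 11 fps.
```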

Voltage scaling is probably only so fine-grained, so you'd expect it to look linear over small clock ranges; that may be the case going from ~1.5W to ~1.75W. It would definitely not be the case going all the way up from there to peak performance.

To reach the 2.6x performance improvement, all NVIDIA would need is another 6% increase in GPU perf. per watt from 1.74w to 2.00w, and again from 2.00w to 2.31w. If GPU perf. per watt were then maintained at that level from 2.31w to 4.00w, TK1 would reach slightly less than a 2x GPU performance improvement over the A7 in the iPhone 5s with both in the 2.6-2.7w range, and close to a 2.6x GPU performance improvement in the 3.9-4.0w range (which is just a bit more than 1w higher power consumption in comparison).

As long as power efficiency doesn't drop dramatically when moving beyond 1.74w GPU power consumption, NVIDIA's performance claims should hold true for a thin tablet form factor. We'll see.

So what you're saying is that nVidia has no voltage scaling at all (and achieves such great perf/W at the low end with magic), or you don't believe in physics, or what?
 
Well, naturally, voltage scaling will not be perfectly linear in reality. My point is that, if NVIDIA is able to achieve a perf. per watt in the 1.75-4w range that is reasonably close to the [admittedly imprecise] data points they have provided so far in the 1.5-1.75w range, then their 2.6x performance claims may have some validity even for a TK1 variant used in a thin tablet. Note that Tegra 4's power consumption should be no better than TK1's, and the only actively cooled device with Tegra 4 is Shield (other devices such as the Tegra Note 7" tablet and the Xiaomi Mi3 5" phone, among others, appear to work just fine, albeit at reduced frequencies relative to Shield).

At the end of the day, it doesn't matter too much, because the GPU performance difference between an actively cooled Shield and a Tegra Note 7 is "only" ~ 15%.

I would guess that the 365 GFLOPS throughput number that NVIDIA provided (in the slide with Xbox360 and PS3) refers to Shield 2, whereas in tablets it would be closer to 320 GFLOPS throughput (which would be very close to the 852MHz GPU clock operating frequency that Rys referenced earlier). So I do agree with Rys on the frequency target for tablets (and I'm pretty sure I mentioned something similar many months ago when Kepler.M was first announced).
 
I would guess that the 365 GFLOPS throughput number that NVIDIA provided (in the slide with Xbox360 and PS3) refers to Shield 2, whereas in tablets it would be closer to 320 GFLOPS throughput.

My guess is that the number is derived using a Kepler boost clock rather than the base clock, and that in normal operation the clock and performance will be lower. Based upon the 192 shaders, that clock for the GPU would be ~950MHz, and that seems really high for a mobile GPU.
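
The clock figures being tossed around here follow from the usual peak-FLOPS bookkeeping (shader count × 2 flops per clock for an FMA × clock); a quick sketch, assuming the 192-shader configuration from the K1 announcement:

```python
# Peak GFLOPS = shader count * 2 (one FMA = 2 flops/clock) * clock in GHz
SHADERS = 192
FLOPS_PER_CLOCK = 2  # fused multiply-add

def peak_gflops(clock_mhz):
    return SHADERS * FLOPS_PER_CLOCK * clock_mhz / 1000.0

print(f"950 MHz -> ~{peak_gflops(950):.0f} GFLOPS")  # ~365 GFLOPS, the slide figure
print(f"852 MHz -> ~{peak_gflops(852):.0f} GFLOPS")  # ~327 GFLOPS, i.e. 'closer to 320'
```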

Both Adreno and A7's GPU run around 400-500MHz. This Tegra would then have many more shaders and a much higher clock. I wouldn't believe that, not at 28nm at least.
 
I think we can all be patient until final devices appear to see what is what. We can then compare it to S805, A8 and whatever else will be shipping in devices. We've seen that kind of history repeat itself four times so far, and each and every time the exaggerated optimism ended up as just a bunch of fireworks. Now the odds are better hardware-wise, but probably also because K1 is truly a convergence between the ULP space (upper threshold) and the PC space (lower threshold), and it makes fine material for devices that actually sit in between, preferably on Windows 8.x.

Imagine a device like the Asus Transformer Book T100 with a K1 at a similar price point (but with a Denver CPU). I'd buy one instantly.
 
Now the odds are better hardware-wise, but probably also because K1 is truly a convergence between the ULP space (upper threshold) and the PC space (lower threshold), and it makes fine material for devices that actually sit in between, preferably on Windows 8.x.

Imagine a device like the Asus Transformer Book T100 with a K1 at a similar price point (but with a Denver CPU). I'd buy one instantly.

The only hope that Tegra chipsets have on a Windows platform in the short term is through Microsoft [Surface]. If Microsoft can successfully merge Windows Phone and Windows RT to provide a Windows Consumer OS, then the longer term prospects for Tegra chipsets on Windows may be better, but this is all very much unknown. NVIDIA's licensed GPU technology will of course be used by Intel on the Windows platform. NVIDIA and Intel will be collaborating more closely in the future than in the past, but the fruit of that labor and implications of that collaboration are unknown at this time.

The only sure bet for Tegra chipsets at this time is Android (and perhaps Microsoft Surface products too).
 
The only hope that Tegra chipsets have on a Windows platform in the short term is through Microsoft [Surface]. If Microsoft can successfully merge Windows Phone and Windows RT to provide a Windows Consumer OS, then the longer term prospects for Tegra chipsets on Windows may be better, but this is all very much unknown. NVIDIA's licensed GPU technology will of course be used by Intel on the Windows platform. NVIDIA and Intel will be collaborating more closely in the future than in the past, but the fruit of that labor and implications of that collaboration are unknown at this time.

The only sure bet for Tegra chipsets at this time is Android (and perhaps Microsoft Surface products too).

What makes you think so?
 
NVIDIA's CEO said something along those lines some months ago, soon after Intel's new CEO was appointed (I don't remember exactly when and where though).

Oh, yeah, I do remember JHH saying something vaguely friendly about Intel. In no way does this mean the latter is interested. What would they need NVIDIA for?
 
What would they need NVIDIA for?

Maybe because NVIDIA has really good GPU technology, Intel has really good CPU technology, and many consumers and system builders really like the combination of Intel CPUs + NVIDIA GPUs? Who knows. Use your imagination :D
 
That and nVidia announced they were going to sell Geforce IP for SoCs.
 
The only hope that Tegra chipsets have on a Windows platform in the short term is through Microsoft [Surface]. If Microsoft can successfully merge Windows Phone and Windows RT to provide a Windows Consumer OS, then the longer term prospects for Tegra chipsets on Windows may be better, but this is all very much unknown.

Windows tablets like the T100 hybrid I mentioned also starve from a lack of applications in tablet mode; otherwise one of RT's biggest problems isn't magically going away. We're now seeing dual-boot Windows/Android tablets appear, probably exactly to find a temporary middle-of-the-road solution, but I'm treating them with equal scepticism since most of them are too heavy in tablet mode. Vendors are experimenting to find something that might sell.

Microsoft's hardware is unfortunately overpriced.


NVIDIA's licensed GPU technology will of course be used by Intel on the Windows platform. NVIDIA and Intel will be collaborating more closely in the future than in the past, but the fruit of that labor and implications of that collaboration are unknown at this time.
Intel licensing NV GPU technology? Not unlikely, but until recently Intel's graphics folks didn't sound like they were ready to concede defeat. I've fooled around with a T100, and while its GPU is roughly at iPad 4 performance level, it doesn't throttle, doesn't consume much for its category, and seemed quite stable.

The only sure bet for Tegra chipsets at this time is Android (and perhaps Microsoft Surface products too).
So if, let's say, Asus hypothetically wanted to use it in a T100-like hybrid tablet/notebook device with Windows, NV would say no? I don't know what vendors are planning, to be honest; however, the rush to get Denver out can't just be about following nonsense 64-bit marketing. For which, ironically, quad core and 4+1 apparently isn't a must-have for some reason.

That and nVidia announced they were going to sell Geforce IP for SoCs.

Getting a couple of dozen cents per core isn't exactly overwhelming for NV's business model; yes, it might get them some market share, but that's about it. For FY13 they broke out R&D expenses for Tegra at roughly $300M (annual basis), which is about twice the total revenue IMG made that year. And while the imaginary break-even point of $1B that JHH himself had mentioned for Tegra "magically" shrank to $700M that year, I'm all ears as to how you'd expect that department to restructure in order to cut expenses back far enough to make any sort of IP business reasonable.
 
Getting a couple of dozen cents per core isn't exactly overwhelming for NV's business model; yes, it might get them some market share, but that's about it. For FY13 they broke out R&D expenses for Tegra at roughly $300M (annual basis), which is about twice the total revenue IMG made that year. And while the imaginary break-even point of $1B that JHH himself had mentioned for Tegra "magically" shrank to $700M that year, I'm all ears as to how you'd expect that department to restructure in order to cut expenses back far enough to make any sort of IP business reasonable.

I just pointed out their announcement. Don't kill the messenger...
 
I just pointed out their announcement. Don't kill the messenger...

My ancestors actually killed the messenger if he bore bad news :LOL: Jokes aside, I am actually commenting on the message itself. Selling GPU IP - if successful - will gain them market share, and then what? Another dead end?

Instead of, say, selling at $25 per SoC, it's so much better to sell GPU IP at $0.25/core, or what exactly am I missing?
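
Just to make the volumes concrete, a rough sketch using the figures thrown around above ($25/SoC, $0.25/core, ~$300M annual Tegra R&D); all illustrative, none confirmed:

```python
# Unit volume needed each year just to cover the assumed Tegra R&D spend.
rd_budget = 300e6  # assumed annual Tegra R&D, per the figure quoted above

for label, revenue_per_unit in (("selling SoCs at $25", 25.0),
                                ("licensing GPU IP at $0.25/core", 0.25)):
    units = rd_budget / revenue_per_unit
    print(f"{label}: ~{units / 1e6:,.0f}M units/year")
# ~12M SoCs vs. ~1,200M licensed cores per year: the royalty route needs enormous
# volume (or a much smaller R&D bill) before it pencils out.
```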
 
My ancestors actually killed the messenger if he bore bad news :LOL:

This is madness!…

More to the point, Intel already has pretty good (and fast-improving) GPU technology, which has the added benefits of being free to use (it's theirs), of being tuned to their manufacturing process, of being designed for their cache hierarchy, etc. They even have Knights Landing IP if they ever feel like giving the Larrabee approach another try.

I really don't see the point of going to NVIDIA (or anyone else, for that matter) for GPU IP.
 
This is madness!…

Chances are high that you actually liked 300 as a movie (despite the fact that it's typical Hollywood crap wrapped in a g** cartoon) :LOL:

More to the point, Intel already has pretty good (and fast-improving) GPU technology, which has the added benefits of being free to use (it's theirs), of being tuned to their manufacturing process, of being designed for their cache hierarchy, etc. They even have Knights Landing IP if they ever feel like giving the Larrabee approach another try.

I really don't see the point of going to NVIDIA (or anyone else, for that matter) for GPU IP.
I might not be as barbaric as my ancestors, but I would like to shoot the bloke who had the idea to circulate the rumor that Intel wanted to use LRB in the long run for the ULP market. If it's not a rumor, then the bloke at Intel who had that idea.... :oops:

Other than that I agree.
 
Chances are high that you actually liked 300 as a movie (despite the fact that it's typical Hollywood crap wrapped in a g** cartoon) :LOL:

It was entertaining, but little more.

I might not be as barbaric as my ancestors, but I would like to shoot the bloke who had the idea to circulate the rumor that Intel wanted to use LRB in the long run for the ULP market. If it's not a rumor, then the bloke at Intel who had that idea.... :oops:

Other than that I agree.

Your ancestors were, by definition, the very opposite of barbaric. :p

As for Larrabee, from what I've heard it was supposed to be used in Haswell, not Atoms. Things happened differently, and I don't expect Intel's graphics IP to converge with KNL any time soon. Still, that's plenty of good graphics/parallel IP that Intel can use without having to rely on third parties, which is not really their style anyway.
 
I really don't see the point of going to NVIDIA (or anyone else, for that matter) for GPU IP
Well, once you account for the process advantage, Intel's HD series, while decent enough, is rather pedestrian.
 

http://www.youtube.com/watch?v=Pfp_ZFs7DIA

I wish Nvidia would release the Tegra Note 7 with Tegra K1 sooner.
 