NVIDIA Tegra Architecture

As many believe, I think the first Maxwell dGPUs are 28nm and the second gen will be 20nm, but I also believe TK1-D is 20nm, as the first sample came back a few days before CES. In fact, TK1-D's product timing and size are suitable for 20nm. And maybe NV taped out two versions of TK1-D, one on 28nm and one on 20nm, and will choose the best solution... who knows...

There's quite a difference between anyone's "belief" or "gut feeling" and anything verifiable. The last thing I heard (and I haven't been able to verify it) is that there is one chip laid out for 20SoC, and even its design hadn't been finished at that point.

Your last sentence is the most absurd of them all; no IHV has that many resources to throw around for something as wasteful as you propose. First, engineers are good enough to know the differences between processes, and second, switching from one process's libraries to another takes half a year in a best-case scenario and is anything but cheap.
 
Well, I worked for three years at a big fabless company and I saw an IC targeting two nodes: one safe bet and one risky one. Only late in the process was the safer one chosen (by late, I mean 8 months before production on a three-year project).
OK, it was nearly 10 years ago and the cost/size of development were much lower than now, but still, this strategy was already in use.
 

10 years ago? Hahahahaha :LOL:
 
I don't doubt that Maxwell, irrespective of size, has already taped out under 28nm, but do you have a link or anything halfway credible to support that there's also been a 20SoC GPU chip tape-out already?

Surely Nvidia would use TSMC's 20nm G rather than 20SoC for their full fat GPUs, otherwise wouldn't peak performance suffer versus a more mobile focused node?

Also with a 20/16nm shrink allowing almost double the transistors, I wonder whether Tegra 6 will opt to double the SMX or just chase clock speed to boost performance. A dual-SMX mobile SoC would be almost freakish :) Of course, I don't know how Maxwell will differ in terms of its core uArch.
 
Surely Nvidia would use TSMC's 20nm G rather than 20SoC for their full fat GPUs, otherwise wouldn't peak performance suffer versus a more mobile focused node?

I'm afraid TSMC doesn't have anything other than 20SoC available as a 20nm process, but I'd be happy to be corrected.

Also with a 20/16nm shrink allowing almost double the transistors, I wonder whether Tegra 6 will opt to double the SMX or just chase clock speed to boost performance. A dual-SMX mobile SoC would be almost freakish :) Of course, I don't know how Maxwell will differ in terms of its core uArch.

Last sentence is the key. Easiest and cheapest scenario would be simply more SIMD32/SMX.
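
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch; the 192 cores per SMX figure is Kepler's published organization, while the clock values are purely illustrative assumptions, not anything reported in this thread:

```python
# Peak FP32 throughput for a Kepler-class mobile GPU, as a rough sketch.
CORES_PER_SMX = 192           # Kepler's published SMX organization
FLOPS_PER_CORE_PER_CLOCK = 2  # one FMA counts as two FP32 operations

def peak_gflops(smx_count, clock_mhz):
    """Peak FP32 GFLOPS = SMX count * cores/SMX * FLOPs/clock * clock."""
    return smx_count * CORES_PER_SMX * FLOPS_PER_CORE_PER_CLOCK * clock_mhz / 1000.0

print(peak_gflops(1, 950))   # ~365 GFLOPS: one SMX at a K1-like clock
print(peak_gflops(2, 950))   # ~730 GFLOPS: doubling SMX doubles peak throughput
print(peak_gflops(1, 1900))  # same peak via clock alone, but dynamic power
                             # scales roughly with V^2 * f, so far costlier
```

Under those assumptions, doubling the SMX count buys the same peak as doubling the clock, without the voltage/frequency penalty.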
 
I'm afraid TSMC doesn't have anything other than 20SoC available as a 20nm process, but I'd be happy to be corrected.

You would seem to be right:

[Attached image: tsmc1.jpg — TSMC process roadmap]
 
These results seem to put the K1 ahead of Haswell ULP in GFXBench offscreen:

[Attached image: FcTmRct.jpg — GFXBench offscreen results]


Notably, it's about half the score of a GT 740M, which makes some sense (since it's about half a GK107 GPU and both are running at approximately the same clocks).
These results are probably non-throttled, though.
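
As a sanity check on the "about half" reasoning, here is a minimal sketch; the core counts are GK107's and the K1's published configurations, while the clocks are illustrative assumptions:

```python
# Why "about half a GT 740M" is plausible: GK107 (as in the GT 740M) has
# 2 SMX / 384 CUDA cores, while the Tegra K1 GPU has 1 SMX / 192 cores.
def peak_gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000.0  # 2 FP32 ops per core per clock (FMA)

gt740m = peak_gflops(384, 980)  # GK107, 2 SMX; clock is an assumption
tk1 = peak_gflops(192, 950)     # Tegra K1, 1 SMX; clock is an assumption
print(f"TK1 / GT 740M peak ratio: {tk1 / gt740m:.2f}")  # ~0.48, i.e. about half
```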

Tom's Hardware got their hands on a prototype 4K monitor from Lenovo that will feature an embedded TK1, and this one got less impressive results:

[Attached image: Rd0A5jW.png — Tom's Hardware benchmark results]


Then again, it is a prototype.
A 28" 4K monitor, scheduled to release in July for $1000. I want this so much...
 

It may be a prototype, but it's also much less thermally constrained than a regular 7~12" tablet, let alone a smartphone; and it doesn't have a battery to worry about.

As for the display itself, I suspect that like all cheap 4K monitors that have been announced so far, it's based on a TN panel.
 
Regarding the first graph, here's a quote from the notebookcheck link:

In a 7-inch Tegra K1 reference tablet, Nvidia claims the ability to render 60 fps in the GFXBench 2.7.5 T-Rex offscreen test. In our benchmarks,...

Allow me to wait for independent third-party results first.
 
$1000 is cheap?

From what I recall, the 4K monitor itself will be at $799 and the AIO with the 4K monitor and Tegra K1 will be at $1199; considering the first MSRP, yes, for a 4K-resolution monitor it's not only cheap, it's actually dirt cheap by today's standards.

New rumor: Tegra 4 in the 2014 Nexus 10. If true, it's a big design win for Nvidia.

source: http://motoringcrunch.com/news/google-nexus-10-2-specs/1002672/

A Nexus 7 (2013) successor would be the rather bigger design win. The Nexus 10 is quite expensive even today and gets quite warm under stress. If true, I'd really wonder why Google didn't opt for a K1 instead.
 
Would Google be willing to downgrade the GPU featureset of all Nexus devices launched in the last 1.5 years (or ~2 years by the time the new Nexus 10 comes out)?
Tegra 4 is OpenCL-less and only supports OpenGL ES 2.0.

TK1 would make all the sense in the world, though. Apple will need quite some time to catch up to that level of performance.
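
For reference, here is a minimal summary of that featureset gap; the API levels are taken from the two SoCs' public specifications, so treat this as a reading of the spec sheets rather than an exhaustive comparison:

```python
# Featureset gap behind the "downgrade" concern, per public specs.
featuresets = {
    "Tegra 4":  {"OpenGL ES": "2.0", "desktop OpenGL": None,  "CUDA": False},
    "Tegra K1": {"OpenGL ES": "3.0", "desktop OpenGL": "4.4", "CUDA": True},
}

for soc, feats in featuresets.items():
    summary = ", ".join(f"{api}: {level}" for api, level in feats.items())
    print(f"{soc} -> {summary}")
```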
 
Interesting that the SoC in the massive Lenovo ThinkVision 28 is only running at a 2.0 GHz CPU clock; the GPU clock is unknown, but it's definitely lower than in Nvidia's reference tablet. For example, it scores 48 vs 60 FPS in T-Rex 2.7, so the GPU is probably 20% underclocked compared to the reference design's maximum.

It's not like that 28-inch giant is going to be battery- or thermally-constrained, so why the discrepancy? I reckon it could be a sign that Nvidia is struggling to yield their new chip, and selling off these lower-clocked variants at a discount would keep the cash rolling in as they improve yields.
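
The 20% figure follows directly from the two reported scores, assuming the T-Rex offscreen result is GPU-limited and scales roughly linearly with GPU clock (an assumption, not a given):

```python
# The "~20% underclocked" estimate, derived from the two reported scores.
monitor_fps, reference_fps = 48.0, 60.0
deficit = 1.0 - monitor_fps / reference_fps
print(f"Performance deficit vs. reference: {deficit:.0%}")  # 20%
```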
 
In any case, we could argue that the GPU is clocked somewhere between 750 and 820 MHz, but it would be hairsplitting IMO. This one is supposed to ship in July '14 according to the link at Tom's.

What we have here as preliminary results is a GPU that, in an AIO, performs about 32% higher than an iPhone 5S smartphone, and 77% higher in GLB2.7. With the power envelope being the big unknown here, it's far too early for conclusions, and no, the device obviously isn't ready for prime time yet either.
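
For what it's worth, here is how a 750-820 MHz bracket can fall out of the FPS ratio; the ~950 MHz peak is Nvidia's announced reference GPU clock for the K1, while the sub-linear exponent is purely an illustrative assumption to account for imperfect clock-to-FPS scaling:

```python
# Bracketing the ThinkVision 28's GPU clock from the T-Rex 2.7 scores.
REFERENCE_CLOCK_MHZ = 950.0   # Nvidia's announced peak GPU clock for the K1
fps_ratio = 48.0 / 60.0       # ThinkVision 28 vs. reference tablet

linear = REFERENCE_CLOCK_MHZ * fps_ratio             # ~760 MHz if FPS scales
                                                     # linearly with clock
sublinear = REFERENCE_CLOCK_MHZ * fps_ratio ** 0.75  # ~804 MHz if scaling is
                                                     # somewhat sub-linear
print(round(linear), round(sublinear))               # 760 804
```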
 
This is a prototype monitor that is probably not running at final SoC clocks and drivers, so I wouldn't read too much into it.
 
You wouldn't worry about battery life on that one either. The test is somewhat meaningless at this stage, but hey, a headline is a headline (for Tom's).
 