NVIDIA Tegra Architecture

Or perhaps it explains why Toshiba isn't successful at making tablets? IMHO, before drawing conclusions you'd need a larger sample of reviews, and reviews of other designs based on Tegra 4.

Perhaps. But Toshiba is quite good at making laptops, and this isn't hugely different. What other reason could there be for a tablet running hot than excessive power consumption?
 
Does anyone know what that "closest mobile competitor" is? Bay Trail should already offer all the same features (minus CUDA) much earlier (though it is going to be slower), and I thought Rogue should as well, though I'm not sure if anyone is going to sell the DX11 versions. No idea about the feature set of Adreno 4xx or Mali.

One candidate could be the Adreno 320, since it is OpenGL ES 3.0 compliant. On another note, I'm impressed with the Island demo; NV has known for years how to create pretty tech demos. Others should really take a cue from that, since tech demos can be technically interesting and pleasing to the eye at the same time.
 
So are we making a motion to force Ailuros to eat a hat or what?


The diagram, though, is most probably just taken from some ordinary desktop Kepler, because it still shows 16 TMUs even though Logan most likely has just 8 per SMX, like GK208. Well, if it really is that close to GK208, that is.

Maybe it makes more sense to have the same TMU/ALU ratio as GK107, after looking at the typical workload in Android games?
Or maybe GK208 was developed really fast, after the "half-GK107" design had already been submitted to Logan...
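For reference, a quick sketch of what those two TMU configurations imply. These figures are my assumptions, not confirmed specs: the standard Kepler count of 192 ALUs per SMX, with 16 TMUs per SMX on GK107-style parts versus 8 on GK208-style parts (the latter per the speculation above):

```python
# Rough ALU:TMU ratio comparison for the two Kepler configurations
# discussed above. Assumed figures: 192 ALUs per SMX (standard Kepler),
# 16 TMUs per SMX on GK107-style parts vs. 8 on GK208-style parts.
ALUS_PER_SMX = 192

for name, tmus_per_smx in [("GK107-style", 16), ("GK208-style", 8)]:
    ratio = ALUS_PER_SMX / tmus_per_smx
    print(f"{name}: {ALUS_PER_SMX} ALUs / {tmus_per_smx} TMUs "
          f"= {ratio:.0f} ALUs per TMU")
```

So a GK208-style layout doubles the ALU:TMU ratio from 12:1 to 24:1, which is the sort of shift that would only make sense if shader load dominates texturing in the target workloads.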


Does anyone know what that "closest mobile competitor" is? Bay Trail should already offer all the same features (minus CUDA) much earlier (though it is going to be slower), and I thought Rogue should as well, though I'm not sure if anyone is going to sell the DX11 versions. No idea about the feature set of Adreno 4xx or Mali.
AnandTech only compares with the iPad 4...
I thought there were no Rogues with DX11 compliance.



Toshiba's latest Tegra 4-powered tablet is available from Amazon, but 4 out of 7 reviews (at this time) mention overheating problems: http://www.amazon.com/Toshiba-Excit...iewpoints=1&sortBy=bySubmissionDateDescending

I guess this explains why we're not seeing many Tegra 4s in phones.

That's too bad, the new Toshiba tablets looked really good on paper.


Perhaps. But Toshiba is quite good at making laptops, and this isn't hugely different. What other reason could there be for a tablet running hot than excessive power consumption?

I think the LED backlight panel in the iPad 3/4 is responsible for a good chunk of the heating that happens in those tablets.
 
So are we making a motion to force Ailuros to eat a hat or what?

As I said before, I've never had a problem taking things back or apologizing in public when I've been wrong. Aren't you in a bit of a hurry to wear your party hat?

AnandTech only compares with the iPad 4...
I thought there were no Rogues with DX11 compliance.
Not in any variants announced yet; all announced so far are DX10.0 AFAIK, while future variants will be DX11.1. Neither Android nor iOS really needs more than DX10.0 for the time being. For anything Windows, DX11 is obviously more of a necessity in order to be competitive.
 
Perhaps. But Toshiba is quite good at making laptops, and this isn't hugely different. What other reason could there be for a tablet running hot than excessive power consumption?
I don't know. I just wanted to point out that a few comments on a website are too little to draw a conclusion from :)
 
As I said before, I've never had a problem taking things back or apologizing in public when I've been wrong. Aren't you in a bit of a hurry to wear your party hat?

But eating a virtual hat is so much more fun than apologizing in public!
Please don't apologize for a wrong prediction; this is just a forum :)

BTW, I am always in a hurry to wear my party hat, especially since I get to be wrong so many times in this forum...


Not in any variants announced yet; all announced so far are DX10.0 AFAIK, while future variants will be DX11.1. Neither Android nor iOS really needs more than DX10.0 for the time being. For anything Windows, DX11 is obviously more of a necessity in order to be competitive.

But will there be DX11.1 variants of Rogue? Is compliance with Windows APIs something IMG wants, now that Intel will use its own graphics IP for future mobile SoCs?
 
But eating a virtual hat is so much more fun than apologizing in public!
Please don't apologize for a wrong prediction; this is just a forum :)

BTW, I am always in a hurry to wear my party hat, especially since I get to be wrong so many times in this forum...

It's too early to wish anyone bon appétit yet; that's all I meant. You know, we old farts have the benefit of greater patience, albeit that's a bit of an oxymoron.

But will there be DX11.1 variants of Rogue? Is compliance with Windows APIs something IMG wants, now that Intel will use its own graphics IP for future mobile SoCs?

No idea what they'd want it for now that ST-E, amongst others, is out of the picture. Intel will use its GenX GPUs for tablet SoCs only, AFAIK; of course, Intel theoretically wouldn't need anything beyond DX10 for its smartphone SoCs either.
 
One candidate could be the Adreno 320, since it is OpenGL ES 3.0 compliant. On another note, I'm impressed with the Island demo; NV has known for years how to create pretty tech demos. Others should really take a cue from that, since tech demos can be technically interesting and pleasing to the eye at the same time.
Yes, we need more Ruby vs. Dawn again :).
Adreno 3xx won't really be a competitor, as Adreno 4xx is going to be in chips well before Logan appears. I have no idea, though, what features it's going to have.

And I'm wondering whether that's really 28nm. AnandTech figures it can't be 20nm if they got silicon back 3 weeks ago, which may be true.
 
Yes, we need more Ruby vs. Dawn again :).
Adreno 3xx won't really be a competitor, as Adreno 4xx is going to be in chips well before Logan appears. I have no idea, though, what features it's going to have.

Adreno 420 is DX11.1 according to QCOM's roadmap, if memory serves.

And I'm wondering whether that's really 28nm. AnandTech figures it can't be 20nm if they got silicon back 3 weeks ago, which may be true.
It could be many things above 20nm. One funky idea would be the GPU not being on 28LP but something else like HPM, for example. By the way, if it needs almost 1GHz to reach its peak GFLOPS rate, my estimate would be that it shouldn't need more than 200-250MHz to reach those 18 fps in GLB2.7.

If I take the current best-case scenario of GK110, I get roughly 23 GFLOPS/W for the Quadro K6000; for a worst-case scenario for Logan, my speculative math gives me around 75 GFLOPS/W, if not more in the end.
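For anyone who wants to check that math, here's a minimal sketch. The Quadro K6000 figures are public specs (2880 Kepler ALUs at roughly 900MHz, 225W board power); the Logan figures are speculative assumptions on my part (192 ALUs at ~950MHz for the near-1GHz peak mentioned above, and a guessed ~5W power budget):

```python
# Back-of-the-envelope GFLOPS/W comparison. The K6000 numbers are
# public specs; the Logan numbers are speculative placeholders.
def gflops(alus, clock_mhz, flops_per_clock=2):
    """Peak GFLOPS, counting an FMA as 2 FLOPs per ALU per clock."""
    return alus * flops_per_clock * clock_mhz / 1000.0

k6000_gflops = gflops(2880, 900)    # ~5184 GFLOPS (full GK110)
k6000_watts = 225.0                 # board power
print(f"Quadro K6000: {k6000_gflops / k6000_watts:.1f} GFLOPS/W")   # ~23.0

logan_gflops = gflops(192, 950)     # ~365 GFLOPS at an assumed clock
logan_watts = 5.0                   # assumed GPU power budget
print(f"Logan (guess): {logan_gflops / logan_watts:.1f} GFLOPS/W")  # ~73.0
```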
 
Is the Logan sample on 28nm or 20nm?

Anand seems to believe 28nm (not verified with anyone, just a belief).

NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples.

http://www.anandtech.com/show/7169/nvidia-demonstrates-logan-soc-mobile-kepler
Whereas a poster in the comments points out that TSMC has had 20nm risk production available since April 14, 2013.

While TSMC has four "flavors" of its 28nm process, there is one 20nm process, 20SoC. "20nm planar HKMG [high-k metal gate] technology has already passed risk production with a very high yield and we are preparing for a very steep ramp in two GIGAFABs™," Sun said.

Sun noted that 20SoC uses "second generation," gate-last HKMG technology and uses 64nm interconnect. Compared to the 28HPM process, it can offer a 20% speed improvement and 30% power reduction, in addition to a 1.9X density increase. Nearly 1,000 TSMC engineers are preparing for a "steep ramp" of this technology.

http://www.cadence.com/Community/bl...-20nm-16nm-finfet-and-3d-ic-technologies.aspx
If 20nm risk production has been available since April 14, 2013, and the Logan samples came back 3 weeks ago (around July 1st), that would leave about 11 weeks between 20nm risk production becoming available and the Logan sample being physically available.

Isn't 11 weeks more than enough time for that to happen?
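A quick sanity check of that 11-week figure, using the dates from the posts above:

```python
from datetime import date

# Dates taken from the posts above: TSMC 20nm risk production start,
# and the approximate date Logan silicon came back from the fab.
risk_start = date(2013, 4, 14)
silicon_back = date(2013, 7, 1)   # "around 3 weeks ago" per AnandTech

weeks = (silicon_back - risk_start).days / 7
print(f"{weeks:.1f} weeks")       # ~11.1 weeks
```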
 
"Risk production" generally starts a long time before you start seeing complex logic device doing "risk production"; they often start with more simple, easily repairable, structures like memory or FPGA's.
 
So the consensus here is that the Tegra 5 (Logan) sample is on 28nm.

Nvidia seems to announce new Tegras at CES in January, with availability 3-5 months later.

Will that time frame allow for 20nm production, or will Tegra 5 be on the trailing 28nm process?
 
Has Tegra 4 gotten design wins in devices which are likely to sell as well as the new Nexus 7?

Can they sustain development without being used in high-volume devices?

Does their PC card business carry the development of these SoCs or something?
 
Has Tegra 4 gotten design wins in devices which are likely to sell as well as the new Nexus 7?

Can they sustain development without being used in high-volume devices?

Does their PC card business carry the development of these SoCs or something?

Tegra hasn't turned a profit yet, so basically, yes. Of course, NVIDIA is betting that Tegra will eventually grow enough to be profitable.

But if it doesn't happen within a couple of generations, I guess they may reconsider. Bleeding money is only fun for so long.
 
You need a mature process for a low-power device, at least if you're trying to push the boundary. And expectations are distorted by Intel's lead.

As a point of comparison, take the AMD chips which do about the same thing as Logan but a category above in TDP: APU SoCs with a desktop GPU. Bobcat came out at the beginning of 2011, on 40nm. Kabini, on 28nm, just launched in 2013. Temash, dunno where it is, but it exists (the lowest-power version basically competes with Tegra 4, Logan, and Atoms old and new).
Tegra 4 itself was announced on January 6, 2013.
 