Nvidia Tegra


So you're absolutely sure the 3DS' SoC had already taped out when the console was paper-launched in March 2010?

BTW, I wonder if the childish/elitist mockery was really necessary.
 
metafor said:
That's not "a lot" of use cases in modern smartphones. At least, not yet. And even if so, the CPU in general is not taxed. The GPU and memory controller aren't going to push 800mW. Nowhere close.
Let's bring out the classic Silicon valley cliche and agree to disagree. ;)

It's not. Look at a modern smartphone design and tell me which one discriminates based on active power (so long as it's below say, 1W) of the SoC in lieu of performance, features or price. If power were the primary discriminant, Tegra 2 would be shunned.
Does a Tegra 2 really consume more than an equivalent Qualcomm or TI chip? (Charlie's opinion doesn't qualify.)

Hard to answer that one for now, because there's nothing on the market ready for production that has two cores. They have the market to themselves right now.

Cell phone makers are desperately trying to find an edge over Apple and they're looking at the SoC checklist to do so. But if there were an equivalent competitor, there's no doubt that the one with 20% lower power consumption would win.
 
Let's bring out the classic Silicon valley cliche and agree to disagree. ;)

I can speak for at least one manufacturer. Running GLBench, Neocore or Quadrant does not get you anywhere close to 800mW.

Does a Tegra 2 really consume more than an equivalent Qualcomm or TI chip? (Charlie's opinion doesn't qualify.)

Hard to answer that one for now, because there's nothing on the market ready for production that has two cores. They have the market to themselves right now.

Cell phone makers are desperately trying to find an edge over Apple and they're looking at the SoC checklist to do so. But if there were an equivalent competitor, there's no doubt that the one with 20% lower power consumption would win.

Would it? The Hummingbird variant of the A8 at 1GHz was quoted at 750mW by Intrinsity. That's about a 50% premium over the standard Cortex A8 from manufacturers such as TI and well beyond Scorpion's 65nm 1GHz power numbers. Yet you don't see Samsung throwing it out in favor of an OMAP.
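
To make the arithmetic explicit (a trivial sketch using only the figures quoted above, nothing measured):

```python
# Back out the implied standard Cortex A8 power from the quoted figures:
# Hummingbird at 1GHz was quoted at 750mW, described as roughly a 50%
# premium over a standard Cortex A8 implementation.
hummingbird_mw = 750
premium = 0.50

standard_a8_mw = hummingbird_mw / (1 + premium)
print(standard_a8_mw)  # 500.0 -> ~500mW implied for the standard A8
```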

Ultimately, I think OEMs ask the question of whether the power premium will be noticeable to the end-user. And when it comes to picking out SoCs, intrinsic power seems to be last on the list of priorities so long as it's not outrageous (under 1W).

Manufacturer loyalties (HTC, Apple, Samsung), software compatibility (WP7), feature sheets (dual core, 1 GHz) and media capability (1080p, etc.) all seem to be bigger priorities. And from an OEM's perspective, it makes sense.

Apple can make their iPhone last twice as long as a Samsung using basically the same SoC. Yes, if you were to benchmark it under a use-case where the SoC was taxed at 100% load for extended periods of time, both may come out the same, but the every-day user will never play games for 5+ hours straight.

In every-day use, the iPhone lasts longer than a Samsung, despite the same SoC. It just isn't that big of a factor.
 
In every-day use, the iPhone lasts longer than a Samsung, despite the same SoC. It just isn't that big of a factor.

Not that I disagree with your general sentiment; however, your example is probably not a great one, as the two SoCs are significantly different.

It has a different graphics core and different video encode and decode cores, it's widely assumed Apple isn't running it at 1GHz (probably 800MHz?), and I don't think Apple is clocking its SGX535 at the 200MHz that Samsung is clocking its SGX540.
 
I'd say only the CPU component of the SoCs is similar, and as tangey points out there are many differentiators, with the A4 packing more IMG IP than Hummingbird even though its GPU is slightly lower spec. The A5 will be a world away from Orion, and I can only imagine who will have the more successful SoC.
 
Does Tegra 2 3D use different silicon compared to Tegra 2?

http://www.brightsideofnews.com/new...-tegra-2-3d-in-january2c-tegra-3-by-fall.aspx

Nothing surprising considering the CPU. 3x faster graphics? :rolleyes:

Well, that 3x is probably a marketing number, but even if it's true I wouldn't be surprised. Tegra 2 trades blows with SGX 540 and Adreno 205 right now and is maybe approx. 10-20% faster depending on the benchmark.

With Samsung moving to Mali 400 we should see at least a doubling of performance compared to SGX 540. Apple is rumoured to be going for 4x their current performance with SGX 543 MP2. I'm not sure about Adreno 220 vs. 205 though. But Nvidia also had to aim for at least double the performance of their current part, lest they be left behind (they are already going to have the lowest graphics performance of all the dual-core SoCs; even TI with their SGX 540 will beat them).

NV claims Tegra3 started sampling in 4Q10 whereas Freescale implied i.MX6x wasn't even sampling yet, so yes, they do get to claim that. Although if it's true that the PSP2 is also quad-core, then that would have taped-out first. Since that's a proprietary solution it wouldn't really be comparable though.

Is TSMC 28 nm ready though? Or are we gonna have another 40nm fiasco? :rolleyes:

Edit: And I remember you mentioning that they were using a different version of 28nm too.

Not that I disagree with your general sentiment; however, your example is probably not a great one, as the two SoCs are significantly different.

It has a different graphics core and different video encode and decode cores, it's widely assumed Apple isn't running it at 1GHz (probably 800MHz?), and I don't think Apple is clocking its SGX535 at the 200MHz that Samsung is clocking its SGX540.

Not to forget that the Samsung phones use Super AMOLED screens, which suck more power on anything other than a black screen (e.g. in internet browsing there is usually a lot more white).
 
So you're absolutely sure the 3DS' SoC had already taped out when the console was paper-launched in March 2010?

See also Lazy8s' reply; there are pictures of a Nintendo dev-board marked "TEG2" on the internet. Plus there's quite a difference in overall complexity between the 3DS and "PSP2" SoCs.

BTW, I wonder if the childish\elitist mockery was really necessary.

I take B3D way too seriously to even deal with a line like that.
 
Does Tegra 2 3D use different silicon compared to Tegra 2?

Good question. It sounds to me rather like the same SoC with faster-clocked units. I wouldn't be surprised if the GPU block is also clocked, say, 20% higher, like the CPU.

Well, that 3x is probably a marketing number, but even if it's true I wouldn't be surprised. Tegra 2 trades blows with SGX 540 and Adreno 205 right now and is maybe approx. 10-20% faster depending on the benchmark.
That's what public benchmarks unfortunately show at the moment. But that's not the point.

With Samsung moving to Mali 400 we should see at least a doubling of performance compared to SGX 540. Apple is rumoured to be going for 4x their current performance with SGX 543 MP2. I'm not sure about Adreno 220 vs. 205 though. But Nvidia also had to aim for at least double the performance of their current part, lest they be left behind (they are already going to have the lowest graphics performance of all the dual-core SoCs; even TI with their SGX 540 will beat them).
I can't make up my mind as a layman whether the GPU ALUs in T3 will be hot-clocked or not. Hot-clocking obviously means ~half the unit count, and no hot-clock ~twice as many. In any case, expecting something in the MP2@200MHz performance range could well be realistic (but then again, I was way too optimistic for the T2 GPU too and expected twice the T1 units). My personal disagreement, as Arun knows, is that NV is in this case concentrating way too much on CPU power. In terms of marketing, a quad-core CPU will definitely give you advantages, but while I'm anything but against multi-core, I find it hard to believe that applications will scale from single to quad threading overnight. Besides, what are the chances that CPUs in the embedded space will scale beyond 4 cores anytime soon? After that, isn't there a possibility of changing the entire strategy?

And yes, the most obvious argument from someone like Arun will be that A9 cores are relatively small; but so are GPU ALUs, especially if they should be hot-clocked.
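
The hot-clock trade-off above can be made concrete with a toy throughput model (a sketch; the unit counts, clocks and ops-per-clock below are made-up illustrative numbers, not actual Tegra specs):

```python
# Toy peak-throughput model: throughput scales with unit count x ALU clock.
# All numbers here are illustrative assumptions, not real hardware specs.
def peak_gops(units, alu_clock_mhz, ops_per_unit_per_clock=2):
    """Peak ops/s (in GOPS) for a hypothetical ALU array."""
    return units * alu_clock_mhz * ops_per_unit_per_clock / 1000.0

# Non-hot-clocked: more units at the base clock.
base = peak_gops(units=8, alu_clock_mhz=300)
# Hot-clocked: half the units at twice the ALU clock -- same peak throughput;
# the real trade-off is in area, power and timing closure, not peak numbers.
hot = peak_gops(units=4, alu_clock_mhz=600)
assert base == hot
```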

Is TSMC 28 nm ready though? Or are we gonna have another 40nm fiasco? :rolleyes:

Edit: And I remember you mentioning that they were using a different version of 28nm too.
Tegra2 is using TSMC 40G afaik. They must have had a good reason for not picking something like LP instead, but yes, I can't help but think that the TSMC 40G problems might have affected T2 production too. I think it taped out somewhere in late '08.

However, even if T3 ends up several months later than currently projected, NV is still going to have a time advantage over their competitors when it comes to quad-core CPU SoCs.
 
Good question. It sounds to me rather like the same SoC with faster-clocked units. I wouldn't be surprised if the GPU block is also clocked, say, 20% higher, like the CPU.

That's what public benchmarks unfortunately show at the moment. But that's not the point.

OK, I got confirmation that it's the same silicon, just higher bins.

OK, you're right, I was comparing against the numbers for the Gtab on AnandTech. Different resolutions as well. Let's wait for the official numbers.

I can't make up my mind as a layman whether the GPU ALUs in T3 will be hot-clocked or not. Hot-clocking obviously means ~half the unit count, and no hot-clock ~twice as many. In any case, expecting something in the MP2@200MHz performance range could well be realistic (but then again, I was way too optimistic for the T2 GPU too and expected twice the T1 units). My personal disagreement, as Arun knows, is that NV is in this case concentrating way too much on CPU power. In terms of marketing, a quad-core CPU will definitely give you advantages, but while I'm anything but against multi-core, I find it hard to believe that applications will scale from single to quad threading overnight. Besides, what are the chances that CPUs in the embedded space will scale beyond 4 cores anytime soon? After that, isn't there a possibility of changing the entire strategy?

And yes, the most obvious argument from someone like Arun will be that A9 cores are relatively small; but so are GPU ALUs, especially if they should be hot-clocked.

Yeah, I agree with you there; they seem to be concentrating on CPU power a lot. I'm sure there are certain advantages to being the first to reach quad core, but as you said, it remains to be seen how good adoption is and whether we will have any applications/games which actually make use of quad cores. But the fact that they are going to be the first to launch quad cores means their execution is good compared to the other SoC manufacturers. The earlier they get the product out, the better the chances of adoption.

Tegra2 is using TSMC 40G afaik. They must have had a good reason for not picking something like LP instead, but yes, I can't help but think that the TSMC 40G problems might have affected T2 production too. I think it taped out somewhere in late '08.

However, even if T3 ends up several months later than currently projected, NV is still going to have a time advantage over their competitors when it comes to quad-core CPU SoCs.

I'm sure you mean late '09 and not '08 :smile: And yeah, I suspect it affected their production. I think Arun mentioned that they are using 28LPG or something for Tegra 3.

I heard they might just demo Tegra 3 at MWC if all goes well. They haven't even got first silicon; it's expected back from TSMC late this week, so if they can get it running by then they'll demo it.
 
Tegra2 is using TSMC 40G afaik. They must have had a good reason for not picking something like LP instead, but yes, I can't help but think that the TSMC 40G problems might have affected T2 production too. I think it taped out somewhere in late '08.

IIRC, ARM only offers the A9 hard-IP in 40G. They have two versions in 40G, one using mostly HVT cells targeted at 1GHz and one using more LVT cells targeted at 2GHz.

I imagine due to their aggressive push for time-to-market, nVidia went for the hard-macro solution from ARM rather than do the synthesis and back-end work themselves.

Could also explain why all the Tegra 2 phones announced are packing 1900mAh batteries.
 
I'm sure you mean late '09 and not '08 :smile: And yeah, I suspect it affected their production. I think Arun mentioned that they are using 28LPG or something for Tegra 3.
No, he did mean late 08 ;) They actually had early samples back in the lab during Mobile World Congress 2009 (but obviously not at the show), and then they did a respin and started sampling to customers in July 2009 iirc. As for Tegra 2 3D, it could be a derivative with very minor changes ala APX 2600 or it might be the exact same chip. Not sure it really matters.

On process: Tegra 2 is 40LPG and Tegra 3 is 28LPG (aka 28LPT). I know that with 100% confidence from multiple sources.

I heard they might just demo Tegra 3 at MWC if all goes well. They haven't even got first silicon; it's expected back from TSMC late this week, so if they can get it running by then they'll demo it.
They were going to tape-out in mid-2010, so I assumed the semiaccurate tape-out date was correct (even if Charlie's Tegra sources are pretty awful *cough* Rayfield *cough*). That presentation slide also clearly says they are sampling it in Q4 2010, and AFAIK that slide is very recent so that means it has definitively already started sampling.

metafor: That's rather unlikely given that ARM finished work on that macro in late 2009, and NVIDIA taped-out Tegra 2 in Q4 2008. Unless NVIDIA made the time machine and ARM made the hard macro, but it'd be pretty lame if that was the best use of a time machine they could think of :)

And while I'm not convinced either that NV's Tegra 2 A9 implementation is very power efficient, it seems to me that very large batteries are something you want anyway in an ultra-high-end phone as long as you can maintain a very thin profile (which the Motorola Atrix does) so I don't think there is any correlation here.
 
That's rather unlikely given that ARM finished work on that macro in late 2009, and NVIDIA taped-out Tegra 2 in Q4 2008. Unless NVIDIA made the time machine and ARM made the hard macro, but it'd be pretty lame if that was the best use of a time machine they could think of :)

First tape-outs aren't really an indicator of a fully fleshed-out core design; between commercial release by ARM and initial silicon from one of the leading vendors, there are many, many initial revs of the A9 that could've gone in. I'll trust it if your sources say Tegra 2 is on 40LPG, which means nVidia did the back-end and synthesis work themselves as, IIRC, ARM doesn't provide a hard-macro for anything other than 40G.
 
IIRC, ARM only offers the A9 hard-IP in 40G. They have two versions in 40G, one using mostly HVT cells targeted at 1GHz and one using more LVT cells targeted at 2GHz.

I imagine due to their aggressive push for time-to-market, nVidia went for the hard-macro solution from ARM rather than do the synthesis and back-end work themselves.

Could also explain why all the Tegra 2 phones announced are packing 1900mAh batteries.

The Optimus 2X has a 1500mAh battery; only the Atrix has a 1930mAh one.

No, he did mean late 08 ;) They actually had early samples back in the lab during Mobile World Congress 2009 (but obviously not at the show), and then they did a respin and started sampling to customers in July 2009 iirc. As for Tegra 2 3D, it could be a derivative with very minor changes ala APX 2600 or it might be the exact same chip. Not sure it really matters.

Really? They started sampling to customers in July 2009 and we're getting end products 6 quarters later?

They were going to tape-out in mid-2010, so I assumed the semiaccurate tape-out date was correct (even if Charlie's Tegra sources are pretty awful *cough* Rayfield *cough*). That presentation slide also clearly says they are sampling it in Q4 2010, and AFAIK that slide is very recent so that means it has definitively already started sampling.

Well, I've got first-hand info that they haven't yet received first silicon back from the fab. That slide also mentions Tegra 2 phones in Fall 2010 and we know how that turned out :D

metafor: That's rather unlikely given that ARM finished work on that macro in late 2009, and NVIDIA taped-out Tegra 2 in Q4 2008. Unless NVIDIA made the time machine and ARM made the hard macro, but it'd be pretty lame if that was the best use of a time machine they could think of :)

Haven't ARM and GF announced that they have a hard macro A9 ready to go on 28nm btw?

And while I'm not convinced either that NV's Tegra 2 A9 implementation is very power efficient, it seems to me that very large batteries are something you want anyway in an ultra-high-end phone as long as you can maintain a very thin profile (which the Motorola Atrix does) so I don't think there is any correlation here.

While the Atrix is certainly not bulky (10.8mm), it's not as thin as the iPhone (9.3mm) or the rumoured 9mm thickness of the Galaxy S2. But I agree with you there. I don't care if a phone is 1mm thicker, or even 2mm, as long as they put a nice big battery in there. Current smartphones barely manage a day with moderate use. Android phones especially need to improve battery life. But marketing loves features like "world's thinnest phone" or, in the case of the Xperia Arc, "world's thinnest phone section" :rolleyes:
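
For what it's worth, the battery-size point can be put in rough numbers (a back-of-envelope sketch; the 3.7V nominal cell voltage and the 500mW average draw are assumptions, not measurements for any of these phones):

```python
# Back-of-envelope runtime from battery capacity and average system draw.
# Assumes a nominal 3.7V Li-ion cell; the 500mW average draw is made up.
def runtime_hours(capacity_mah, avg_draw_mw, cell_voltage=3.7):
    energy_mwh = capacity_mah * cell_voltage  # stored energy in mWh
    return energy_mwh / avg_draw_mw

print(round(runtime_hours(1500, 500), 1))  # Optimus 2X-sized battery: 11.1
print(round(runtime_hours(1930, 500), 1))  # Atrix-sized battery: 14.3
```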
 
Really? They started sampling to customers in July 2009 and we're getting end products 6 quarters later?
Welcome to the handheld industry. Snapdragon started sampling in late 2007 and the Toshiba TG01 was released in mid-2009, with the vast majority of OEMs only shipping in late 2009/early 2010. The OMAP3430 taped-out in August 2006 (not sure when it started sampling) and the first OMAP3 phone was (iirc) the Samsung Omnia HD in May 2009. Cycles are shortening, but we're not quite there yet.

Actually there is some ambiguity about AP20 vs T20 sampling dates. There have been some indications that T20 started sampling in July 2009 and AP20 in Q4 2010, but that could be completely false so don't take my word on it.

Well, I've got first-hand info that they haven't yet received first silicon back from the fab. That slide also mentions Tegra 2 phones in Fall 2010 and we know how that turned out :D
Errr... Toshiba AC100? Turned out pretty well on the hardware side, I think. Not so much in terms of software and most partners actually delivering end-products.
 
Errr... Toshiba AC100? Turned out pretty well on the hardware side, I think. Not so much in terms of software and most partners actually delivering end-products.

If you're talking about the Folio 100, the internal hardware may be as powerful as you can get (or was, back in October/November when it went on sale here in Europe), but the external hardware is really bad.

The casing is made of really low-quality plastic (the whole thing bends with little effort), the screen quality is horrible (in terms of contrast, viewing angles and color accuracy alike), and the touchscreen is inaccurate... Not to mention the software was so rushed that it didn't even have multi-touch support.

It was incredible to see how people seemed to cherish and long for iPads and Galaxy Tabs in MediaMarkt and then looked at the much-more-powerful-inside Folio 100 as if it were "just another Chinese fake".


However, I do know that people at xda-developers have done a wonderful job updating and tweaking/hacking the Folio's firmware, and the device now lives up to its hardware. The bad screen and casing persist, though.
 
http://www.anandtech.com/show/4144/...gra-2-review-the-first-dual-core-smartphone/4

Well the entire article is a very interesting read, but the GPU blocks being clocked at 300 and 333MHz respectively explain the T2 3D scores a whole lot better than the 240MHz I assumed so far.

Anand's conclusions on the benchmark page make sense considering the results he's getting today, but I'd be very surprised if NVIDIA has unoptimized drivers for something like Q3a. On the contrary, I'd suggest that SGX530/540 need some additional driver love from IMG.
 
Well the entire article is a very interesting read, but the GPU blocks being clocked at 300 and 333MHz respectively explain the T2 3D scores a whole lot better than the 240MHz I assumed so far.
And it just so happens that 300/333/400MHz are the exact maximum memory frequencies for AP20/T20/T25 iirc. Anand's article is very nice, but there are tons of minor technical mistakes, so you'll have to excuse me if I don't quite believe this just yet.
 