Tegra 3 officially announced; in tablets by August, smartphones by Christmas

Some more phones with dual-core Scorpions:

HTC Sensation
HTC Evo 3D
Pantech Vega Racer
Xiaomi Phone
myTouch 4G

Seems to me like nVidia doesn't have close to 50% of design wins either. Maybe if you count canceled ones.


6 designs from tier 1 manufacturers is probably close. Sensation, Evo3D, Optimus3D, 2 Droids and the Galaxy S2.

I don't know that tier 3 Asian manufacturers are really included in the "high-end" list for nVidia, but that's a side argument.

myTouch 4G isn't dual-core.

In either case, I think the more significant thing to take away from this is its timing in conjunction with the release date of Ice Cream Sandwich. "October or November" suggests that there still isn't fully working T3 hardware yet to finalize the release date and that ICS's tablet partner is set to be T3.

I thought Google was a bit conservative with the rumored use of an OMAP4 for their ICS phone reference design but it does help one avoid these kinds of schedule-slip issues.
 
6 designs from tier 1 manufacturers is probably close. Sensation, Evo3D, Optimus3D, 2 Droids and the Galaxy S2.

I don't know that tier 3 Asian manufacturers are really included in the "high-end" list for nVidia, but that's a side argument.

myTouch 4G isn't dual-core.

I don't see how it's even remotely reasonable to not consider a phone high end on the basis of being "tier 3." I meant myTouch 4G Slide - didn't see Ailuros already got it. Indeed it isn't the same as the myTouch 4G, but it is dual core.

In either case, I think the more significant thing to take away from this is its timing in conjunction with the release date of Ice Cream Sandwich. "October or November" suggests that there still isn't fully working T3 hardware yet to finalize the release date and that ICS's tablet partner is set to be T3.

I thought Google was a bit conservative with the rumored use of an OMAP4 for their ICS phone reference design but it does help one avoid these kinds of schedule-slip issues.

I doubt Google would use the same vendor for a reference two times in a row unless the competition really sucked. I don't think Tegra 3 was ever a serious consideration. I'd say it was about TI's turn (I won't be surprised if next time it's ST-Ericsson, if the U9500 is out in time).
 
I don't see how it's even remotely reasonable to not consider a phone high end on the basis of being "tier 3." I meant myTouch 4G Slide - didn't see Ailuros already got it. Indeed it isn't the same as the myTouch 4G, but it is dual core.

I don't think nVidia even considers the Asian market something they're gonna get into. Not that I don't think it is; simply that they likely don't count design wins there in their PR.

I doubt Google would use the same vendor for a reference two times in a row unless the competition really sucked. I don't think Tegra 3 was ever a serious consideration. I'd say it was about TI's turn (I won't be surprised if next time it's ST-Ericsson, if the U9500 is out in time).

I thought it was fairly established that the tablet reference for ICS would be T3. I could be wrong I suppose but with the release dates -- as well as the uncertainty of available final product -- it does seem like it will be a T3 ICS tablet.
 
I thought it was fairly established that the tablet reference for ICS would be T3. I could be wrong I suppose but with the release dates -- as well as the uncertainty of available final product -- it does seem like it will be a T3 ICS tablet.

There is a theory floating around that Google might go for more than one reference platform for ICS, but I'm not so sure anymore that it has any merit.
 
There is a theory floating around that Google might go for more than one reference platform for ICS, but I'm not so sure anymore that it has any merit.

I don't see why not. Google was reportedly allowing multiple vendors access to ICS, each submitting a prototype. That way, there isn't as much risk to them if silicon slips. OMAP4 may be the most conservative both in performance as well as schedule risk.

I see T3 outperforming OMAP4 by a significant enough degree (though it'll likely be less power efficient) that it's worth it to use a different platform for tablets.
 
I don't see why not. Google was reportedly allowing multiple vendors access to ICS, each submitting a prototype. That way, there isn't as much risk to them if silicon slips. OMAP4 may be the most conservative both in performance as well as schedule risk.

Point taken.

I see T3 outperforming OMAP4 by a significant enough degree (though it'll likely be less power efficient) that it's worth it to use a different platform for tablets.

OMAP4460 will be appearing in devices soon and its 2*A9 are clocked at 1.5GHz, a frequency that T3's quad-core A9s are supposed to run at too. It'll then come down to how often the quad-core CPU of the latter will be utilized to an extent that makes a significant difference compared to a same-clocked dual-core A9, whether and how much the single-channel memory and its possible bandwidth restriction will pose a bottleneck, and how the ULP GF in T3 compares to a 384MHz SGX540.

Currently, at least in GL Benchmark, a 200MHz SGX540 seems to roughly break even with the 300MHz AP20 Tegra 2 smartphones, and I don't think they even contain the latest drivers, which increased iPad 2 performance by about 30%:

http://www.glbenchmark.com/result.j...3&os=4&version=all&certified_only=1&brand=all

OT: does anyone know if the Samsung GT-I9220 containing an Exynos SoC is the long fabled Nexus Prime, Galaxy Nexus or whatever it's going to be called after all?
 
OT: does anyone know if the Samsung GT-I9220 containing an Exynos SoC is the long fabled Nexus Prime, Galaxy Nexus or whatever it's going to be called after all?

The most recent report I've seen (GSMArena), seems to think that GT-I9250 is the Nexus Prime:

http://www.gsmarena.com/support_pag..._up_we_get_tips_from_an_insider-news-3105.php

Their report doesn't mention which SoC is to be used, but they claim a large screen (no less than 4.65") with a high resolution. The high-resolution part sounds a little hopeful to me, however.
 
OMAP4460 will be appearing in devices soon and its 2*A9 are clocked at 1.5GHz, a frequency that T3's quad-core A9s are supposed to run at too. It'll then come down to how often the quad-core CPU of the latter will be utilized to an extent that makes a significant difference compared to a same-clocked dual-core A9

In a tablet environment -- and assuming ICS will bring in more desktop-like functionality -- I suspect there would be marginal benefits to having 4 cores, though I suppose it isn't a huge deal.

whether and how much the single-channel memory and its possible bandwidth restriction will pose a bottleneck, and how the ULP GF in T3 compares to a 384MHz SGX540.

I can't speak for ULP GF, but wasn't it Samsung who claimed memory bandwidth was not a bottleneck for the 540 series GPU?

Currently, at least in GL Benchmark, a 200MHz SGX540 seems to roughly break even with the 300MHz AP20 Tegra 2 smartphones, and I don't think they even contain the latest drivers, which increased iPad 2 performance by about 30%:

http://www.glbenchmark.com/result.j...3&os=4&version=all&certified_only=1&brand=all

Let's say, ideally, they get a perfect scaling of +50% of their current throughput -- not realistic depending on the end resolution. Do you think nVidia could clock T3's GPU to 400+ MHz?
 
I can't speak for ULP GF, but wasn't it Samsung who claimed memory bandwidth was not a bottleneck for the 540 series GPU?

I'm not aware of that statement and what Samsung actually meant with it, but it could be they meant the TBDR side of things. Deferred renderers typically consume less bandwidth than other architectures.

Let's say, ideally, they get a perfect scaling of +50% of their current throughput -- not realistic depending on the end resolution. Do you think nVidia could clock T3's GPU to 400+ MHz?

Going from 200MHz to 384MHz, even with just a 15% performance improvement through the driver, is a rough 100% increase at least and not just 50%.
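
Just to make the arithmetic explicit, here's a rough back-of-the-envelope sketch (Python; the ~15% driver gain and linear clock scaling are assumptions, and it ignores any bandwidth limits):

[CODE]
# Clock scaling from 200MHz to 384MHz plus an assumed driver-side gain.
baseline_clock_mhz = 200   # SGX540 in the Galaxy S comparison
target_clock_mhz = 384     # SGX540 clock in OMAP4460
driver_gain = 1.15         # assumed ~15% improvement from a newer driver

clock_scale = target_clock_mhz / baseline_clock_mhz   # 1.92x from clock alone
combined = clock_scale * driver_gain                  # ~2.21x combined

print(f"clock scaling alone: +{(clock_scale - 1) * 100:.0f}%")
print(f"with driver gain:    +{(combined - 1) * 100:.0f}%  (well beyond +50%)")
[/CODE]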

Samsung i9000 Galaxy S (SGX540@200MHz) =
Egypt standard 800*480 = 26.10 fps
Egypt offscreen 1280*720 = 14.70 fps

LG P999 Optimus X2 (ULP GF@300MHz) =
Egypt standard 800*480 = 25.30 fps
Egypt offscreen 1280*720 = 14.50 fps

Yes, of course NV can clock the GPU in T3 beyond 400MHz. The AP20 ULP GF is at 300MHz and the T20 at 333MHz. Do you think they'll be able to clock the GPU at =/>600MHz under 40nm though?
 
Well, the ULP GF in T3 should be faster per clock than the ULP GF in T2...

As for hitting 600MHz, sure they could do it as long as heat and battery life are not a concern, so obviously they won't... Now, if the chips were going into a laptop...
 
I'm not aware of that statement and what Samsung actually meant with it, but it could be they meant the TBDR side of things. Deferred renderers typically consume less bandwidth than other architectures.



Going from 200MHz to 384MHz, even with just a 15% performance improvement through the driver, is a rough 100% increase at least and not just 50%.

Samsung i9000 Galaxy S (SGX540@200MHz) =
Egypt standard 800*480 = 26.10 fps
Egypt offscreen 1280*720 = 14.70 fps

LG P999 Optimus X2 (ULP GF@300MHz) =
Egypt standard 800*480 = 25.30 fps
Egypt offscreen 1280*720 = 14.50 fps

Yes, of course NV can clock the GPU in T3 beyond 400MHz. The AP20 ULP GF is at 300MHz and the T20 at 333MHz. Do you think they'll be able to clock the GPU at =/>600MHz under 40nm though?

Looking at the Optimus 3D (OMAP4430), a T3 at 400MHz -- again, assuming they get a 50% per-clock scaling with the additional cores -- should be a match for an SGX540 at 384MHz. If they clock it higher, they can surpass it in performance, assuming they aren't memory bottlenecked.

For a tablet part, they may clock it even higher simply due to the larger TDP and battery.
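
To spell out that reasoning with rough numbers, here's a hedged sketch (Python). The 50% per-clock uplift for T3's extra units is an assumption, and the baseline is the GLBenchmark observation above that a 300MHz Tegra 2 ULP GF roughly ties a 200MHz SGX540:

[CODE]
ulp_gf_baseline_mhz = 300   # Tegra 2 (AP20) ULP GF, ~ties a 200MHz SGX540 in Egypt
sgx540_baseline_mhz = 200   # Galaxy S SGX540

t3_gain = (400 / ulp_gf_baseline_mhz) * 1.5    # clock bump * assumed per-clock uplift
sgx540_gain = 384 / sgx540_baseline_mhz        # pure clock scaling

print(f"T3 @ 400MHz vs Tegra 2:       ~{t3_gain:.2f}x")
print(f"SGX540 @ 384MHz vs Galaxy S:  ~{sgx540_gain:.2f}x")
# Both land around 2x their (roughly equal) baselines, hence "should be a match" --
# provided neither ends up memory-bandwidth limited.
[/CODE]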
 
It looks like OMAP4460 products may be out well before Kal-El ones - the Archos G9 tablet is currently slated for release this month. With the OMAP4470 sampling about now and allegedly available in products in H1 2012, it might be Kal-El's real competition, and the ICS reference platform. I imagine that moving from the 4460 to the 4470 won't be the biggest problem software-wise.
 
Samsung's GT-I9220 was tested running Gingerbread 2.3.5, and it does appear to have an HD display at 1280x800.

The ALUs of Tegra 3 are supposedly improved, so graphics performance should be in the ballpark of Apple's A5. OMAP4470 should approach that, but I think the 4460 will be somewhat lower (yet more power efficient than Tegra 3).
 
Looking at the Optimus 3D (OMAP4430), a T3 at 400MHz -- again, assuming they get a 50% per-clock scaling with the additional cores -- should be a match for an SGX540 at 384MHz. If they clock it higher, they can surpass it in performance, assuming they aren't memory bottlenecked.

For a tablet part, they may clock it even higher simply due to the larger TDP and battery.

The difference being that the Galaxy S I used in my former comparison uses a newer driver which is in neither the Optimus 3D nor the LG 925. The Galaxy S driver is about 23% faster than the one you're seeing in the Optimus 3D (Samsung SPH-D700/SGX540@200MHz = 21.3 fps, Samsung i9000 Galaxy S/SGX540@200MHz = 26.1 fps), and the Optimus 3D's SGX540 is clocked at 307MHz, not 384MHz. The OMAP4460 should end up a tad below 50 fps, or >5k frames, in Egypt standard with the same driver as the Galaxy S (>20% lower than the Exynos Galaxy S2).
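
For reference, the projection in rough numbers (a sketch, assuming the same driver as the Galaxy S and near-linear clock scaling; real results should land a bit lower):

[CODE]
galaxy_s_egypt_fps = 26.1   # i9000, SGX540 @ 200MHz, newer driver
older_driver_fps = 21.3     # SPH-D700, same GPU and clock, older driver

driver_gain = galaxy_s_egypt_fps / older_driver_fps      # ~1.23, i.e. ~23%
omap4460_ideal = galaxy_s_egypt_fps * (384 / 200)        # ~50 fps upper bound

print(f"driver gain: ~{(driver_gain - 1) * 100:.0f}%")
print(f"OMAP4460 Egypt standard, ideal scaling: ~{omap4460_ideal:.1f} fps")
[/CODE]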

The ALUs of Tegra 3 are supposedly improved, so graphics performance should be in the ballpark of Apple's A5. OMAP4470 should approach that, but I think the 4460 will be somewhat lower (yet more power efficient than Tegra 3).

If NV manages, while just adding another Vec4 PS ALU at higher core frequencies, to end up in the ballpark of the A5 (8 Vec4+1 ALUs, 4 TMUs, 32 z/stencil), I'll tip my hat to NV and IMG's driver team deserves to get shot. Else I'd love to hear a halfway reasonable explanation of how you suddenly quadruple graphics performance with 50% more PS ALU lanes and a higher frequency, unless the latter ends up close to or over 1GHz.
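
To put a number on that: with only 1.5x the ALU lanes, a 4x overall gain would have to come almost entirely from clock. A quick sketch (the lane counts assume one extra Vec4 PS ALU on top of T2's Vec4 PS + Vec4 VS, and linear scaling):

[CODE]
t2_lanes = 8          # assumed: 1 Vec4 PS + 1 Vec4 VS in Tegra 2's ULP GF
t3_lanes = 12         # assumed: one additional Vec4 PS ALU (+50% lanes)
t2_clock_mhz = 333    # T20 (tablet) ULP GF clock

required_clock = 4.0 / (t3_lanes / t2_lanes) * t2_clock_mhz
print(f"clock needed for a 4x gain with 1.5x lanes: ~{required_clock:.0f} MHz")
# ~890MHz -- i.e. close to 1GHz, which is why a 4x claim looks implausible.
[/CODE]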

I think the Mali400MP4 in Exynos contains 4 Vec4 PS + 1 Vec2 VS ALUs at an estimated 275MHz, and it ends up at about half the performance of the iPad 2 in 720p.
 
I fully expect the A5 to hold a commanding lead in graphics over Tegra 3 (and everyone else), but the comparison will finally move past the SGX540 definitively into the range of the 543s/544s, Adreno 220s, GC800s, and Mali-400MP4.

The amount of time that will have to pass before someone else beats the 543MP2 will be truly impressive in such a fast-moving market. Processors would normally be knocked off the high end within five months as the next wave of mobile devices launches.
 
I fully expect the A5 to hold a commanding lead in graphics over Tegra 3 (and everyone else), but the comparison will finally move past the SGX540 definitively into the range of the 543s/544s, Adreno 220s, GC800s, and Mali-400MP4.

I still believe there's some untamed life lurking inside cores like the 540. I'm in no way expecting any wonders, but I wouldn't write it off just yet. What TI's OMAP4s actually need are more design wins.

The amount of time that will have to pass before someone else beats the 543MP2 will be truly impressive in such a fast-moving market. Processors would normally be knocked off the high end within five months as the next wave of mobile devices launches.

Apple won't be sitting idle either for their next SoC. One of the reasons it's not that easy to follow in Apple's footsteps is that not everyone has the luxury of dealing with such high volumes at the moment and having a >120mm2 die at 45nm without bleeding financially.
 
NVIDIA pre-announces "Grey" with integrated baseband:

X-Bit (rough summary): http://www.xbitlabs.com/news/mobile...uilt_in_3G_4G_Communication_Capabilities.html
Heise (more detailed but in German): http://www.heise.de/newsticker/meldung/Nvidia-Mit-Kal-El-in-den-Notebook-Markt-1338799.html
Original source (English audio): http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-EventDetails&EventId=4186691

I'm honestly surprised it's coming so early - the roadmap picture indicates it's lagging Wayne by only about one quarter. That also means it's not lagging Icera's 28nm baseband by much (tape-out planned for early Q4 2011 as of MWC11 iirc) - interestingly that's on 28HP whereas this will probably be on 28HPM, although it would be weirdly interesting and unexpected if NVIDIA used 28HP rather than 28HPM for the full chip. Not likely since there's no point and the overall architecture is probably a derivative of Kal-El though.

So I'd have to guess Wayne will tape-out in early Q1 2012 and Grey will tape-out in early Q2 2012, assuming neither gets delayed. And you'd probably want to add a slight delay to Grey for baseband certification (and the fact it's smartphone-only which takes longer to get to market than tablets). BTW, it turns out I was wrong about Tegra 2's tape-out date; it taped-out in early Q2 2009 rather than late 2008 as I often claimed. Oops!

Here's my random first guess for Grey specs: 2GHz dual-core A9 with NEON, 1MB L2, Kal-El GPU but clocked substantially higher, 1080p High Profile Decode, 1080p Baseline Encode, 32-bit LPDDR2 up to 1066MHz. 1 core/4 channel Icera DXP up to 100Mbps LTE (ICE8060 is 1 core/2channel up to 50Mbps, ICE9040 is 2 core/4 channel up to 150Mbps) is most likely although 1 core/2 channel 50Mbps isn't impossible either. Basically a slightly higher-end competitor to the MSM8930 either way. Let's see if I'm wrong on everything or not in a year or two :)
 
The difference being that the Galaxy S I used in my former comparison uses a newer driver which is in neither the Optimus 3D nor the LG 925. The Galaxy S driver is about 23% faster than the one you're seeing in the Optimus 3D (Samsung SPH-D700/SGX540@200MHz = 21.3 fps, Samsung i9000 Galaxy S/SGX540@200MHz = 26.1 fps), and the Optimus 3D's SGX540 is clocked at 307MHz, not 384MHz. The OMAP4460 should end up a tad below 50 fps, or >5k frames, in Egypt standard with the same driver as the Galaxy S (>20% lower than the Exynos Galaxy S2).

Interesting. Do you know whether the driver improvements span across all scenes? What does Pro show?

Not likely since there's no point and the overall architecture is probably a derivative of Kal-El though.

I thought Wayne was to be an A15 device?
 
I thought Wayne was to be an A15 device?

I've been pretty systematic in claiming Wayne is very likely based on a quad-core Cortex-A9 again but (unlike OMAP5 or Krait) on a 28nm High-K process, very likely TSMC 28HPM. And yes, that probably does imply a tape-out around late Q4 2011 (as also implied by the roadmap slide in that German article - if it's more than one year behind Kal-El, it can't be taping out so soon).

The only evidence against 4xA9 comes from Charlie ("NVIDIA WILL DIE, NO WAIT, THEY ARE ALREADY DEAD INSIDE!") and Theo Valich ("WAYNE IS OCTO[STRIKE]MOM[/STRIKE] CORE A15 AND CONSUMES NEGATIVE POWER"). Excuse me if I don't take them very seriously compared to all evidence to the contrary ;) (e.g. low performance improvement claims for Wayne and very high improvement claims for Logan).

I'm curious how high a Cortex-A9 could clock on 28HPM within a tablet TDP. Apparently there will be Kal-El SKUs with noticeably higher clock speeds (aka Kal-El+ still on 40nm) so assuming 1.8GHz on 40LPG, then 2.5GHz should be very reasonable on 28HPM. That could be reasonably competitive, especially in terms of marketing, versus a dual-core 2GHz A15.

BTW, talking of process tech, this GlobalFoundries press release implies the ST-Ericsson A9600 is on 28SLP (equivalent of TSMC 28HPL) rather than 28HPP (equivalent of TSMC 28HPM). So the A15 can (theoretically) hit 2GHz on 28LP SiON (OMAP5) and 2.5GHz on 28SLP High-K (A9600). I wonder how high it would clock on 28HP or 28HPM - TSMC mentioned 28HPM can handle 3GHz CPUs in their latest CC, but I'd be surprised if the A15 couldn't hit a lot more than that (>3.5GHz?) if you targeted traditional PC TDPs (not that anyone is going to). And that implies it's basically as fast (clockspeed-wise) as Bulldozer and could do even better on Intel's process. The more I think about it, the more I wonder if ARM went too far and should have stuck with a slightly shorter pipeline.

Oh and while we're talking process - metafor, any idea how different 28HP and 28HPM are in terms of synthesis? I assume you'd basically be forced to redo everything but I don't really know. One reason why I assume NV would wait longer to integrate the baseband is that ICE9040 is on 28HP and it doesn't make sense for NV to use that process for their application processors. The problem obviously is that Icera uses some structured custom and that'd take some time to redo (and would delay the 20nm generation as well). Then again maybe RDR on 28nm means they're doing much less custom than they used to...
 