Intel's smartphone platforms

Regarding Windows on ARM, it's basically insurance against Intel fucking up. It's not just for tablets but also for servers and desktops if needed.
 
I've heard rumblings that Intel isn't planning more than 15M Atom SoCs next year. I don't know what the real number is, but it's not impossible that Intel could be volume-constrained in this market. Supporting ARM as well could purely be a play to grab higher volume from more vendors.

So far the Surface RT is the best-selling WinRT/Win 8 tablet. Of course this is partially due to how Microsoft controls OS pricing, and to how much extra advertising it gets. The point remains that MS wants to keep its own tablets in a strong position, so the question is: do they use ARM or Atom for the lower-end model? They do hold an ARM license, meaning they could be planning their own SoC or even CPU design.
 
Sure, but there is a major advantage in going with x86 for Windows 8: it gives you legacy applications. What advantage would Samsung have in using an x86 processor for Android?

Many things. Cedar Trail is still basically a 5-year-old architecture with minor tweaks. Slightly better GPU, etc. It's still on a 32 nm process which is not low power.

Krait is significantly newer on a 28 nm LP process.

Yet despite that they are roughly comparable in both performance and power.

The A15 is faster, but also far more power hungry, using up to 4x the power but rarely delivering 4x the performance. And to achieve the 4W TDP in the AnandTech test it has to throttle the CPU if both it and the GPU are being used. In that scenario there's no reason Haswell won't be competitive in performance and power. So Atom-based SoCs obviously won't be competing with the A15.

The only barrier then comes down to price, assuming that Intel with a newer architecture on a low-power process doesn't outperform the competition; at the very least they should remain competitive. As for price, Intel could certainly price it competitively relative to performance. They did it back when AMD was competitive in performance, and I see no reason why they wouldn't do the same in this market.

In other words, why wouldn't someone choose an Intel chip over an ARM chip for an Android device if the price and/or performance were right? Battery life is already competitive, so it's just down to price and performance at this point.

There are only two real disadvantages that Intel currently has: a weak GPU, which can easily be addressed by licensing a newer GPU core or developing one in house, and a relatively weak high-end baseband, at least compared to Qualcomm, which is potentially harder to address.

Regards,
SB
 
Be it Apple with Swift or Qualcomm with Krait, I don't expect major improvements until they jump to another lithography node (as far as CPU performance is concerned).

There have been plenty of hints that Qualcomm will come out with a revision of Krait that brings a 10% improvement in IPC, along with a slight bump in clock speed. We'll probably hear more about that by MWC 2013, if not earlier. It wouldn't surprise me if Apple tweaked their core whenever they see fit to do so; that's one of the advantages of having your own core design.
 
Many things. Cedar Trail is still basically a 5-year-old architecture with minor tweaks. Slightly better GPU, etc. It's still on a 32 nm process which is not low power.

We don't really know what has or hasn't been improved in the move from Bonnell to Saltwell, except that the changes to performance-sensitive areas were minor. But that doesn't mean the layout hasn't been better optimized for the process.

And what exactly is not low power about Intel's 32nm SoC process? They were already differentiating the SoC process at 45nm; one takeaway was that it offered much lower leakage for 7% lower peak performance. Sounds like an LP-style process to me. There's a reason why the 32nm Atom chips came out so much later than the 32nm Core chips.

Krait is significantly newer on a 28 nm LP process.

Yet despite that they are roughly comparable in both performance and power.

Not in everything. Intel is seeing its biggest advantages in JavaScript, as usual. Whether that advantage continues to shrink over time will be interesting to watch. Anyone want to take bets?

The A15 is faster, but also far more power hungry, using up to 4x the power but rarely delivering 4x the performance.

Of course it wouldn't be. I feel like I have to repeat this everywhere I go: perf/W is not linear. No one is bothering to test the Cortex-A15 at lower maximum clock speeds to normalize performance and get an idea of the scaling. We also only have one example, of one particular implementation on one particular process.
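To make that non-linearity concrete, here's a toy Python sketch built on the classic dynamic-power relation P ≈ C·V²·f. The clock speeds and voltages below are made-up illustrative numbers, not measurements of any real core:

```python
# Toy model of why perf/W is not linear in clock speed.
# Dynamic power roughly follows P = C * V^2 * f, while performance
# roughly follows f, so perf/W collapses to 1 / (C * V^2):
# running slower at a lower voltage wins on efficiency.

def perf_per_watt(freq_ghz, voltage, capacitance=1.0):
    """Performance ~ frequency; power ~ C * V^2 * f (toy model)."""
    power = capacitance * voltage ** 2 * freq_ghz
    return freq_ghz / power

# Assumed operating points: 1.1 V needed at 1.7 GHz, 0.9 V at 1.0 GHz.
high_clock = perf_per_watt(1.7, 1.1)
low_clock = perf_per_watt(1.0, 0.9)

print(f"perf/W at 1.7 GHz: {high_clock:.3f}")
print(f"perf/W at 1.0 GHz: {low_clock:.3f}")
print(f"efficiency ratio:  {low_clock / high_clock:.2f}x at the lower clock")
```

The efficiency gap comes out to (1.1/0.9)², roughly 1.5x, even though performance only dropped by the clock ratio. That's why testing a core only at its peak clock tells you very little about how it would do when normalized for performance.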

And to achieve the 4W TDP in the AnandTech test it has to throttle the CPU if both it and the GPU are being used. In that scenario there's no reason Haswell won't be competitive in performance and power. So Atom-based SoCs obviously won't be competing with the A15.

This part is just nonsense spread by AnandTech at Intel's behest (sorry, Intel's "suggestion"). Using peak GPU and peak CPU at the same time isn't a normal scenario at all, as the game test shows. It's totally unfair to hold it against Exynos 5 for giving the GPU a big peak power budget that it can't easily sustain while the CPU is at max. Especially when you're not even saying "Exynos 5" but "the Cortex-A15."

The worst part is when they say that Exynos 5 would like to use 8W in unrealistic peak scenarios, and that Haswell has been shown to do 8W under typical scenarios, so hey, they're basically the same thing. Or do you really think that the lowest-bin Haswell is going to use 8W while running both the CPU and GPU at the highest turbo bins? No, of course not. It's going to have to throttle seriously to reach that. NO ONE sets TDP based on peak CPU plus peak GPU.

That, and the 8W figure is probably not going to be the TDP for Haswell ULV. Wasn't it the typical number for some workload using a 15W chip? At any rate, I very much doubt we're going to be talking about an 8W Exynos 5250 vs an 8W Haswell, and the latter will probably need more cooling than the former. Not to mention price, of course.
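To sketch why TDP isn't set at peak CPU plus peak GPU, here's a toy power-budget model. All the wattage figures are assumptions for illustration, not published numbers for Exynos 5 or any other chip; the point is only that a TDP below the sum of block peaks forces the CPU to throttle as the GPU ramps up:

```python
# Hypothetical per-block peak figures (not real measurements). Note that
# CPU_PEAK + GPU_PEAK exceeds TDP: every real SoC is designed this way,
# which is exactly why concurrent peak CPU + peak GPU is an unfair test.

TDP = 4.0        # watts the platform can sustain (assumed)
CPU_PEAK = 3.0   # watts, CPU cluster at max clock (assumed)
GPU_PEAK = 2.5   # watts, GPU at max clock (assumed)

def throttled_cpu_budget(gpu_demand):
    """Give the GPU what it asks for; the CPU gets whatever is left."""
    return max(0.0, min(CPU_PEAK, TDP - gpu_demand))

for gpu in (0.0, 1.0, GPU_PEAK):
    cpu = throttled_cpu_budget(gpu)
    print(f"GPU at {gpu:.1f} W -> CPU capped at {cpu:.1f} W "
          f"({cpu / CPU_PEAK:.0%} of its peak)")
```

With these assumed numbers the CPU keeps its full budget under light GPU load and only gets squeezed when the GPU approaches its own peak, which is the normal, expected behavior the AnandTech framing treats as a flaw.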
 
Intel is not matching ARM prices with their 5-year-old Atom on a mature process, but they will start doing that with Haswell/Silvermont? Yeah, OK...

Also, looking at the leaked roadmap, Intel is promising 2x better performance with less power draw, but most of it seems to come from the clock frequency of 2.6+ GHz. No wonder it's not coming until Q1 2014.
 
Also, looking at the leaked roadmap, Intel is promising 2x better performance with less power draw, but most of it seems to come from the clock frequency of 2.6+ GHz. No wonder it's not coming until Q1 2014.

Lower power draw running a more complex core at 2.7GHz vs 1.8GHz certainly has to be seen to be believed. I know Intel's 22nm promises power-consumption miracles, but those are mostly realized at the lower end of the voltage curve (they are, for instance, not really realized at the upper frequencies of IB vs SB). Will 2.7GHz Silvermont still be playing at very low voltages? Lower than Haswell at high clocks, okay, but especially low overall?

I was surprised to see such a high clock-speed target, and frankly, I don't think we'll see phones or even tablets reach those clocks, not even on turbo. It seems to me that Intel is strengthening its investment in Atom for netbooks and nettops, which I find surprising. Given the decline of those markets, the era of $40 Celerons and Trinity chips, the greatly increased emphasis on smartphones and tablets, and Intel outright admitting that Cedar Trail was delayed due to low demand, I'm really surprised to see this and not the opposite. Although it could really be the micro-server market driving things instead.

I wonder what the IPC penalty is for supporting 2.7GHz with Silvermont. Contrary to what some think, you don't get to increase frequency without some IPC penalty (obviously not as large as the gain from the frequency, and you can add more reordering hardware to hide some of it). I get the feeling that Cortex-A15's frequency goal of 2.5GHz in the highest-end implementations may have hurt both IPC and power consumption at the lower frequencies we currently see.

Then again, maybe the gap from 2GHz to 2.7GHz isn't that big given the process bump and Intel is just much more talented at boosting frequency than those implementing ARM cores.
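The trade-off above can be sketched with the identity performance = IPC × frequency. The 7% IPC penalty and the clock figures below are assumed numbers purely for illustration, not anything known about Silvermont:

```python
# Toy model: performance = IPC * frequency.
# A core tuned for higher clocks (deeper pipeline, longer mispredict
# penalty) usually gives back some IPC, so the net speedup is smaller
# than the raw clock ratio. The 7% penalty here is purely assumed.

def performance(ipc, freq_ghz):
    """Billions of instructions per second in this toy model."""
    return ipc * freq_ghz

baseline = performance(1.00, 2.0)    # hypothetical 2.0 GHz core
clocked_up = performance(0.93, 2.7)  # 2.7 GHz variant, 7% IPC penalty (assumed)

print(f"clock ratio: {2.7 / 2.0:.2f}x")
print(f"perf ratio:  {clocked_up / baseline:.2f}x")
```

A 35% clock bump nets only about 26% more performance under these assumptions, which is the sense in which chasing frequency can quietly eat into the headline gain.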
 
Intel is not matching ARM prices with their 5-year-old Atom on a mature process, but they will start doing that with Haswell/Silvermont? Yeah, OK...

Also, looking at the leaked roadmap, Intel is promising 2x better performance with less power draw, but most of it seems to come from the clock frequency of 2.6+ GHz. No wonder it's not coming until Q1 2014.

LOL, the promised perf/watt is just another one of these typical Intel FUD lies, just like the fabled 10GHz Tejas CPU. It's the same thing over and over again; I am really tired of this Monsanto of the CPU business.
Intel does not see or understand that its biggest problem is not on the technical side of things. It's their self-conception that the monopoly tax they collect in the x86 arena is legitimately theirs, and should be extended to all microprocessor-equipped systems sold. But manufacturers know people can spend every dime only once, and the bucks that go to Intel will never find their way to Apple, Samsung, whoever. Intel taketh away, reducing the manufacturers' margins. Why should manufacturers support Intel's business model, which relies on absurd monopolistic profits?
Wintel is a case of two interlocking monopolies that support each other - if one of these collapses, the other will follow shortly afterwards.
And Windows is already in severe danger everywhere apart from the desktop. And regarding legacy applications: I can gladly live without them, above all on a platform they were never designed to run on :D
 
Also, looking at the leaked roadmap, Intel is promising 2x better performance with less power draw, but most of it seems to come from the clock frequency of 2.6+ GHz. No wonder it's not coming until Q1 2014.
I'm surprised by that release date; I would have expected the new Atom at least 3 months earlier. Many reports back in July claimed Q4 2013 for Bay Trail.

Perhaps that 2014 date is only for tablet chips, and we'll see some smartphone-targeted SoCs earlier.

EDIT: I think I got it wrong: the Q1 2014 date is for Bay Trail for nettops. So perhaps the tablet version is still scheduled for Q4 2013.
 
Lower power draw running a more complex core at 2.7GHz vs 1.8GHz certainly has to be seen to be believed.
While 2.6 or 2.7GHz CPU would be bad ass to have in a phone, frankly I'm not convinced a phone, or even a tablet really needs that level of oomph. My iPhone4 is doing just fine, with its what, 800ish MHz single core ARM9 or whatever is packed in there. Methinks people are unlikely to run more demanding apps on their portable devices just because available CPU power increases. It hasn't been true for desktop computers that's for sure.

All it's most likely to accomplish is bloat software. ...Which ironically also is true for desktop computers... :rolleyes:
 
I'm surprised by that release date; I would have expected the new Atom at least 3 months earlier. Many reports back in July claimed Q4 2013 for Bay Trail.

Perhaps that 2014 date is only for tablet chips, and we'll see some smartphone-targeted SoCs earlier.

EDIT: I think I got it wrong: the Q1 2014 date is for Bay Trail for nettops. So perhaps the tablet version is still scheduled for Q4 2013.

Looks like 2014 for Bay Trail-T too: http://www.zdnet.com/leaked-intel-a...ls-next-gen-tablet-soc-processors-7000007597/

Merrifield may be released earlier, much like Medfield was released earlier than Clover Trail.

While 2.6 or 2.7GHz CPU would be bad ass to have in a phone, frankly I'm not convinced a phone, or even a tablet really needs that level of oomph. My iPhone4 is doing just fine, with its what, 800ish MHz single core ARM9 or whatever is packed in there. Methinks people are unlikely to run more demanding apps on their portable devices just because available CPU power increases. It hasn't been true for desktop computers that's for sure.

All it's most likely to accomplish is bloat software. ...Which ironically also is true for desktop computers... :rolleyes:

Cortex-A9, please don't call it ARM9 >_> While 800MHz may have been good enough for you, who are you to really say what's good for everyone else? At the very least everyone making these tablets disagrees.
 
While 2.6 or 2.7GHz CPU would be bad ass to have in a phone, frankly I'm not convinced a phone, or even a tablet really needs that level of oomph. My iPhone4 is doing just fine, with its what, 800ish MHz single core ARM9 or whatever is packed in there. Methinks people are unlikely to run more demanding apps on their portable devices just because available CPU power increases. It hasn't been true for desktop computers that's for sure.

A single-core Cortex-A8 is fine if that tiny 3.5" screen keeps the user from viewing full web pages with lots of interactivity and embedded HD video. Games probably suck too, at least according to today's standards.
 
Cortex-A9, please don't call it ARM9 >_>
WHATEVER. :p

While 800MHz may have been good enough for you, who are you to really say what's good for everyone else?
Looking at most apps, they don't really need more. Generally most iOS apps (that aren't games) are quite simple and could just as well have run in a web browser.

At the very least everyone making these tablets disagrees.
Much of the performance hoopla is the makers' artificial need to differentiate themselves from the competition, not any actual real-world performance limit when running software. Except for games, which are crippled by touch input and small screen sizes (not to mention memory and storage limits) anyway, putting a natural cap on complexity.

Games probably suck too, at least according to today's standards.
3D performance isn't fast, that's for sure, but who plays complex games on a phone anyway? Touch input is often fiddly and not seldom inaccurate.
 
Games probably suck too, at least according to today's standards.

I can confirm that an iPhone 4 (A8/SGX535) dies with anything over ~40 draw calls. It's simply a horrible phone from a performance standpoint. Grall must be a patient man, because the iPhone 4 was near the bottom of the list of phones I've tested. :p
 
The most advanced game I ever played on my phone is Final Fantasy IV. It runs OK. It's also roughly the most advanced type of game I'd ever WANT to play on a phone. I normally use the phone for things like banking, paying bills, checking bus or train timetables, playing music, checking the weather forecast, looking at maps, and looting gold deliveries every now and then from my World of Warcraft auctions. All that stuff and more runs just fine.
 
Sure, but there is a major advantage in going with x86 for Windows 8: it gives you legacy applications. What advantage would Samsung have in using an x86 processor for Android?
None. What I'm suggesting is that Samsung will eventually make a dockable (maybe even wirelessly) Windows superphone that can run x86 apps. It will be able to function as your primary personal computing device, whether you're interacting with a tablet, laptop, or the phone itself.

SoCs for such superphones will actually have use for a strong processor, and thus command enough of a price premium for Intel to care about. AFAIK, currently ARM SoC makers earn rather meager profits on their chips due to the intense competition.
 
Well, it is the figure in turbo mode; it may help to burst through some task that can't be conveniently multithreaded and then go back to sleep.
It may also be that Intel wants to make a single-core SoC and needs high single-thread performance.
Anyway, Q1 2014 is quite far off. Do you think they will offer a refresh of Atom in the meantime?
 
Well, it is the figure in turbo mode; it may help to burst through some task that can't be conveniently multithreaded and then go back to sleep.
It may also be that Intel wants to make a single-core SoC and needs high single-thread performance.
Anyway, Q1 2014 is quite far off. Do you think they will offer a refresh of Atom in the meantime?

The Z2580 is coming out soon, offering two Saltwell cores paired with an SGX544MP2. There have been some rumblings recently that said configuration will actually be an update to Clover Trail, and that the Medfield version will have an SGX544MP1.
 
The Z2580 is coming out soon, offering two Saltwell cores paired with an SGX544MP2. There have been some rumblings recently that said configuration will actually be an update to Clover Trail, and that the Medfield version will have an SGX544MP1.

Confirmed today at CES: the new smartphone chip, the Z2580, will have dual-core graphics.
Also confirmed: a new low-end chip with the same graphics core as Medfield (I suspect it's clocked slower) and support for dual SIM cards. The dual-SIM support suggests to me that this chip is aimed at the Chinese/Asian market, where dual-SIM handsets are commonplace.

Can we please keep the thread on the topic of Intel smartphone platforms? There are loads of other threads in which to talk about the relative performance of different iPhone generations.
 