Apple dumps Intel from laptop lines

I think there's a whole market of professionals who buy those i5/i7 notebooks for very legit reasons. And then there are, you know, gamers. I don't know why anyone would think the entire world is ready to stick to CPUs that deliver performance probably similar to a Coppermine P3.

Apple does not care who buys i5/i7 WINDOWS notebooks.

It's the tiny percentage of people who buy top-of-the-line MacBook Pros that's relevant to Apple.

What current common "heavy duty" program running on an Intel-based MacBook can't run on a future ARM CPU?

Hardcore gaming isn't very popular on MacBooks.
 
swaaye said:
I just find it exceptionally hard to believe that ARM has a magic advantage in the near future that will allow it to match competitors facing the same challenges and still maintain ultra low power status.
Who said anything about ultra-low-power status? I think if Nvidia or Apple could make something with 80-85% of the performance of Intel's latest and greatest at a fair bit less power consumption, and keep the profits for themselves, they would be more than happy. Of course, that is just my perspective... no idea what they are actually targeting.
 
Ok, newcomer to the thread. I want to ask a question about these two quotes:
Saying a future ARM notebook can't run Flash because an iPhone has trouble with it is like saying a Core i7 can't decode a 1080p Blu-ray because a P4 couldn't.
... and ...
What current common "heavy duty" program running on an Intel-based MacBook can't run on a future ARM CPU?

I see all this commentary about "future ARM cpu" and how powerful it is -- can anyone who is saying these things please point me to this future CPU? A current roadmap that shows the number of cores, the necessary clockspeed, and the necessary supporting technology (primarily memory bandwidth) that is going to fit in the power envelope that you're preaching?

I think we could simply turn this around another way: who is to say that Intel (or AMD) cannot create a sub-10-watt x86 SoC that equals or betters this "future ARM" in performance and power consumption?

The lowest power SB i3 platform today is more than an order of magnitude faster than the fastest ARM, and yet uses ~50% more power as a complete system. If you're projecting one entire order of magnitude increase in computing power with a 50% increase in power consumption for ARM, I must assume that you're going to allow Intel and AMD similar time to continue tweaking their own architectures -- right?

Because it certainly doesn't sound that way.
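The perf/watt comparison above can be sketched numerically. These are the post's own ballpark figures (an order of magnitude faster, ~50% more system power), treated as illustrative assumptions rather than measurements:

```python
# Rough perf/watt comparison using the post's ballpark figures
# (illustrative assumptions, not measurements).
arm_perf = 1.0    # fastest ARM, normalized
arm_power = 1.0   # complete-system power, normalized
i3_perf = 10.0    # "more than an order of magnitude faster"
i3_power = 1.5    # "~50% more power as a complete system"

i3_perf_per_watt = i3_perf / i3_power
advantage = i3_perf_per_watt / (arm_perf / arm_power)
print(round(advantage, 2))  # ~6.67x perf/watt advantage for the i3 platform
```

On those numbers the i3 platform wins on performance per watt even while drawing more absolute power, which is the point the post is making.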
 
It doesn't matter what ARM CPU comes out in 2013, as long as it can provide the processing power needed to run the software people actually use, while being cheap.

If Intel or AMD comes out with a comparable chip at a comparable price in 2013 then good for them. Apple will have more choices.
 
Albuquerque said:
I see all this commentary about "future ARM cpu" and how powerful it is -- can anyone who is saying these things please point me to this future CPU?
It is called Denver and is supposed to appear in Maxwell (Nvidia's flagship GPU in 2013). http://channel.hexus.net/content/item.php?item=28540&page=2

If they are going to put it in their flagship GPU and dedicate significant die area to it, I would hope it is going to have "flagship" performance; who knows, though...

Albuquerque said:
I think we could simply turn this around another way: who is to say that Intel (or AMD) cannot create a sub-10-watt x86 SoC that equals or betters this "future ARM" in performance and power consumption?
AFAIK, no one has said such a thing...

I certainly expect Intel and AMD to continue being quite successful at what they do anyway...
 
Well, all I see is a few folks who are hell-bent on a "future ARM CPU" being -- what was it, 25x to 30x faster than today -- in two years? Where has this performance been hiding all this time that allows them to deliver roughly 1.4 orders of magnitude more performance in the span of 30 months?

You're talking about octupling total transistor count if not more, and you're also talking about no less than a doubling of clock speed. So somehow we're going to power 8x more transistors at twice their current speed on perhaps two process shrinks from today? At the very best, that kind of chip would give you i3-esque performance at roughly the same power envelope.

It seems to me that you're hoping that ARM will leap three Moore's Law cycles in 30 months (reminder: transistor count doubles every 18 months per good ol' Moore), and at the same time, Intel will somehow stand still for that entire time. Let's not ignore the fact that Intel's fabs are pretty much a minimum of 12 months ahead of EVERYBODY right now, and have just recently stretched that by probably another 6 months with their new 22nm FinFET process...

You have presented nothing that makes me believe ARM will leap four and a half years of Moore's Law in the span of 30 months, nor have you presented anything that even remotely suggests that Intel cannot continue making lower and lower power x86 devices with more and more performance. By the time 2013 gets here, ARM-based laptops (if they exist, and they might) will still be relegated to netbook-class performance, and quite frankly, netbook-class price tags.
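The "four and a half years" figure follows directly from the post's own assumptions (8x the transistors, 18 months per doubling):

```python
import math

# The post's assumptions: 8x the transistors, with transistor count
# doubling every 18 months per the classic Moore's-law cadence.
transistor_multiple = 8
months_per_doubling = 18

doublings_needed = math.log2(transistor_multiple)       # 3 doublings
months_needed = doublings_needed * months_per_doubling  # 54 months
print(months_needed)  # 54.0 -- i.e. 4.5 years, vs. ~30 months until 2013
```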

This isn't something that Apple is going to bother with, because they're not in this business to compete for single-digit margins.
 
I agree with Albuquerque. What is this secret potion that ARM has up its sleeve that Intel won't have, again? From a perf-per-watt standpoint, it seems Intel is the leader both in absolute terms and in rate of change.
 
While that article makes a few good points, I still don't think perf is one of them. Besides, it has a few oversights of its own.

I beg to differ. If Apple didn't value performance, they would have already released Atom-based MacBooks. Yet they didn't, despite the fact that Atom CPUs are cheaper and consume less power (which means longer battery life, which Apple values a lot).
 
I agree with Albuquerque too; ARM/RISC advocates make it sound as if it's a breeze to beat Intel at its own game. Sandy Bridge is the overall best CPU in the world regardless of ISA, and by a comfortable margin. Ivy Bridge will only grow the gap while taking power consumption down.

On top of that, low-power but slow CPUs are nice, but depending on their usage they can end up consuming more energy than faster CPUs, since a faster chip finishes the task sooner and drops back to idle (there are quite a few reviews of Bobcat and other CPUs that show this).

I don't get all the concern about Intel; as more and more performance is demanded from smartphones and tablets, I see Intel's process advantage ending up being more relevant than whatever advantage the ARM ISA brings to the table.
 
Albuquerque said:
what was it, 25x to 30x faster than today -- and in two years

I didn't say 25x to 30x faster than today; I said 25x to 30x faster than the iPhone 4.

I'd expect a Denver core to offer ~2x the performance per clock of a Cortex-A8. The CPU in the iPhone 4 is clocked at 800 MHz, while a Denver core (in Maxwell) will probably run at at least 2.4 GHz. There is only a single core in the iPhone 4; there will probably be 4 to 8 in Maxwell. I leave it to you to do the math; surely you can get that right.
 
I didn't say 25x to 30x faster than today; I said 25x to 30x faster than the iPhone 4.

I'd expect a Denver core to offer ~2x the performance per clock of a Cortex-A8. The CPU in the iPhone 4 is clocked at 800 MHz, while a Denver core (in Maxwell) will probably run at at least 2.4 GHz. There is only a single core in the iPhone 4; there will probably be 4 to 8 in Maxwell. I leave it to you to do the math; surely you can get that right.

Do you seriously believe that it can maintain 2x the ILP while running at 3 times the frequency?
 
Let's put it another way. Do you think the current Core i7 is "25x to 30x faster" than the iPhone 4's CPU? Because a CPU like the one you described is very likely to be slower than a similarly clocked current Core i7 (since even 2x the ILP of a Cortex-A8 is very likely to fall short of a Core i7), not to mention that the Core i7 has a hugely better memory subsystem and hugely larger caches.
 
Could someone give me a quick overview of what mobile processors are out there at the moment from the various vendors? This isn't a market I've followed like the desktop, but it seems I need to start paying attention given the way the wind seems to be blowing. As far as I can tell, Intel have Atom, ARM have Cortex, AMD have something (Bobcat?), and Nvidia have something too, but I have no clue as to what.
 
@ pcchen, I was being slightly conservative with the estimates.

Let's put it another way... do you think the Phenom II competes with an i7?
 
I didn't say 25x to 30x faster than today; I said 25x to 30x faster than the iPhone 4.

I'd expect a Denver core to offer ~2x the performance per clock of a Cortex-A8. The CPU in the iPhone 4 is clocked at 800 MHz, while a Denver core (in Maxwell) will probably run at at least 2.4 GHz. There is only a single core in the iPhone 4; there will probably be 4 to 8 in Maxwell. I leave it to you to do the math; surely you can get that right.

According to your own words, I did the math exactly right, and you obviously ignored it because it's easier to sidestep the situation than tackle it directly.

I'm sure you'll balk, so let me remind you of my math, and you can now use your next reply to properly refute it:
You're talking about octupling total transistor count if not more, and you're also talking about no less than a doubling of clock speed. So somehow we're going to power 8x more transistors at twice their current speed on perhaps two process shrinks from today? At the very best, that kind of chip would give you i3-esque performance at roughly the same power envelope.

It seems to me that you're hoping that ARM will leap three Moore's Law cycles in 30 months (reminder: transistor count doubles every 18 months per good ol' Moore), and at the same time, Intel will somehow stand still for that entire time. Let's not ignore the fact that Intel's fabs are pretty much a minimum of 12 months ahead of EVERYBODY right now, and have just recently stretched that by probably another 6 months with their new 22nm FinFET process...

Please, elaborate on which math problem I got wrong, and then elaborate on why you think this is even plausible.
 
Also, since when does 4-8 cores mean much to general purpose computing? Once you get past dual core the benefits diminish rapidly for most usage. Or are we considering these ARM CPUs as competitors in massively parallel tasks?
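The diminishing-returns point is essentially Amdahl's law. A quick sketch (the 70% parallel fraction is an illustrative assumption, chosen to represent a typical desktop workload):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the workload and n is the core count.
def amdahl(p, n):
    return 1 / ((1 - p) + p / n)

p = 0.7  # illustrative assumption: 70% of the work parallelizes
for n in (1, 2, 4, 8):
    print(n, round(amdahl(p, n), 2))
# 1 -> 1.0, 2 -> 1.54, 4 -> 2.11, 8 -> 2.58
```

Going from 1 to 2 cores buys a 54% speedup here, while going from 4 to 8 buys only about 22% more, which is the "benefits diminish rapidly past dual core" argument in numbers.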

I think it would be fascinating if one of the big CPU companies put all of their knowledge to work at building a really impressive ARM core. Unfortunately all of the big companies probably have a vested interest in not doing that.
 
A quad core is way better than automagical "GPU acceleration will make it OK" (can we acknowledge that there are pretty much no OpenCL applications, and that they need to be written for a specific GPU's memory architecture to be worth existing?).

Nvidia's Denver is interesting and might change things; currently there's only a niche market for GPGPU, running on Nvidia.
But I doubt we'll see that on small laptops.
 