Next-Gen iPhone & iPhone Nano Speculation

Very smart of Apple: improve the memory subsystem/cache, use wider execution, and keep clocks low, thus leaving precious TDP for advanced graphics acceleration.

Ailuros has a point: if Apple moved to 28nm, Rogue, and LPDDR3, that would be one advanced, power-efficient chip.

Such a pity I don't/won't purchase Apple products... if only Nokia designed those Apple chips and stuck them in a Lumia!! ;)

Rogue GPU IP won't be exclusive to Apple's products. As for Nokia, I'd wait until ST Micro/ST-Ericsson executes with its NovaThor A9600. I hope we'll see it in products by H2 '13 at least.
 
Samsung's foundry side has been delivering for Apple and should be able to transition them to 28nm relatively smoothly, and both companies are mature enough to keep their mutually beneficial business interests separate from their unrelated patent/industry-politics warfare.

I can imagine Apple wanting the flexibility of multiple suppliers, yet I don't think they have a pressing need for it right now.

I thought the supposed deal with TSMC went poof because the latter couldn't guarantee Apple any priority, or am I wrong? Qualcomm and Apple were on the lookout for major foundry investments, and AFAIK Qualcomm went for a billion-dollar investment into UMC. For Apple I haven't heard anything, but Samsung's 28nm seems to be on track.
 
Wouldn't some kind of ARM TrustZone implementation make jailbreaking near impossible?
A single feature will never make a system completely secure. Every single piece must be secure. So as long as software has bugs, no matter whether you use TrustZone or not, people will hack the system.
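To make that concrete, here's a minimal hypothetical sketch in plain C (not any real iOS or TrustZone interface): even if the sensitive operation lives in the secure world behind a monitor call, an ordinary bug in the normal-world code that feeds it is still exploitable, so TrustZone by itself wouldn't stop a jailbreak.

#include <stdint.h>
#include <string.h>

/* Hypothetical illustration only, not a real TrustZone or iOS API.
   The "secure" operation sits behind a secure-monitor call; the bug
   lives in ordinary normal-world code, which TrustZone does nothing
   about. */

/* Pretend this traps into the secure world (e.g. via an SMC instruction). */
extern int secure_world_unlock(const char *request, uint32_t len);

int handle_unlock_request(const char *msg, uint32_t msg_len)
{
    char buf[64];

    /* Classic normal-world bug: no bounds check before the copy.
       An attacker who overflows buf controls this process no matter
       how well the secure world guards its secrets. */
    memcpy(buf, msg, msg_len);   /* should be capped at sizeof(buf) */

    return secure_world_unlock(buf, msg_len);
}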
 
I thought the supposed deal with TSMC went poof because the latter couldn't guarantee Apple any priority, or am I wrong? Qualcomm and Apple were on the lookout for major foundry investments, and AFAIK Qualcomm went for a billion-dollar investment into UMC. For Apple I haven't heard anything, but Samsung's 28nm seems to be on track.

TSMC said they are open to the idea of dedicated fabs.

edit:

EETimes muses on the nature of the A6: http://www.eetimes.com/electronics-news/4396818/Could-Apple-A6-be-a-big-little-processor

I'm sure Apple put all of Intrinsity's design experience to use, but a big.LITTLE-style scheme à la Tegra 3's companion core would be new.
 
http://semiaccurate.com/2012/09/24/an-update-on-apple-moving-away-from-intel/

I'm personally not taking anything for granted, yet there are some points that could give food for thought in that piece.

I see how developments could point toward them making a processor with a full PC-class TDP, but I disagree with his software angle. I think the point of recent OS X releases has been to homogenize the experience (remember, they're trying to woo iOS customers to OS X) and protect against the growing virus/malware threat (App Store, signed apps). All the customization hasn't disappeared from the back end.

A big question will be whether or not they can retain Windows support via Boot Camp. With Microsoft warming up to ARM, that future seems brighter as well.

Then it's just a matter of whether they can push the architecture to the performance level desktop and laptop consumers are used to. And do they do a discrete GPU?

And as a side-note, do all these developments make Intel warm up to the idea of letting more third parties in on their world class fabs?
 
The die shots certainly don't look like a Tegra 3-style arrangement. And I somehow don't think Apple designed two different kinds of cores for this. Part of the reason ARM can leverage the big.LITTLE design is that they hope to get design wins for the Cortex-A7 outside of this model.

http://semiaccurate.com/2012/09/24/an-update-on-apple-moving-away-from-intel/

I'm personally not taking anything for granted, yet there are some points that could give food for thought in that piece.

Charlie occasionally gets some interesting leaks out there, but frankly his editorializing is by and large worthless. He loves to spin anything in a way that makes him look like he wasn't wrong about things, but the A6 is really far from a sign that Apple is ready to give up x86 laptops and desktops. He's pulling the usual "people said X and it was wrong, so they're wrong about Y too!" tactic that I see online all the time... anyone who said a year ago (well after Apple bought PA Semi and Intrinsity) that Apple had no capability to design their own ARM core is a moron, and that's NOT the same class of people who would deny a shift of Apple laptops to ARM processors.

Apple may well be keen on locking down their laptops and suckifying the interface, but that doesn't mean they're willing to throw away a huge amount of CPU (and most likely GPU) performance. Somehow I think this transcends the limits of even what Apple buyers are willing to settle for. I also don't think it's worth Apple's design effort to try to compete with Intel in performance with a separate ARM design, not that I think they'd win.
 
Apple may well be keen on locking down their laptops and suckifying the interface, but that doesn't mean they're willing to throw away a huge amount of CPU (and most likely GPU) performance. Somehow I think this transcends the limits of even what Apple buyers are willing to settle for. I also don't think it's worth Apple's design effort to try to compete with Intel in performance with a separate ARM design, not that I think they'd win.

This. I think they'd be taking on too much design risk by migrating away from an industry-standard CPU platform. They can get away with it with the A6 because everyone is still licensing the same ISA, so performance will roughly track.
 
Would it be about winning against x86, or more about becoming step by step more independent by designing their own hardware even outside small-form-factor SoCs? Higher ambitions shouldn't be uncommon even for a different kind of hardware giant like Qualcomm; whether they'll ever materialize is another chapter of its own. Heck, Qualcomm has even dropped hints that it might consider investing in its own foundry in the less foreseeable future.

Those obviously aren't parallels since they're completely independent cases, but I also find it hard to believe that companies like Apple aren't internally investigating various "what if" scenarios.

Then it's just a matter of whether they can push the architecture to the performance level desktop and laptop consumers are used to. And do they do a discrete GPU?

IMHO if Apple ever goes in that direction they'll most likely do it step by step to minimize the risk. If so, I figure they'll start out with low-end desktop/notebook SoCs and might work up from there. What would they need a discrete GPU for in such a hypothetical case? IMG's Rogue can scale fairly high in the years to come, and I doubt the largest configurations wouldn't be good enough for Apple's typical GPU requirements. Nothing concrete though, just pure speculation on possibilities.

And as a side-note, do all these developments make Intel warm up to the idea of letting more third parties in on their world class fabs?
Isn't Intel intending, or at least considering, allowing selected small customers to manufacture at its fabs? I'm not sure they ever will, but I'd say that if names surface in the future, we shouldn't expect any surprises.
 
Isn't Intel intending, or at least considering, allowing selected small customers to manufacture at its fabs? I'm not sure they ever will, but I'd say that if names surface in the future, we shouldn't expect any surprises.

Yes. There was a story about an FPGA maker getting access to their 22nm fab capability. But only select vendors and limited quantities. I'm not sure if they are testing the waters or having other third parties test their process for design spaces they aren't currently in.
 
Yes. There was a story about an FPGA maker getting access to their 22nm fab capability. But only select vendors and limited quantities. I'm not sure if they are testing the waters or having other third parties test their process for design spaces they aren't currently in.

I'm pretty confident they won't ever let any of their competitors in.
 
http://semiaccurate.com/2012/09/24/an-update-on-apple-moving-away-from-intel/

I'm personally not taking anything for granted, yet there are some points that could give food for thought in that piece.

I read the piece until I hit this section.

Next up is the OS. Have you noticed that OSX releases of late, well, to be blunt, suck? It’s not that they suck as a stand alone OS, but they take away a lot of the freedoms and flexibility that the Mac desktop OS user has come to expect. Bit by bit Apple is removing all of the parts that make OSX something other than a phone class OS. The UI changes, the App store, and the overall feel of the new OS move it more and more toward a slightly open iOS, not a UNIX core with a slick GUI on top. It is being progressively closed down and phone-ized. Any guesses as to why?

What exactly have they ever taken away? If anything, the OS is more powerful than ever.

What parts are they talking about? I can still use OS X 10.8 as I did with Mac OS X 10.4.
 
Apple may not care about winning against x86 on some level, but its users will care about maintaining some baseline of performance. At the very least, taking a big step backwards in performance with a new product is a major no-no.

I could possibly see them pushing an ARM laptop as a splinter upwards from their iPad line, maybe in a Transformer-esque design, but I do not see them flushing their existing laptop lines. Not Macbook Air and certainly not Macbook Pro.

I'm not saying I think this will never happen but definitely no time soon and not as a sudden transition. Charlie seems to think it's right around the corner.

I don't know if they need a discrete GPU per se, but I also don't think Rogue will scale up to what they want any time soon. The problem is that GPUs that are good for mobile are trading area for power efficiency. One look at Apple's A5X will make this abundantly clear. I'm sure things aren't as bad with Rogue but there has to be a price, you can't be best in all metrics. And it's hard to look at what Haswell will offer in terms of IGP on laptop chips and totally disregard Intel's process advantage. I really don't think you could offer something competitive with a Rogue solution in a similar die area.

Isn't Intel intending, or at least considering, allowing selected small customers to manufacture at its fabs? I'm not sure they ever will, but I'd say that if names surface in the future, we shouldn't expect any surprises.

Intel has shown some interest in opening their fabs for some devices like FPGAs, but nothing that shows the slightest hint of competition or threat to any of their products. I don't think they'd open them to Apple if it raised the chances of losing Apple's business on CPU sales, since I'm sure they make far more money that way.
 
I'm sure things aren't as bad with Rogue but there has to be a price, you can't be best in all metrics. And it's hard to look at what Haswell will offer in terms of IGP on laptop chips and totally disregard Intel's process advantage. I really don't think you could offer something competitive with a Rogue solution in a similar die area.

Let's wait and see, once further details appear, whether it really is like that.
 
Well, I do think IMG has much more experience and probably more raw talent in the GPU design area. And they could be offering some different designs that are better balanced for higher power utilization/less area, I suppose, although I'd expect that to be pretty fundamentally different.

But the process node part is a disadvantage no matter how you look at it, and IMO area (and therefore process node, since that's the number one metric that improves) translates to a much bigger advantage with GPUs right now than CPUs. I mean, look at how much nVidia and AMD improved going from 40nm on TSMC to 28nm.
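As a rough back-of-the-envelope illustration (assuming near-ideal scaling, which real processes only approximate): linear features shrink by a factor of 28/40 = 0.7, so area per transistor roughly halves, meaning about twice the logic fits in the same die area, and GPUs turn extra units into performance more directly than CPUs do.

\[
\left(\tfrac{28}{40}\right)^2 = 0.49 \approx \tfrac{1}{2}
\quad\Rightarrow\quad \text{roughly } 2\times \text{ the transistors in the same die area (ideal scaling).}
\]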
 
Well, I do think IMG has much more experience and probably more raw talent in the GPU design area. And they could be offering some different designs that are better balanced for higher power utilization/less area, I suppose, although I'd expect that to be pretty fundamentally different.

But the process node part is a disadvantage no matter how you look at it, and IMO area (and therefore process node, since that's the number one metric that improves) translates to a much bigger advantage with GPUs right now than CPUs. I mean, look at how much nVidia and AMD improved going from 40nm on TSMC to 28nm.

NV and AMD improved that much from 40G to 28HP because, among other reasons, a 32nm node at TSMC was coincidentally cancelled. Do you think the distance between Fermi and Kepler would have been as big if TSMC hadn't cancelled 32nm? It's not like foundries cancel process nodes on a regular basis either.

As for process advantages, they are certainly a factor, yet not a defining one that would save any sort of architecture IMHO. Larrabee would also have had Intel's process advantage, and I didn't see any process life raft for it either. Better yet, the HPC-only Knights-whatever (sorry, I've lost track of all the noble codenames there) is/will be competitive based on its own architectural merits, and the most important role Intel's manufacturing could play there would be to better control the power consumption characteristics of the design itself.

Other than that, did Intel roll it out to the respective customers before Kepler, and does it have only advantages against Kepler, or some disadvantages too?
 
http://www.displaymate.com/Smartphone_ShootOut_2.htm

DisplayMate has a review of the iPhone 5 display, which they found excellent. The display was well calibrated and matched the sRGB profile, as Apple claimed. Screen reflectance and outdoor visibility were also noted as among the best, if not the best, they've seen on a mobile device.

http://www.ifixit.com/Teardown/Apple-A6-Teardown/10528/2

iFixit also did their A6 decap analysis, again confirming 2 CPU cores and 3 GPU cores. They also confirmed that the A6 is using Samsung's 32nm process. As well, they believe that not only is the A6 CPU a custom Apple architecture, but that the logic blocks have been laid out by hand for improved performance rather than by automation. No doubt Intrinsity's work. If I'm not mistaken, Intel and AMD CPUs are typically hand laid out while GPUs tend to be laid out by software. Are ARM cores typically laid out automatically as well?

I haven't looked much yet for games that would tax the graphics.

I might be willing to try things like Madden or FIFA if they have online play comparable to consoles.

Looks like EA may not be updating their sports games for iPad this year.

Even with improving graphics there doesn't seem to be a market for 3D games yet.

The most taxing 3D thing that I will probably be doing for a while is Flyover in Maps.

http://toucharcade.com/2012/09/25/heads-up-fifa-13-has-launched-in-the-app-store/

FIFA 13 has just launched on the App Store with online multiplayer and Retina iPad support. I'm not sure if that's what you were hoping for?
 