Next-Gen iPhone & iPhone Nano Speculation

Have they all got inside info on each other or something? Seeing as things take bloody years to design and sort out, how are they all correctly guessing what the others are doing? How would Apple, for instance, know that quad would be the standard by now? Had you asked me 18 months ago I would have laughed at you.

Well, technically it's not the standard; there are still some high-end dual cores arriving (A15, Krait), but then again, there are also quad-core versions of those planned.

From a technical point of view, it's probably easier to move from an existing A9 dual-core design to a quad core than to implement the A15's design.

The hardest step is going from a single-core design to a dual-core design (not just from a technical point of view, but also from a software point of view).
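The software half of that jump can be illustrated with a toy example: code that was effectively safe with one core needs explicit synchronization once a second thread can run truly in parallel. This is a generic Python sketch (nothing ARM-specific; the helper names are made up for illustration):

```python
import threading

def unsafe_add(counter, n):
    # Read-modify-write on shared state. With real parallelism, two cores
    # can both read the same old value and one increment gets lost.
    for _ in range(n):
        counter["value"] += 1

def safe_add(counter, lock, n):
    # The multicore-correct version: the lock makes each increment atomic.
    for _ in range(n):
        with lock:
            counter["value"] += 1

def run(worker, *args, threads=2):
    ts = [threading.Thread(target=worker, args=args) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()

counter = {"value": 0}
run(safe_add, counter, threading.Lock(), 100_000)
print(counter["value"])  # 200000: no updates lost with the lock
```

Finding and fixing every spot like `unsafe_add` across an OS and its apps is a big part of why the single-to-dual transition is the painful one; going from 2 to 4 cores reuses the same machinery.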

18 months ago, there were just baby steps towards dual-core designs. So naturally people were not yet thinking much about quad-core designs.

What we are seeing is roughly the normal PC evolution, but at a MUCH faster pace. I'm also surprised to see how fast things have gone.

Take my "old" HTC HD2. When it was released, it was ... amazing. A single-core 1 GHz A8, kicking everything's behind. Now we already have publicly available quad cores, with faster clocks, a faster architecture (A9), and QUAD! cores. And that was only about two years ago.

Dedicated VRAM is a tricky thing. It makes any device more complicated, and you run the risk of it never being used. From a "normal" workload point of view, VRAM is unneeded (and can actually turn out to be a hindrance if the allocation is too small). For games, on the other hand, fast VRAM is ... well, the goal. But so far, Apple has not made any serious effort to go after dedicated ("hardcore") gaming.

Let's wait and see ... There seem to be more and more leaks, so we are probably close to an actual product release.
 
Just keep in mind that a dual-core ARM Cortex-A15 will still be the better choice compared to a quad-core ARM Cortex-A9.

The architecture difference between the two IPs is hard to ignore.
 
Just keep in mind that a dual-core ARM Cortex-A15 will still be the better choice compared to a quad-core ARM Cortex-A9.

The architecture difference between the two IPs is hard to ignore.

Definitely. However, the possibility of a custom design makes the current debate even more interesting.
 
Are there any indications that apps are CPU-bound?

Certainly the roadmap seems as aggressive as at any point during the clock-speed wars on the PC.

Or do the multiple cores improve power management?

In terms of pure performance, the quality of the network (speed, coverage) may have more of a bearing on the mobile computing experience than raw CPU power?
 
As much as I have defended quad-core as being a logical design choice for Tegra, I don't think it's going to be a "standard" for quite a long time and I don't believe others will be disadvantaged by that (at least from a technical perspective as marketing is quite another question). I would also be very surprised if the iPad 3 was quad-core.

metafor said:
Yes. There currently isn't a thermal limit so why not clock quad-core A9's as high as they will go on the process. I expect that may change with A15 designs though.
Really? It was my understanding that Qualcomm's marketing implied otherwise, and I pretty much agreed with it. As both active power and (especially) leakage are proportional to temperature, it is very important not to enter a thermal feedback loop. You don't have a short-term thermal limit, but heat does accumulate somewhat over time, doesn't it?
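That feedback loop can be sketched as a toy iteration: power heats the die, heat raises leakage, leakage adds power. Every constant below (ambient temperature, thermal resistance, leakage coefficient) is a made-up illustrative value, not data for any real SoC:

```python
import math

def steady_temp(p_active, t_amb=25.0, r_th=10.0, p_leak0=0.2, alpha=0.04,
                steps=200):
    """Iterate the toy model: leakage grows exponentially with die
    temperature, and temperature rises with total power. Returns the
    settled temperature, or None if the loop runs away."""
    t = t_amb
    for _ in range(steps):
        p_leak = p_leak0 * math.exp(alpha * (t - t_amb))  # hotter -> leakier
        t_next = t_amb + r_th * (p_active + p_leak)
        if t_next > 150:            # thermal runaway: the loop never settles
            return None
        if abs(t_next - t) < 1e-6:  # converged to a stable operating point
            return t_next
        t = t_next
    return t

print(steady_temp(1.0))  # modest load settles to a finite temperature
print(steady_temp(5.0))  # higher load pushes the same loop into runaway
```

The point of the sketch is that whether a given clock is sustainable depends on where the fixed point of this loop sits, not on any single-instant power number.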

Ailuros said:
You can definitely talk about cores in the case of Series5XT MPs; assuming 1 SGX543 truly is 8mm2@65nm as claimed in the past, an MP4 at the same frequency and process will be 32mm2. Best case 32, worst case more.
That's obviously an oversimplification; in practice, there's an MP block you need to have when there's more than one core. That means 2 cores are slightly more than twice as big as one core, and 4 cores are slightly less than twice as big as two cores. As I said, there's always a front-end of some kind; it's only a question of how big it is compared to everything else.
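The bookkeeping behind that can be written down as a tiny area model. The 8 mm² core is the SGX543@65nm figure quoted above; the 1.5 mm² shared MP block is a made-up placeholder, only there to show the shape of the scaling:

```python
def mp_area(n_cores, core_mm2=8.0, mp_block_mm2=1.5):
    # A shared MP front-end block is only needed once there is more than
    # one core; its size here is an arbitrary illustrative value.
    extra = mp_block_mm2 if n_cores > 1 else 0.0
    return n_cores * core_mm2 + extra

a1, a2, a4 = mp_area(1), mp_area(2), mp_area(4)
print(a1, a2, a4)    # 8.0 17.5 33.5
print(a2 > 2 * a1)   # True: 2 cores are slightly more than twice 1 core
print(a4 < 2 * a2)   # True: 4 cores are slightly less than twice 2 cores
```

Whatever the real MP block costs, the fixed overhead is amortized as core count grows, which is exactly the asymmetry described above.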
 
Really? It was my understanding that Qualcomm's marketing implied otherwise, and I pretty much agreed with it. As both active power and (especially) leakage are proportional to temperature, it is very important not to enter a thermal feedback loop. You don't have a short-term thermal limit, but heat does accumulate somewhat over time, doesn't it?

Depends entirely on the device and its heat dissipation. In a tablet form-factor, it's a non-issue. In a smartphone, there may be some throttling required but that doesn't mean 4xA9 can't run at peak frequency for, say, 10-30 seconds.

4xA15 on TSMC 28 won't even be able to supply enough current through its voltage rails to run all 4 at the currently stated peak.

That's obviously an oversimplification; in practice, there's an MP block you need to have when there's more than one core. That means 2 cores are slightly more than twice as big as one core, and 4 cores are slightly less than twice as big as two cores. As I said, there's always a front-end of some kind; it's only a question of how big it is compared to everything else.

The biggest limitation is that in order to get any benefit in heavy computational cases where 4 cores are actually being used, your L2 cache has to be bigger. Which makes you wonder: would that area have been better used by adding more GPU pipelines?
 
If the A6 turns out to be quad core, and assuming the graphics is likely to go quad as well, in H2 Apple will have an iPhone which is close in horsepower terms to the PS Vita. Yes, of course the PS Vita's input methods are crucial to playability, but it says a lot about the ever-increasing performance curve that a phone could have similar processing capability to a brand-new top-end handheld gaming console, only 8 months after the launch of said console.

I imagine for the above spec to be true, the A6 would have to be at 32nm, given the already large die size of the A5.
 
If the A6 turns out to be quad core, and assuming the graphics is likely to go quad as well, in H2 Apple will have an iPhone which is close in horsepower terms to the PS Vita. Yes, of course the PS Vita's input methods are crucial to playability, but it says a lot about the ever-increasing performance curve that a phone could have similar processing capability to a brand-new top-end handheld gaming console, only 8 months after the launch of said console.

I imagine for the above spec to be true, the A6 would have to be at 32nm, given the already large die size of the A5.

Actually, the Vita was only released something like 2 months ago. The announcement of the Vita was 10 months ago, an eternity in this business.

Sure, if the iPad doubles its CPU & GPU cores, it will be closer to the Vita. What's missing is that the Vita runs 128MB of dedicated (and probably speedy) memory on its GPU, and has had some other customizations done to it.

One big problem with the Vita was that there were something like 8 months between the announcement and the limited release. It's almost a YEAR between the announcement and the EU/US release.

In this business, that is an eternity. When first announced, the Vita was *wow*, quad core, quad GPU. Now it's already more like ... it's not the only kid on the block anymore as far as quad core goes. The GPU gap has closed a lot already, and is expected to be at the same level very soon. The first device to come close is probably the iPad (assuming they doubled the GPUs).

Technically, it has several other advantages (apart from the missing 128MB VRAM and the GPU customizations):
* The iPad 3 is probably going to double memory size (1GB vs 512MB).
* Because of the iPad's size, it can carry a bigger battery (one of the problems they had with the Vita was big performance but limited battery size).
* The Vita's CPU/GPU is made at 45nm, and the current A5 in the iPad is also 45nm. It's safe to assume that for the A6 we are looking at something more like 32nm or 28nm. More chance of faster CPU/GPU speeds at lower power consumption (depending on the mix).

The biggest problem is just, pure and simple, the lack of controls. Add those, maybe somewhere in the bezel (some recessed part), and ... you've got yourself a dedicated gaming-market device. Gaming controls are actually not that expensive. It's just that your device doesn't present as "nicely" to business customers.

My idea is to have the controls hidden in the bezel, so that you need to slide something out, or remove a cover, to access them. It stays nice for business users, but is also useful for gamers. Hard to explain. But I fear this is just a dream. If Apple was planning something like this, there would have been leaks from game developers about it.

Talking about launch dates: the basic idea is a March-or-sooner release date for the iPad 3, as it will have been one year since the iPad 2's release, and Apple seems to follow this trend. But the Vita's EU/US launch is 22 February. Notice a date detail... Something tells me that, faced with dates this close together, a lot of people are going to wait for an iPad rather than buy a Vita. For the price of the Vita + memory card + 1 game, you are already in the cheapest iPad's price range... Not a good position for Sony, I fear.
 
Just keep in mind that a dual-core ARM Cortex-A15 will still be the better choice compared to a quad-core ARM Cortex-A9.

The architecture difference between the two IPs is hard to ignore.

True for most single- and dual-threaded tasks, but running loads of tabs in Chrome whilst switching from a game into an email? ... I don't know, 4 A9s at the same clock speed would be just as good if not better ... of course, that is a rare circumstance.

What is known about multiple cores, though, at least in mobile, is the power savings. Whilst on single threads they will never match an A15, I bet the power consumption would be much better with some properly implemented A9s.

The good thing? We will have an apples-to-apples comparison to salivate over within 4 months, as Sammy's Exynos 4412 & 5250 will face off against each other. My money's on the A15s for overall performance, but the quad A9s for better power consumption. Can't wait! :D
 
True for most single- and dual-threaded tasks, but running loads of tabs in Chrome whilst switching from a game into an email? ... I don't know, 4 A9s at the same clock speed would be just as good if not better ... of course, that is a rare circumstance.

What is known about multiple cores, though, at least in mobile, is the power savings. Whilst on single threads they will never match an A15, I bet the power consumption would be much better with some properly implemented A9s.

The good thing? We will have an apples-to-apples comparison to salivate over within 4 months, as Sammy's Exynos 4412 & 5250 will face off against each other. My money's on the A15s for overall performance, but the quad A9s for better power consumption. Can't wait! :D

That's the whole point: the Cortex-A15 is both faster and better power-optimized. Not to mention the clock speed can be 1 GHz faster than any Cortex-A9 design. At the same clock it is even 40% faster than the Cortex-A9, while having a much larger L2 cache.

There really isn't any scenario I can come up with where a quad-core Cortex-A9 would be better, looking at tablet devices and their usage.

Although every rumor so far seems to point at a quad-core design for Apple's next SoC.
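Taking the figures in this post at face value (roughly 40% more per-clock performance and up to 1 GHz more clock for the A15), a back-of-the-envelope model shows why even a perfectly parallel workload doesn't rescue the quad A9. The specific clocks below are illustrative assumptions, not product specs:

```python
def perf(cores, ghz, ipc_rel):
    # Toy peak-throughput model: per-core performance = relative IPC x clock,
    # with IPC normalized to Cortex-A9 = 1.0.
    single = ipc_rel * ghz
    return single, cores * single

a9_single, a9_total = perf(cores=4, ghz=1.5, ipc_rel=1.0)    # quad A9 @ 1.5 GHz
a15_single, a15_total = perf(cores=2, ghz=2.5, ipc_rel=1.4)  # dual A15 @ 2.5 GHz

print(a15_single / a9_single)  # ~2.33x per thread for the A15
print(a15_total / a9_total)    # ~1.17x even if all 4 A9s are fully busy
```

Under these assumptions the dual A15 wins per thread by a wide margin and still edges out the quad A9 even in the best-case fully parallel scenario; any imperfect scaling only widens the gap.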
 
Is the A15's perf/W higher than the A9's at peak frequency? Is it at normalized performance levels? I was under the impression that it wasn't.

From a performance standpoint, a quad-core A9 will very rarely if ever top a dual A15, even at the same frequency. Moreover, the benefit of quad core for power is highly exaggerated. Very few usage scenarios, if any, will perfectly balance the workload between all 4 cores at sub-peak operating conditions.
 
I couldn't realistically imagine even a contrived scenario where a dual A15 would lose in performance to a quad A9, but I don't doubt some of the A15 designs will be a little too zealous and act like battery hogs next to the quad A9 SoCs.
 
Actually, the Vita was only released something like 2 months ago. The announcement of the Vita was 10 months ago, an eternity in this business.

Indeed, but there will be no iPhone 5 until at least June/July, and likely longer, hence my original statement of an 8-month gap.
 
That's the whole point: the Cortex-A15 is both faster and better power-optimized. Not to mention the clock speed can be 1 GHz faster than any Cortex-A9 design. At the same clock it is even 40% faster than the Cortex-A9, while having a much larger L2 cache.

There really isn't any scenario I can come up with where a quad-core Cortex-A9 would be better, looking at tablet devices and their usage.

Although every rumor so far seems to point at a quad-core design for Apple's next SoC.

Well, the A15 will consume more power at peak performance than the A9, as I understand it. The Cortex-A9 can already hit 2.5GHz on GlobalFoundries' 28nm ... not that I expect anyone to use it at those speeds, mind.
http://www.tgdaily.com/hardware-features/60235-28nm-arm-cortex-a9-soc-hits-25ghz

Well, Tegra 3 consumed less power than Tegra 2, just going by reviews of the Transformer 1 & 2 ... and that was on the same process ... clocked higher ... with twice the cores and a more powerful GPU? ... That can't just be simple process maturing ...
 
Well, Tegra 3 consumed less power than Tegra 2, just going by reviews of the Transformer 1 & 2 ... and that was on the same process ... clocked higher ... with twice the cores and a more powerful GPU? ... That can't just be simple process maturing ...

What reviews showed this? I saw the opposite.
 
http://www.anandtech.com/show/5175/asus-transformer-prime-followup/3

Not a massive difference, but an improvement is an improvement.

Fair enough. But I'd qualify those two tests as not loading the CPU or the GPU even moderately, much less beyond what the original Transformer could handle. So this is definitely not getting better battery life while exercising any of the hardware advantages you mentioned.

The improvement is probably down to using the lower-leakage "fifth core" at a low clock speed for most if not all of the test duration. Some of the other hardware in the device may be more efficient as well. It could also be that video decode has improved in perf/W by dedicating more die area to it, in order to handle higher-bitrate playback without blowing battery life.
 
So this is definitely not getting better battery life while exercising any of the hardware advantages you mentioned

Ay? You mentioned the companion core and better battery life, but then dismiss it as not proving my point about multiple cores?

But I'd qualify those two tests as not loading the CPU or the GPU even moderately, much less beyond what the original Transformer could handle

Yeah? I don't doubt that thrashing all 4 A9s @1.3GHz would consume more power than 2 @1GHz on the same process ... but that is not what the SMP concept is about, though, is it? The whole idea is to split tasks up across multiple cores, using lower speeds/voltages = less power consumption. That's also the whole idea behind big.LITTLE ... but you already know that.

"seeing as how battery capacity hasn't changed it's likely that we do have Tegra 3 to thank for better battery life in the TF Prime. Note that even running in Normal mode and allowing all four cores to run at up to 1.3GHz, Tegra 3 is able to post better battery life than Tegra 2. I suspect this is because NVIDIA is able to parallelize some of the web page loading process across all four cores"

Leaving the video test aside for the moment, the other test was web browsing, about the second most taxing thing Joe Consumer would be doing aside from gaming ... and Tegra 3 beats Tegra 2 in that scenario. Just what sort of scenario were you thinking of for your phone?
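The "split the work, drop the voltage" argument falls out of the classic dynamic-power relation P ≈ C·V²·f. A toy calculation (made-up voltages and clocks, assuming a perfectly parallel workload and that voltage can be lowered along with frequency) shows four slow cores finishing the same job, in the same time, for less energy than two fast ones:

```python
def dynamic_power(v, f, c=1.0):
    # Classic CMOS dynamic power per core: P ~ C * V^2 * f.
    return c * v * v * f

def energy_for_workload(work, cores, f, v):
    # Perfectly parallel toy workload: runtime shrinks with cores x clock,
    # total power is cores x per-core power.
    time = work / (cores * f)
    return cores * dynamic_power(v, f) * time

# Same job, same finish time (2 cores x 1.0 GHz == 4 cores x 0.5 GHz):
e_dual = energy_for_workload(work=4.0, cores=2, f=1.0, v=1.1)
e_quad = energy_for_workload(work=4.0, cores=4, f=0.5, v=0.9)

print(e_dual, e_quad)   # ~4.84 vs ~3.24
print(e_quad < e_dual)  # True: lower voltage wins despite more cores
```

The quadratic voltage term is doing all the work here, which is also why the argument collapses for workloads that can't actually use the extra cores.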
 
Ay? You mentioned the companion core and better battery life, but then dismiss it as not proving my point about multiple cores?

Yeah? I don't doubt that thrashing all 4 A9s @1.3GHz would consume more power than 2 @1GHz on the same process ... but that is not what the SMP concept is about, though, is it? The whole idea is to split tasks up across multiple cores, using lower speeds/voltages = less power consumption. That's also the whole idea behind big.LITTLE ... but you already know that.

Leaving the video test aside for the moment, the other test was web browsing, about the second most taxing thing Joe Consumer would be doing aside from gaming ... and Tegra 3 beats Tegra 2 in that scenario. Just what sort of scenario were you thinking of for your phone?

Look at it this way:

Tegra 3 has the same manufacturing process as Tegra 2. It doubles the CPU cores, increases the GPU speed, and adds a 5th companion CPU.

The A6 is supposed to be made at 32nm or 28nm, instead of the A5's 45nm.

If Tegra 3 can show better power utilization going from 2 cores to 4 cores + a companion core on the same manufacturing process, then what can the A6 do on a smaller manufacturing process?

The question becomes: will the A6 include a low-power companion core or not? The "leaked" information can't tell us anything about that.

Then you also enter the other area: if Tegra 3 does 1.3GHz / 4 cores / 40nm, what speeds can a 32/28nm A9 get to? 1.7 or 1.8GHz sounds possible. Like you said before, GlobalFoundries can already hit 2.5GHz on 28nm, probably on "picked" samples. So 1.7/1.8GHz is not exactly unthinkable, without increasing the power usage too much.

Lower than 32/28nm sounds impossible to me. There is no way the manufacturing capacity is available at this moment to allow 20nm production. Apple uses how many factories to build the A5?

Personally I find it a shame that Tegra 3 is built on 40nm. In a few months they will already be upgrading to 28nm (looking at the roadmap), so people that bought any device now will be stuck with the old version. I suspect that with those delays Nvidia built up a stockpile of older 40nm parts, and they are shipping those out while they build up a new stockpile of 28nm ones.

Note: we forget one little detail: Tegra 2 was missing the NEON instruction set, so any task that relied on it ended up being done by the CPU in software instead of hardware, resulting in a higher power drain. This may also explain some of the difference, i.e. why Tegra 2's power results are higher.

Even with our speculating, chances are that the A6, even with the "older" A9 Cortex, will be two or three times as powerful as the A5. It's probably going to be future-proof for the next two years. Then you can upgrade to an iPad 5 *lol*
 