Next-Gen iPhone & iPhone Nano Speculation

Note: We're forgetting one little detail: the Tegra 2 was missing the NEON instruction set, so any task that relied on it ended up being done by the CPU in software rather than hardware, resulting in higher power drain. This may also explain some of the difference in the Tegra 2's higher power results.
The NEON block should be power gated and unless the software using it is very inefficient, its use should result in better battery life (even though instantaneous power usage is obviously higher).
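The distinction being drawn here between instantaneous power and total energy can be sketched with a toy calculation (all numbers below are invented for illustration, not measurements from any real SoC):

```python
# Toy model: a SIMD (NEON) implementation draws more power while active,
# but finishes sooner, so the same task can cost less total energy over
# a fixed window. Numbers are illustrative only.

def task_energy(power_active_mw, duration_s, power_idle_mw, window_s):
    """Energy (mJ) over a fixed window: an active burst, then idle."""
    return power_active_mw * duration_s + power_idle_mw * (window_s - duration_s)

WINDOW = 1.0   # one second of wall-clock time
IDLE_MW = 50   # hypothetical idle power with the NEON block power-gated

# Scalar code: lower instantaneous power, but ~4x longer to run
scalar = task_energy(power_active_mw=400, duration_s=0.40,
                     power_idle_mw=IDLE_MW, window_s=WINDOW)
# NEON code: ~30% higher instantaneous power, ~4x faster
neon = task_energy(power_active_mw=520, duration_s=0.10,
                   power_idle_mw=IDLE_MW, window_s=WINDOW)

print(f"scalar: {scalar:.0f} mJ, NEON: {neon:.0f} mJ")
```

With these made-up figures the NEON path costs roughly half the energy despite its higher peak power, which is the point being made above.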
 
Sigh, don't you guys remember that there were also rumours of the iPad 2 being quad-core? I distinctly remember NVIDIA execs wondering about that at the MWC11 press dinner. Not as many rumours as for the iPad 3, mind you, but I really don't think it's reasonable to take it as a given quite yet.

Laurent06 said:
The NEON block should be power gated and unless the software using it is very inefficient, its use should result in better battery life (even though instantaneous power usage is obviously higher).
Do you know if it actually is power gated in all customer implementations? I know it is in Osprey but I was under the impression it might not be in quite a few SoCs. Also would the latency be a problem, basically forcing the entire core to idle waiting for NEON to wake up? I'm not sure how it would compare to a L2 cache miss...
 
Do you know if it actually is power gated in all customer implementations? I know it is in Osprey but I was under the impression it might not be in quite a few SoCs.
Even if I knew, I couldn't tell you unless the information is public, sorry... Anyway, many customers don't let us know exactly what they do; we guess from the questions they ask the support team ;)

Also would the latency be a problem, basically forcing the entire core to idle waiting for NEON to wake up? I'm not sure how it would compare to a L2 cache miss...
The wake-up latency could indeed be an issue, and that's why the software should try hard to group NEON-based operations. I assume that latency depends on many implementation details and I have no idea what the value is in existing chips :cry:
 
Whilst I agree that it's possible the A6 could go quad, I also think that it corrupts Apple's successful 'tick-tock' strategy.

As I have already stated, unless Apple finds that 'quad-core' marketing = increased sales is worth the change, I don't think the iPad 3 needs quad core to be fast and efficient; unlike Android there is no fragmentation, and the software is perfectly tailored to the hardware.

I'm 50/50.

EDIT: Sod it, I'm nailing my colours to the mast and sticking to my original assertion.. just an up-clock and better memory on this 'tick' cycle.
(*cites numbering scheme as deciding factor*)
 
Eh? You mentioned the companion core and better battery life, but then dismiss it as not proving my point about multiple cores?

Yeah, I don't doubt that thrashing all 4 A9s @ 1.3GHz would consume more power than 2 @ 1GHz on the same process... but that is not what the SMP concept is about, is it? The whole idea is to split the tasks up across multiple cores, using lower speeds/voltages = less power consumption; that's also the whole idea behind big.LITTLE... but you already know that.

Tegra 3's "companion core" is completely unrelated to any power advantages it might have from SMP, which I doubt are being exercised in this review. It could have been just the LP core and the results would have probably been the same. Even big.LITTLE is supposed to be about using intrinsically lower power cores, not merely being able to leverage lower overall clocks/voltages because you're splitting the task over more cores. That will happen to some extent but rarely as ideally as nVidia would like you to believe.

Your basic argument was that nVidia performed some magic that let them have lower power consumption while having double the cores, higher clocks, more GPU, etc on the same process. I'm telling you that magic isn't there because they only get more battery life while none of that stuff is being exercised.

Leaving the video test aside for the moment, the other test was web browsing, about the second most taxing thing Joe Consumer would be doing aside from gaming... and the Tegra 3 beats the Tegra 2 in that scenario. Just what sort of scenario were you thinking of doing on your phone?

"Web browsing" can be very broad, but the web browsing tests Anand does barely use CPU. Just look at the iPhone tests, where web browsing gets you nearly 10 hours of battery life but heavy gaming gets you 3. I'm sure there are lots of apps out there that'll use CPU somewhere in between all of this.

Like I said, the Tegra 3 probably wins because it can use its <= 500MHz LP core for most or all of the entire web browsing test. If that ranks up there as "taxing for the average joe" then maybe he'd be better off not buying devices with this much CPU capability.

This also applies to Medfield's browsing battery scores, btw.

benjiro said:
Question becomes, will the A6 include a low power companion core or not? The "leaked" information can not provide any information about that.

Then you also enter the other area: if the Tegra 3 does 1.3GHz / 4 cores / 40nm, what speeds can a 32/28nm A9 get to? 1.7 or 1.8GHz sounds possible. Like you said before, GlobalFoundries can already hit 2.5GHz on 28nm, probably on "picked" samples, so a 1.7/1.8GHz speed is not exactly unthinkable without increasing the power usage too much.

nVidia uses TSMC LP for their "companion core" and GP for the rest.. while other manufacturers are using the equivalent of LP for everything (as far as I know). So they already get reasonable idle consumption if probably less perf/W at high clocks or just lower clocks altogether.

That GF sample is something else entirely, that's probably both a performance optimized core and on a performance optimized process. Very different from what someone would make for a mobile platform. Looking at the 1GHz A5 clocked at only 800MHz on iPhone I don't really get the impression that Apple is trying to win any clock wars with this.
 
What constitutes common web browsing is fairly nebulous, however, I don't think the average use-case does exercise the CPU in a heavy fashion in any reasonable sense -- with a few exceptions of course.

Honestly, a pair (or hell, quad) A7 would be ideal for most use-cases that are non-gaming related.
 
What constitutes common web browsing is fairly nebulous, however, I don't think the average use-case does exercise the CPU in a heavy fashion in any reasonable sense -- with a few exceptions of course.

Honestly, a pair (or hell, quad) A7 would be ideal for most use-cases that are non-gaming related.

It's funny you say that, I was just thinking about quad A7s earlier.. would be ideal. Let's face it, we are spoilt these days when it comes to gadgets; just think, midrange smartphones will be getting dual-core Kraits & Adreno 305 soon.. never had it so good :smile:

EXOPHASE: You may well be correct; I'm just basing my opinion on the only like-for-like comparison I have found, especially from a respected source. I'm not going to requote, but I highlighted a quote from Anand where he stated all 4 cores were running @ 1.3GHz and still consumed less power than Tegra 2.. which he attributed to spreading the workload across more cores.

It does make perfect sense; web browsing on Android can be taxing with loads of threads open, due to Flash and things likely always running in the background.. 4 cores running at a lower clock rate/voltage does have the potential for power savings IMHO.

I respect your opinion, however I think you are wrong on this one; feel free to provide some links to prove your point.
 
What constitutes common web browsing is fairly nebulous, however, I don't think the average use-case does exercise the CPU in a heavy fashion in any reasonable sense -- with a few exceptions of course.
The average is low (just scrolling around), but the peak is very high (loading the page), and that peak performance is definitely very important to the user experience (IMO).

Honestly, a pair (or hell, quad) A7 would be ideal for most use-cases that are non-gaming related.
AFAIK, the ultra-low-cost smartphone solutions (think Broadcom with VGA video and integrated 3G RF to give you an idea of how damn cheap it's supposed to be) will be moving from single-core A9 to dual-core A7. I agree that's an improvement but I'm not convinced it's really "ideal" for web browsing. I'd put the bar a fair bit higher than that (1.5GHz 2xKrait or 2GHz 2xA9 would definitely hit it though :))

french toast said:
It does make perfect sense; web browsing on Android can be taxing with loads of threads open, due to Flash and things likely always running in the background
Did you miss that handheld Flash is now end-of-life? Not much point optimising a chip around a workload that's on the way out (for better or worse) - and while there are still some multithreading opportunities available, they're hardly good enough to justify quad-core IMO.
 
EXOPHASE: You may well be correct; I'm just basing my opinion on the only like-for-like comparison I have found, especially from a respected source. I'm not going to requote, but I highlighted a quote from Anand where he stated all 4 cores were running @ 1.3GHz and still consumed less power than Tegra 2.. which he attributed to spreading the workload across more cores.

That's just not how it works. Four cores running at 1.3GHz is going to use not that much less than four times what one core running at 1.3GHz uses, and I doubt nVidia did anything to make the individual cores more power efficient. The notion that all four running at 1.3GHz uses less power than two running at 1GHz is absurd, and I really doubt Anand said or even implied something like this.

The perf/W won't be better either. It'll probably be much worse. Obviously the absolute perf will be much higher.

The whole point behind nVidia's marketing of more cores = better perf/W (a strategy that you can at least kind of see mirrored in AMD's quad-core mobile platforms) is because power consumption doesn't scale linearly with clock speed. So if you can complete a task using two cores at 500MHz instead of using one core at 1GHz you'll consume less than half the power. Of course, very few tasks scale close to perfectly across the cores.
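The nonlinearity described above comes from dynamic power scaling roughly as C·V²·f, where lowering the clock also permits lowering the voltage, which enters quadratically. A back-of-envelope sketch (voltage/frequency pairs invented for illustration; real DVFS tables vary per chip, and real tasks rarely split perfectly):

```python
# Back-of-envelope dynamic power model: P ~ C * V^2 * f.
# Voltage numbers are hypothetical; lower frequency allows lower voltage.

def dynamic_power(freq_mhz, volts, capacitance=1.0):
    """Switching power in arbitrary units: C * V^2 * f."""
    return capacitance * volts ** 2 * freq_mhz

# One core at 1 GHz vs. two cores at 500 MHz splitting the same task
# perfectly (the ideal case the marketing assumes).
one_fast = dynamic_power(1000, volts=1.2)
two_slow = 2 * dynamic_power(500, volts=1.0)

print(f"1x1000MHz: {one_fast:.0f}  2x500MHz: {two_slow:.0f}")
```

Even with the extra core, the voltage drop makes the split workload cheaper in this idealised case; the argument in the surrounding posts is about how rarely that ideal is reached.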

Probably a bigger argument against nVidia's claim is that synchronous tasks (the kinds that have any shot at being well balanced) will tend to want as much CPU as you can give them. So the only real way to get this better perf/W is at the deliberate sacrifice of perf, since users (or the OS) will prefer to run two cores as fast as they can run them when they need to be stressed.

At any rate, the HIGHER clock speeds of Tegra 3 will mean less perf/W than Tegra 2 when you want to use them.

It does make perfect sense; web browsing on Android can be taxing with loads of threads open, due to Flash and things likely always running in the background.. 4 cores running at a lower clock rate/voltage does have the potential for power savings IMHO.

Loads of threads open doesn't really mean anything for utilization, most of them will tend to be sleeping. You gain very little from being able to schedule a bunch of very low utilization threads on separate cores.

I respect your opinion, however I think you are wrong on this one; feel free to provide some links to prove your point.

If you aren't going to give a link for the AT quote you're attributing why should I have to bring in third parties to establish my argument..?

The average is low (just scrolling around), but the peak is very high (loading the page), and that peak performance is definitely very important to the user experience (IMO).

I wonder how much that's true, if the loading is often dominated by waiting for stuff from the network. Web page loads are very piecemeal, especially given all the ads ;p
 
I have a hard time thinking that quad core will lead to better battery life because in mobile computing, there are very few tasks that scale linearly very well. Perhaps the best case scenario is batch photo manipulation, but that's a pretty rare occurrence. Adding two more cores that will rarely be used explicitly by developers would be playing towards marketing ("ooh 4 is more than 2!!") rather than improving performance of all apps.

Because let's face it, we all know that the A7 is going to be a 2x Cortex-A15 system. A 1.5GHz 2x Cortex-A9 fits in nicely as an A6 chip. There are few, if any, developers or consumers who feel the iPad is "slow".
 
If you aren't going to give a link for the AT quote you're attributing why should I have to bring in third parties to establish my argument..?

What?
Exophase, I said I wasn't going to REQUOTE as I had already provided the link & quote; I even highlighted it for you.

Here it is again...
''seeing as how battery capacity hasn't changed it's likely that we do have Tegra 3 to thank for better battery life in the TF Prime.
Note that even running in Normal mode and allowing all four cores to run at up to 1.3GHz, Tegra 3 is able to post better battery life than Tegra 2. I suspect this is because NVIDIA is able to parallelize some of the web page loading process across all four cores''
-Anand.
http://www.anandtech.com/show/5175/asus-transformer-prime-followup/3

Anand did say that in that article; make of that what you wish. So far you have come out with some plausible explanations, but only one of us has backed that up with a like-for-like comparison. Yes, it could be down to repackaging, a more mature process and NEON, maybe, but the review and comments from Anand state otherwise.

Loads of threads open doesn't really mean anything for utilization, most of them will tend to be sleeping. You gain very little from being able to schedule a bunch of very low utilization threads on separate cores

Well, I disagree. I have a crappy Intel N270 (HT) netbook running W7 Basic; any more than 3-4 Flash-enabled pages grinds the system down to a standstill on some occasions, not every time, but I can definitely see the advantage of more cores.. to a point. With ICS able to run Chrome with multiple tabs you are going to appreciate the extra smoothness; don't tell me you didn't notice the extra smoothness of mobiles going to dual core? (Same with desktop.)
If multicore is a waste like you seem to imply, why don't we just stick with one A9 @ 1.3GHz and leave it at that??

Did you miss that handheld Flash is now end-of-life? Not much point optimising a chip around a workload that's on the way out (for better or worse) - and while there are still some multithreading opportunities available, they're hardly good enough to justify quad-core IMO.

Fair point, but that doesn't change the fact that for the next year it will be; besides, if it isn't Flash then it's going to be HTML5... surely that isn't going to come for a free processor ride?

Guys, I really don't see your points; the ONLY reason a consumer would not want more cores and more performance is the detriment to battery life, nothing else.

If NVIDIA released a much more powerful quad-core chip that consumes LESS power than its dual core, what is your point? Do you want phones to be gimped just for the sake of it? Jeez.
 
Sorry, I missed the quote was from there. That's still highly misleading though. "allowing" all cores to run at full speed doesn't mean they ever are, at all, period (during the tests). It's a red herring. If Anand really thinks that it's the additional cores that give the battery life benefit then he's just wrong. What he's really saying is that their mere presence isn't causing a battery life drop, which is what you'd expect since we know they can be individually power gated off.

It has nothing to do with packaging, and TSMC 40nm was already pretty mature when the first Transformer rolled out. NEON could increase perf/W under some circumstances, I suppose, but I doubt it's being used much or at all in the web browsing test here.

No one is saying that having more cores on the die is bad and no one is saying that it means a hit to battery life. What I'm saying is that your claims that having more cores running at higher clock speeds is IMPROVING battery life are unsubstantiated. And there's a huge difference between an N270 with one core + HT and a full on quad core chip. I think everyone here agrees that dual core ARM is a big win over single core in phones and tablets, but that doesn't mean quad core is going to be as big of a win. But I don't know how you think I'm implying "multicore is a waste."
 
I have a hard time thinking that quad core will lead to better battery life because in mobile computing, there are very few tasks that scale linearly very well. Perhaps the best case scenario is batch photo manipulation, but that's a pretty rare occurrence. Adding two more cores that will rarely be used explicitly by developers would be playing towards marketing ("ooh 4 is more than 2!!") rather than improving performance of all apps.

Because let's face it, we all know that the A7 is going to be a 2x Cortex-A15 system. A 1.5GHz 2x Cortex-A9 fits in nicely as an A6 chip. There are few, if any, developers or consumers who feel the iPad is "slow".

Perhaps, but you are missing the point: even if it delivers the same battery life, you will still have a smoother experience in general and have extra power on tap for games. Why would you not want 4 cores if it comes with no disadvantages? :rolleyes:

The fact that the only apples-to-apples comparison shows that battery life IMPROVED removes any argument against it IMHO.


Of course, what the consumer would want and what Apple decides to spend die space on are two different things.

However, with no Flash, no proper multitasking and, just like WP7, no fragmented buggy software, Apple could get away with just a dual core @ 1.5GHz.. and that is exactly what I think they will do.
 
Because lets face it, we all know that the A7 is going to be a 2xCortex-A15 system. A 1.5Ghz 2xCortex-A9 fits in nicely as an A6 chip. There are few, if any, developers or consumers who feel the iPad is "slow".
IMHO A7 will be at least a quad-core (four cores exposed to the OS). Just 2x Cortex-A15 in 2013 doesn't make sense for Apple if they stick with just one new SoC for all their iOS devices. It's gonna be at least 2x Cortex-A7 plus 2x Cortex-A15.
 
The fact that the only apples-to-apples comparison shows that battery life IMPROVED removes any argument against it IMHO.

Not all tests show that Tegra 3 delivers improved battery life compared to Tegra 2 though. Check these battery tests for example: http://tweakers.net/reviews/2363/7/...te-quadcore-tablet-scherm-camera-en-accu.html

Note: 'Accu' means battery in Dutch, and it's the last two graphs on that page you'll want to be looking at. In Tweakers.net's tests the ASUS T-Prime actually has the worst battery life of all the Android tablets they tested when browsing the web.
 
Not all tests show that Tegra 3 delivers improved battery life compared to Tegra 2 though. Check these battery tests for example: http://tweakers.net/reviews/2363/7/asus-transformer-prime-de-eerste-quadcore-tablet-scherm-camera-en-accu.html

Note: 'Accu' means battery in Dutch, and it's the last two graphs on that page you'll want to be looking at. In Tweakers.net's tests the ASUS T-Prime actually has the worst battery life of all the Android tablets they tested when browsing the web.


Well, fair enough. I only had Anand's review to base my assumptions on; I'm not familiar with the other tablets' specs/screen resolutions/OS versions, so I can't really comment. Very interesting if it's a fair comparison.

I just had a quick scout around and could only find examples of video playback loops, plus one with a web browsing test, which showed that the different power-saving modes played a big role (7hrs-11hrs), but no direct comparison like the AnandTech one, which uses the same manufacturer and same specs.. bar Tegra x.
As I understand it, the 'normal' setting has all 4 cores running @ 1.3GHz.. and I expected that setting to consume vastly more power than a Tegra 2.. but that is not what Anand surmised...

It's a red herring. If Anand really thinks that it's the additional cores that give the battery life benefit then he's just wrong. What he's really saying is that their mere presence isn't causing a battery life drop, which is what you'd expect since we know they can be individually power gated off

Well, correct me if I'm wrong, but that is not what he is saying; he uses the 'normal' mode in his comparisons, which I'm sure clocks all 4 cores @ 1.3GHz.
It's the other modes, such as 'power saving', that chop down clocks and disable cores.
This seems to be validated by the direct quote I provided; rightly or wrongly, that is what he said.

What I'm saying is that your claims that having more cores running at higher clock speeds is IMPROVING battery life are unsubstantiated. And there's a huge difference between an N270 with one core + HT and a full on quad core chip. I think everyone here agrees that dual core ARM is a big win over single core in phones and tablets, but that doesn't mean quad core is going to be as big of a win. But I don't know how you think I'm implying "multicore is a waste."

Well, for a start that was not my claim; that's what Anand claimed, and I based my Tegra 2 v Tegra 3 comparison on that.
No, my claim was that I believe having a 'multi-core' setup and spreading the load across more cores at lower clock speeds should lead to better power consumption and a smoother experience... I stand by that.
I explicitly stated that 4 A9s @ 1.3GHz should consume more power than 2 A9s @ 1GHz.. but if implemented properly, 4 cores should provide some benefit to battery and smoothness, and keep more power in reserve should you need it.

I have only used the Tegra 3 v Tegra 2 comparison because that is the only review I had of them by the same respected site, and also the only fair comparison of ANY 4-core v 2-core mobile face-off... it was Anand that stated 4 cores @ 1.3GHz beat Tegra 2.. I just quoted him.

I never thought Tegra 3 would be a success anyhow, as the A9s can't vary their clock speed in the same way that Scorpion and Krait can; that's why Tegra has that silly companion core.

Also note that the review was not done on the more multicore-friendly ICS, whether or not that would make a difference.
The true test comes with quad-core Krait v dual-core Krait on ICS... that's how the idea is supposed to be implemented.. and the one I think works best.

And there's a huge difference between an N270 with one core + HT and a full on quad core chip. I think everyone here agrees that dual core ARM is a big win over single core in phones and tablets

Yeah, I know; that's the example I was using to compare 4 threads v 2 threads and the noticeable benefit you would get.. also to counteract your assertion that multiple tabs open make little difference to performance as they are idle.. I state that on my netbook any more than 3-4 tabs open seizes things up a bit...

I agree with you that a dual-core A9 @ 1.6GHz beats an N270 @ 1.6GHz... funnily enough, that is also not what Anand implied in the Medfield preview either...

Besides, I have made my views clear: the Tegra 3 v 2 comparison is unfairly swayed in Tegra 2's favour, as it is clocked lower, with likely lower 2D & 3D clocks also. To make it a proper proof of concept you would have them all at the same clock speed, on ICS, and on the same mature process.
 
Not all tests show that Tegra 3 delivers improved battery life compared to Tegra 2 though. Check these battery tests for example: http://tweakers.net/reviews/2363/7/...te-quadcore-tablet-scherm-camera-en-accu.html

Note: 'Accu' means battery in Dutch, and it's the last two graphs on that page you'll want to be looking at. In Tweakers.net's tests the ASUS T-Prime actually has the worst battery life of all the Android tablets they tested when browsing the web.

If you want to use a Dutch website where the Tegra 3 does badly, then please also include this:

Asus kon ons geen verklaring geven voor de slechte accuprestaties tijdens webbrowsen en heeft beloofd ons een nieuw exemplaar te sturen. We zullen deze review later dan ook opvolgen met nieuwe accucijfers.

Translation: Asus could not give us an explanation for the poor battery performance during web browsing and has promised to send us a new unit (Asus Transformer Prime tablet). We will follow up this review later with new battery numbers.

---

There is a known problem with the review models that were sent to most reviewers: they have bad battery life, even worse than the Tegra 2. If you look at AnandTech's review, their battery numbers are from a new unit they got that fixed the problem. The problem was that the WiFi was constantly going haywire and draining the battery.

They were not the only reviewer that reported that WiFi problem.

Last page:

We hebben bij uitzondering geen subscore opgenomen voor accuduur. We willen onze tests herhalen op een nieuw testexemplaar alvorens conclusies te trekken.

Translation: As an exception, they have not included a subscore for battery life. They want to repeat their tests on a new test device before drawing conclusions.

-----

In other words, this article has not been updated. Funny that people look at the pictures but do not read the text.

That the iPad 2 is more power efficient is a known fact. But then again, we are looking at a dual core.

Here is the followup from Anandtech after he got a fixed tablet:

http://www.anandtech.com/show/5175/asus-transformer-prime-followup/

The shipping Prime does much better than the original tablet I reviewed a couple of weeks ago. It's clear that whatever was impacting WiFi performance also took its toll on battery life. What I suspected might be the case ended up being true: the implementation of Tegra 3 in the Transformer Prime delivers better battery life than Tegra 2 in the original Eee Pad Transformer. There are too many variables here for me to attribute the gains to NVIDIA's SoC alone, but seeing as how battery capacity hasn't changed it's likely that we do have Tegra 3 to thank for better battery life in the TF Prime.

That's the disadvantage of shipping a broken product out for testing. Once a review has been published, most reviewers do not update their review.
 
Perhaps, but you are missing the point, even if it delivers the same battery life, you will still have a smoother experience in general, and have extra power on tap for games, why would you not want 4 cores if it comes with no disadvantages?:rolleyes:

I'm not sure where you got the idea that 4 cores leads to a smoother base UI or application experience. That certainly doesn't hold true on any desktop platform (OSX, Windows, Linux, etc.) which would actually stress single and multicores.

Just like on desktop computers, 2 cores (or to a slightly lesser extent HT) lead to a significant increase in how smoothly an OS runs compared to a single core. More cores than that typically lead to imperceptible improvements upon that nebulous experience.

Unless you run heavily threaded (and hence heavy power consumption) applications then you will never notice the benefits of a 4 core versus 2 core CPU. And if you are pushing 4 cores enough for its impact to be noticeable, prepare for your battery life to drop like a stone. Quad core CPUs have been available in the consumer market for almost half a decade now. Despite that there is still very little advantage to having it outside of a few speciality applications and a game or two.

So I'm not sure how it's an automatic win in the mobile space where battery life is important and pushing 4 cores enough for the impact to be noticeable will make that drop like crazy.

Regards,
SB
 
The average is low (just scrolling around), but the peak is very high (loading the page), and that peak performance is definitely very important to the user experience (IMO).

An 800MHz dual A9 seems to be able to parse 99% of the web pages out there in under a second. While I don't have an iPad 2, I've used them extensively and I honestly cannot think of a time when I was not bottlenecked by the network, even on WiFi.

AFAIK, the ultra-low-cost smartphone solutions (think Broadcom with VGA video and integrated 3G RF to give you an idea of how damn cheap it's supposed to be) will be moving from single-core A9 to dual-core A7. I agree that's an improvement but I'm not convinced it's really "ideal" for web browsing. I'd put the bar a fair bit higher than that (1.5GHz 2xKrait or 2GHz 2xA9 would definitely hit it though :))

Isn't the A7 supposed to be fairly quick? Faster than the A8 and scaling to around ~1GHz? Two of those would seem sufficient and would pretty much match the iPad 2 today, which, as I've pointed out, does not lack in browser performance.

Probably a bigger argument against nVidia's claim is that synchronous tasks (the kinds that have any shot at being well balanced) will tend to want as much CPU as you can give them. So the only real way to get this better perf/W is at the deliberate sacrifice of perf, since users (or the OS) will prefer to run two cores as fast as they can run them when they need to be stressed.

I thought that was what the various power profiles were for. While they don't force the OS to downclock the CPUs permanently, they do profile the usage and anticipate total runtime. For tasks that would run for some sufficiently short amount of time, the CPUs won't be cranked up.

AMD's HSA proposal would also seem to be a walk in this direction. I think that in the future, intermediate firmware layers with live profiling will become more common.

french toast said:
Well, correct me if I'm wrong, but that is not what he is saying; he uses the 'normal' mode in his comparisons, which I'm sure clocks all 4 cores @ 1.3GHz.
It's the other modes, such as 'power saving', that chop down clocks and disable cores.
This seems to be validated by the direct quote I provided; rightly or wrongly, that is what he said.

That's not how mobile chips work. The OS will schedule (and clock) the processor depending on the workload and how heavy it is. It's not going to just run all 4 cores at 1.3GHz for the hell of it; the application being run has to actually need that much processing speed. Moreover, Android is actually somewhat intelligent about predicting how much work a thread will do over time and how much time it will need. It won't just load up a CPU full bore and then put it to sleep when finished.

The normal profile allows the CPU to get up to 1.3GHz. It doesn't mean that it's running at 1.3GHz all the time. The clock is adjusted dynamically based on workload.
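For illustration, the load-following behaviour described here can be sketched as a toy, ondemand-style frequency governor (the frequency table and threshold below are invented; real cpufreq governors use sampling windows, per-core policies, and chip-specific OPP tables):

```python
# Simplified "ondemand"-style DVFS governor: jump to the maximum frequency
# when utilization crosses an up-threshold, otherwise pick the lowest
# frequency step that still covers the measured demand.

FREQS_MHZ = [340, 475, 640, 760, 1000, 1300]  # hypothetical frequency steps

def pick_frequency(utilization, up_threshold=0.80):
    """Map a 0..1 utilization sample to a frequency step."""
    if utilization >= up_threshold:
        return FREQS_MHZ[-1]          # heavy load: race to the max clock
    demand = utilization * FREQS_MHZ[-1]
    for f in FREQS_MHZ:
        if f >= demand:               # smallest step covering the demand
            return f
    return FREQS_MHZ[-1]

print(pick_frequency(0.05))   # near-idle: lowest step
print(pick_frequency(0.45))   # moderate load: a mid step
print(pick_frequency(0.95))   # page-load burst: max clock
```

The point mirrors the post above: "allowing" 1.3GHz only means the top step exists; a light web-browsing workload spends most of its time at the low steps.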

french toast said:
No, my claim was that I believe having a 'multi-core' setup and spreading the load across more cores at lower clock speeds should lead to better power consumption and a smoother experience... I stand by that.
I explicitly stated that 4 A9s @ 1.3GHz should consume more power than 2 A9s @ 1GHz.. but if implemented properly, 4 cores should provide some benefit to battery and smoothness, and keep more power in reserve should you need it.

That point has been addressed. Very few consumer-level use cases, let alone mobile ones, will be able to spread their workload evenly amongst 4 threads; most can't even manage 2. Moreover, mobile browsers kill Flash instances that are in the background, so even your most heavy laptop use-case with one of the most resource-hogging runtimes out there won't carry over to mobile.

Yes, in the perfect world, spreading a workload evenly to 4x 500MHz A9's will be far better than running it on 2x 1GHz A9's. In a perfect world, I'd also have my own volcano lair and the requisite disproportionately hot lair chick.

And a unicorn.
 
If you perform exactly the same tasks (web page loads for example) on a more powerful CPU and a less powerful CPU, the more powerful CPU will finish each task sooner, and the cores will return to idle state sooner (and are shut down or put into low power state). Assuming the program is well threaded (and the CPU is the bottleneck), the quad core CPU should finish each task in around half the time (and go to low power state). Usually quad core CPUs do not consume twice as much energy as dual cores at same clocks, so we could conclude that the power efficiency is generally slightly better for quad cores (on highly threaded loads).

Of course, if you 100% stress both a dual core and a quad core for a long time, the quad will use considerably more energy, but it will also do more work (open more web pages, render a game at a higher frame rate, etc). If the multicore version of the program does considerably more work (heavy synchronization with spinlocks, for example) it will of course cost extra energy (and the gains in efficiency are lost).

Tom's Hardware actually did an article about energy efficiency a few years ago, comparing different Atoms to a Core 2. While the Atom has a considerably lower TDP, it takes much longer to complete the same tasks; in their tests it actually used a bit more energy to complete the benchmarks than the Core 2: http://www.tomshardware.com/reviews/dual-core-atom-330,2141-10.html. A more powerful CPU can clearly save energy in some cases. Of course, Tom's Hardware is comparing two different architectures, so this isn't exactly an apples-to-apples comparison.
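This "race to idle" arithmetic can be made concrete with a toy model (all numbers invented for illustration; real power does not scale exactly linearly with core count):

```python
# Race-to-idle toy model: a faster (quad-core) CPU draws more power while
# active but finishes the same work sooner, then drops to idle power.

def energy_mj(active_mw, active_s, idle_mw, window_s):
    """Energy (mJ) over a fixed window: active burst, then idle."""
    return active_mw * active_s + idle_mw * (window_s - active_s)

WINDOW, IDLE_MW = 10.0, 100   # 10 s window, hypothetical 100 mW idle floor

# Dual core: 2.0 W active for 8 s of work
dual = energy_mj(2000, 8.0, IDLE_MW, WINDOW)
# Quad core: ~1.7x the power (not 2x, as shared uncore is amortized),
# finishing the well-threaded task in about half the time
quad = energy_mj(3400, 4.0, IDLE_MW, WINDOW)

print(f"dual: {dual:.0f} mJ, quad: {quad:.0f} mJ")
```

Under these assumptions the quad completes the task on less total energy, which is the efficiency argument above; flip the assumptions (poor threading, sustained 100% load) and the advantage disappears.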
 