Samsung Exynos 5250 - production starting in Q2 2012

Thanks. Interesting numbers. This reaffirms that 800MHz was a pretty reasonable cap for Apple. Looks like the values are (very roughly) about 0.3W/core at 500MHz, 0.6W/core at 800MHz, 1W/core at 1GHz, 1.5W/core at 1.2GHz, and 2W/core at 1.4GHz. So the power curve definitely picks up a lot past 800MHz.
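To make the trend concrete, here is a minimal sketch (using only the rough per-core figures quoted above) that normalises each point to mW/MHz and shows how steeply the marginal cost of each frequency step rises past 800MHz:

```python
# Rough per-core figures quoted above: frequency (MHz) -> power (W).
power_per_core = {500: 0.3, 800: 0.6, 1000: 1.0, 1200: 1.5, 1400: 2.0}

points = sorted(power_per_core.items())
for (f0, p0), (f1, p1) in zip(points, points[1:]):
    avg = p1 / f1 * 1000                      # average mW per MHz at f1
    marginal = (p1 - p0) / (f1 - f0) * 1000   # cost of the last frequency step
    print(f"{f1} MHz: {avg:.2f} mW/MHz average, {marginal:.2f} mW/MHz marginal")
```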

I wonder how other SoCs fare..

Out of curiosity, what was your stress test doing?
 
Thanks. Interesting numbers. This reaffirms that 800MHz was a pretty reasonable cap for Apple. Looks like the values are (very roughly) about 0.3W/core at 500MHz, 0.6W/core at 800MHz, 1W/core at 1GHz, 1.5W/core at 1.2GHz, and 2W/core at 1.4GHz. So the power curve definitely picks up a lot past 800MHz.

I wonder how other SoCs fare..

Out of curiosity, what was your stress test doing?
I will have to rebuild my contraption to measure it on the 4412. Sadly it doesn't have a working coulomb counter in the fuel gauge chip.

The stress test does the following:
The CPU worker performs mathematical calculations ("prime crunching") in native code and verifies the results, while the RAM worker does heavy C memcpy operations in native code in a different thread.
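For illustration only, a minimal Python sketch of that structure might look like the following. The real workers run as native code; every name, limit, and buffer size here is an illustrative assumption, not the original implementation:

```python
# Simplified sketch of the described stress-test structure: one worker
# "prime crunching" and verifying results, another hammering memory with
# large copies in parallel.
import multiprocessing as mp

def cpu_worker(limit=200_000):
    while True:                                   # run until killed
        primes = []
        for n in range(2, limit):
            if all(n % p for p in primes if p * p <= n):
                primes.append(n)
        assert primes[:5] == [2, 3, 5, 7, 11]     # verify the results

def ram_worker(size=64 * 1024 * 1024):
    src, dst = bytearray(size), bytearray(size)
    while True:
        dst[:] = src                              # memcpy-style traffic

if __name__ == "__main__":
    workers = [mp.Process(target=cpu_worker), mp.Process(target=ram_worker)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()                                  # blocks until interrupted
```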
 

Given those numbers, what kind of power consumption could we expect while running a game or something? Fully loading both CPU cores at 1.2 GHz with the screen on would be roughly 4 watts. That would drain the battery in about 1.5 hours, and that's without even really using the GPU. Makes me glad most phones spend very little time running at full tilt.
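As a back-of-the-envelope check (the ~6Wh battery capacity is an assumed typical value for a phone of that era, not a figure from this thread):

```python
battery_wh = 6.0   # assumed ~1600mAh @ 3.7V phone battery, not from the thread
load_w = 4.0       # two cores at 1.2GHz plus screen, per the rough figure above
print(f"estimated runtime: {battery_wh / load_w:.1f} h")   # -> 1.5 h
```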
 
Games don't use nearly that much CPU power, so they'll never reach those power figures on the CPU. There are CPU+GPU synthetic stress tests, but I never measured those cases; you can imagine that they blow the power envelope even further. I can't overstate how much of a difference 32nm HKMG made over 45nm, at least on the Samsung side: all else being equal, it's basically half the power.
 

In the whitepaper I can see up to 3000 DMIPS (1.2GHz) for mobile devices and up to 8000 DMIPS (3.2GHz) for enterprise markets.

On the performance page they mention that under 40G it can go up to 4000 DMIPS (1.6GHz), though who has really used that one (apart from NV), and under 40LP they state 1+GHz. In fact the HTC One X+ quad A9 with T3x clocks at 1.7GHz.
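Normalising the quoted figures per MHz (a quick sanity check, using only the numbers above) shows they all assume the same per-clock throughput; only the target clock differs:

```python
# Quoted A15 figures, normalised to DMIPS per MHz.
quoted = {
    "mobile, 1.2GHz":     (3000, 1200),
    "40G, 1.6GHz":        (4000, 1600),
    "enterprise, 3.2GHz": (8000, 3200),
}
for label, (dmips, mhz) in quoted.items():
    print(f"{label}: {dmips / mhz:.1f} DMIPS/MHz")   # 2.5 in every case
```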

Yes, of course the whitepaper states over 2.0GHz, but that obviously isn't necessarily for the mobile markets. That said, I'd be very surprised if A15 doesn't clock at or above 2GHz on 28nm, which the majority of ARM's partners will use for it afaik. The Exynos 5250 hitting 1.7GHz on 32nm doesn't suggest that 2.5GHz won't be feasible on 28nm.
 
The 4-5W jump under load is really big. 1.7GHz isn't that high either, considering that ARM was touting 2.5GHz, so the voltage probably isn't that high (and why would Google bother trying to squeeze 10% more perf out of it anyway, when battery life is its biggest weakness?).
5.2W from idle to load in a Java benchmark is quite a lot. However, this is not the full picture (NEON pipelines are idling, not much memory traffic at all, branch/pipeline/cache stalls). If we instead ran a fully optimized (2+ average IPC) native NEON vector crunching algorithm that is equally compute and bandwidth bound (full memory controller traffic), I would expect at least 2W on top of those Java figures. That's already 7.2W (difference from idle to load). It beats Atom handily, but ULV Haswell's TDP is only 2.8W more. That's going to be an interesting battle... if we exclude the fact that ULV Haswells are likely going to cost $250+ a piece (just for the CPU).
 
Yes, of course the whitepaper states over 2.0GHz, but that obviously isn't necessarily for the mobile markets. That said, I'd be very surprised if A15 doesn't clock at or above 2GHz on 28nm, which the majority of ARM's partners will use for it afaik. The Exynos 5250 hitting 1.7GHz on 32nm doesn't suggest that 2.5GHz won't be feasible on 28nm.
The 5250's CPU driver is populated with frequencies up to 2.2GHz, if that means anything at all. There aren't any ASV voltages set, but usually they don't create new entries in the driver unless they plan on using them at some point.
 
The 5250's CPU driver is populated with frequencies up to 2.2GHz, if that means anything at all. There aren't any ASV voltages set, but usually they don't create new entries in the driver unless they plan on using them at some point.

It could also be a 32nm refresh.
 
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review

Anand has 5250 performance numbers up. It's interesting that, likely due to different browser optimizations, Cortex A15 and Apple's Swift have very different performance profiles, each being clearly slower or clearly faster than the other depending on the benchmark, rather than the Cortex A15 being consistently ahead as you would otherwise expect. The Mali-T604 really seems to shine in Egypt HD Offscreen, beyond what you would expect based on the feature test results.
 
Some of the benchmarks are strange though, especially with the Nexus 4.

Several of the reviewers have mentioned their review units were not running finished software. Did Anand get lucky with his units, or does he think there won't be any difference?
 
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review

Anand has 5250 performance numbers up. It's interesting that, likely due to different browser optimizations, Cortex A15 and Apple's Swift have very different performance profiles, each being clearly slower or clearly faster than the other depending on the benchmark, rather than the Cortex A15 being consistently ahead as you would otherwise expect. The Mali-T604 really seems to shine in Egypt HD Offscreen, beyond what you would expect based on the feature test results.

Such is the result of testing that's predominantly Javascript based using different JS engines even when targeting CPUs with the same ISA.
 
You know, call me crazy, but Atom really doesn't look all that bad for just how bad it is. :p I think my expectations for the Atom platform (in general) have substantially increased lately.
 
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review

Anand has 5250 performance numbers up. It's interesting that, likely due to different browser optimizations, Cortex A15 and Apple's Swift have very different performance profiles, each being clearly slower or clearly faster than the other depending on the benchmark, rather than the Cortex A15 being consistently ahead as you would otherwise expect. The Mali-T604 really seems to shine in Egypt HD Offscreen, beyond what you would expect based on the feature test results.
The Android CPU browser benchmarks are basically useless because of the Javascript discrepancy; on my 4412 fixed at 1.7GHz on one core I'm getting 974ms in SunSpider, 167k in Browsermark, 2048 in Octane and 15646ms in Kraken. None of the benchmarks scale at all beyond a single core.

So basically we have a bunch of severely browser limited benchmarks running on what seems to be a broken Javascript engine. I have no other explanation for the horrendous performance on both the Nexus 10 and Nexus 4; Google might have screwed up 4.2. The ChromeOS scores from Anand basically take a crap over the Android ones in this case.
 
Either way, you said that Clovertrail tablets won't use more than 4W under full load, after first saying they'll use 2W in a tablet.
I've always said 2W idle, 4W load. Not sure why you think anything else.
Let's try this again: the presented Medfield phone uses 3.25W under full load. What you're basically saying is that you can take that, and add:

1) A second core
2) A multicore GPU with a base core that's stronger
3) A higher base speed
4) A much bigger, and in many cases higher resolution screen

And you think all of that will take less than 750mW? The second core alone will take at least that...
Why do you think Intel is lying about the TDP, or that 1.8GHz (Clovertrail) vs 2.0GHz (Medfield) is less relevant than base speed for determining load power consumption? We've seen plenty of processors in the past where adding the second core doesn't add a lot to power consumption due to various power management and throttling techniques (or, equivalently, reduced achieved turbo).
I have no idea what your expectations were based on.
I just gave you two links explaining my expectation, but whatever. If you don't think that's slow, fine.
TDP is supposed to be an honest number since the design relies on it, but the trick Intel pulls with their Atom SoCs is that they're allowed to throttle to stay within safe thermal ranges. The reason this is a trick is that the throttling is based on reading temperature, not power, meaning that if you design the cooling to a higher specification than the TDP requires, it'll hit higher performance margins.
It's not that simple because you have to design power circuitry to handle a certain power limit as well.
The only things that use more power on the N2600 are the memory controller supporting DDR3 instead of LPDDR2, and the DMI link to connect to an NM10.
You can't say that with any certainty. There's a whole bunch of changes in the silicon that can happen without changing outwardly visible specs. Everyone knows Intel put an intentionally half-assed effort into Atom before Clovertrail. It was made just good enough to keep others from grabbing the netbook market, but if they had released a 2W Clovertrail back then, 6-cell netbooks with 15-hour battery life would have been somewhat attractive alternatives to high-margin ULV CPUs (where Intel had no competition). There's no such motivation for the Exynos 5250.

BTW, some Nexus 10 data is now here:
http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review
The scores are down from the Chromebook review. It doesn't seem to make sense for much of that to be software, as Google would want to put its best foot forward for reviews.
(EDIT: nevermind, already posted.)
 
You know, call me crazy, but Atom really doesn't look all that bad for just how bad it is. :p I think my expectations for the Atom platform (in general) have substantially increased lately.

Maybe it's bad, but other stuff (including a lot of ARM stuff) is just worse? For some metric.

The recent numbers from Anand's reviews and the chart Nebuchadnezzar posted are the first time I've seen anyone investigate peak power usage on mobile SoCs. Normally one looks at power consumption under idle, light or specialized loads (web browsing and video playback respectively), and performance under heavy loads (for one thread anyway... and unfortunately almost always Javascript).

And I'm sure this is how the industry wants it. There's certainly merit to the view that the CPU capabilities of phones are mostly only needed in bursts; at least in web browsing scenarios this is the case, and many people appear to view phones as little more than web browsers (despite Apple's massive "there's an app for that" marketing). Here Intel can especially capitalize with its turbo boost technology, and also because its cores are at the moment doing very well in Javascript vs ARM.

But there are of course cases where consistently moderate or high CPU load matters too. Some that I'm more interested in, like emulators, are pretty niche, admittedly. But at least some games will tax the CPU a lot more than web browsing does. I wonder if there aren't more tasks that could be done but aren't, either because people don't want to push for those sorts of workloads or because the form factor naturally inhibits it.

Nonetheless, I'm pretty sure mobile SoCs are not just optimizing for idle/low CPU load, they're doing so at some expense to peak perf/W. At the very least, going with leakage-optimized processes raises the active voltages. I know Intel is using such a process for Medfield (and has been for the Z series in the past, AFAIK), but I don't know if they were using it for the old D variant 45nm Atoms.

What I'm interested in is whether the first big.LITTLE A15 + A7 SoCs will follow the kind of specialization nVidia applied in Tegra 3. If Samsung's 28nm Exynos 5 is designed this way, it might be able to hit peak CPU utilization without nearly as much of a hit to power consumption.
 
I have no other explanation for the horrendous performance on both the Nexus 10 and Nexus 4
Well, we don't know the actual clock speeds while running the test, do we? The Nexus 4 is said to get hot, and the Chromebook with the same chip as the Nexus 10 had high power consumption, so there could easily be some throttling.

I mean what's the logical explanation for Google putting an inferior Javascript engine in 4.2?
 
And I'm sure this is how the industry wants it. There's certainly merit to the view that the CPU capabilities of phones are mostly only needed in bursts
Yup, I agree. Not a lot of sense emphasizing battery life in unusual usage cases, at least for reviews.

Nonetheless, I'm pretty sure mobile SoCs are not just optimizing for idle/low CPU load, they're doing so at some expense to peak perf/W.
Okay, that's a good point and explains the Chromebook's load consumption, but I'm still not convinced that A15 is much better than Clovertrail under a ~2W SoC thermal envelope.
 
Anand was using Chrome for Android, not the stock browser. Read this comment left on the preview:

It seems Google hasn't changed the V8 engine in Chrome for Android pretty much since they officially launched it. Not to mention Chrome for Android just got upgraded to version 18, when we're supposed to get version 23 in a week or two on the desktop. Incredible. The Chrome browser is close to a year behind in Javascript improvements compared to Apple and its Safari for iOS which probably got upgraded just before launching iOS6.

It's very disappointing to see the very same chip score twice as slow in Chrome, because Google can't be bothered to treat it as seriously as they do on the desktop.

Much better explanation than believing that a tablet (not a phone, a tablet) is thermally limited to running the Cortex-A15 at something like 800MHz peak.

The state of benchmarking on mobile platforms sucks because Javascript sucks (from a performance perspective). The JS engine coders are trying their hardest but it's an uphill battle and the performance is still grossly behind what you get in other languages.
 
Solid scores for the T604, but the 554MP4 is in a different league entirely.

Looking forward, the Mali roadmap doesn't look terribly promising either. The best they have to offer is the T678 in an MP8 configuration. That will have 4x the GFLOPS of the T604 (at the same clocks), but texture throughput, pixel fillrate etc. will only be doubled, which is worrying because PowerVR is already there on those metrics.
It seems to me they are focusing too much on shader performance when a more balanced approach would have been better. They will really struggle against Rogue...

Anyone else share the same concerns?
 
Well, we don't know the actual clock speeds while running the test, do we? The Nexus 4 is said to get hot, and the Chromebook with the same chip as the Nexus 10 had high power consumption, so there could easily be some throttling.

I mean what's the logical explanation for Google putting an inferior Javascript engine in 4.2?
I don't know. It's as if the devices are stuck in power-saving mode. Sadly, given the current incompetence of the technology websites out there, we won't have anybody able to investigate that until users actually have the devices in their hands. If thermal throttling were triggering, we'd see much, much worse scores than these, and thermal throttling wouldn't trigger on single-core benchmarks at all.

Engadget posted some CF-Bench and Antutu benchmarks, and at least for the Nexus 10, they seem perfectly fine.
 