Samsung Exynos 5250 - production starting in Q2 2012

  • Thread starter Deleted member 13524
Driving the temperature up to throttling limits is also going to make the power consumption even worse. That could make the power consumption of all the CPUs at 1.6GHz more than four times that of one core at 1.6GHz, and would therefore be a bad proxy.

Nebuchadnezzar: do you have a list of voltages corresponding to the power values you gave earlier?
https://github.com/AndreiLux/Perseu...mach-exynos/include/mach/asv-exynos5410.h#L56

Depends on the ASV group of the chip you have; I have group 6, so 1162500µV at 1600MHz.
 
So 800MHz to 1.6GHz uses a 29.2% higher voltage and a 100% higher clock speed. That should mean a ~3.34x increase in power consumption, but the listing is 4.8x. I wonder if those figures are for the worst-case ASV, or if the power consumption really does go that far past the curve (increasing temperatures could do it... although to be honest I don't know how well the ideal V² × f scaling holds in real life, even before considering the contribution of static power, which I'm ignoring here). 1.2GHz vs 800MHz gives 2.17x power consumption vs an expected 1.95x, so that at least seems closer.

Please tell me if I'm looking at this all wrong >_>
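For reference, the expected ratios above can be reproduced with a quick script. This assumes pure V² × f scaling (no leakage) and the group-6 voltages discussed here, with ~900mV at 800MHz being implied by the 29.2% figure — a sketch, not measured data:

```python
def dyn_power_ratio(v1, f1, v2, f2):
    """Ratio of dynamic (switching) power: P ~ C * V^2 * f, with C assumed constant."""
    return (v2 / v1) ** 2 * (f2 / f1)

# ~900 mV @ 800 MHz vs 1162.5 mV @ 1600 MHz (ASV group 6)
print(round(dyn_power_ratio(0.900, 800, 1.1625, 1600), 2))  # ~3.34
```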
 
V² × f is generally just the gate switching power; for the complete power consumption, the whole formula is roughly

P_total = α·C·V²·f + V·I_leak

i.e. dynamic switching power plus static leakage power, the latter rising steeply with temperature.

Samsung uses body biasing too and, although I didn't check how it's done on the 5410, they usually raise the body-vs-gate voltage ratio with rising frequency, so that also comes into play.
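To illustrate how a leakage term that grows at the hotter, higher-voltage operating point can push a measured ratio past the pure V² × f prediction — all numbers below are made up for illustration, not measurements of any Exynos chip:

```python
ALPHA_C = 1.0e-9  # hypothetical effective switched capacitance * activity factor (F)

def total_power(v, f_hz, i_leak):
    dynamic = ALPHA_C * v**2 * f_hz  # gate switching power, ~ V^2 * f
    static = v * i_leak              # leakage power; i_leak rises steeply with temperature
    return dynamic + static

low  = total_power(0.900, 800e6, 0.05)   # cool, low-voltage operating point
high = total_power(1.1625, 1600e6, 0.9)  # hot, high-voltage operating point
pure = (1.1625 / 0.900) ** 2 * 2         # ~3.34x from V^2 * f alone
print(round(high / low, 2), "vs", round(pure, 2))  # 4.63 vs 3.34
```

With leakage growing at the hot end, the measured ratio overshoots the ideal curve, which is one way the listed 4.8x could come about.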

Another thing I'd like to mention is the cluster migration fiasco. I've been talking to some Linaro guys and another person who I think has some info on the matter: it seems there are some inherent problems with Samsung's implementation of the CCI that simply cannot be solved, so core migration and HMP seem to be dead.
 
Do you have any links to more discussion about cluster coherency problems?

I'm very disappointed to hear that Samsung and/or ARM bungled something so fundamental. Doesn't it work okay on ARM's test chip?

I hope they at least fix it in a later revision.

Sad to see that Linus's ARM prejudice was proven right :( (that'd be this http://www.realworldtech.com/forum/?threadid=133089&curpostid=133428)

Enough people have been skeptical of the technology from day one; if it's basically broken at the outset, that's a huge blow that could be hard for ARM to properly recover from.
 
Samsung fucked it up. The discussions are private.

And Linus is wrong with those last statements, ARM's own TC2 platform works perfectly. But he's basically right here regarding the Exynos: http://www.realworldtech.com/forum/?threadid=133089&curpostid=133421

(I found it funny that I'm being discussed in that mailing-list :D )
 
Last edited by a moderator:
How long does the big hammer cluster migration actually take? There seems to be some talk about "user-visible delays" so is this taking more than 1/100s or so?
And I'm curious, how did Samsung manage to screw this up? I thought ever since L2 cache was no longer in some external macro it would actually take some effort to make it worse :).
 
You can't notice the transition. During normal use the CPU rarely migrates off the A7 cluster; the only instances are when opening apps or initially loading a webpage. Their CCI is broken.
 
Their? Aren't they using ARM CCI-400?
Samsung's SoC implementation of ARM's CCI-400 is broken.

The CCI is basically the internal bus connecting all IP blocks in the system, so it's something that is basically custom for every SoC out there. From what I've been told it's severely crippled as you cannot do any operations on it (accessing its registers).
 
Errr I discovered something regarding both 5250 and 5410.

Per CPU ID the 5250 is a 410fc0f4, meaning r0p4, but per the ARM reference manual that revision doesn't exist: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438h/CACFAACF.html

The 5410 again is an r2p3, but again, per the manual, that doesn't exist.

The funny thing is that power management has supposedly only been added since the r3p0 variant; previous variants don't have WFI or WFE, per http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438h/CJHHAIIJ.html ?!

Is Samsung using non-official core IP? I can't imagine the 5250 not having clock-gating, and surely not the 5410? I actually checked the code where they enable it in the power control register for both, and it's there.
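Those "rNpM" revision strings come straight out of the Main ID Register (MIDR); as a sanity check, here's how the fields of 410fc0f4 break down, with the bit layout per the ARM Architecture Reference Manual (0xC0F being the Cortex-A15 part number):

```python
def decode_midr(midr):
    """Split an ARM MIDR value into implementer, part number, and revision string."""
    implementer = (midr >> 24) & 0xFF  # 0x41 = ASCII 'A' = ARM Ltd.
    variant     = (midr >> 20) & 0xF   # major revision, the "rN"
    part        = (midr >> 4) & 0xFFF  # part number, 0xC0F = Cortex-A15
    revision    = midr & 0xF           # minor revision, the "pN"
    return hex(implementer), hex(part), f"r{variant}p{revision}"

print(decode_midr(0x410FC0F4))  # ('0x41', '0xc0f', 'r0p4')
```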
 
Samsung's SoC implementation of ARM's CCI-400 is broken.

The CCI is basically the internal bus connecting all IP blocks in the system, so it's something that is basically custom for every SoC out there. From what I've been told it's severely crippled as you cannot do any operations on it (accessing its registers).
The CCI doesn't connect all internal blocks, it wouldn't make sense. As far as I know design time configurability is limited to the number of master and slave ports.

I wouldn't call what you describe for register hardware access as crippled, but as completely buggy, and I wonder how they achieved that...
 
Apparently the GS4 Mini has been announced and they say it's a dual-core Exynos. AFAIK, the 4210 is too old and discontinued, and the 5250 is too much of a power hog for a 4.3" phone.
Supposedly there's also a 32nm 4212 with a 400MHz Mali 400... but it's a chip from 2011 and only went into a Meizu phone.

Any suggestion on what it might be?
 
Supposedly a 5210 with a dual big.LITTLE arrangement. Other than that rumour, no idea about the chip.
 
So there's a third Exynos 5 in the works?
Unless they skimp too much on the GPU, the thing should fly on a 540p screen.
 
Instead of canning project after project they might be better off after all just using 3rd party SoCs.
 

But don't they have a cost advantage if they eat their own dog food?

I saw the headline of a WSJ article about why Samsung is doing well and it did seem like their supply-chain advantage -- components from other Samsung divisions -- was a big part of it.
 
Where's the cost advantage if you pour resources into R&D and start cancelling project after project?
 

And lose mindshare because your solutions are technologically inferior.

Samsung's implementation of big.LITTLE is outright broken going by Ned's investigations, but even if it worked as intended, it isn't certain it would provide the projected power savings/performance gains.

Exynos 5 series might very well end up being a performance regression, where everything runs on the A7 cores most of the time.

Cheers
 
Aren't we a bit overly dramatic about Samsung's SoCs? Also, I'm not aware of any cancelling of plans from Samsung, but to be honest, that's mainly because I don't know what their plans are or were.
 
The Exynos 5410 is practically just as good as the top-binned Snapdragon 600 in GS4, performance and battery-wise:

http://www.gsmarena.com/samsung_galaxy_s4_i9500_vs_i9505-review-930.php

The only discernible difference seems to be in talk time, but that probably has more to do with the external baseband processor and drivers than the SoC.

I wouldn't call the Exynos 5410 a failure, it's just that the S600 turned out a lot better than most were expecting.

Honestly, I doubt that being able to use all 8 CPUs at once (which would also require a kernel update) would change anything practical at all.
 