Samsung Exynos 5250 - production starting in Q2 2012

Fastest of all GPUs at the older Egypt HD workload. Falls behind the 550 MHz Adreno 330 and the A7 Rogue in T-Rex, Basemark X, and the graphics tests of 3DMark.

CPU looks tops for Android land, though.
 
Apparently, when the whole GFXBench suite was run in one go, the processors were throttling by the time it got to the Egypt HD offscreen test.
 
This won't really be a problem; it seems to be the design these mobile chip makers have gone for. My 5410 behaves much as you describe in that kind of test suite.
Much like Intel Turbo Boost, the mobile chips have a TDP based on the chip's power draw and heat generation. They seem to be designed with the full TDP in mind, but built to drop clocks in sustained full-load scenarios where running at full speed would push past the TDP that's been determined to be safe for the chip. The manufacturers know this and build in throttles that keep power draw and heat within the determined safe ranges. That's where it is just like Intel's Turbo Boost: an Intel CPU, laptop or desktop, drops its maximum full-load clock based on the wattage and current it's drawing at the time. And since heat pushes power draw up, as temperatures rise we see more MHz shaved off for the same load, because a given clock starts drawing more power than it did before the die heated up.
This may seem like a huge performance drop for a mobile chipset, but as I think Andrei has pointed out, it's better, and required, to keep these extreme high-end chips stable. We have no active cooling and no proper heatsink; any cooling, even if there is a heat spreader attached to the SoC, is passive. On a laptop, the Windows power options let you select passive or active cooling: active means the fan spins up to lower temps before the clock is reduced, passive means that as temps rise the clock is lowered to keep temperatures down instead. Since a phone has no active cooling, that passive method is, for now, the best option.
The chip shouldn't throttle in as many real-world scenarios as it did in that bench suite.
So, just like Intel pushing clocks up until TDP limits are hit, Samsung, Qualcomm and the rest are attempting the same thing. If they weren't, they'd be clocked at a maximum that could never reach an overheating scenario, and that's clearly not what any of them are aiming for.
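To make the passive-cooling idea concrete, here's a minimal sketch of the kind of control loop such a passive policy boils down to. It isn't any vendor's actual governor; the frequency steps and the trip/clear temperatures are made-up values for illustration only.

```python
# Toy passive thermal governor: purely illustrative, with made-up numbers.
# Real governors (e.g. the Linux thermal framework) are far more involved.

FREQ_STEPS_MHZ = [1900, 1700, 1500, 1300, 1100, 900]  # available clock steps (hypothetical)
TRIP_TEMP_C = 90.0    # throttle above this (made-up trip point)
CLEAR_TEMP_C = 80.0   # allow stepping back up below this

def next_frequency(current_mhz, temp_c):
    """Pick the next CPU clock based on the reported die temperature."""
    idx = FREQ_STEPS_MHZ.index(current_mhz)
    if temp_c > TRIP_TEMP_C and idx < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[idx + 1]   # too hot: step down one notch
    if temp_c < CLEAR_TEMP_C and idx > 0:
        return FREQ_STEPS_MHZ[idx - 1]   # headroom again: step back up
    return current_mhz                   # otherwise hold the current clock

# A long benchmark run that keeps the die hot walks the clock down step by
# step, and it only recovers once the temperature falls back under the limit.
freq = FREQ_STEPS_MHZ[0]
for temp in (70, 85, 95, 96, 94, 88, 78, 75):
    freq = next_frequency(freq, temp)
    print(f"{temp:>3} °C -> {freq} MHz")
```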
 
It would be useful to know what clock speeds the CPU and GPU are running at before and after throttling. Intel's Turbo is supposed to be an opportunistic overclock where you get additional clock speed if the heat and power conditions permit it. When Intel claims an x GHz processor rated at y TDP, the processor can in fact sustain full x GHz for long durations at that TDP. Any Turbo is extra and is a separate spec.

Is this the same with the Note 3 or other mobile devices? Specifically, when they claim a 1.9GHz quad core CPU inside the Note 3, can the Note 3 actually sustain that 1.9GHz for long durations, with any additional Turbo states being a bonus? Or are they actually claiming Turbo clock speeds that can't be sustained as their base marketing claim? Perhaps it makes some sense to claim an x GHz CPU clock speed sustainable for, say, 1 minute alongside a moderate GPU load, since that should cover many common use cases. But without a standardized set of reasonable conditions, and with marketing competition, it won't be long before companies start claiming impressive-sounding clock speeds that can only be achieved under very limited conditions that are not disclosed.
 
All of today's SoCs are basically claiming Turbo clocks. Those 1.9GHz are not sustainable and they simply throttle down with elevated temperature.

Qualcomm is a bit notorious here since they openly admit this; http://www.fudzilla.com/home/item/31532-qualcomm-aims-at-25-to-3w-tdp-for-phones

The S800, for example, throttles down at a default of 65°C if I'm not mistaken. Samsung lets their chips go a bit higher, to 90°C, but those temperatures are also eventually reached.
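To put rough numbers on why a 1.9GHz headline clock can't be a sustained spec inside that kind of envelope, here is a back-of-the-envelope sketch. The operating points, voltages and effective capacitance are invented, dynamic power is approximated as C·V²·f and leakage is ignored; only the ~2.5-3 W phone budget echoes the link above, so treat it as an illustration of the argument rather than characterisation data.

```python
# Back-of-the-envelope look at "base vs. turbo" clocks under a phone power budget.
# All operating points, voltages and the effective capacitance are invented;
# dynamic power is approximated as P ≈ C_eff * V^2 * f and leakage is ignored.

OPP_TABLE = [(1.0, 0.90), (1.3, 0.95), (1.6, 1.05), (1.9, 1.15)]  # (GHz, V), hypothetical
C_EFF_NF = 1.8  # effective switched capacitance in nF (made up)

def power_w(freq_ghz, volt):
    return C_EFF_NF * volt ** 2 * freq_ghz  # nF * V^2 * GHz works out to watts

def max_sustainable(budget_w):
    """Highest operating point whose steady-state power fits inside the budget."""
    fits = [opp for opp in OPP_TABLE if power_w(*opp) <= budget_w]
    return max(fits) if fits else OPP_TABLE[0]

# Inside a ~2.5-3 W budget only a mid-range clock is sustainable; the 1.9 GHz
# headline figure then behaves like a turbo state used while headroom lasts.
for budget in (2.5, 3.0, 4.5):
    f, v = max_sustainable(budget)
    print(f"budget {budget} W -> sustainable {f} GHz at {v} V ({power_w(f, v):.2f} W)")
```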
 
Exactly, and when I was on a Galaxy S3 with the Qualcomm chip it throttled at 55°C; on the GS2 with the Exynos it's 65°C. So we're seeing the same methodology they used a few years ago in today's new chips. The GS2 would throttle to 800MHz and the Qualcomm GS3 to 918MHz, I think. No different than the S800 likely dropping to 1200MHz, the 5410 to 1200MHz on the A7s (600 half clock), or the 5420 being reported to throttle to 1300MHz on the A7s (650 half clock) during a long bench suite like GLBenchmark.
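For what it's worth, on a rooted device you don't have to rely on memory for these trip temperatures: kernels that use the standard Linux thermal framework expose them under /sys/class/thermal. Zone names, units and permissions vary a lot between devices and vendor kernels, so take this as a starting point rather than something guaranteed to work everywhere:

```python
# Dump thermal zones and their trip points via the standard Linux thermal sysfs.
# Availability, zone names and permissions vary by device and vendor kernel.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
    zone_type = read(os.path.join(zone, "type"))
    temp = read(os.path.join(zone, "temp"))  # usually millidegrees Celsius
    print(f"{os.path.basename(zone)} ({zone_type}): temp={temp}")

    for trip in sorted(glob.glob(os.path.join(zone, "trip_point_*_temp"))):
        idx = os.path.basename(trip).split("_")[2]
        trip_temp = read(trip)
        trip_kind = read(os.path.join(zone, f"trip_point_{idx}_type"))  # passive/critical/...
        print(f"  trip {idx}: {trip_temp} ({trip_kind})")
```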
 
All of today's SoCs are basically claiming Turbo clocks. Those 1.9GHz are not sustainable and they simply throttle down with elevated temperature.

Although we don't know what is going on with the A7, and Apple, in fairness, doesn't generally quote a stated speed, I think Apple SoCs are designed to run at a set clock. Are there any known thermal limiters (that are typically hit) on Apple SoCs?
 
Well, I remember my GT-I9300 throttled at 80°C.

Cool, yeah, that would be the Exynos GS3, which is why it differed from my post above about the Qualcomm LTE GS3. Clearly the two companies have very different throttle points, but essentially they're after the same thing: lower overall temps and, well, power draw that keeps increasing for the same MHz as temps rise. Interesting, so that means we can now lay it out: Exynos went GS2 65°C, GS3 80°C, GS4 90°C, whereas the Qualcomm GS3 was 55°C and the GS4 65°C.

Samsung either needs a higher throttle temp to maintain similar performance, or they're simply fine chip-wise with a higher throttle temp. I'd say it's likely a combination of both. I do know the 5410 I have put through the wringer has sustained long-term loads over 100°C with a custom throttle setup; as long as the clock isn't raised too high, it stays within what the battery can supply. Towards 1800MHz+ it starts to draw far too much to run off the battery in the i9500 under some of the heaviest bench workloads on the market right now. Getting to my point: I have seen better thermal handling on the latest Exynos 5 chipset than on any previous Qualcomm offering. My Qualcomm GS3 and a friend's Qualcomm GS4 destabilize at high clocks, or even the stock 1890MHz, above 90°C, which has not been true on this 5410 unless I go beyond what the battery can supply.
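A toy model of the "same MHz draws more power as temps rise" effect, since that's what eventually trips either the thermal throttle or the battery's current limit: dynamic power at a fixed clock and voltage stays roughly constant, while leakage grows strongly with temperature. Every constant below is invented; real leakage behaviour is characterised per chip.

```python
# Toy model of why the same clock draws more power as the die heats up.
# Every constant is invented; real leakage behaviour is characterised per chip.

C_EFF_NF = 1.8        # effective switched capacitance, nF (made up)
VOLT = 1.1            # supply voltage at the chosen operating point, V (made up)
FREQ_GHZ = 1.8        # the fixed clock we hold while the die heats up
LEAK_25C_W = 0.4      # leakage power at 25 °C (made up)
LEAK_DOUBLING_C = 30  # leakage doubles roughly every 30 °C in this toy model

def total_power_w(temp_c):
    dynamic = C_EFF_NF * VOLT ** 2 * FREQ_GHZ               # ~constant vs. temperature
    leakage = LEAK_25C_W * 2 ** ((temp_c - 25) / LEAK_DOUBLING_C)
    return dynamic + leakage

# Same 1.8 GHz load, hotter die -> noticeably higher draw, which is what
# eventually trips either the thermal throttle or the battery's current limit.
for t in (40, 60, 80, 100):
    print(f"{t} °C: {total_power_w(t):.2f} W")
```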
 
Apple SoCs would of course scale voltage and frequency to match the demands of the workload, and they'd obviously employ power management to warn the OS to take action if internal temperatures got too high.

The presumed difference from some of the recent high-end Qualcomm, nVidia, and Samsung SoCs would be that Apple would keep the limits within the bound before which you'd likely experience noticeable performance degradation from thermal throttling when running through a few benchmarks or, more to the point, playing an extended session of high-end gaming.
 

Which was the point I was making. Apple can design their SoCs knowing exactly which enclosures and form factors they will go into. As a result, I understand that fundamentally their clock speed isn't a 'turbo' but is mostly sustainable, except perhaps in warm external environments.
 
Who cares? As long as the migration works reasonably well, why do we need eight cores running simultaneously on a mobile device (I doubt many people can even make use of eight cores on a desktop...)? Am I missing something (power improvements?)?
 
They currently don't have any migration; it's still stupid cluster switching. HMP is still more power efficient than IKS because scheduler switching has much lower latency and finer granularity than the DVFS driver.

I don't get the logic here at all: you don't go HMP solely to have 8 simultaneous threads, but to use the cores in the most power-efficient way. They could just cap power dissipation if that were the problem.

So if true, they've pretty much had two chances to showcase big.LITTLE this year and failed to do so on both occasions. Pretty outrageous, if not sad.

If this new chip they comment about is the 5412, then that'd be pretty embarrassing for ARM.
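For anyone unfamiliar with the distinction being argued here, a highly simplified sketch of the granularity gap between cluster switching and HMP follows (IKS refines cluster switching by flipping each big/LITTLE pair individually, but the decision still comes from the DVFS path rather than the scheduler). The task loads and the 0.4 "heavy" threshold are arbitrary:

```python
# Highly simplified contrast of cluster switching vs. HMP task placement.
# Task loads and the 0.4 "heavy" threshold are arbitrary; the real decisions
# live in the cpufreq/DVFS driver and in the scheduler respectively.

tasks = {"ui_thread": 0.9, "audio": 0.1, "sync": 0.15, "game_logic": 0.8}

def cluster_switching(tasks):
    """One decision for everything, driven by the DVFS path: if anything is
    heavy, every task ends up running on the big (A15) cluster."""
    cluster = "A15" if max(tasks.values()) > 0.4 else "A7"
    return {name: cluster for name in tasks}

def hmp(tasks):
    """The scheduler places each task individually, so light tasks stay on
    the power-efficient A7 cores while heavy ones get the A15s."""
    return {name: ("A15" if load > 0.4 else "A7") for name, load in tasks.items()}

print("cluster switching:", cluster_switching(tasks))
print("HMP placement:    ", hmp(tasks))
```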
 
HMP is still more power efficient than IKS because scheduler switching has much lower latency and finer granularity than the DVFS driver.

Has this been demonstrated in practice (I'm not disagreeing, I'm curious)? Meaning, is it really worth the extra software overhead? If there's no "practical/real-world benefit", perhaps Samsung should have first tried to implement the "easier solution".
 