Samsung Exynos 5250 - production starting in Q2 2012

I still don't really understand why problems with the Exynos Octa SoC would lead Samsung Mobile to use it only some of the time. Are there different branches for different regions with different political influences? Or did they have some kind of minimum purchase obligation to uphold?



Could you link? If you post there as Nebuchadnezzar I can't find it, because you have a standard user title and I can't find a member list :(

At least in the US, several carriers prefer to use Qualcomm basebands. For CDMA carriers, it's practically required. Plus, I always considered it to be a yield thing, since the Note in the US has almost always been Exynos, with slightly higher clocks.
 
At least in the US, several carriers prefer to use Qualcomm basebands. For CDMA carriers, it's practically required. Plus, I always considered it to be a yield thing, since the Note in the US has almost always been Exynos, with slightly higher clocks.

Yes, that is the common sense explanation for why Galaxy S4 doesn't use Octa in most regions. But a lot of people give other reasons instead. Here's what I hear:

- Samsung can't make enough Exynos 5 Octa SoCs because they have capacity problems, possibly due to poor yield
- Samsung Mobile doesn't like Samsung LSI and doesn't want to use their SoCs.
- Exynos 5 Octa uses too much power, often coupled with the claim that Cortex-A15 uses too much power, or just "Cortex-A15 sucks".
- Exynos 5 Octa is too broken.

The first two could explain why Octa is used only some of the time. But the other two make no sense. If Octa has problems that make it unsuitable for Galaxy S4 then it should have never been used at all, not merely some of the time. Unless Samsung is trying to save face while minimizing financial impact.
 
Yes, that is the common sense explanation for why Galaxy S4 doesn't use Octa in most regions. But a lot of people give other reasons instead. Here's what I hear:

- Samsung can't make enough Exynos 5 Octa SoCs because they have capacity problems, possibly due to poor yield
- Samsung Mobile doesn't like Samsung LSI and doesn't want to use their SoCs.
- Exynos 5 Octa uses too much power, often coupled with the claim that Cortex-A15 uses too much power, or just "Cortex-A15 sucks".
- Exynos 5 Octa is too broken.

The first two could explain why Octa is used only some of the time. But the other two make no sense. If Octa has problems that make it unsuitable for Galaxy S4 then it should have never been used at all, not merely some of the time. Unless Samsung is trying to save face while minimizing financial impact.
Another popular reason thrown around, tied to reason 3, is that the Octa together with an LTE modem consumes too much power. The difference in heat generation between 2G/3G/4G on the i9505 is apparently quite noticeable. This was actually the first explanation coming from Korean sources back in February or so.
 
- Samsung Mobile doesn't like Samsung LSI and doesn't want to use their SoCs.

Probably hair splitting, but I doubt it's anything "emotional" between the departments; they just seem to have a damn hard time agreeing on several issues, and not just recently. That's hardly uncommon within any company; it just became more "apparent", or let's call it a "hot topic" for Samsung, now that the shit has hit the fan with the Galaxy S4 and co. mess.

There was a news blurb in the press about Samsung developing its own GPU, by the way, even with some sparse performance characteristics. If someone wants to dig deeper, that might be a good starting point to find out how deep the whole enchilada goes.
 
Probably hair splitting, but I doubt it's anything "emotional" between the departments; they just seem to have a damn hard time agreeing on several issues, and not just recently. That's hardly uncommon within any company; it just became more "apparent", or let's call it a "hot topic" for Samsung, now that the shit has hit the fan with the Galaxy S4 and co. mess.

There was a news blurb in the press about Samsung developing its own GPU, by the way, even with some sparse performance characteristics. If someone wants to dig deeper, that might be a good starting point to find out how deep the whole enchilada goes.
You mean the ISSCC reports like this one?

One thing we kind of forget is the possible flexibility of such a block on a SoC - not only acting as a GPU but also as a hardware decoder/encoder and ISP. What would be the die area of the latter two?

Just found something funny: anyone up for a job? There are more posts in the related section.
 
Another popular reason thrown around, tied to reason 3, is that the Octa together with an LTE modem consumes too much power. The difference in heat generation between 2G/3G/4G on the i9505 is apparently quite noticeable. This was actually the first explanation coming from Korean sources back in February or so.

That explanation seems weird too.

The thing can be throttled if power is forced too high, so what's important is that it has enough of a power budget for situations that people are actually interested in. Running the CPUs and/or GPU really hard while simultaneously running the LTE hard is not one of those situations. In other words, CPU + GPU + LTE isn't the important metric, but something more like max(all CPU, some CPU + all GPU, some CPU + some GPU + high data) - see the sketch at the end of this post. I doubt the latter category would tip thermals for Octa but be fine for S600.

I don't know why the difference between 2G and 4G would be a new problem on Octa but acceptable/fine on earlier Exynos phones. It's not like the modem interface would suddenly be a lot less efficient per byte, unless maybe there's something seriously broken with the SoC or perhaps the process.

That doesn't mean that S600 isn't (much?) more power efficient for LTE though, due to integration. Could have made a big difference in some battery life benchmarks.
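
To illustrate the throttling point above: a crude approximation of what the kernel's thermal driver does is the loop below, which caps the CPU frequency whenever a thermal zone runs hot. This is only a sketch against the generic Linux sysfs interfaces (thermal_zone0, cpufreq); the paths and numbers will differ on an actual phone, and the real logic obviously lives in the kernel, not in a shell script.
Code:
#!/bin/sh
# Crude userspace throttling sketch (illustrative only - the real work is done
# by the kernel's thermal/cpufreq drivers). Paths and values are generic Linux
# sysfs ones and will differ per device.
TEMP=/sys/class/thermal/thermal_zone0/temp             # millidegrees Celsius
MAXF=/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
HOT=75000        # throttle above 75 C
FULL=1600000     # full speed, kHz
SLOW=1000000     # throttled speed, kHz

while true; do
    t=$(cat "$TEMP")
    if [ "$t" -gt "$HOT" ]; then
        echo "$SLOW" > "$MAXF"   # clamp the frequency; power drops with it
    else
        echo "$FULL" > "$MAXF"
    fi
    sleep 1
done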
 
That explanation seems weird too.

The thing can be throttled if power is forced too high, so what's important is that it has enough of a power budget for situations that people are actually interested in. Running the CPUs and/or GPU really hard while simultaneously running the LTE hard is not one of those situations. In other words, CPU + GPU + LTE isn't the important metric, but something more like max(all CPU, some CPU + all GPU, some CPU + some GPU + high data). I doubt the latter category would tip thermals for Octa but be fine for S600.

I don't know why the difference between 2G and 4G would be a new problem on Octa but acceptable/fine on earlier Exynos phones. It's not like the modem interface would suddenly be a lot less efficient per byte, unless maybe there's something seriously broken with the SoC or perhaps the process.

That doesn't mean that S600 isn't (much?) more power efficient for LTE though, due to integration. Could have made a big difference in some battery life benchmarks.

That explanation actually makes sense; here in Brazil we have the Qualcomm S4 with 4G and the Exynos one for 3G only. I think it's one of the few markets where you get both models.
 
Could some of the tension between the Samsung divisions have to do with Apple trying to stop using Samsung to fab SoCs because of Samsung Mobile?

If Apple took away most or all of the SoC business, would Samsung Mobile be expected to take up the slack?

I know Mobile is their most profitable division, but wouldn't the semi guys expect Mobile to replace all the lost revenue and then some?
 
You mean the ISSCC reports like this one?

One thing we kind of forget is the possible flexibility of such a block on a SoC - not only acting as a GPU but also as a hardware decoder/encoder and ISP. What would be the die area of the latter two?

Just found something funny: anyone up for a job? There are more posts in the related section.

The last was posted December 28 from what I can see.

Yes for the first.
 
The Exynos 5420 is a 4+4 big.LITTLE chip, 1800 / 1200 MHz, with a T6xx GPU currently at 600 MHz but supposed/planned to reach 700 MHz. Can't identify which T6xx it is since they all have common platform drivers. It seems generally newer than the 5410 since it also has a new MFC (v7) with VP8 support.

It has been on the Chromium Gerrit for a few days now, so this'll probably end up in the next Chromebook.
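
For anyone who wants to dig themselves: the tree Google is working against is public, so a quick grep is usually enough to see what the platform glue calls the GPU, even if all the T6xx parts share a driver. A rough sketch, assuming a checkout of the Chromium OS kernel tree (the paths and search strings are guesses, not exact):
Code:
# Rough sketch for poking at a Chromium OS kernel checkout; paths and strings
# are illustrative guesses, not exact.
git log --oneline --all | grep -i exynos542          # recent 5420 bring-up commits
grep -rli mali arch/arm/ drivers/gpu/ | head          # platform glue and GPU driver bits
grep -rni "t6[0-9]" arch/arm/ drivers/gpu/ | head     # any explicit T6xx references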
 
The Exynos 5420 is a 4+4 big.LITTLE chip, 1800 / 1200 MHz, with a T6xx GPU currently at 600 MHz but supposed/planned to reach 700 MHz. Can't identify which T6xx it is since they all have common platform drivers. It seems generally newer than the 5410 since it also has a new MFC (v7) with VP8 support.

It has been on the Chromium Gerrit for a few days now, so this'll probably end up in the next Chromebook.

I'll put my money on the T624; at least that makes more sense according to ARM's own roadmap.
 
The following might be a useful addition to the discussion here about big.LITTLE implementation and the various strategies for switching between cores. It's a white paper from Renesas on their own big.LITTLE SoCs, and it analyses the various usage models for the cores. Way above me, but clearly some here might appreciate it.

http://events.linuxfoundation.org/sites/events/files/lcjpcojp13_nakagawa.pdf
Their approach is insanely archaic and brute-force. It's a quick solution to using all 8 cores (Approach 2) but is nowhere near what the in-kernel MP mechanism does.

Approach 1 has no benefits at all over the existing IKS. I actually wonder if they can even properly do cluster power management from userspace; I don't think that's possible.

Interesting "playing around", but the real deal lies in the Linaro implementations. I don't expect this to be used in any actual product.
 
Seems like the 5420 is the first product that will actually see proper big.LITTLE usage: Google already has core migration up and running and is starting on booting all 8 cores at once.

It also seems that the CCI is not much of an interconnect in physical terms; it's just the ACE ports connecting the blocks to the internal bus.
It allows broadcasting of TLB invalidates and memory barriers, and it guarantees cache coherency at the system level through snooping of the slave interfaces connected to it.

The CCI (ACE ports) only seems to be powered up while actual switching is done, so there is no power penalty at all for running the CCI. More fuel to the fire for the 5410 scandal.
 
The CCI (ACE ports) only seems to be powered up while actual switching is done, so there is no power penalty at all for running the CCI.
I'm not sure I understand what you mean: the CCI will still have to be partially powered as it has to at least forward memory/IO requests/answers between its slaves and its masters, no matter whether one of the clusters is on or off.
 
I'm not sure I understand what you mean: the CCI will still have to be partially powered as it has to at least forward memory/IO requests/answers between its slaves and its masters, no matter whether one of the clusters is on or off.
The ports are inactive and only do something when there is a switch request:
Code:
Once DT bindings for the A7 cluster and IKS
support is added, observe that CCI is turned on and off as CPUs switch.
Idle:
localhost ~ # cat /dev/bL_status
        0 1 2 3 L2 COMMON CCI
[A15]   0 0 0 0  0    0    0
[A7]    1 1 1 1  1    1    1
Load up 2 CPUs:
localhost ~ # cat /dev/bL_status
        0 1 2 3 L2 COMMON CCI
[A15]   1 0 1 0  1    1    1
[A7]    0 1 0 1  1    1    1
Load up 4 CPUs:
localhost ~ # cat /dev/bL_status
        0 1 2 3 L2 COMMON CCI
[A15]   1 1 1 1  1    1    1
[A7]    0 0 0 0  0    0    0
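If you have one of these boards in dev mode, reproducing that is just a matter of generating some load and re-reading the file. /dev/bL_status is specific to this kernel; the load trick itself is generic:
Code:
# Generate load on two CPUs for a few seconds, then re-check the cluster/CCI
# state. /dev/bL_status is specific to this Chromebook kernel.
cat /dev/bL_status
yes > /dev/null & P1=$!
yes > /dev/null & P2=$!
sleep 5
cat /dev/bL_status
kill $P1 $P2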
 
Looks to me like both ports need to be powered up so long as both the A7 and A15 clusters are active, which is exactly what you'd expect, as the two clusters need to retain cache coherency. Not sure what you mean by "switching" here. The comment probably really means that it's turned on and off as a cluster moves from no load to some load.
 
All I read here is that the port of the CCI that is connected to the powered off cluster is powered off, not that the CCI is powered off. Or did I misunderstand?
 
All I read here is that the port of the CCI that is connected to the powered off cluster is powered off, not that the CCI is powered off. Or did I misunderstand?
Every single source (the kernel sources for Exynos, talks with some lead devs, Google's Gerrit patches) points out that the CCI is powered off and inactive.

Of course it doesn't make much sense given some of the chip layouts we know of, even if just the ports are disabled: http://i.imgur.com/6wunhUQ.png
 