Qualcomm SoC & ARMv8 custom core discussions

LSI or Samsung mobile :?:

***edit: if there's nothing else wrong on the tablet here, 2GB devices should run out of memory at about 10 Manhattan runs in a row.
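For what it's worth, the arithmetic behind that estimate is simple. Here's a sketch in Python, where the usable-memory and per-run leak figures are my own assumptions for illustration, not measured values:

```python
# Rough arithmetic behind the "about 10 Manhattan runs" estimate.
# Both numbers below are assumptions, not measurements.
def runs_until_oom(available_mb, leak_per_run_mb):
    """Complete benchmark runs before free memory is exhausted,
    assuming a roughly constant leak per run."""
    return available_mb // leak_per_run_mb

# ~1.5 GB usable after the OS takes its share, ~150 MB leaked per run
print(runs_until_oom(1500, 150))  # -> 10
```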
 
I imagine issues/bugs specific to that version of the app and the OS on which it's running would be the determining factor.

I haven't tried GFXBench in a while and definitely not on the latest iOS 8.1.3. Earlier versions of iOS 8 have been excessively buggy, more so than any version of Android I can recall when it comes to major, non-device-specific OS features. While still great at design, Apple's software teams have gotten sloppy about polishing their product before release.
 
I find the Qualcomm news to be huge as a precedent for the future. Their modems had become an almost-requirement for the U.S. market.

With their in-house capabilities, Samsung doesn't have much need to look back for many future phones (though I'm sure Qualcomm will fight tooth-and-nail for design wins in Samsung's line-up.) And, maybe, the cost structure of their "in-house" SoC supply will finally have them seeing decent cost savings with Exynos versus outsourcing.

Other OEMs have been dabbling in SoC design, too. And MediaTek is still picking up momentum in Western markets.

Qualcomm finally ran out of the thermal headroom they'd been whittling away each generation since around the S4 Pro/S600, which gave Samsung enough incentive to stay with Exynos. Obviously, the effects on Qualcomm's bottom line will remain limited at first, but I don't see this as just some blip, soon to be forgotten in the Snapdragon story. I think the competitive landscape gets much more difficult for them from here on out, and their follow-up designs will need to be something special indeed to turn the tide back. I don't see them losing their spot as the industry's number-one supplier, but the monopoly will finally go away.
 
LSI or Samsung mobile :?:

***edit: if there's nothing else wrong on the tablet here, 2GB devices should run out of memory at about 10 Manhattan runs in a row.

SLSI of course..it's a huge design win for them given that in the last few years they've been in less than 50% of the Galaxy S line.
I find the Qualcomm news to be huge as a precedent for the future. Their modems had become an almost-requirement for the U.S. market.

With their in-house capabilities, Samsung doesn't have much need to look back for many future phones (though I'm sure Qualcomm will fight tooth-and-nail for design wins in Samsung's line-up.) And, maybe, the cost structure of their "in-house" SoC supply will finally have them seeing decent cost savings with Exynos versus outsourcing.

Other OEMs have been dabbling in SoC design, too. And MediaTek is still picking up momentum in Western markets.

Qualcomm finally ran out of the thermal headroom they'd been whittling away each generation since around the S4 Pro/S600, which gave Samsung enough incentive to stay with Exynos. Obviously, the effects on Qualcomm's bottom line will remain limited at first, but I don't see this as just some blip, soon to be forgotten in the Snapdragon story. I think the competitive landscape gets much more difficult for them from here on out, and their follow-up designs will need to be something special indeed to turn the tide back. I don't see them losing their spot as the industry's number-one supplier, but the monopoly will finally go away.

True..pretty much agree with everything you said. I think Qualcomm has reached somewhat of a peak and I see their market share and revenue slowly decreasing going forward (compared to the staggering growth we've seen in the last few years). Samsung is a big customer, and losing the Galaxy S6 is a fairly significant hit to their revenue. And as you say..MediaTek is hitting them harder than ever in the mid-range segment (and this is where the majority of sales are today). Aside from the loss of sales..I'm pretty sure they've taken a hit on ASPs and gross margin. The Chinese government is also clamping down on the royalty fees they charge. As you say..the successor to Krait will really have to be something special if they are to continue their growth.
 
I find the Qualcomm news to be huge as a precedent for the future. Their modems had become an almost-requirement for the U.S. market.
It still pretty much is a requirement. The CDMA variants on Verizon and Sprint still use a Qualcomm modem due to a simple lack of alternatives.
 
I find it a bit hard to believe a Snapdragon 600 series part would bring a new A72 core while the 800 series is still using the A57.
Snapdragon 615 uses eight Cortex A53, with one of the quad-core modules being more performance-optimized (higher-clocked) and the other module being power-optimized.
The performance jump from S615 to S620 would be enormous..
 
I find it a bit hard to believe a Snapdragon 600 series part would bring a new A72 core while the 800 series is still using the A57.
Snapdragon 615 uses eight Cortex A53, with one of the quad-core modules being more performance-optimized (higher-clocked) and the other module being power-optimized.
The performance jump from S615 to S620 would be enormous..
Don't know, but given the 28nm process and only 1.8GHz clock on that rumoured piece, it seems like a perfect mid-range SoC even if it has A72 cores. You're spending a bit more money on die size, but by that time 28nm will be dirt cheap.
 
A couple of designs that I do not understand from Qualcomm's lineup.

(4 x A53) + (4 x A53) -> This is what we often hear cited as proof of certain regions' preference for moar cores. But is that really true, or is it simply Qualcomm's excuse? Are there really benefits of Global Task Switching (or whichever big.LITTLE inner working) for this configuration?

(2 x A57) + (4 x A53) -> I do not understand this design, either. Why introduce an imbalance that is seemingly unnecessary, assuming proper power-gating? If power is really the reason, wouldn't (2 x A57) + (2 x A53) design make more sense? Why haven't we seen 2+2 big.LITTLE yet?
 
A couple of designs that I do not understand from Qualcomm's lineup.

(4 x A53) + (4 x A53) -> This is what we often hear cited as proof of certain regions' preference for moar cores. But is that really true, or is it simply Qualcomm's excuse? Are there really benefits of Global Task Switching (or whichever big.LITTLE inner working) for this configuration?

Yes there are and they're mostly connected to power savings.

(2 x A57) + (4 x A53) -> I do not understand this design, either. Why introduce an imbalance that is seemingly unnecessary, assuming proper power-gating? If power is really the reason, wouldn't (2 x A57) + (2 x A53) design make more sense?

Actually, I found out recently that ARM itself proposes a 2big + 4LITTLE scheme, and IMHO it's far more balanced than the above, given how rarely you actually need "big" cores and how few of them you need when you do.

Why haven't we seen 2+2 big.LITTLE yet?

Who says we haven't? http://www.mediatek.com/en/products/mobile-communications/tablet/mt8135/

It's just that "more cores" sell better.
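To make the power-savings point above concrete, here's a toy Python model (all figures assumed, purely illustrative) of why parking a light task on the lower-clocked cluster saves energy: dynamic power scales roughly with f*V², and since the faster cluster also needs a higher voltage, its energy per cycle is worse even though it finishes sooner.

```python
# Toy model: energy to run a fixed-size task on a fast vs. a slow cluster.
# Dynamic power ~ C * f * V^2; frequencies and voltages below are assumed.
def energy_per_task(cycles, freq_ghz, volts, cap=1.0):
    power = cap * freq_ghz * volts ** 2   # dynamic power (arbitrary units)
    time = cycles / (freq_ghz * 1e9)      # seconds
    return power * time

light_task = 1e8  # cycles of work (assumed)
e_fast = energy_per_task(light_task, freq_ghz=1.7, volts=1.1)
e_slow = energy_per_task(light_task, freq_ghz=1.0, volts=0.9)
print(e_slow < e_fast)  # -> True
```

Note that frequency cancels out of the energy expression, leaving energy per cycle proportional to V²; that's the core of why a lower-voltage, power-optimized cluster wins on light loads.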
 
You'd certainly think that a 2+4 big.LITTLE ought to be a good chip for mid- to high-range devices. Plenty of performance available from the two big cores when required, decent performance from the 4 little ones, and a good reduction in die size as well.

However, it seems that the mid-range has instead been taken up by the 4+4 A53 (and previously A7) options. My phone uses a 4+4 A7 Mediatek chip and it provides perfectly capable performance. A little bit faster would be nice, but certainly no problems to speak of. Personally, I wonder when (or if) we'll see a bit more memory bandwidth find its way into mid-range devices which all use single-channel LPDDR3 at the moment.
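For reference, it's easy to put a number on that single-channel LPDDR3 ceiling. A quick sketch using standard peak-bandwidth arithmetic (the 1600 MT/s figure is just a typical speed grade, not a claim about any specific phone):

```python
# Peak theoretical memory bandwidth: bus width (bytes) x transfer rate.
def peak_bandwidth_gbs(bus_width_bits, transfers_mts):
    """GB/s for a given bus width in bits and data rate in MT/s."""
    return bus_width_bits / 8 * transfers_mts / 1000

# Single-channel 32-bit LPDDR3-1600, typical of mid-range phones then
print(peak_bandwidth_gbs(32, 1600))  # -> 6.4 (GB/s)
# A dual-channel (64-bit) interface doubles that
print(peak_bandwidth_gbs(64, 1600))  # -> 12.8 (GB/s)
```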
 
The problem with 2+4 big.LITTLE is that it's more expensive than 4+4 little.LITTLE and it's not as good marketing-wise. This is why I quite like the idea of 2+4+4 big.little.LITTLE as I suggested regarding Denver+A53 (but the same applies to A72+A53/[...]).

There is very little point in increasing the die size of the little cores much further as they are meant to maximise perf/mm2 and perf/watt, and this is likely to go down slightly if optimising for single-threaded performance. So this should become cost-effective soon enough with Moore's Law... :)
 
The problem with 2+4 big.LITTLE is that it's more expensive than 4+4 little.LITTLE and it's not as good marketing-wise. This is why I quite like the idea of 2+4+4 big.little.LITTLE as I suggested regarding Denver+A53 (but the same applies to A72+A53/[...]).

Why is 2+4 more expensive than 4+4? (honest question).
 
Because A57 is more than 2x the size of A53 afaik. Of course the difference isn't as big as it was for A7 vs A15; who knows about A72...

Wait, I'm still losing the connection here....by 2+4 I understand 2*A57 + 4*A53, and by 4+4 I understand 4*A57 + 4*A53. How can the first be more expensive than the latter?
 
Don't know, but given the 28nm process and only 1.8GHz clock on that rumoured piece, it seems like a perfect mid-range SoC even if it has A72 cores. You're spending a bit more money on die size, but by that time 28nm will be dirt cheap.

Yep, 28nm pricing has finally been trending down thanks to the introduction of more advanced technology nodes and increased production from UMC and SMIC. But it still sounds a bit odd for them to be doing A72 on 28nm. By 2016 you would expect 20nm to be below 28nm in per-transistor cost..and A72 is power hungry..so 20nm seems like a better choice.
You'd certainly think that a 2+4 big.LITTLE ought to be a good chip for mid- to high-range devices. Plenty of performance available from the two big cores when required, decent performance from the 4 little ones, and a good reduction in die size as well.

However, it seems that the mid-range has instead been taken up by the 4+4 A53 (and previously A7) options. My phone uses a 4+4 A7 Mediatek chip and it provides perfectly capable performance. A little bit faster would be nice, but certainly no problems to speak of. Personally, I wonder when (or if) we'll see a bit more memory bandwidth find its way into mid-range devices which all use single-channel LPDDR3 at the moment.

Yep..2+2 or 2+4 big.LITTLE would be the ideal choice for pretty much any typical mobile workload. Apple has shown us how this approach can be both very high performance and low power (without even using LITTLE cores). But in Android land, unfortunately marketing seems to have trumped logic.

Single channel LPDDR3 (and heck some chips are still on LPDDR2) does have low bandwidth for the CPU power they have..but in most cases the graphics are underpowered so it seems to work fine. LPDDR4 should certainly help as we see larger and more powerful chips..but not before 2016 I think.
The problem with 2+4 big.LITTLE is that it's more expensive than 4+4 little.LITTLE and it's not as good marketing-wise. This is why I quite like the idea of 2+4+4 big.little.LITTLE as I suggested regarding Denver+A53 (but the same applies to A72+A53/[...]).

Wouldn't a 4+4 big.LITTLE work better for the likes of A72 and Denver though? You have per-core power gating on the big cores anyway, so it won't cost you any power when they aren't in use. And when you do need them..2 extra big cores would be more useful than 4 extra LITTLE cores IMHO.
Because A57 is more than 2x the size of A53 afaik. Of course the difference isn't as big as it was for A7 vs A15; who knows about A72...

AFAIK it's at least 3X..and with the A15 it was about 4-5X. But royalties would also be a factor. Wouldn't the royalty on one A57 be more than that on two A53s?
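Putting rough numbers on that: using the ~3x area ratio discussed above (an estimate, not a measured figure), the cluster areas in "A53-equivalents" come out like this, which is why a 2+4 big.LITTLE costs more silicon than an all-A53 4+4:

```python
# Cluster die area in "A53-equivalents", using the assumed ~3x ratio.
A57_REL_AREA = 3.0  # assumption: one A57 ~ 3x the area of one A53
A53_REL_AREA = 1.0

def cluster_area(big_cores, little_cores):
    return big_cores * A57_REL_AREA + little_cores * A53_REL_AREA

print(cluster_area(2, 4))  # 2+4 big.LITTLE (2xA57 + 4xA53)  -> 10.0
print(cluster_area(0, 8))  # 4+4 little.LITTLE (8xA53)       -> 8.0
print(cluster_area(4, 4))  # 4+4 big.LITTLE (4xA57 + 4xA53)  -> 16.0
```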
 
By 2016 you would expect 20nm to be below 28nm in per transistor cost..and A72 is power hungry..so 20nm seems like a better choice.
If SMIC (which has been rumored to be the target foundry of some of these mid-low range SoCs for Qualcomm) or UMC have viable 20nm. Remember Nvidia saying that 20nm transistor cost doesn't go down compared to 28nm. I know they have to do double patterning at 20nm, so maybe 28nm will still remain cost-effective, and that's indeed what a lot of people have been saying in the industry.
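The cost-per-transistor argument is just wafer economics. A sketch with entirely made-up (but plausibly shaped) numbers shows how double patterning can cancel out the density gain:

```python
# Back-of-envelope cost per million transistors. All numbers are
# invented for illustration; only the relationships matter.
def cost_per_mtransistors(wafer_cost, dies_per_wafer, yield_frac, mtrans_per_die):
    good_dies = dies_per_wafer * yield_frac
    return wafer_cost / (good_dies * mtrans_per_die)

# 28nm: mature node, cheap wafers, high yield
c28 = cost_per_mtransistors(3000, 500, 0.90, 1000)
# 20nm: ~1.9x density per die, but double patterning roughly
# doubles wafer cost and yield starts lower
c20 = cost_per_mtransistors(6000, 500, 0.80, 1900)
print(c20 > c28)  # -> True: the density gain doesn't cover the extra cost
```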
 
By 2016 you would expect 20nm to be below 28nm in per transistor cost..and A72 is power hungry..so 20nm seems like a better choice.

Haven't you gotten the message multiple times by now that 20nm will be more expensive than 28nm per transistor?
Look here for example: www.bnppresearch.com/ResearchFiles/31175/Semiconductors-230414.pdf

and A72 is power hungry..so 20nm seems like a better choice.

And also the information from ARM that the A72 is less power hungry than the A57 at the same process node (and the A57 was already less power hungry than the A15 from what I remember). See: http://www.realworldtech.com/forum/?threadid=147766&curpostid=147801
 
If SMIC (which has been rumored to be the target foundry of some of these mid-low range SoCs for Qualcomm) or UMC have viable 20nm. Remember Nvidia saying that 20nm transistor cost doesn't go down compared to 28nm. I know they have to do double patterning at 20nm, so maybe 28nm will still remain cost-effective, and that's indeed what a lot of people have been saying in the industry.

SMIC-Qualcomm isn't just a rumour..they signed an agreement last year and even produced Snapdragon 410 chips in December - Agreement and Production

However, I do not see them moving to 20nm for quite a while. UMC is more likely as they have licensed IBM's 20nm and FinFET technology (Source). Even TSMC's 20nm pricing should be lower in 2016. Actually, Nvidia didn't say that 20nm transistor cost doesn't go down compared to 28nm..the graph they produced showed a crossover in transistor cost in Q1'15..but the savings were marginal. This, apart from the increased design cost and time, may make it unviable for some players, but given that Qualcomm already has 20nm chips in production, it shouldn't be too hard for them. Either way..while it is possible..I will take that rumour with a grain of salt for now.

 