Qualcomm SoC & ARMv8 custom core discussions

Discussion in 'Mobile Devices and SoCs' started by Nebuchadnezzar, Jan 20, 2015.

  1. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
SLSI or Samsung Mobile :?:

    ***edit: if there's nothing else wrong on the tablet here, 2GB devices should run out of memory at about 10 Manhattan runs in a row.
     
  2. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    I imagine issues/bugs specific to that version of the app and the OS on which it's running would be the determining factor.

    I haven't tried GFXBench in a while and definitely not on the latest iOS 8.1.3. Earlier versions of iOS 8 have been excessively buggy, more so than any specific version of Android I can recall when it comes to major OS, non-device specific features. While still great at design, Apple's software teams have gotten sloppy with polishing their product before release.
     
  3. Lazy8s

    Veteran

    Joined:
    Oct 3, 2002
    Messages:
    3,100
    Likes Received:
    18
    I find the Qualcomm news to be huge as a precedent for the future. Their modems had become an almost-requirement for the U.S. market.

    With their in-house capabilities, Samsung doesn't have much need to look back for many future phones (though I'm sure Qualcomm will fight tooth-and-nail for design wins in Samsung's line-up.) And, maybe, the cost structure of their "in-house" SoC supply will finally have them seeing decent cost savings with Exynos versus outsourcing.

    Other OEMs have been dabbling in SoC design, too. And MediaTek is still picking up momentum in Western markets.

Qualcomm finally ran out of the thermal headroom they'd been leaving themselves less and less of each generation since around the S4 Pro/S600, which gave Samsung enough incentive to stay with Exynos. Obviously, the effects on Qualcomm's bottom line will remain limited at first, but I don't see this as just some blip, soon to be forgotten in the Snapdragon story. I think the competitive landscape gets much more difficult for them from here on out, and their follow-up designs will need to be something special indeed to turn the tide back. I don't see them losing their spot as the industry's number one supplier, but the monopoly will finally go away.
     
  4. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    94
SLSI of course..it's a huge design win for them given that in the last few years they've been in less than 50% of the Galaxy S line.
    True..pretty much agree with everything you said. I think Qualcomm has reached somewhat of a peak and I see their market share and revenue slowly decreasing going forward (compared to the staggering growth we've seen in the last few years). Samsung is a big customer, and losing the Galaxy S6 is a fairly significant hit to their revenue. And as you say..MediaTek is hitting them harder than ever in the mid-range segment (and this is where the majority of sales are today). Aside from the loss of sales..I'm pretty sure they've taken a hit on ASPs and gross margin. The Chinese government is also clamping down on the royalty fees they are charging. As you say..the successor to Krait will really have to be something special if they are to continue their growth.
     
  5. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,007
    Likes Received:
    174
    Location:
    Luxembourg
It still pretty much is a requirement. The CDMA variants on Verizon and Sprint still use a Qualcomm modem due to a simple lack of alternatives.
     
  6. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
  7. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,076
    Likes Received:
    5,626
I find it a bit hard to believe a Snapdragon 600 series would bring a new A72 core while the 800 series is still using the A57.
    Snapdragon 615 uses eight Cortex A53, with one of the quad-core modules being more performance-optimized (higher-clocked) and the other module being power-optimized.
    The performance jump from S615 to S620 would be enormous..
     
  8. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,007
    Likes Received:
    174
    Location:
    Luxembourg
Don't know, but given the 28nm process and only 1.8GHz clock on that rumoured piece, it seems like a perfect mid-range SoC even if it has A72 cores. You're spending a bit more money on die size, but by that time 28nm will be dirt cheap.
     
  9. lopri

    Regular

    Joined:
    Aug 4, 2004
    Messages:
    259
    Likes Received:
    1
    A couple of designs that I do not understand from Qualcomm's lineup.

(4 x A53) + (4 x A53) -> This is what we often hear cited as proof of certain regions' preference for moar cores. But is that really true, or is it simply Qualcomm's excuse? Are there really benefits to Global Task Switching (or whichever big.LITTLE scheme is used) for this configuration?

    (2 x A57) + (4 x A53) -> I do not understand this design, either. Why introduce an imbalance that is seemingly unnecessary, assuming proper power-gating? If power is really the reason, wouldn't (2 x A57) + (2 x A53) design make more sense? Why haven't we seen 2+2 big.LITTLE yet?
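The migration behaviour being asked about can be sketched roughly as follows. This is a toy model, not the actual Linux GTS implementation: the thresholds, task loads, and cluster names are all invented for illustration.

```python
# Hedged sketch of Global Task Switching-style migration: every core is
# visible to the scheduler, and a task moves to a big core only when its
# tracked load crosses an up-threshold, dropping back below a
# down-threshold. All numbers here are made up.

UP_THRESHOLD = 80    # % load above which a task migrates to a big core
DOWN_THRESHOLD = 30  # % load below which it returns to a LITTLE core

def place_task(load, current_cluster):
    """Return the cluster a task should run on given its tracked load."""
    if current_cluster == "LITTLE" and load > UP_THRESHOLD:
        return "big"
    if current_cluster == "big" and load < DOWN_THRESHOLD:
        return "LITTLE"
    return current_cluster

# A bursty task: mostly idle, with one heavy phase.
loads = [10, 20, 90, 95, 85, 40, 25, 10]
cluster = "LITTLE"
history = []
for load in loads:
    cluster = place_task(load, cluster)
    history.append(cluster)

print(history)
# ['LITTLE', 'LITTLE', 'big', 'big', 'big', 'big', 'LITTLE', 'LITTLE']
```

In a 4+4 A53 configuration the two clusters differ only in clock and power tuning rather than microarchitecture, but the same migration logic still lets light tasks sit on the power-optimised cluster and reserves the high-clocked one for bursts, which is where the power saving would come from.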
     
  10. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    Yes there are and they're mostly connected to power savings.

I actually found out myself recently that ARM proposes a 2 big + 4 LITTLE scheme, and IMHO it's far more balanced than the above, given how rarely you actually need "big" cores and how few of them you need when you do.

    Who says we haven't? http://www.mediatek.com/en/products/mobile-communications/tablet/mt8135/

    It's just that "more cores" sell better.
     
  11. Mariner

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,766
    Likes Received:
    459
You'd certainly think that a 2+4 big.LITTLE ought to be a good chip for a mid- to high-range device. Plenty of performance available from the two big cores when required, decent performance from the four little ones, and a good reduction in die size as well.

    However, it seems that the mid-range has instead been taken up by the 4+4 A53 (and previously A7) options. My phone uses a 4+4 A7 Mediatek chip and it provides perfectly capable performance. A little bit faster would be nice, but certainly no problems to speak of. Personally, I wonder when (or if) we'll see a bit more memory bandwidth find its way into mid-range devices which all use single-channel LPDDR3 at the moment.
     
  12. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    The problem with 2+4 big.LITTLE is that it's more expensive than 4+4 little.LITTLE and it's not as good marketing-wise. This is why I quite like the idea of 2+4+4 big.little.LITTLE as I suggested regarding Denver+A53 (but the same applies to A72+A53/[...]).

    There is very little point in increasing the die size of the little cores much further as they are meant to maximise perf/mm2 and perf/watt, and this is likely to go down slightly if optimising for single-threaded performance. So this should become cost-effective soon enough with Moore's Law... :)
     
  13. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
    Why is 2+4 more expensive than 4+4? (honest question).
     
  14. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Because A57 is more than 2x the size of A53 afaik. Of course the difference isn't as big as it was for A7 vs A15; who knows about A72...
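To put rough numbers on it: the 2.5x ratio below is just an assumed figure for illustration (real A57:A53 area ratios depend on implementation and process), but it shows why two A57s cost more silicon than four extra A53s.

```python
# Illustrative die-area comparison in invented units, assuming an A57
# takes ~2.5x the area of an A53 ("more than 2x").
A53_AREA = 1.0
A57_AREA = 2.5

# 2+4 big.LITTLE: 2x A57 + 4x A53
big_little = 2 * A57_AREA + 4 * A53_AREA
# 4+4 little.LITTLE: two quad-A53 clusters, i.e. 8x A53 total
little_little = 8 * A53_AREA

print(big_little, little_little)  # 9.0 8.0 -> 2+4 needs the larger die
```

The key point is that 4+4 here means eight A53s in two clusters, not 4xA57 + 4xA53; under that reading the 2+4 big.LITTLE option is the more expensive of the two.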
     
  15. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,446
    Likes Received:
    181
    Location:
    Chania
Wait, I'm still not following here....by 2+4 I understand 2*A57 + 4*A53, and by 4+4, 4*A57 + 4*A53. How can the first be more expensive than the latter?
     
  16. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    94
Yep, 28nm pricing has finally been trending down thanks to the introduction of more advanced nodes and increased production from UMC and SMIC. But it still sounds a bit odd for them to be doing A72 on 28nm. By 2016 you would expect 20nm to be below 28nm in per-transistor cost..and A72 is power hungry..so 20nm seems like a better choice.
    Yep..2+2 or 2+4 big.LITTLE would be the ideal choice for pretty much any typical mobile workload. Apple has shown us how this approach can be both very high performance and low power (without even using LITTLE cores). But in Android land, unfortunately marketing seems to have trumped logic.

    Single channel LPDDR3 (and heck some chips are still on LPDDR2) does have low bandwidth for the CPU power they have..but in most cases the graphics are underpowered so it seems to work fine. LPDDR4 should certainly help as we see larger and more powerful chips..but not before 2016 I think.
Wouldn't a 4+4 big.LITTLE work better for the likes of A72 and Denver though? You have per-core power gating on the big cores anyway, so it won't cost you any power when they aren't in use. And when you do need them..2 extra big cores would be more useful than 4 extra LITTLE cores IMHO.
    AFAIK it's at least 3x..and with A15 it was about 4-5x. But royalties would also be a factor. Wouldn't the royalty on one A57 be more than that for two A53s?
     
  17. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,007
    Likes Received:
    174
    Location:
    Luxembourg
That depends on whether SMIC (which has been rumoured to be the target foundry of some of these mid-to-low range SoCs for Qualcomm) or UMC have viable 20nm. Remember Nvidia saying that 20nm transistor cost doesn't go down compared to 28nm. I know they have to do double patterning at 20nm, so maybe 28nm will still remain cost-effective, and that's indeed what a lot of people in the industry have been saying.
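The cost-per-transistor argument can be made concrete with a toy calculation. Every number below is invented purely for illustration (real wafer prices and density gains aren't public): the point is only the shape of the trade-off.

```python
# Toy model of the 28nm vs 20nm cost-per-transistor argument: if double
# patterning pushes wafer cost up faster than density improves, cost per
# transistor can stay flat or even rise despite the shrink.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

BASE_DENSITY = 1.0e12  # assumed transistors per 28nm wafer (made up)

c28 = cost_per_transistor(5000, BASE_DENSITY)
# Assume ~1.9x density at 20nm, but double patterning roughly doubles
# the wafer cost (again, invented figures).
c20 = cost_per_transistor(5000 * 2.0, BASE_DENSITY * 1.9)

print(c28, c20, c20 > c28)  # the 20nm transistor ends up costing more
```

With these made-up figures the wafer-cost jump outruns the density gain, so the shrink doesn't pay for itself per transistor, which is the shape of the argument attributed to Nvidia here.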
     
  18. mboeller

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    922
    Likes Received:
    1
    Location:
    Germany
Haven't you gotten the message by now, repeated multiple times, that 20nm will be more expensive than 28nm per transistor?
    Look here for example: www.bnppresearch.com/ResearchFiles/31175/Semiconductors-230414.pdf

    And also the information from ARM that the A72 is less power hungry than the A57 at the same process node (and the A57 was already less power hungry than the A15 from what I remember). See: http://www.realworldtech.com/forum/?threadid=147766&curpostid=147801
     
  19. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    647
    Likes Received:
    94
    SMIC-Qualcomm isn't just a rumour..they signed an agreement last year and even produced Snapdragon 410 chips in December - Agreement and Production

However, I do not see them moving to 20nm for quite a while. UMC is more likely as they have licensed IBM's 20nm and FinFET technology (Source). Even TSMC's 20nm pricing should be lower in 2016. Actually, Nvidia didn't say that 20nm transistor cost doesn't go down compared to 28nm..the graph they produced showed a crossover in terms of transistor cost in Q1'15..but the savings were marginal. This, apart from the increased design cost and time, may make it unviable for some players, but given that Qualcomm already has 20nm chips in production, it shouldn't be too hard for them. Either way..while it is possible..I will take that rumour with a grain of salt for now.

     
  20. Laurent06

    Regular

    Joined:
    Dec 14, 2007
    Messages:
    743
    Likes Received:
    65
    I might be wrong but it looks like all of the studies that claim such things come from the same source: a projection from IBS. Has any other analysis been published?
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.