Low-cost emerging market SoC/phone discussion

Discussion in 'Mobile Devices and SoCs' started by ToTTenTranz, May 12, 2011.

  1. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
Compared to its predecessor, an X20 smartphone doesn't have that much more battery life, if any. It just runs notably cooler. As an aside, it's actually good to see QCOM catching up again, especially in sales.

That said, Kishonti now has a quite demanding, GPU-bound battery-life benchmark running Manhattan 3.1; what I'd also like to see on top of that are temperatures at the end of the benchmark's 30 loops. It's not as if SoCs don't have temperature sensors that could reveal numbers; it's just that probably no one has thought it necessary so far. It doesn't do me any good if, after a torture benchmark, an SoC isn't throttling by a worthwhile margin but the device gets too hot to even hold. That has nothing to do with the SoCs mentioned here, but it has happened in other cases.
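On the sensor point: on Linux/Android those temperature sensors are typically exposed through sysfs, so logging them after each benchmark loop is straightforward. A minimal sketch, assuming the standard kernel thermal sysfs ABI (which zones exist, and their accuracy, varies per device):

```python
import glob

def millideg_to_celsius(raw: str) -> float:
    """sysfs thermal readings are in millidegrees Celsius."""
    return int(raw.strip()) / 1000.0

def read_soc_temps(pattern="/sys/class/thermal/thermal_zone*/temp"):
    """Read every exposed thermal zone; returns a list of temps in deg C."""
    temps = []
    for path in glob.glob(pattern):
        try:
            with open(path) as f:
                temps.append(millideg_to_celsius(f.read()))
        except OSError:
            pass  # some zones aren't readable without root
    return temps
```

Calling `read_soc_temps()` between benchmark loops would show whether a stable fps number is being bought with skin temperatures nobody would tolerate.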

Oh and last but not least: 20SoC has the problem that it doesn't save enough power to really justify the investment compared to 28HPm: http://images.anandtech.com/doci/9762/P1030606.jpg?_ga=1.32351228.334006868.1458215934
     
  2. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    728
    Likes Received:
    185
If it runs noticeably cooler, that should mean more battery life. But it seems the SD650 beats it despite being on 28nm. From what I've read, the CPU governor on the MTK chip is doing a poor job of distributing loads across the right CPUs, and this is part of the reason for the battery life deficit.
It still gets you a 25% power saving at ISO performance. MTK chose to go with higher clocks, so they probably aren't seeing all the benefits. You also get a smaller die thanks to the density increase, and lower per-transistor costs than 28nm, AFAIK.
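To put rough numbers on that trade-off: dynamic CMOS power scales approximately with f·V², so a 25% ISO-performance saving can be largely consumed by modestly higher clocks. A toy sketch; the frequency/voltage bumps below are illustrative, not MTK's actual DVFS points:

```python
def relative_dynamic_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic CMOS power ~ C * f * V^2; return power relative to a
    baseline for the given frequency and voltage ratios."""
    return f_ratio * v_ratio ** 2

# New process delivers 25% lower power at the SAME clocks and voltage:
iso_power = 0.75

# If the vendor instead raises clocks ~15% with ~7% more voltage
# (illustrative numbers), most of the process saving evaporates:
spent_on_clocks = iso_power * relative_dynamic_power(1.15, 1.07)
# spent_on_clocks ends up just under 1.0x the old process's power
```

Which is consistent with both points in the thread: the process does save 25% at ISO performance, and a vendor chasing peak clocks still ships a chip with little visible battery-life gain.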
     
  3. Mariner

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,811
    Likes Received:
    518
Hmmm, I thought I'd read somewhere that Nebuchadnezzar mentioned the governor was pretty effective for the big.LITTLE.LITTLE combination? Or is it just a poor software implementation? Or perhaps my mind is playing tricks on me!

It is disappointing to see the X20 not performing a little better, IMO, but MTK has certainly come on in leaps and bounds over the past few years. I wonder if the X30 will be able to compete a little better with the Qualcomm/Samsung/Huawei competitors.
     
  4. Lodix

    Newcomer

    Joined:
    Sep 8, 2014
    Messages:
    57
    Likes Received:
    3
The Xiaomi is using the lowest-binned/least efficient version (Cortex-A72 @ 2.1GHz) of the X20. And it brings more performance than the SD650, so even if the SD650 is more efficient, that leap is being spent on extra performance.
     
  5. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
All cases or just one singled-out case? And no, workloads are fine on the X20, at least on the LeEco X620 implementation here.

With that you're actually confirming what I said. I didn't say 20SoC is useless; I said it doesn't bring the power savings one would expect, which effectively makes it a very questionable investment compared to 28HPm.

    Probably the latter.

Does it have to? Neither the X30 nor any former Mediatek high-end SoC for their own product portfolio was ever really a high-end SoC compared to the competition. It's good enough if the price/performance ratio is competitive. If I as a customer pay half the street price of a Samsung or whichever other smartphone but end up with =/>80% of the performance in the majority of cases, I'll still jump on the MTK-powered smartphone.

    The headache in the case of X30 could be that manufacturing on 10nm/TSMC isn't going to be cheap at all....
     
    #425 Ailuros, Nov 9, 2016
    Last edited: Nov 9, 2016
  6. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    728
    Likes Received:
    185
Well the Redmi Note 4 with the X20 barely beats the SD650 in performance... and even with the benefit of the process, it still has noticeably lower battery life. I'm not sure about the LeEco though.
And what did you expect? TSMC claimed 25% lower power at ISO performance; that's nothing to sneeze at. If the power savings aren't as expected, that is because MTK chose to go with higher speeds and/or did a poor job with the implementation (same with Qualcomm and the S810/S808). The Apple A8 showed us it was possible to get both higher performance and lower power from 20SoC (and probably lower per-transistor costs, like I mentioned).
     
  7. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
I expected GPU IHVs to stay away from 20SoC for a reason, and that's exactly what they did. Why an exception like Apple would suddenly make up for everything else is beyond me. Unless you have insight into Apple's manufacturing agreements (which no one really has), that last point is nothing more than a gut feeling. Apple could eventually squeeze out a deal with a lower per-transistor cost, but that still doesn't make 20SoC a worthwhile investment over 28nm for everyone else.
     
  8. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    728
    Likes Received:
    185
I still don't get what you're trying to say. And we are not talking about GPUs, but SoCs. You said 20SoC is not saving enough power compared to 28HPm. Your own figures show that it brings a 25% power saving, and I pointed that out. Then you said it doesn't bring as much power saving as you would expect. So I asked what you did expect, but you haven't answered that. How much did we get from 40nm to 28nm?

And you also said that the investment compared to 28nm is questionable. But you haven't explained why 20SoC is a questionable investment beyond just making that statement. If it was that bad a process and not worthwhile, why did not only Apple but also Samsung, Qualcomm, Nvidia, Mediatek, etc. adopt it?
     
    #428 Erinyes, Nov 11, 2016
    Last edited: Nov 11, 2016
  9. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
If you consider 25% any kind of worthwhile percentage, then yes, of course 20SoC is worth the investment; I don't. GPU chips are chips with transistors too; they're just bigger and have a far slower cadence than high-end SoCs these days (more below). Theoretically the jump from 40 to 28nm was still better than 40nm to the cancelled 32nm, which coincidentally would have been the equivalent of 20SoC vs. 28nm.

Because none of them had any other choice for their high-end SoCs. Any manufacturer with a cut-throat cadence needing to cram N more transistors into X die area didn't have much of a choice either way. For those manufacturers that had the luxury of skipping 20SoC for 16FF+, Huawei's slide above applies. Manufacturers like Apple now have two generations of SoCs on 16FF+, and they couldn't step off 20SoC fast enough.
     
    #429 Ailuros, Nov 14, 2016
    Last edited: Nov 14, 2016
  10. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    787
    Likes Received:
    215
    There doesn't seem to be a general MediaTek thread here, but
    Speccy posted an EETimes link on the Ryzen thread that contains a tidbit about a MediaTek SoC (emphasis mine).

I don't think I've seen a nontrivial odd number of one type of CPU core on a chip (except the A8X); I wonder why this decision was made. (Especially when that means there's at least one other odd group somewhere on the chip.)
     
    #430 iMacmatician, Feb 7, 2017
    Last edited: Feb 7, 2017
  11. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,028
    Likes Received:
    260
    Location:
    Luxembourg
Interesting. Does that mean there are also 3 A73s to reach the total of 10 cores? That's quite an unexpected boost. Until now I assumed it was a 4+4+2 design.
     
  12. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    564
    Likes Received:
    101
Could it just be a typo of the Helio X30's details (4x A35 cluster)?
    https://www.xda-developers.com/mediatek-officially-unveils-the-10-nm-helio-x30-and-16-nm-helio-p25/
     
    iMacmatician likes this.
  13. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
    iMacmatician likes this.
  14. Lodix

    Newcomer

    Joined:
    Sep 8, 2014
    Messages:
    57
    Likes Received:
    3
  15. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
  16. tangey

    Veteran

    Joined:
    Jul 28, 2006
    Messages:
    1,508
    Likes Received:
    260
    Location:
    0x5FF6BC
The Mali G71 as implemented in the HiSilicon Kirin 960 gets quite a bashing in today's AnandTech article.
Average system load power (not peak) during the graphics tests is 8.5W, which is 2-4x more than the others tested. Even allowing for the fact that it is a higher-end GPU than some tested, the fps/W shows it to be quite an inefficient implementation. The A9 in the iPhone 6s Plus gets around 60% more fps/W in the T-Rex offscreen test.

    http://www.anandtech.com/show/11088/hisilicon-kirin-960-performance-and-power
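For reference, the efficiency metric being compared is just sustained frame rate divided by average power. A small sketch using the 8.5W figure quoted above; the fps value is an assumed placeholder, not AnandTech's measurement (see the link for the real data):

```python
def fps_per_watt(fps: float, avg_power_w: float) -> float:
    """Perf/W efficiency metric used in sustained GPU comparisons."""
    return fps / avg_power_w

# 8.5W average system load from the article; 90 fps is illustrative only.
kirin_960_eff = fps_per_watt(90.0, 8.5)

# "~60% more fps/W" for the A9 means, at similar fps, markedly lower power:
a9_eff = kirin_960_eff * 1.6
```

The point of normalizing by power is that a chip can top the raw fps charts while still being the least efficient implementation in the field.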
     
  17. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,455
    Likes Received:
    186
    Location:
    Chania
I have the impression that QCOM changed policy with the Adreno 540 and now goes primarily for higher sustained performance. I would think, or rather hope, that other SoC and device manufacturers will follow suit.
     
  18. Nebuchadnezzar

    Legend Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,028
    Likes Received:
    260
    Location:
    Luxembourg
    It's quite a shitty chip/GPU.
     
  19. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,508
    Likes Received:
    6,275
Do you think this might be because of 16FFC, which was supposed to be used on small, low-clocked mid-range SoCs?


I'd love to see the consumption-per-clock curves for the A73 cluster. ARM touts the A73 as significantly more power-efficient than the A72, but in the Kirin 960 it seems obvious the density-optimized process wasn't made to push large cores up to 2.4GHz.
That leaves me curious how the A73 cluster compares to the A72 cluster in the Kirin 950 at lower clocks, e.g. 1.5GHz.
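Lacking measured curves, one can at least sketch the expected shape: with a toy linear voltage-frequency model and P ∝ f·V², dropping from 2.4GHz to 1.5GHz should cut core power to well under half. Every constant below is an assumption for illustration, not a Kirin 950/960 measurement:

```python
def core_power(freq_ghz: float, v_min=0.7, v_max=1.1,
               f_min=1.0, f_max=2.4, c_eff=1.0):
    """Toy DVFS model: voltage rises linearly with frequency between
    assumed (f_min, v_min) and (f_max, v_max) points; dynamic power
    is modeled as C * f * V^2. Constants are illustrative only."""
    v = v_min + (v_max - v_min) * (freq_ghz - f_min) / (f_max - f_min)
    return c_eff * freq_ghz * v * v

# Relative power at 1.5GHz vs. the 2.4GHz ceiling under this model:
ratio = core_power(1.5) / core_power(2.4)
# roughly a third of peak power in this toy model
```

That superlinear falloff is exactly why a consumption-per-clock comparison at 1.5GHz could flatter the A73 even when its 2.4GHz operating point looks bad.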
     
    Lodix likes this.