Low-cost emerging market SoC/phone discussion

One could argue the X20 should have lower power consumption because it's actually a 3-cluster "big.LITTLE.LITTLER" arrangement and it's built on 20nm (unlike the S650's old 28nm), but it turns out the S650-based Note 3 has longer battery life.

Compared to its predecessor, an X20 smartphone doesn't have that much more battery life, if any. They just run notably cooler. As an aside, it's actually good to see QCOM catching up again, especially in sales.

That said, Kishonti now has a quite demanding, GPU-bound battery life benchmark running Manhattan 3.1; what I'd also want to see on top of that are temperatures at the end of the benchmark's 30 loops. It's not like SoCs don't have any temperature sensors to reveal the numbers; it's just that probably no one has thought it necessary so far. It doesn't do me any good if, after a torture benchmark, a given SoC isn't throttling by a worthwhile margin but the device gets too hot to even hold. That has nothing to do with the SoCs mentioned here, but it has happened in other cases.
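
For illustration, the numbers are usually already sitting in the standard Linux thermal sysfs nodes; a minimal sketch of dumping them after a run (assuming an on-device Python, e.g. via adb/Termux, and vendor-dependent zone names and units) would look something like this:

Code:
import glob, os

# Dump every SoC temperature sensor exposed through the Linux thermal sysfs.
for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
    try:
        with open(os.path.join(zone, "type")) as f:
            name = f.read().strip()
        with open(os.path.join(zone, "temp")) as f:
            raw = int(f.read().strip())
    except (OSError, ValueError):
        continue
    # Most kernels report millidegrees Celsius; some vendors report plain degrees.
    temp_c = raw / 1000 if abs(raw) > 1000 else raw
    print(f"{name}: {temp_c:.1f} C")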

Oh and last but not least: 20SoC has the problem that it doesn't really save enough power to justify the investment compared to 28HPm: http://images.anandtech.com/doci/9762/P1030606.jpg?_ga=1.32351228.334006868.1458215934
 
Compared to its predecessor, an X20 smartphone doesn't have that much more battery life, if any. They just run notably cooler. As an aside, it's actually good to see QCOM catching up again, especially in sales.

If it runs noticeably cooler, that should mean more battery life. But it seems like the SD650 beats it despite being on 28nm. From what I've read, the CPU governor on the MTK chip is doing a poor job of actually distributing loads across the right CPUs, and this is part of the reason for the worse battery life.
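
As a rough way to eyeball that on an actual device, a sketch like this (assuming the standard Linux cpufreq sysfs layout and an on-device Python, e.g. via adb or Termux; you'd still need something like top alongside it for per-core load) dumps each core's governor and current frequency:

Code:
import glob, os

def read(path):
    # Return a sysfs value, or "n/a" for offline cores / missing nodes.
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

cpus = glob.glob("/sys/devices/system/cpu/cpu[0-9]*")
for cpu in sorted(cpus, key=lambda p: int(p.rsplit("cpu", 1)[-1])):
    gov = read(os.path.join(cpu, "cpufreq", "scaling_governor"))
    cur = read(os.path.join(cpu, "cpufreq", "scaling_cur_freq"))
    print(f"{os.path.basename(cpu)}: governor={gov}  cur_freq={cur} kHz")
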
Oh and last but not least: 20SoC has the problem that it doesn't really save enough power to justify the investment compared to 28HPm: http://images.anandtech.com/doci/9762/P1030606.jpg?_ga=1.32351228.334006868.1458215934

It still gets you a 25% power saving at iso-performance. MTK chose to go with higher clocks, so they probably aren't seeing all the benefits though. You also get a smaller die thanks to the density increase, and lower per-transistor costs than 28nm AFAIK.
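
To put rough numbers on that, here's a back-of-the-envelope sketch using the usual first-order dynamic-power model; only the 25% figure is TSMC's claim, the clock/voltage bumps below are purely hypothetical, not measured X20 operating points:

Code:
def rel_dynamic_power(rel_freq, rel_voltage):
    # First-order dynamic power: P ~ C * V^2 * f (relative to a 28HPm baseline of 1.0).
    return rel_freq * rel_voltage ** 2

iso_perf = 0.75 * rel_dynamic_power(1.00, 1.00)  # claimed 25% saving at the same clock
pushed   = 0.75 * rel_dynamic_power(1.15, 1.08)  # hypothetical +15% clock, +8% voltage

print(f"20SoC at iso-performance: {iso_perf:.2f}x the 28HPm power")
print(f"20SoC with higher clocks: {pushed:.2f}x the 28HPm power")
# ~0.75x vs ~1.01x: most of the node's saving can be spent on extra clocks instead.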
 
If it runs noticeably cooler, that should mean more battery life. But it seems like the SD650 beats it despite being on 28nm. From what I've read, the CPU governor on the MTK chip is doing a poor job of actually distributing loads across the right CPUs, and this is part of the reason for the worse battery life.

Hmmm, I thought I'd read somewhere Nebuchadnezzar mentioning that the governor was pretty effective for the Big-Little-Little combination? Or is it just a poor software implementation? Or perhaps my mind is playing tricks on me!

It is disappointing to see the X20 not performing a little better, IMO, but MTK have certainly continued to come on in leaps and bounds over the past few years. I wonder if the X30 will be able to compete a little better with the Qualcomm/Samsung/Huawei competitors.
 
The Xiaomi is using the lowest-binned/less efficient version of the X20 (Cortex-A72 @ 2.1GHz), and it brings more performance than the SD650, so even if it is more efficient, that headroom is being spent on extra performance.
 
If it runs noticeably cooler, that should mean more battery life. But it seems like the SD650 beats it despite being on 28nm. From what I've read, the CPU governor on the MTK chip is doing a poor job of actually distributing loads across the right CPUs, and this is part of the reason for the worse battery life.

All cases or just one singled-out case? And no, workloads are fine on the X20, at least on the LeEco X620 implementation here.

It still gets you a 25% power saving at iso-performance. MTK chose to go with higher clocks, so they probably aren't seeing all the benefits though. You also get a smaller die thanks to the density increase, and lower per-transistor costs than 28nm AFAIK.

With that you're actually confirming what I said. I didn't say 20SoC is useless; I said it doesn't bring the power savings one would expect, which effectively makes it a very questionable investment compared to 28HPm.

Hmmm, I thought I'd read somewhere Nebuchadnezzar mentioning that the governor was pretty effective for the Big-Little-Little combination? Or is it just a poor software implementation?

Probably the latter.

It is disappointing to see the X20 not performing a little better, IMO, but MTK have certainly continued to come on in leaps and bounds over the past few years. I wonder if the X30 will be able to compete a little better with the Qualcomm/Samsung/Huawei competitors.

Does it have to? Neither the X30 nor any former MediaTek high-end SoC in their own product portfolio was ever a really high-end SoC compared to the rest of the competition out there. It's good enough if the price/performance ratio is competitive enough. If I as a customer pay half the street price of a Samsung or whichever other smartphone but end up with >=80% of the performance in the majority of cases, I'll still jump on the MTK-powered smartphone.

The headache in the case of X30 could be that manufacturing on 10nm/TSMC isn't going to be cheap at all....
 
All cases or just one singled-out case? And no, workloads are fine on the X20, at least on the LeEco X620 implementation here.

Well, the Redmi Note 4 with the X20 barely beats the SD650 in performance, and even with the benefit of the process it still has noticeably lower battery life. I'm not sure about the LeEco though.
With that you're actually confirming what I said. I didn't say 20SoC is useless; I said it doesn't bring the power savings one would expect, which effectively makes it a very questionable investment compared to 28HPm.

And what did you expect? TSMC claimed 25% lower power at iso-performance; that's nothing to sneeze at. If the power savings aren't as expected, that's because MTK chose to go with higher speeds and/or did a poor job with the implementation (same with Qualcomm and the S810/S808). The Apple A8 showed us how it was possible to get both higher performance and lower power out of 20SoC. (And probably lower per-transistor costs, like I mentioned.)
 
And what did you expect? TSMC claimed 25% lower power at iso-performance; that's nothing to sneeze at. If the power savings aren't as expected, that's because MTK chose to go with higher speeds and/or did a poor job with the implementation (same with Qualcomm and the S810/S808). The Apple A8 showed us how it was possible to get both higher performance and lower power out of 20SoC. (And probably lower per-transistor costs, like I mentioned.)

I expected GPU IHVs to stay away from 20SoC for a reason, and that's exactly what they did. Why an exception like Apple would suddenly make up for everything else is beyond me. Unless you have insight into Apple's manufacturing agreements (which no one really has), the last point is nothing more than a gut feeling. Apple could eventually squeeze out a deal with lower per-transistor costs, but that still doesn't make 20SoC a worthwhile investment over 28nm for everyone else.
 
I expected GPU IHVs to stay away from 20SoC for a reason, and that's exactly what they did. Why an exception like Apple would suddenly make up for everything else is beyond me. Unless you have insight into Apple's manufacturing agreements (which no one really has), the last point is nothing more than a gut feeling. Apple could eventually squeeze out a deal with lower per-transistor costs, but that still doesn't make 20SoC a worthwhile investment over 28nm for everyone else.

I still don't get what you're trying to say. And we are not talking about GPUs, but SoCs. You said 20SoC is not saving enough power compared to 28HPm. Your own figures show that it brings a 25% power saving, and I pointed that out. Then you said it doesn't bring as much of a power saving as you would expect. So I asked what you did expect, but you haven't answered that. How much did we get from 40 to 28nm?

And you also said that the investment compared to 28nm is questionable. But you haven't explained why 20SoC is a questionable investment beyond just making that statement. If it was that bad a process and not worthwhile, why did not only Apple but also Samsung, Qualcomm, Nvidia, MediaTek, etc. adopt it?
 
I still don't get what you're trying to say. And we are not talking about GPUs, but SoCs. You said 20SoC is not saving enough power compared to 28HPm. Your own figures show that it brings a 25% power saving, and I pointed that out. Then you said it doesn't bring as much of a power saving as you would expect. So I asked what you did expect, but you haven't answered that. How much did we get from 40 to 28nm?

If you consider 25% any kind of worthwhile percentage then yes, of course 20SoC is worth the investment; I don't. GPU chips are chips with transistors too; they're just bigger and have a far slower cadence than high-end SoCs these days (more below). Theoretically the jump from 40 to 28nm was still better than 40nm to the cancelled 32nm, which coincidentally would have been the equivalent of 20SoC vs. 28nm.

And you also said that the investment compared to 28nm is questionable. But you haven't explained why 20SoC is a questionable investment beyond just making that statement. If it was that bad a process and not worthwhile, why did not only Apple but also Samsung, Qualcomm, Nvidia, MediaTek, etc. adopt it?

Because none of them had any other choice for their high-end SoCs. Any manufacturer with a cut-throat cadence, needing to cram N more transistors into X die area, didn't have much of a choice either way. For those manufacturers that had the luxury of skipping 20SoC for 16FF+, the Huawei slide linked above applies. Manufacturers like Apple actually have two generations of SoCs on 16FF+, while they couldn't step over 20SoC fast enough.
 
There doesn't seem to be a general MediaTek thread here, but a lot of the MediaTek discussion has taken place in this one, so I thought I'd use this thread.
Speccy posted an EETimes link on the Ryzen thread that contains a tidbit about a MediaTek SoC (emphasis mine).

EETimes said:
For its part, Mediatek described a ten-core smartphone SoC made in a 10nm process.

The chip uses a new cluster of three ARM Cortex A-35 cores to handle ultra-low power jobs such as video and MP3 playback. Engineers designed a new kind of program counter it embedded in each core to ease the job of debugging the chip.
I don't think I've seen a nontrivial odd number of one type of CPU core (except the A8X) on a chip; I wonder why this decision was made. (Especially when that means there's at least one other odd group somewhere on the chip.)
 
There doesn't seem to be a general MediaTek thread here, but

Speccy posted an EETimes link on the Ryzen thread that contains a tidbit about a MediaTek SoC (emphasis mine).

I don't think I've seen a nontrivial odd number of one type of CPU core (except the A8X) on a chip; I wonder why this decision was made. (Especially when that means there's at least one other odd group somewhere on the chip.)
Interesting, does that mean there are also 3 A73s to reach the total of 10 cores? That's quite an unexpected boost. Until now I'd assumed it was a 4+4+2 design.
 
There doesn't seem to be a general MediaTek thread here, but
Speccy posted an EETimes link on the Ryzen thread that contains a tidbit about a MediaTek SoC (emphasis mine).

I don't think I've seen a nontrivial odd number of one type of CPU core (except the A8X) on a chip; I wonder why this decision was made. (Especially when that means there's at least one other odd group somewhere on the chip.)

Could it just be a typo in the Helio X30's details (4x A35 cluster)? See:
https://www.xda-developers.com/mediatek-officially-unveils-the-10-nm-helio-x30-and-16-nm-helio-p25/
 
The Mali G71 as implemented in the Hisilicon 960 gets quite a bashing in today's Anandtech article.
Power consumption for Kirin 960’s GPU is even worse, with peak power numbers that are entirely inappropriate for a smartphone. Part of the problem is poor efficiency, again likely a combination of implementation and process, which is only made worse by an overly aggressive 1037MHz peak operating point that only serves to improve the spec sheet and benchmark results.

Average system load power (not peak) during the graphics test is 8.5W, which is 2x-4x more than the others tested. Even allowing for the fact that it's a higher-end GPU than some of those tested, the fps/W shows it to be quite an inefficient implementation. The A9 in the iPhone 6s Plus gets around 60% more fps/W in the T-Rex offscreen test.

http://www.anandtech.com/show/11088/hisilicon-kirin-960-performance-and-power
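
To make the fps/W comparison concrete, here's a trivial sketch; the 8.5W figure is from the review, but the frame rate is a placeholder rather than the measured T-Rex result:

Code:
def fps_per_watt(fps, system_power_w):
    # Efficiency metric: frames per second delivered per watt of system power.
    return fps / system_power_w

kirin960 = fps_per_watt(100.0, 8.5)   # placeholder fps at the quoted ~8.5 W average
a9_like  = kirin960 * 1.6             # the review puts the A9 at roughly +60% fps/W

print(f"Kirin 960 (G71): {kirin960:.1f} fps/W (placeholder fps)")
print(f"A9-class SoC:    {a9_like:.1f} fps/W (~60% better, per the article)")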
 
I have the impression that QCOM changed policy with the Adreno 540 and is going primarily for higher sustained performance. I would think, or rather hope, that other SoC and device manufacturers will follow suit.
 
The Mali G71 as implemented in the Hisilicon 960 gets quite a bashing in today's Anandtech article.


Average system load power (not peak) during the graphics test is 8.5W, which is 2x-4x more than the others tested. Even allowing for the fact that it's a higher-end GPU than some of those tested, the fps/W shows it to be quite an inefficient implementation. The A9 in the iPhone 6s Plus gets around 60% more fps/W in the T-Rex offscreen test.

http://www.anandtech.com/show/11088/hisilicon-kirin-960-performance-and-power
It's quite a shitty chip/GPU.
 
It's quite a shitty chip/GPU.
Do you think this might be because of 16FFC, which was supposed to be used for small, low-clocked midrange SoCs?


I'd love to see the consumption-per-clock curves for the A73 cluster. ARM touts the A73 as being significantly more power-efficient than the A72, but in the Kirin 960 it seems obvious the density-optimized process wasn't made to push large cores up to 2.4GHz.
That leaves me curious as to how the A73 cluster compares to the A72 cluster in the Kirin 950 at lower clocks, e.g. 1.5GHz.
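
For what it's worth, an iso-clock comparison like that is easy enough to rig up by pinning the big cluster's cpufreq policy to a fixed OPP; a rough sketch (assuming root, the standard cpufreq sysfs layout, and that the big cluster sits on policy4, which is device-specific and should be checked first) would be:

Code:
# Pin the big cluster to the supported OPP closest to 1.5 GHz for an
# iso-frequency comparison. Needs root; POLICY path is an assumption.
POLICY = "/sys/devices/system/cpu/cpufreq/policy4"
TARGET_KHZ = 1_500_000

with open(f"{POLICY}/scaling_available_frequencies") as f:
    freqs = sorted(int(x) for x in f.read().split())
pick = min(freqs, key=lambda k: abs(k - TARGET_KHZ))  # closest supported OPP

for node in ("scaling_min_freq", "scaling_max_freq"):
    with open(f"{POLICY}/{node}", "w") as f:
        f.write(str(pick))
print(f"Big cluster pinned to {pick / 1_000_000:.2f} GHz")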
 