Windows 10 [2014 - 2017]

The high-end ARM cores are already way above Atoms in performance, in my opinion and experience.

High-end ARM cores are still low end compared to Intel Core CPUs, Apple's CPUs excepted (but Apple isn't in the chip merchant business).

Cheers
 
So this is probably why we are getting such great deals on Lumia 950 phones these days. I got an XL with a dock for 420 a while ago and saw an offer for 349 including the dock the other day, straight from Microsoft. It has replaced my iPhone as my phone, and the iPhone is now an app-and-navigation-only device.

I really like Windows Mobile better than the other two OSes at the moment, at least as they are straight out of the box (I'm sure you can get Android modded into something very cool). I also really like the hardware: it's a great screen, and the camera is really good too, though it could use a better-placed lens.

I hope the Win32 emulation support will only help native ARM support take off, as it's a stopgap solution: you would want apps to eventually all be UAP with native ARM support, with the Win32 emulation remaining for those pesky old favorites that haven't received a replacement or are too expensive to upgrade.
 
High-end ARM cores are still low end compared to Intel Core CPUs, Apple's CPUs excepted (but Apple isn't in the chip merchant business).

Cheers

High-end ARMs should come pretty close to the Core M in single-threaded performance.
With 8 cores versus 2, guess who will be the fastest in multi-threaded workloads.
 
Why then do the PS4 and XBOne have 8 cores instead of 2 faster ones (and they are x86)?

Because consoles are special-purpose machines, where you need maximum performance/$ and maximum performance/Watt. Console games typically have one or two compile targets, allowing you to use CPU-specific intrinsics and exploit specific system architectural knowledge (cache hierarchy, memory capacity/structure, etc.).

For apps going to a plethora of devices, mobile or otherwise, that's just not the case.

Cheers
 
Because consoles are special-purpose machines, where you need maximum performance/$ and maximum performance/Watt. Console games typically have one or two compile targets, allowing you to use CPU-specific intrinsics and exploit specific system architectural knowledge (cache hierarchy, memory capacity/structure, etc.).

For apps going to a plethora of devices, mobile or otherwise, that's just not the case.

Cheers

Games must be among the top apps consuming CPU, so multi-threading really helps.
And once you write multi-threaded code, it can work across any number of cores without extra effort, as in the sketch below.
Ported console games can make good use of a similar CPU with many cores.
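For illustration, here is a minimal Kotlin sketch of that point (the range-summing workload and all names are hypothetical stand-ins, not taken from the posts above): the same worker code is submitted to a pool sized from whatever core count the device reports, so it runs unchanged on 2 cores or 8.

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    // Size the pool from whatever the device reports: 2 on a small x86, 8 on a big.LITTLE SoC.
    val cores = Runtime.getRuntime().availableProcessors()
    val pool = Executors.newFixedThreadPool(cores)

    // Each worker sums its own interleaved slice of a large range
    // (a hypothetical stand-in for real per-frame or per-chunk game work).
    val futures = (0 until cores).map { id ->
        pool.submit(Callable {
            (id until 100_000_000 step cores).fold(0L) { acc, i -> acc + i }
        })
    }

    val total = futures.sumOf { it.get() }
    println("Summed with $cores workers: $total")

    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
}
```

The only core-count-specific decision is how many workers to create, and that is read from the hardware at runtime rather than baked into the code.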
 
Phones, like tablets and PCs, are devices that run a large number of processes in parallel. Additionally, they need to stay cool, perhaps even more so. So the well-known benefits of having multiple cores at a lower frequency hold at least as much here as anywhere else.
 
Phones, like tablets and PCs, are devices that run a large number of processes in parallel.
Yeah, but almost none of those threads/processes are active (i.e. in a run queue). As I type this, I have 1428 threads on my Windows 10 PC and CPU utilization is 1-2% (a 6-core CPU with 12 hardware contexts).

Modern apps utilize async tasks to a large extent. Each active async task has its own thread. If you have a computationally heavy app you might use lots of cores, but in most apps you use async tasks to avoid blocking the main thread (which runs the GUI). Android and iOS both mandate that network access be done in async tasks. So you spawn your task and send your network request in a matter of microseconds; the thread then waits 100 milliseconds for the response, and resumes.

You can have many tens of async tasks in flight with near-zero CPU usage. This can cost power on a many-core SoC, because you have plenty of threads and the scheduler fires up all the cores; they then mostly sit idle, but not idle enough to be powered down.
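As a rough illustration of that pattern, here is a small Kotlin coroutines sketch (assuming the kotlinx-coroutines library; the URL is just a placeholder, not something from the thread). Each request is dispatched off the main thread and spends almost all of its time parked waiting for the response, so twenty of them in flight cost almost no CPU. Note that with coroutines the tasks share a small I/O thread pool rather than strictly getting one thread each, but the scheduling effect described above is the same.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext
import java.net.HttpURLConnection
import java.net.URL

// One async request: runs on an I/O dispatcher thread and mostly just waits.
suspend fun fetchStatus(urlText: String): Int = withContext(Dispatchers.IO) {
    val conn = URL(urlText).openConnection() as HttpURLConnection
    try {
        conn.responseCode   // the worker parks here for the ~100 ms network round trip
    } finally {
        conn.disconnect()
    }
}

fun main() = runBlocking {
    // Tens of requests "in flight" at once, with near-zero CPU usage overall.
    val statuses = (1..20).map { async { fetchStatus("https://example.com/") } }.awaitAll()
    println("Got ${statuses.size} responses: $statuses")
}
```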

Look at the graphs on this page from AnandTech's multi-core investigation article. Notice how many of the cores in the LITTLE and big clusters are active, how CPU usage is only 30-40%, and how cores are clock-gated instead of power-gated (especially true for the big cores). You have four cores running, but two cores could have done the same work at the same clock frequency. As a consequence, power is wasted.

Cheers
 
Yeah, but almost none of those threads/processes are active (i.e. in a run queue). As I type this, I have 1428 threads on my Windows 10 PC and CPU utilization is 1-2% (a 6-core CPU with 12 hardware contexts).

Modern apps utilize async tasks to a large extent. Each active async task has its own thread. If you have a computationally heavy app you might use lots of cores, but in most apps you use async tasks to avoid blocking the main thread (which runs the GUI). Android and iOS both mandate that network access be done in async tasks. So you spawn your task and send your network request in a matter of microseconds; the thread then waits 100 milliseconds for the response, and resumes.

You can have many tens of async tasks in flight with near-zero CPU usage. This can cost power on a many-core SoC, because you have plenty of threads and the scheduler fires up all the cores; they then mostly sit idle, but not idle enough to be powered down.

Look at the graphs on this page from AnandTech's multi-core investigation article. Notice how many of the cores in the LITTLE and big clusters are active, how CPU usage is only 30-40%, and how cores are clock-gated instead of power-gated (especially true for the big cores). You have four cores running, but two cores could have done the same work at the same clock frequency. As a consequence, power is wasted.

Cheers

But don't some chips already feature low-power modes that disable cores when they are not needed? It seems more an issue of the hardware being overpowered and not used efficiently than of multiple cores being worse per se.

Scanning the conclusion of the AnandTech article, don't they actually agree with me?
 
Scanning the conclusion of the AnandTech article, don't they actually agree with me?

Well, I don't think their own data supports their conclusion. They rely too much on the run-queue length read-outs ("load" in Unix parlance) and not enough on the actual CPU usage. Case in point: the app-update scenario, where the average run-queue depth is above 5, which AnandTech interprets as there being enough work for 5 cores, while in reality the four big cores are heavily underutilized. Eyeballing it, I'd say each core is less than one third loaded on average (the green area in the big-cluster graph).

Android runs on Linux, and load reflects not only how many processes are ready to run, but also threads blocked waiting for I/O. As an example, I just tried to copy two large files on one of our Linux servers running Apache, here:


Notice that 327.6% of the 400% of CPU resources is either in wait or idle (i.e. less than two thirds of one CPU is being used to do work), even though the load of the server is above 15.

I'm guessing that the app-update scenario is actually bottlenecked by file access and not CPU usage, and that is what the run queue lengths reflect.
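A rough, Linux-only Kotlin sketch of the distinction being drawn here (assuming a readable /proc; it is not connected to the AnandTech data or the server output above): it samples /proc/stat over one second and prints the actual CPU busy time next to the 1-minute load average, which also counts threads blocked in uninterruptible I/O wait.

```kotlin
import java.io.File

fun main() {
    // 1-minute load average: counts runnable threads AND threads stuck in uninterruptible I/O wait.
    val load1min = File("/proc/loadavg").readText().split(" ")[0].toDouble()

    // First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    fun cpuTimes() = File("/proc/stat").readLines().first()
        .split(Regex("\\s+")).drop(1).map { it.toLong() }

    val before = cpuTimes()
    Thread.sleep(1000)
    val after = cpuTimes()

    val deltas = after.zip(before) { a, b -> a - b }
    val total = deltas.sum().toDouble()
    val notBusy = (deltas[3] + deltas[4]).toDouble()   // idle + iowait columns

    val busyPct = 100 * (1 - notBusy / total)
    println("load(1m) = $load1min, CPU busy = ${"%.1f".format(busyPct)}%")
}
```

On a box doing heavy file copies you would typically see a high load number next to a low busy percentage, which is the point being made about the run-queue read-outs.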

Cheers
 
That sounds more correct; as of now they are just screenshots that change a few colours of the interface a bit, and that's it.
That's all a Windows theme is: a collection of wallpapers and a UI color selection.
 