This is waxing philosophical, but maybe that’s OK in a thread with ”existential” in its title.

For most legacy applications, just running on a single processor will be fine. Only the software that would actually need to run simultaneously on more cores than a single processor offers would need to be re-engineered, if they went with multiple processors to scale the highest end.
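To make that distinction concrete, here is a minimal Swift sketch (the names and numbers are purely illustrative, not from anything in this thread, and it glosses over the multi-socket/NUMA aspect): a plain serial loop needs no changes and simply runs on one core, whereas scaling across many cores means explicitly splitting the work and combining partial results, which is the re-engineering step.

    import Dispatch

    // Hypothetical workload for illustration: summing a large array.
    let data = (0..<10_000_000).map { Double($0).squareRoot() }

    // "Legacy" version: a plain serial loop. It runs on one core and
    // never needs to know how many processors the machine has.
    func serialSum(_ values: [Double]) -> Double {
        var total = 0.0
        for v in values { total += v }
        return total
    }

    // Parallel version: the work is explicitly split into chunks, each
    // chunk is summed concurrently, and the partial sums are combined
    // at the end. This restructuring is the re-engineering step, and it
    // only pays off for code that actually needs the extra cores.
    func parallelSum(_ values: [Double], chunks: Int = 8) -> Double {
        let chunkSize = (values.count + chunks - 1) / chunks
        var partials = [Double](repeating: 0.0, count: chunks)
        partials.withUnsafeMutableBufferPointer { buffer in
            DispatchQueue.concurrentPerform(iterations: chunks) { i in
                let start = min(i * chunkSize, values.count)
                let end = min(start + chunkSize, values.count)
                var local = 0.0
                for j in start..<end { local += values[j] }
                buffer[i] = local   // each chunk writes its own slot
            }
        }
        return partials.reduce(0, +)
    }

    print(serialSum(data), parallelSum(data))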
I tend to see this as some kind of multidimensional local minimum that depends on your application space, lithographic technology, and training, all of which change over time, partly predictably.
Thing is, you need to cross barriers to go from one minimum to the next. Deeper minima than the one you are in may exist, but you can’t even go explore them without crossing a barrier that may simply be too high. You may even know that there is a better way, and still not be able to get there.
This used to annoy me a lot when my testosterone levels were higher. Still does, for that matter. But I’ve also come to realise that paying a large price to reach a somewhat better optimum may not be the wisest use of resources. It depends.
One of the things it depends on is inertia. Training. As the decades move on, something that was ”a” way of doing things becomes ”the” way of doing things. This isn’t unique to computing by any means, but to someone who got into computational science just after punch cards, it is also very obvious how standardised computer architecture and programming have become, predictably following the paths of commercial inertia.
This is not necessarily such a bad thing per se. It’s just boring. Which means human ingenuity is applied to other areas, for better or worse. (I may lament, for instance, that bright chemists have spent so much of their ability on mapping chemical problems onto computer architecture rather than on primarily addressing chemical questions, but things have been moving forward as a whole regardless.)
At the end of the day, it may be OK that futzing around with the tool box takes a back seat to actually using the tools at hand to solve problems.
Reconnecting to the ”Apple” and ”PC” parts of the thread title: from a computational point of view I’m glad Apple is making a move like this at all, and I am curious about what possibilities they will explore. Just extrapolating from commercial inertia, it definitely wasn’t a given. On the other hand, inertia demands that the transition be smooth for developers and transparent to end users. A more radical reimagining of software structure was never in the cards.