Apple is an existential threat to the PC

For most legacy applications, just running on a single processor will be fine. Only the stuff that would actually need to run simultaneously on more cores than a single processor offers would need to be re-engineered, if they went with multiple processors to scale for the highest end.
This is waxing philosophical, but maybe that’s OK in a thread with ”existential” in its title. :)
I tend to see this as some kind of multidimensional local minimum that depends on your application space, lithographic technology, and training. All of which change over time, partly predictably.
Thing is, you need to cross barriers to go from one minimum to the next. It may be that other minima deeper than where you are exist, but you can’t even get there to explore without crossing a barrier that may simply be too high. You may even know that there is a better way, and still not be able to get there.
This used to annoy me a lot when my testosterone levels were higher. Still does, for that matter. But I’ve also come to realise that having to pay a large price in order to hit a somewhat better optimum may not be the wisest use of resources. It depends.
One of the things it depends on is inertia. Training. As the decades move on something that was ”a” way of doing things becomes ”the” way of doing things. This isn’t unique to computing by any means, but as someone who got into computational science just after punch cards, it is also very obvious how standardised computer architecture and programming has become, predictably following the paths of commercial inertia.
This is not necessarily such a bad thing per se. It’s just boring. Which means human ingenuity is applied to other areas, for better or worse. (I may lament that bright chemists have spent so much of their ability mapping chemical problems to computer architecture rather than primarily addressing chemical questions, for instance, but things have been moving forward as a whole regardless.)
At the end of the day, it may be OK that futzing around with the tool box takes a back seat to actually using the tools at hand to solve problems.

Reconnecting to the ”Apple” and ”PC” parts of the thread title, from a computational point of view I’m glad Apple makes a move like this at all, and am curious as to what possibilities they will explore. Just extrapolating from commercial inertia it definitely wasn’t a given. On the other hand inertia demands that the transition is smooth for developers and transparent to end users. A more radical reimagining of software structure was never in the cards.
 
Apple confirms Apple Silicon Macs will support Thunderbolt connection
“Over a decade ago, Apple partnered with Intel to design and develop Thunderbolt, and today our customers enjoy the speed and flexibility it brings to every Mac. We remain committed to the future of Thunderbolt and will support it in Macs with Apple silicon,” commented an Apple spokesperson in a statement to The Verge.

But what about driver support for older hardware? And does TB3 mean Apple Silicon SoCs would need a lot of PCIe lanes?
 
Thing is, you need to cross barriers to go from one minimum to the next. It may be that other minima deeper than where you are exist, but you can’t even get there to explore without crossing a barrier that may simply be too high. You may even know that there is a better way, and still not be able to get there.

That's exactly how I view it. Apple has been wanting to make this move for years, but the performance of Intel's offerings was just too high. Apple are in the premium computer market and you can't command a premium price if you don't have a competitive processor.

Intel's stumbling in getting to new process nodes has reduced the barrier to making this switch: the power-performance metric is now so much in favour of Apple's own SoCs, and outright performance close enough, to provide a net value increase.

It is still a gutsy move; the safe bet would have been to start sourcing AMD 4xxx-series APUs to get the 2x power-performance increase (and to put pressure on Intel).

Cheers
 
Ming-Chi Kuo expects four different ARM Apple laptops late this year and next year.
Ming-Chi Kuo said:
We predict that Apple will launch new MacBook models including the new 13.3-inch ‌MacBook Pro‌ equipped with the ‌Apple Silicon‌ in 4Q20, the new ‌MacBook Air‌ equipped with the ‌Apple Silicon‌ in 4Q20 or 1Q21, and new 14- and 16-inch ‌MacBook Pro‌ models equipped with the ‌Apple Silicon‌ and all-new form factor design in late 2Q21 or 3Q21.
So the entire Apple laptop lineup could move to ARM within about 5 quarters from the announcement at WWDC 2020.
 
Apple confirms Apple Silicon Macs will support Thunderbolt connection


But what about driver support for older hardware? And does TB3 mean Apple Silicon SoCs would need a lot of PCIe lanes?
I would assume that they’ll be using TB4. So yes, they’ll need to support a number of PCIe lanes although the laptops wouldn’t need a lot of them since they presumably won’t use it to connect to a discrete GPU. If they want a future Mac Pro to be able to connect to discrete GPUs, they’ll have to design in more of course.
Ming-Chi Kuo expects four different ARM Apple laptops late this year and next year.
So the entire Apple laptop lineup could move to ARM within about 5 quarters from the announcement at WWDC 2020.
Laptops are pretty straightforward, insofar as they don’t really need anything other than standard I/O ports, and can do without expandable memory. The lowest-power devices could use a straight iPad Pro chip (with PCIe lanes going off-chip) and the higher tier could, for instance, double the off-chip memory bus width, which would also buy them twice the RAM capacity. That’s pretty much a minimal-effort (and validation and design time) proposition that would also be pretty damn performant. They could also plop either of these into a Mac mini enclosure and be done.
But I wouldn’t necessarily assume that they won’t be a bit more creative. iMacs and Mac Pros give a bit more options in terms of power draw. And while the iMacs could use the MacBook Pro solution, Apple have clearly stated that they intend to produce more SoCs, so....? I guess if we wanted to, we could produce some exciting, but still reasonably believable list of specs that would then do the rounds on the internet as a credible rumour. :D
 
That's exactly how I view it. Apple has been wanting to make this move for years, but the performance of Intel's offerings was just too high. Apple are in the premium computer market and you can't command a premium price if you don't have a competitive processor.

Intel's stumbling in getting to new process nodes has reduced the barrier to making this switch: the power-performance metric is now so much in favour of Apple's own SoCs, and outright performance close enough, to provide a net value increase.

It is still a gutsy move; the safe bet would have been to start sourcing AMD 4xxx-series APUs to get the 2x power-performance increase (and to put pressure on Intel).

Cheers

Intel seems to have been caught sleeping by AMD and Apple. However, Intel has new GPU tech in the works and they have their big-little tech coming out. I also wouldn't be surprised if we actually get a new CPU design from them for the first time in like 10 or 12 years.


AMD Zen keeps increasing in performance and core count while lowering power usage, and they haven't even integrated Navi 1, let alone Navi 2, into the CPUs. They do have the Bobcat cores if they want to start implementing big-little as well; I think Puma was the last release of that line, but that was six years ago on 28nm. I am sure if they dusted that off for 7nm or 5nm it would be quite a power-efficient design as a little brother to newer Zen cores. Eight of those would be really tiny and would work well with 4-8 Zen cores for an ultraportable. Then you have the AMD K12; I don't know what happened with that. However, since MS has both an ARM Windows and an x86-64 Windows, perhaps they could implement a hybrid solution with a K12 variant on the same die as Zen.

For Apple it makes sense, since they can control the majority of their pipeline for processors and save a lot of money by not buying from Intel or AMD. Of course, it remains to be seen if they switch away from AMD graphics chips for their Mac lineup.
 
Intel seems to have been caught sleeping by AMD and Apple. However, Intel has new GPU tech in the works and they have their big-little tech coming out. I also wouldn't be surprised if we actually get a new CPU design from them for the first time in like 10 or 12 years.
I wonder how their big.LITTLE gambit will play out. Not only will Windows scheduling have to be intelligent about it, but by shifting between an HP and an LP core you also switch instruction sets. If the OS has to reboot to see AVX, AVX2, AVX-512, TSX-NI, F16C and FMA, well....but they may extend their "Xmont" core capabilities.
 
I wonder how backwards compatibility will work on the new ARM MacBooks. When Apple transitioned from PowerPC to x86, few people cared because few people owned an Apple PC, and by being able to run Windows on their Macs they significantly increased their available software overnight.


If they are going to offer substantially better performance other than in very niche benchmarks, they need to do something about the memory subsystem. The iPad solution has been going to a 128-bit wide memory path, to LPDDR4x in the 2018 model, and likely LPDDR5 in the next.
I agree. Apple will definitely not be stuck with 2*64-bit DDR4/5 SO-DIMMs. They can use a single HBM2E stack to get 300-400GB/s on their new SoC and increase bandwidth by almost 10x over what they have now with Coffee Lake.
At the very least, they'll go with wide 256-bit LPDDR5 (~100GB/s) or 128-bit GDDR6 (>224 GB/s) on their MacBooks.
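For reference, the bandwidth figures being thrown around here all fall out of one formula. Here's a quick sketch of the arithmetic; the transfer rates below are plausible assumptions, not confirmed specs for any Apple part:

```python
# Peak memory bandwidth from bus width and transfer rate.
# All configurations listed are hypothetical examples, not Apple products.

def peak_bandwidth_gbs(bus_width_bits, transfer_rate_mts):
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers/s in GT/s)."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000

configs = {
    "128-bit LPDDR4X-4266":            (128, 4266),
    "256-bit LPDDR5-3200":             (256, 3200),
    "128-bit GDDR6-14000":             (128, 14000),
    "1024-bit HBM2E, 3.2 Gbps pins":   (1024, 3200),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.1f} GB/s")
```

Run it and the ~100GB/s LPDDR5 and >224GB/s GDDR6 numbers above drop straight out.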

ARM was and is a distraction for Microsoft, one of many distractions Microsoft has saddled itself with.
I don't think Microsoft's efforts on ARM were a distraction. They were just Microsoft trying to stop being so dependent on Intel, and their only problem is that they've been failing miserably at every attempt.
Had Microsoft been successful at putting Windows on ARM with performant and full win32 / win64 backwards compatibility, most of us could right now be using a Windows 10 smartphone that docks and turns into our work PC.

Instead, we have Microsoft pretty much being held back by Intel's inability to keep up in fab evolution for the past ~7 years (and AMD's inability to keep up in architecture up to 2017).
 
I wonder how their big.LITTLE gambit will play out. Not only will Windows scheduling have to be intelligent about it, but by shifting between an HP and an LP core you also switch instruction sets. If the OS has to reboot to see AVX, AVX2, AVX-512, TSX-NI, F16C and FMA, well....but they may extend their "Xmont" core capabilities.
Windows 10X will help solve a lot of that, and it's why it's delayed. The whole point of that OS branch was to tackle those issues, as you say. It should be seamless, with the OS assigning each task to the proper core. MS is investing in this in a big way. It's important for their business, and bigger than Windows on ARM for sure.
 
I wonder how backwards compatibility will work on the new ARM MacBooks. When Apple transitioned from PowerPC to x86, few people cared because few people owned an Apple PC, and by being able to run Windows on their Macs they significantly increased their available software overnight.



I agree. Apple will definitely not be stuck with 2*64-bit DDR4/5 SO-DIMMs. They can use a single HBM2E stack to get 300-400GB/s on their new SoC and increase bandwidth by almost 10x over what they have now with Coffee Lake.
At the very least, they'll go with wide 256-bit LPDDR5 (~100GB/s) or 128-bit GDDR6 (>224 GB/s) on their MacBooks.


I don't think Microsoft's efforts on ARM were a distraction. They were just Microsoft trying to stop being so dependent on Intel, and their only problem is that they've been failing miserably at every attempt.
Had Microsoft been successful at putting Windows on ARM with performant and full win32 / win64 backwards compatibility, most of us could right now be using a Windows 10 smartphone that docks and turns into our work PC.

Instead, we have Microsoft pretty much being held back by Intel's inability to keep up in fab evolution for the past ~7 years (and AMD's inability to keep up in architecture up to 2017).


The Surface Pro X is a great attempt by MS. The hardware is really nice, with great battery life, and the x86 emulation is quick. They just haven't managed 64-bit yet, but a developer can always ship a native ARM 64-bit executable. They are also working on 64-bit emulation, though of course that's going to take a bit longer.

Apple has a bit more control, as many of the popular programs on their platform are made by Apple themselves. So there is no reason all Apple software won't be native ARM at or near launch. Even Office from MS has a 64-bit ARM version because of Windows on ARM.
 
I wonder how their big.LITTLE gambit will play out.

I expect it to be a spectacular failure. Their low-power cores have poorer efficiency than Zen 2. So you either get piss-poor efficiency or piss-poor performance compared to Zen 2 ultralights. And, as you said, the Windows scheduler has to work all this out.

Cheers
 
I wonder how backwards compatibility will work on the new ARM MacBooks. When Apple transitioned from PowerPC to x86, few people cared because few people owned an Apple PC, and by being able to run Windows on their Macs they significantly increased their available software overnight.

Apple does translation from x86 to ARM behind the scenes. Rosetta 2 was demoed using a Tomb Raider game. This leads to an interesting question: as Apple designs their own ARM cores, would the initial Apple ARM implementations have special sauce to make translation more performant?

https://www.theverge.com/21304182/apple-arm-mac-rosetta-2-emulation-app-converter-explainer#:~:text=That's where Rosetta 2 comes,that Apple's chips can understand.
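As a toy model of what ahead-of-time translation means, consider the sketch below. Everything in it is invented for illustration; the mnemonics are made up and bear no relation to real x86 or ARM encodings, let alone Rosetta 2's internals:

```python
# Toy ahead-of-time (AOT) binary translation: one source instruction maps
# to one or more target instructions, and the whole program is translated
# once, up front, so no per-instruction cost is paid at run time.
# The "ISAs" here are fake mnemonics, purely for illustration.

TRANSLATION_TABLE = {
    "X86_ADD":  ["ARM_ADD"],
    "X86_PUSH": ["ARM_SUB_SP", "ARM_STORE"],  # no single-op push on the target
    "X86_POP":  ["ARM_LOAD", "ARM_ADD_SP"],
}

def translate(source_program):
    """Expand each source op into its target sequence, in order."""
    target = []
    for op in source_program:
        target.extend(TRANSLATION_TABLE[op])
    return target

print(translate(["X86_PUSH", "X86_ADD", "X86_POP"]))
```

The "special sauce" question then becomes: can the hardware make the translated sequences shorter or cheaper, e.g. by natively supporting semantics the source ISA assumes?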
 
I agree. Apple will definitely not be stuck to 2*64bit DDR4/5 SO-DIMMs. They can use a single HBM2E stack to get 300-400GB/s on their new SoC and increase bandwidth by almost 10x over what they have now with Coffee Lake.
At the very least, they'll go with wide 256bit LPDDR5 (~100GB/s) or 128 GDDR6 (>224 GB/s) on their macbooks.
I hope they do come up with an HBM2E solution, just because that would be exciting/interesting from a tech POV, but... that would limit RAM capacity to 24GB, compared to a current max of, I think, 64GB (MacBook Pro). I suppose they could offer 2-stack configurations to approach the current maximum. Or maybe HBM3 ups the per-stack capacity again? Also, I wonder if there is enough HBM manufacturing capacity to supply the needs of consumer products. I'm not sure if that's a legitimate worry - I have no idea how much shipping a stack or two of HBM per Mac would increase HBM volume. Maybe enough high-end GPU/AI/networking stuff gets shipped that it's not actually that bad?
 
I hope they do come up with an HBM2E solution, just because that would be exciting/interesting from a tech POV, but... that would limit RAM capacity to 24GB, compared to a current max of, I think, 64GB (MacBook Pro). I suppose they could offer 2-stack configurations to approach the current maximum. Or maybe HBM3 ups the per-stack capacity again? Also, I wonder if there is enough HBM manufacturing capacity to supply the needs of consumer products. I'm not sure if that's a legitimate worry - I have no idea how much shipping a stack or two of HBM per Mac would increase HBM volume. Maybe enough high-end GPU/AI/networking stuff gets shipped that it's not actually that bad?


They could go with HBM2E / HBM3 chips for the Mac Pro and higher-end 15/17" MacBook Pro, and wide 256-bit (or more) LPDDR5 for the 13" MacBook.

I just don't think Apple will go the DDR4/DDR5 route. Regular DDR seems pretty terrible in anything but price nowadays.
 
Windows 10X will help solve a lot of that, and it's why it's delayed. The whole point of that OS branch was to tackle those issues, as you say. It should be seamless, with the OS assigning each task to the proper core. MS is investing in this in a big way. It's important for their business, and bigger than Windows on ARM for sure.
The latest leak from Intel indicates that the issue of having different ISA support between the big and the small cores is only partially resolved. Using AVX-512, TSX-NI, or F16C will require a system reboot, only allowing the big cores to be active.
Which is, let's be honest, butt ugly.
It could be rectified in later Intel CPUs of course, most straightforwardly by implementing those instructions in the small cores. Which, on the other hand, makes them not-so-small anymore. Having to support AVX-512 with the small cores represents a huge waste.
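The scheduling constraint being described can be sketched like this; the core names and feature sets below are hypothetical, not leaked Intel specs:

```python
# Sketch of asymmetric-ISA scheduling: a thread may only run on cores whose
# feature set covers every instruction it might execute. Core names and
# feature sets here are hypothetical examples.

BIG_FEATURES    = {"sse", "avx2", "avx512"}
LITTLE_FEATURES = {"sse", "avx2"}  # assumption: small cores lack AVX-512

CORES = {
    "big0": BIG_FEATURES, "big1": BIG_FEATURES,
    "little0": LITTLE_FEATURES, "little1": LITTLE_FEATURES,
}

def eligible_cores(thread_features):
    """Return cores whose features are a superset of the thread's needs."""
    return sorted(c for c, feats in CORES.items() if thread_features <= feats)

print(eligible_cores({"sse", "avx2"}))    # runs anywhere
print(eligible_cores({"sse", "avx512"}))  # big cores only
```

The ugly part in the leak is that instead of per-thread pinning like this, the whole feature set apparently toggles at boot.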
 
They could go with HBM2E / HBM3 chips for the Mac Pro and higher-end 15/17" MacBook Pro, and wide 256-bit (or more) LPDDR5 for the 13" MacBook.

I just don't think Apple will go the DDR4/DDR5 route. Regular DDR seems pretty terrible in anything but price nowadays.
The AMD 5600M that Apple commissioned has a 40-CU GPU (same as the 5700 XT) coupled to HBM memory at 50W. What this tells us is that Apple is/was willing to accept high cost rather than high TDP in order to achieve performant MacBooks. That TDP is still around 100W system-wide. I would assume they would prefer to drop that to half to improve battery life and facilitate quieter cooling and a lighter product.
But even 50W is a lot more than the sub 10W of an iPad Pro SoC. If we assume that the rumoured 8x4 SoC is on 5nm, it could very well be targeted at an updated iPad Pro, and a fanless MacBook Air or some such. 128-bit LPDDR5 would seem to fit that.
But if we make three more assumptions:
a) Apple will allow the SoC a 40W power draw,
b) they will want to provide at least as high performance as they currently offer, and
c) they will want to allow at least the same RAM capacity as they currently offer,
then we are pretty much left with HBM as the memory solution of choice for the higher-end MacBook Pros, but with a question mark in terms of memory capacity. Currently they offer 64GB options, and that might require going wider in interface than makes sense otherwise. Or their supplier(s) could increase capacity/stack (currently 24GB/stack) between now and launch.
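A quick sanity check on the capacity point, taking the 24GB/stack figure above as an assumption:

```python
import math

# How many HBM stacks would be needed to hit a given RAM target,
# assuming 24 GB per stack (the figure discussed in this thread).

def stacks_needed(target_gb, gb_per_stack=24):
    return math.ceil(target_gb / gb_per_stack)

for target in (16, 32, 64):
    print(f"{target} GB -> {stacks_needed(target)} stack(s)")
```

So matching today's 64GB option would take three stacks at current densities, which is exactly why a wider-than-otherwise-sensible interface (or denser stacks) comes into it.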
Of course, given the memory market as of right now, LPDDR5 doesn't allow as high bandwidth if you stay at 256-bit interfaces, but it offers more flexibility in terms of total memory capacity and suppliers. Offering 64GB as a memory option is no problem at all and doing that lovely sell up ramp from 16GB to 64GB is trivial. So an option for MacBook Pros could still be LPDDR5. That might make hitting the performance of the 5600M a stretch, but they could be in the ballpark, and at a lot lower cost. Power draw could be quite low, since the memory subsystem doesn't allow all that much productive upscaling anyway from the fanless SoC. Say a factor of 2.5 up in effective GPU performance from that SoC at 30W.

The same reasoning goes for the Mac mini. It could use either a fanless SoC or the laptop SoC depending on market positioning.

The iMacs and of course the Mac Pros are where the large question marks are, because acceptable power draw is 100W+. Here there are three memory possibilities - HBM, GDDR6, or DDR5.
DDR5 would allow the greatest capacities and greatest flexibility at lowest cost/GB. And, unfortunately, the lowest bandwidth by far, unlikely to exceed 200GB/s even with a 256-bit interface. Assuming a dedicated SoC and unified memory, graphics performance would be adequate, but not really up to discrete mid-power GPUs at the time due to pure bandwidth constraints if nothing else.
GDDR6 would allow bandwidths up to 600GB/s or so on a 256-bit interface. The problem is capacity: current memory offerings target graphics cards and have much too little capacity to fit the iMac market, where max RAM today is 64-256GB. That implies that they would have to source bespoke memory chips, which would increase cost/GB and might cause vendor lock-in, at least for the highest-end capacities.
HBM would offer the greatest bandwidth, with the worst granularity of memory offerings and, depending on the number of stacks, probably the lowest maximum RAM capacity. (We are still talking 48GB for a dual-stack design right now, so what might be on offer a year and a half from now is bound to be a bit better.)

I would gravitate towards the latter two for the iMac. The Mac Pro, however, would depend on whether they would still allow discrete GPUs. Nothing indicates that discrete GPUs will remain a base-level option, but if they remain a possibility either via PCIe or eGPU, then DDR5 might make sense on a Mac Pro in order to allow for really large RAM capacity.
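Putting the three candidates side by side with the rough numbers from this post (assumptions and estimates, not vendor specs) makes the tradeoff explicit: bandwidth and capacity pull in opposite directions.

```python
# Rough figures from the discussion above (assumptions, not vendor specs):
# (name, ~peak bandwidth GB/s, ~plausible max capacity GB)
options = [
    ("DDR5, 256-bit",     200, 256),
    ("GDDR6, 256-bit",    600, 64),
    ("HBM2E, two stacks", 800, 48),
]

best_bw  = max(options, key=lambda o: o[1])
best_cap = max(options, key=lambda o: o[2])
print("highest bandwidth:", best_bw[0])
print("highest capacity: ", best_cap[0])
```

No option wins on both axes, which is the whole reason the Mac Pro question stays open.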

There are a number of assumptions in the above that, while hopefully reasonable, may not be true; they merely serve to reduce the speculation space and organise the options.

Edit: DigiTimes just ran an article stating that TSMC would see a significant uptick in wafer orders in 2H 2021 due to Apple silicon Macs. Which is a rather extraordinary claim, given the currently tiny volume of Macs (20 million/year) vs. iOS devices. It could be nonsense, or it could indicate significantly larger SoCs for upcoming Macs.
 
The latest leak from Intel indicates that the issue of having different ISA support between the big and the small cores is only partially resolved. Using AVX-512, TSX-NI, or F16C will require a system reboot, only allowing the big cores to be active.
Which is, let's be honest, butt ugly.
It could be rectified in later Intel CPUs of course, most straightforwardly by implementing those instructions in the small cores. Which, on the other hand, makes them not-so-small anymore. Having to support AVX-512 with the small cores represents a huge waste.
Can't they do AVX-512 support by using two AVX-256 operations? Even so, how big is the Atom P5900? It's an 8- to 24-core CPU; I think it's quite small.

We'd also have to see what AMD has up their sleeves.
 
Can't they do AVX-512 support by using two AVX-256 operations? Even so, how big is the Atom P5900? It's an 8- to 24-core CPU; I think it's quite small.

We'd also have to see what AMD has up their sleeves.
Superficially, you'd think that you could use trap-and-emulate for AVX-512 and FP16 both (TSX-NI though...). I haven't tried to dig deep enough to know exactly what trips them up. This feels like a beta test; I'm sure that a couple of generations of band-aids and duct tape will make it look better, and hopefully it'll bring tangible advantages once all the geese are in line. You raise an interesting point with AMD. I'm not sure they'll follow Intel's approach at all, if they have other effective ways to manage low-activity power draw.
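Combining the trap-and-emulate idea with the "two AVX-256 halves" question from the previous post, here's a toy model; it's nothing like a real CPU/OS implementation, just the control flow:

```python
# Toy trap-and-emulate: try the wide 512-bit op; when the core "traps"
# (here modelled as an exception), fall back to two 256-bit half-width ops,
# as in double-pumped AVX-512. Purely illustrative, not real hardware logic.

def add_512(a, b):
    """Pretend native 512-bit vector add (16 lanes). This 'little core'
    doesn't implement it, so it traps."""
    raise RuntimeError("unsupported instruction")

def add_256(a, b):
    """Pretend native 256-bit vector add (8 lanes)."""
    return [x + y for x, y in zip(a, b)]

def vector_add(a, b):
    """Try the wide op; on a trap, emulate with two half-width ops."""
    try:
        return add_512(a, b)
    except RuntimeError:
        return add_256(a[:8], b[:8]) + add_256(a[8:], b[8:])

print(vector_add(list(range(16)), [1] * 16))
```

The catch, as noted above, is that trapping has overhead and some features (TSX-NI especially) have semantics that can't be emulated this cheaply.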
 