Apple is an existential threat to the PC

It's scary how good Apple is at information security. I guess everyone had to sign an NDA that promised their firstborn child if they failed to uphold it.

Still, given that Intel's stock price didn't budge much, I guess everyone with money already knew.
 
I wonder what I/O that development Mac Mini has. It can drive the 6K screen.
This is an interesting question for the future in general. They have used, but also been limited by, Intel's platform. Rolling their own silicon also means they get to decide what to offer and where. So many possibilities and questions!
 
Two USB-C ports (up to 10 Gbps)
Two USB-A ports (up to 5 Gbps)
HDMI 2.0 port

It’s only $500 with 16GB memory and 512GB SSD.

You don’t own it though. You have to send it back at some point.
 
iOS 14 Includes Picture-in-Picture Mode for iPhone

Biggest news, lol. It only took them a decade to catch up.

BTW, interesting read:

Apple really undersold the A12


Can't wait for MacBooks with 15 hours of battery life.
 
Rosetta worked fine, as I recall from the PowerPC to Intel transition.

Back then it really helped that the previous PowerPC models were so, so much slower than the models with Intel CPUs that replaced them. PowerPC CPU development really fell behind Intel during the latter half of the G4 era, and the G5 didn't come anywhere near catching up. However, Apple marketing somehow managed to keep most of their customers really happy about the CPUs, despite them being typically less than half the speed of contemporary x86 models.

By the time the x86 Macs came out, the code that was emulated on them was designed for a CPU that was maybe a quarter or a third of the speed of the replacement, which meant that it was possible to emulate things at reasonable speeds.

The Apple chips in the new Macs will not have nearly as much of a margin. Also, the memory model will be moving in the wrong direction: it's very easy to run code designed for a looser memory model on a stricter CPU, but the opposite is fraught with peril, unless Apple decides to just emit a load-acquire for every x86 load and a store-release for every x86 store.
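
Roughly, that conservative translation strategy would look like this in C++ terms. Illustrative only; the names are made up and say nothing about how Rosetta actually works:

Code:
#include <atomic>

// x86-TSO roughly promises that every ordinary load behaves like a
// load-acquire and every ordinary store like a store-release. A
// translator can therefore preserve x86 semantics on a weakly-ordered
// ARM core by emitting those orderings everywhere; on AArch64 the
// acquire load compiles to LDAR and the release store to STLR.
std::atomic<int> cell{0};

int translated_x86_load() {
    return cell.load(std::memory_order_acquire);
}

void translated_x86_store(int v) {
    cell.store(v, std::memory_order_release);
}

The price is paying the ordering cost on every memory access, which is exactly why the performance margin question matters.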
 
Apple seems to be emphasising unified CPU and GPU memory as one of the key benefits of the "Apple Silicon system architecture", and they are seemingly sticking to Imagination TBDR and so likely IMG GPUs for at least the notebook SoCs. I wonder what that would imply for higher-end Mac SoCs, e.g. for the iMac and Mac Pro, and for Apple's partnership with AMD (which seems to be better than ever with custom designs like Vega 12 and Navi 12). That, and also whether they will scale up the accelerators in their SoCs for higher-end machines.
 
I'd bet Apple will just continue to use AMD GPUs on Macs that need dedicated GPUs. Not sure if Apple wants to pour resources into creating dGPUs for a very small portion of the entire Mac lineup.
 
https://developer.apple.com/videos/play/wwdc2020/10631/

Bring your Metal app to Apple Silicon Macs
Meet the Tile Based Deferred Rendering (TBDR) GPU architecture for Apple Silicon Macs — the heart of your Metal app or game's graphics performance. Learn how you can translate or port your graphics-intensive app over to Apple Silicon, and how to take advantage of TBDR and Metal when building natively for the platform. We'll look at how TBDR compares with the Immediate Mode Rendering pipeline of older Macs, go through common issues you may face when bringing an app or game over, and explore how to offer incredible performance when building with the native SDK. We've designed this session in tandem with “Optimize Metal Performance for Apple Silicon Macs.” After you've watched this session be sure to check that out next.

https://developer.apple.com/videos/play/wwdc2020/10632

Optimize Metal Performance for Apple Silicon Macs
Apple Silicon Macs are a transformative new platform for graphics-intensive apps — and we're going to show you how to fire up the GPU to create blazingly fast apps and games. Discover how to take advantage of Apple's unique Tile-Based Deferred Rendering (TBDR) GPU architecture within Apple Silicon Macs and learn how to schedule workloads to provide maximum throughput, structure your rendering pipeline, and increase overall efficiency. And dive deep with our graphics team as we explore shader optimizations for the Apple GPU shader core. We've designed this session in tandem with “Bring your Metal app to Apple Silicon Macs,” and recommend you watch that first. For more, watch “Harness Apple GPUs with Metal” to learn how TBDR applies to a variety of modern rendering techniques.
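
To make the TBDR point concrete: on Apple GPUs the current framebuffer value for a pixel sits in on-chip tile memory, so a Metal fragment function can read it directly via a [[color(n)]] input and blend in-shader, where an immediate-mode GPU would need fixed-function blending or a texture round-trip. A minimal sketch in Metal Shading Language (the struct and function names are made up):

Code:
#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
    float4 position [[position]];
    half4  srcColor;   // interpolated from a hypothetical vertex stage
};

// dst is the pixel's current value, read straight out of tile memory:
// "programmable blending", which TBDR makes essentially free.
fragment half4 custom_blend(FragmentIn in [[stage_in]],
                            half4 dst    [[color(0)]])
{
    // A custom multiply blend computed in the shader itself.
    return in.srcColor * dst;
}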

Apple ditches Intel on Macs for better apps for iPhone and iPad. :mrgreen:
 
You don’t own it though. You have to send it back at some point.
At some unspecified point in the future, when presumably there will be a selection of hardware available. Which is what caught my attention. Not only do they intend to replace Intel top to bottom
Back then it really helped that the previous PowerPC models were so, so much slower than the models with Intel CPUs that replaced them. PowerPC CPU development really fell behind Intel during the latter half of the G4 era, and the G5 didn't come anywhere near catching up. However, Apple marketing somehow managed to keep most of their customers really happy about the CPUs, despite them being typically less than half the speed of contemporary x86 models.
C’mon, do we really need to do those benchmark wars again, 15 years later?
The G4 was very lightweight compared to the Dothan that replaced it. The G5, which went into iMacs, was not. For its time it was an FP monster; we used it for calculations, and its combination of bandwidth, addressable memory and FP64 chops was the best departmental pocket change could buy.

By the time the x86 Macs came out, the code that was emulated on them was designed for a CPU that was maybe a quarter or a third of the speed of the replacement, which meant that it was possible to emulate things at reasonable speeds.
The take-home message here, and an opinion I share (I've used Macs for personal administration since 1985, so I've gone through all the transitions), is that performance generally wasn't an issue. And for this transition, Apple's apps will be native, OS calls will be native (!), all bitcode-distributed apps will be native, and most supported apps will be native by the time customer systems are available.
Unsupported x86 apps seem as if they might be translated once, with the translated binary run thereafter, but that wasn't totally clear to me. The Tomb Raider demo was an x86 build downloaded from the App Store, but that doesn't say anything about the stuff you already have on your disk where the installer is lost. (If you are critically dependent on such, you're insane.) I basically only see very specialized cases, such as unsupported filter plug-ins for pro media apps or drivers for old legacy hardware, being dynamically translated.
I see no issues for most people, where "most" is way north of 90%. If you run absolutely performance-critical stuff under Boot Camp, you might want to build a dedicated small compute server (which you should have done in the first place). Boot Camp gamers are probably out of luck, but they have another couple of years to shop for suitable Apple hardware if, inexplicably, that's a high priority.

Apple has had a very long time to get the stars aligned. This will be the smoothest transition yet.

The Apple chips in the new Macs will not have nearly as much of a margin. Also, the memory model will be moving in the wrong direction: it's very easy to run code designed for a looser memory model on a stricter CPU, but the opposite is fraught with peril, unless Apple decides to just emit a load-acquire for every x86 load and a store-release for every x86 store.
Well, you have a point. But as outlined above, I don’t see much of a practical issue. This will be explored in depth by various folks once consumer hardware is out, and realistically way before that since a ton of dev kits will be distributed now. $500 isn’t much of a barrier to the curious.

What is intriguing is the statement that they will produce several SoCs. Now how will they design those? What specific use cases will they optimise for, given that they will target Macs? What supporting processing capabilities will they add? What I/O capabilities?
 
As an addendum, what I find so surprising and positive about this move is that Apple is investing in the pointer/keyboard/file system paradigm of personal computing.
I fully expected them to see that as a legacy market that they kept servicing with x86 hardware (where backwards compatibility is the entire raison d'être), gradually shifting their customer base to ever more capable iPads.
That they are committing themselves to developing dedicated silicon to serve the Mac market says that they have a vision for the future of the Mac paradigm. What that might be, I can’t guess beyond the obvious, but that they aim for a developed traditional computing platform is great news for an old fart like me. I like files. And mice.
And if it had just been legacy use, well they had/have a solution for that, and Windows exists, so that’s not the justification. That’s not what’s interesting. What is interesting is where this might go. I’m fully prepared for disappointed teeth gnashing on my part once I find out, but the potential for a company to move forward in this segment rather than being totally preoccupied with backwards compatibility is exciting in itself.
 
Slightly OT news: the fastest supercomputer in the world is now ARM-powered.
One thing of note: the new supercomputer, Fugaku, does not rely on accelerators such as NVIDIA GPUs, but is based on a Fujitsu-designed custom ARM CPU with SVE extensions.
And it is the leader in four different benchmarks: Linpack, Graph 500, HPL-AI and HPCG.
Very consistent performer!
Something similar with 8GB of GDDR6 and a lightweight Linux could maybe make a nice all-purpose laptop!
ARM with SVE reminds me of the Xenon CPU (RISC with vector units).
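
The vector angle is apt. SVE's party trick is vector-length-agnostic code: the same binary runs whether the hardware vectors are 128 bits or, as on Fugaku's A64FX, 512 bits wide. A hypothetical daxpy sketch using the ACLE intrinsics:

Code:
#include <arm_sve.h>
#include <cstdint>

// y[i] += a * x[i], written once for any SVE vector length.
void daxpy(double a, const double* x, double* y, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntd()) {   // svcntd() = doubles per vector
        svbool_t pg = svwhilelt_b64(i, n);        // predicate masks the tail
        svfloat64_t xv = svld1_f64(pg, x + i);
        svfloat64_t yv = svld1_f64(pg, y + i);
        yv = svmla_f64_x(pg, yv, xv, svdup_n_f64(a));   // yv += xv * a
        svst1_f64(pg, y + i, yv);
    }
}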

The PowerPC ISA is open source...
 
Well, it's happening. Unfortunately we are still mostly in the dark regarding actual products and configurations, and judging by Apple's modus operandi, we are likely to remain so until they are ready to take orders.
However.
Apple has made quite a few statements, and I’ll refer to them and use them as a springboard for some speculation.
In the keynote, Johny Srouji made these statements:
1.27.48-1.30.48: "A whole new level of performance"
1.31.23: no blue field at "desktop" power levels; "much higher level of performance while consuming less power"; "A whole new level of graphics performance to the Mac"; "family of SoCs specifically for the Mac product line" (through 1.33.16)

In the developer State of the union video
https://developer.apple.com/videos/play/wwdc2020/102/
Sri Santhanam
08.30-09.30: "Building a family of SoCs tailored specifically to the Mac. Practically speaking, a whole new level of performance."
11.00-12.00: "Our goal is to provide maximum performance within each of (Mac) enclosures."
12.00-13.00: "Bringing our high performance GPU architecture to the Mac"
13.15: unified memory architecture
14.25: "every Mac will have powerful graphics"

And Craig Federighi, at 48.30 in a video interview, about the dev kit:
“It gives a sense of what our silicon team can do when they are not even trying. And they’re gonna be trying.”

So: a family of SoCs tailored to the power/thermal envelopes of their respective product lines. Confidence that performance will be much improved. References to unified memory, and to graphics power being strong across the board.

My speculation is that this implies they won't use a standard PC memory solution. Today's multicore CPUs pull data from narrow memory subsystems, and the bandwidth problem gets worse if you try to run graphics code at the same time.
If they are going to offer substantially better performance other than in very niche benchmarks, they need to do something about the memory subsystem. The iPad solution has been a 128-bit wide memory path, to LPDDR4X in the 2018 model, and likely LPDDR5 in the next.
But while that is comfortably better than what will be the going standard in x86 space, it doesn't scale up nicely for systems that go beyond 10W power draw. I see two industry-standard possibilities: one is GDDR6 à la the consoles, the other is HBM.
Both offer bandwidth that allows graphics on an SoC to scale to high levels. (The next iPad Pro is likely to be roughly at PS4 level or thereabouts, so let's take that as a baseline for fanless Macs.)
GDDR6 is cheaper than HBM, but Apple's volumes may reduce the difference, so HBM may be the more scalable approach.
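
For scale, peak bandwidth is just bus width times per-pin data rate. A quick back-of-the-envelope sketch with typical published pin rates (none of these are confirmed Apple configurations):

Code:
#include <cstdio>

// Peak bandwidth in GB/s = bus width in bits * Gbps per pin / 8.
constexpr double peak_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits * gbps_per_pin / 8.0;
}

int main() {
    printf("128-bit LPDDR4X-4266:              %6.1f GB/s\n", peak_gbs(128, 4.266));
    printf("128-bit LPDDR5-6400:               %6.1f GB/s\n", peak_gbs(128, 6.4));
    printf("256-bit LPDDR5-6400:               %6.1f GB/s\n", peak_gbs(256, 6.4));
    printf("256-bit GDDR6 at 14 Gbps:          %6.1f GB/s\n", peak_gbs(256, 14.0));
    printf("1024-bit HBM2E stack at 3.2 Gbps:  %6.1f GB/s\n", peak_gbs(1024, 3.2));
}

That works out to roughly 68, 102, 205, 448 and 410 GB/s respectively, against the roughly 51 GB/s of a dual-channel DDR4-3200 desktop.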

Of course, this means you can’t upgrade your memory capacity after purchase, but that is largely the case already for their product line, and the benefit is huge for performant unified memory systems.

Benchmarks, as always, will downplay the benefits of a strong main memory system, so tech articles will likely describe it as mainly benefiting graphics (where it would make a huge difference), but all code that shuffles data to feed the SoC would gain.
(Also of note: typical cross-platform benchmarking like SPEC or GB will miss a large point of the change, which is the integration of dedicated coprocessors for various tasks.)

If the new Apple Silicon Macs ship with a strong unified memory subsystem, that will in itself be a huge competitive advantage over what is currently projected in Win-x86 space.

I’m leaning towards HBM in stationary Macs, and either that or a 256-bit interface to LPDDR5 in MacBook Pros. We’ll see in a year or so. :D
 
I think we need to look at the system as a whole. If going HBM reduces the power requirements and shrinks the PCB, that could offset the price difference.

Apple can control their margins much better with their own silicon. I’m very interested in seeing their answer to the large and expensive Xeon chips.

HBM2E gives 16GB per stack and over 400GB/s.
 
Apple uses Xeons in both the iMac Pro and the Mac Pro. I'd wager that the vast majority of their customers value bandwidth over max memory capacity, but there clearly is a trade-off there. That said, I know tons of computational scientists who would buy high-bandwidth general-purpose systems without blinking. Bandwidth has developed at a snail's pace compared to the overall CPU capability to process data.

I always felt AMD was missing a lovely opportunity: their APUs (and high-core-count CPUs) really cry out for higher bandwidth, but the PC industry doesn't seem inclined to try the concept. The next generation of consoles represents a nice conceptual template for low-cost, high-performance systems.

A day later edit: This is the "existential threat" that I see Apple's transition representing. What if Microsoft in a year transitions the XBsx SoC to 5nm, slaps on three USB4 ports, and starts selling it as a standard PC at $500-600?
 