Apple is an existential threat to the PC

I think the time when Windows/Microsoft virtually won't exist anymore is coming sooner rather than later. I'm not ready to invest in the M1 Max yet, but next year's M2 variant might be it.

I also think they need to work on the virtualisation/BC tech more; the next PlayStation and Xbox are most likely not going to run AMD hardware anymore, but either Apple silicon or a variant of it (Arm).
 
You'd be surprised; they have entire teams built around helping developers get the most out of Apple silicon, even writing an entire implementation from scratch given access to the code. But the optics are a bit different: Apple doesn't care about gaming anywhere near as much as professional workloads.

Gaming is important for iOS, hence why Metal was prioritised for iOS 8 (2014) before landing on the Mac with OS X El Capitan (2015). Now that they are leveraging basically the same frameworks across all platforms except watchOS, macOS will benefit from the gaming push on iOS.
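As a rough illustration of that shared-frameworks point, the same Metal setup code compiles unchanged on iOS, macOS, and tvOS. This is just a minimal sketch, not tied to any particular game or engine:

Code:
import Metal

// Minimal Metal bootstrap; identical source builds for iOS, macOS and tvOS.
guard let device = MTLCreateSystemDefaultDevice(),
      let commandQueue = device.makeCommandQueue() else {
    fatalError("Metal is not supported on this device")
}
print("Rendering with \(device.name)")  // e.g. "Apple M1 Max"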

Epic were fast out of the gate: Ark: Survival Evolved, which launched in 2017 running on Unreal Engine 4, supported Metal from day one.
 
I don't want to quote the whole post, but oh wow. Well, let's agree then: Apple silicon (and software) is the best, and Apple PCs are where it's at. So when do you expect the turnover, as per the topic, where Apple PCs will have the market share that Windows PCs have now (for gaming as well)? With the release of the M1 laptops last year, which are on another level compared to similarly priced Windows laptops, why aren't people going for those instead of Windows-based laptops?

I'll let you answer your own hypothetical:

I think the time when Windows/Microsoft virtually won't exist anymore is coming sooner rather than later. I'm not ready to invest in the M1 Max yet, but next year's M2 variant might be it.

But we all know that -- and I stated this in my original post -- Apple isn't going for mass market share. The vast majority of office crapboxes will continue to be Windows-only, serviced by the big-name OEMs such as Dell, HP, and Lenovo.

I also think they need to work on the virtualisation/BC tech more; the next PlayStation and Xbox are most likely not going to run AMD hardware anymore, but either Apple silicon or a variant of it (Arm).

I have no idea what you're getting at here.

Hell will freeze over before Apple sells their chips to Sony and Microsoft for use in a future console.
 
Apple isn't going for mass market share.

Why would they not? Not everyone needs even 2020 M1 levels of performance, especially in a laptop. Half that power would be good enough for most tasks aside from serious gaming. Apple at least ventured into the $399 phone market with the SE (which has a more powerful chip than any current Android phone).
 
Looking at the die shot, it appears that the NPU is doubled as well on the M1 Max.

So we have the GPU, display engine, media engine, and neural engine, plus the extra 256-bit memory interface as well as the system-level cache.

[Attached image: M1MAX.jpg]
Some people, including Andrei Frumusanu, have speculated that the bottom portion of the M1 Max image has been edited to hide components for a multi-die interconnect.

Mark Gurman previously mentioned four codenames in the "Jade" series of SoCs, listed here as performance CPU cores + efficiency CPU cores + GPU cores:
  1. Jade C-Chop, 8 + 2 + 16
  2. Jade C-Die, 8 + 2 + 32
  3. Jade 2C-Die, 16 + 4 + 64
  4. Jade 4C-Die, 32 + 8 + 128
It's clear that the M1 Pro is Jade C-Chop and the M1 Max is Jade C-Die, but we have not seen chips corresponding to the other codenames yet. It has been widely speculated that Jade 2C-Die and Jade 4C-Die are MCMs of two and four Jade C-Dies respectively. If that is true, then Apple presumably wouldn't want to reveal any signs of it in their marketing, since any large and well-defined structure that can be seen in the Max die shot but completely absent (not just reduced in number) in the Pro die shot would quickly draw suspicion.
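As a quick sanity check of that speculation, the rumored core counts line up exactly with two and four copies of Jade C-Die. This is only a sketch based on the rumored numbers above, not confirmed silicon:

Code:
// Rumored "Jade" configurations from the list above
// (performance CPU cores, efficiency CPU cores, GPU cores).
struct SoC { let perf: Int; let eff: Int; let gpu: Int }

let cDie  = SoC(perf: 8,  eff: 2, gpu: 32)   // Jade C-Die  (M1 Max)
let twoC  = SoC(perf: 16, eff: 4, gpu: 64)   // Jade 2C-Die (rumored)
let fourC = SoC(perf: 32, eff: 8, gpu: 128)  // Jade 4C-Die (rumored)

// True if every core count is an exact n-fold multiple of the base die,
// which is what a package of n identical C-Dies would give you.
func isMultiDie(_ soc: SoC, of base: SoC, count n: Int) -> Bool {
    soc.perf == base.perf * n && soc.eff == base.eff * n && soc.gpu == base.gpu * n
}

print(isMultiDie(twoC, of: cDie, count: 2))   // true
print(isMultiDie(fourC, of: cDie, count: 4))  // true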
 
You can probably find a low-power mobile U-series SKU that does even less. That doesn't change the fact that the same microarchitecture does 4.9GHz on an inferior process node.
The lowest boost clock in Zen 3 mobile is 4GHz:

Ryzen - Wikipedia

which is for a 4-core processor (the 5400U). The table implies it can go into a system with a 10W TDP. There's also a "10W" 8-core processor, the 5800U, with a 4.4GHz boost clock.

So the conclusion is that AMD will never be competitive in laptop/desktop power efficiency, because a single microarchitecture covers all classes of customer application. Currently the ratio between the highest boost and lowest base clocks is nearly 3x (1.8GHz to 4.9GHz), meaning that the design is hopelessly far from being IPC-optimised.
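For reference, that spread works out as follows; this is just the arithmetic on the numbers quoted in this post:

Code:
import Foundation

// Lowest mobile base clock vs. highest desktop boost clock, per the post above.
let lowestBaseGHz = 1.8
let highestBoostGHz = 4.9
print(String(format: "%.2fx spread", highestBoostGHz / lowestBaseGHz))  // 2.72x, i.e. "nearly 3x"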

Meanwhile Intel will specialise and have a hope of being competitive...
 
So more games are showing promising performance.

First we have EVE Online running at 2.5K and 4K, 144Hz.

Code:
M1 (8 core GPU) - MBA w/16GB RAM
4K: 16 / 22 fps (AA on / off)
2.5K: 28 / 35 fps (AA on / off)

M1 Max (32 core GPU) - 16” MBP w/32GB RAM
4K: 70 / 100 fps (AA on / off)
2.5K: 115 / 120 fps (AA on / off)

RTX 3080 Ti (with i9-11900K)
4K: 140 / 140 fps (AA on / off)
2.5K: 150 / 150 fps (AA on / off)

And then League of Legends running on its beta Metal build is showing a significant performance boost, from 90-120 FPS to 160-220 FPS, at 3456x2234 with Very High graphics settings on a base-model M1 Pro (16-core GPU).
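To put those numbers in perspective (figures taken straight from the results above, not my own measurements): the M1 Max has 4x the GPU cores of the base M1 and the EVE results scale roughly in line with that, while the LoL Metal beta is roughly a 1.8x jump:

Code:
import Foundation

// EVE Online, 4K with AA on: base M1 (8-core GPU) vs. M1 Max (32-core GPU).
let m1Fps = 16.0, m1MaxFps = 70.0
print(String(format: "EVE 4K scaling: %.1fx (GPU cores: 4x)", m1MaxFps / m1Fps))  // 4.4x

// League of Legends on the Metal beta: low and high ends of the reported ranges.
let before = (low: 90.0, high: 120.0), after = (low: 160.0, high: 220.0)
print(String(format: "LoL Metal beta: %.1fx - %.1fx",
             after.low / before.low, after.high / before.high))  // 1.8x - 1.8x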
 
The people who are benchmarking Mac gaming on YouTube do a really weird job of it. I don't play any of the games they benchmark, so I really have no idea what performance is good or bad, but the Counter-Strike performance looks really bad.
 
The people who are benchmarking Mac gaming on YouTube do a really weird job of it. I don't play any of the games they benchmark, so I really have no idea what performance is good or bad, but the Counter-Strike performance looks really bad.

It runs through Rosetta 2 and even uses OpenGL on macOS. It's probably the worst-case scenario for Apple Silicon.

They are outright doing a bad job of it, in my opinion. It just highlights the fact that very few developers care: they have had a full year with M1 hardware and still haven't done much.
 
I believe Rosetta does have a real-time translation mode, a bit like traditional emulation, which I think is designed to facilitate x86 code in virtualised environments, i.e. Intel code running inside an environment outside of macOS.
That's also needed for software that generates code at runtime (any JIT, for instance). I wonder how well or badly Rosetta 2 behaves for such software.
 
That's also needed for software that generates code at runtime (any JIT, for instance). I wonder how well or badly Rosetta 2 behaves for such software.

It can be difficult to benchmark, as most benchmarks tend to run the same code many times, and that gives Rosetta 2 the opportunity to cache the generated code.
 
It can be difficult to benchmark, as most benchmarks tend to run the same code many times, and that gives Rosetta 2 the opportunity to cache the generated code.
In that case, Rosetta 2 behaves like other JIT-based engines: rather than the static compilation it does for non-dynamic code, it generates code on the fly and profiles it for further optimization (or not). I mean, all software that does runtime code translation caches the generated code, more or less well optimized; the spectrum can be very wide here. For instance, QEMU is rather primitive: it translates and caches code without doing extensive optimizations (it "only" does some basic constant propagation/optimization and register allocation).
But I get your point: that might be very hard to characterize.
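To make the caching point concrete, here's a toy sketch of the translation-cache idea; it is purely illustrative and nothing like Apple's or QEMU's actual implementation. A translator converts a block of guest code once, keys it by its address, and reuses the result on every later execution, which is exactly what a benchmark looping over the same code keeps hitting:

Code:
typealias GuestAddress = UInt64
typealias TranslatedBlock = () -> Void   // stand-in for generated host code

// Cache of already-translated blocks, keyed by guest code address.
var translationCache: [GuestAddress: TranslatedBlock] = [:]

// Expensive step: decode guest instructions and emit host instructions.
func translate(_ address: GuestAddress) -> TranslatedBlock {
    print("translating block at 0x\(String(address, radix: 16))")
    return { /* run the generated host code */ }
}

func execute(_ address: GuestAddress) {
    if let cached = translationCache[address] {
        cached()                          // hit: reuse previously generated code
    } else {
        let block = translate(address)    // miss: translate once...
        translationCache[address] = block
        block()                           // ...then run and keep it for next time
    }
}

execute(0x1000)  // translated, then run
execute(0x1000)  // runs straight from the cache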
 