Apple is an existential threat to the PC

A lot of games are written in common engines, like Unreal Engine and Unity, that do a very good job of optimising for Apple's Metal API. World of Warcraft was pretty much the first big game to support Apple's Metal API, and other Mac-friendly developers like Creative Assembly (Total War, Alien Isolation), Paradox (Stellaris, Crusader Kings, Hearts of Iron, Europa Universalis IV, Surviving Mars), and Firaxis (Civilization) embraced native Metal graphics engines years ago.

What do people think games are using to drive graphics on Mac? The deprecated OpenGL 4.1 driver code from July 2010? The first Metal API was introduced seven years ago. :yep2:

Yeah, I get your point and should have clarified.

No one has optimised for Apple's GPU as far as I am aware. As shown by GFXBench, it is possible to eke out more competitive performance when coding directly for Apple's GPU architecture.

Baldur's Gate 3 seems to have been optimised and runs great with Ultra settings at QHD.

 
But I do know that. The L1 cache's size and latency are a very clear indicator that the M1 cannot clock much higher.
And I'm saying they can clock it higher. They're very, very far from the voltages and the kinds of implementations AMD/Intel use. It has nothing to do with the microarchitecture itself.
 
And I'm saying they can clock it higher. They're very, very far from the voltages and the kinds of implementations AMD/Intel use. It has nothing to do with the microarchitecture itself.
And I'm saying they can't.

Yes, they can increase the voltage and increase switching speed. That doesn't shorten the wire delay of those massive L1 caches, ROBs and schedulers, so they can't clock it higher.

Cheers
 
A lot of games are written in common engines, like Unreal Engine and Unity, that do a very good job of optimising for Apple's Metal API. World of Warcraft was pretty much the first big game to support Apple's Metal API, and other Mac-friendly developers like Creative Assembly (Total War, Alien Isolation), Paradox (Stellaris, Crusader Kings, Hearts of Iron, Europa Universalis IV, Surviving Mars), and Firaxis (Civilization) embraced native Metal graphics engines years ago.

What do people think games are using to drive graphics on Mac? The deprecated OpenGL 4.1 driver code from July 2010? The first Metal API was introduced seven years ago. :yep2:

I think the main issue is that most games are running under Rosetta 2, and even though they're using the Metal API I'm not sure how the new Apple Silicon fits in. I remember briefly reading that Metal has GPU families with different feature sets depending on whether you're using Apple GPUs on iOS or Nvidia/AMD/Intel GPUs on macOS.
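
For reference, this is roughly what that family check looks like in code - a minimal Swift sketch against the real MTLDevice.supportsFamily API (the branching here is just illustrative):

```swift
import Metal

// Grab the default GPU and ask which Metal GPU family it belongs to.
// Apple-family GPUs (iPhone/iPad/M1) expose the TBDR feature set;
// Mac-family GPUs (the Intel/AMD era) expose the immediate-mode one.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

if device.supportsFamily(.apple7) {
    // M1-class Apple GPU: tile shaders, framebuffer fetch, etc.
    print("\(device.name): Apple7 family (TBDR features available)")
} else if device.supportsFamily(.mac2) {
    // Intel/AMD GPU in an Intel Mac: no Apple-family TBDR features
    print("\(device.name): Mac2 family (immediate-mode feature set)")
}
```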

Unity and Unreal Engine still only have alpha/beta support for Apple Silicon on macOS, so it'll be interesting to see how performance improves there over time.
 
And I'm saying they can't.

Yes, they can increase the voltage and increase switching speed. That doesn't shorten the wire delay of those massive L1 caches, ROBs and schedulers, so they can't clock it higher.

Cheers

Fight fight fight!

But seriously, it'll be interesting to see if Apple makes an attempt at a desktop Mac Pro using their own CPU/GPU. They'd need to support discrete GPUs as well. That's probably the only way we'd see how well they can really scale versus desktop-class parts.
 
And I'm saying they can't.
I'm talking about matters of fact, not arguing pointless theoreticals. Apple has one overarching design goal with their products beyond any other, and that is efficiency; it was a choice to drive the silicon the way it is driven - again, this is not about how the silicon is able to be driven.
 
No one has optimised for Apple's GPU as far as I am aware. As shown by GFXBench, it is possible to eke out more competitive performance when coding directly for Apple's GPU architecture.
The purpose of a good API is that the app/game developer shouldn't have to optimise for particular graphics hardware; that is the job of the engine and the API. In the real world this obviously does happen when you hit unexpected issues, and there will likely be more of these with a brand-new configuration of graphics hardware.

I think the main issue is that most games are running under Rosetta 2, and even though they're using the Metal API I'm not sure how the new Apple Silicon fits in. I remember briefly reading that Metal has GPU families with different feature sets depending on whether you're using Apple GPUs on iOS or Nvidia/AMD/Intel GPUs on macOS.

Absolutely, and more games will be running under Rosetta than running native, and that could take a decade or more to change given how many games target 80x86 versus ARM. But the CPU architecture is a different issue to games not supporting Metal, and most modern games run on Metal because there's basically no other choice: there is the 2010-era OpenGL, and you can install Vulkan, but if Vulkan is installed on even 1,000 Macs I would be shocked.

To answer the question about how Rosetta and the Metal API fit together: the entirety of macOS running on M1 Macs is native ARM code, including Metal. So 80x86 games get the Rosetta treatment - which involves a one-time binary translation when first run (i.e. it's not emulating an Intel CPU in realtime) - and then all Metal API usage is translated ARM code calling native ARM macOS code, which is why the performance of Rosetta is pretty good overall. In many games/apps performance is about equal, in a fair chunk it's a bit better, and there are some cases where it's worse. It would be nice to think Apple will keep Rosetta updated and provide tweaks, resulting in a fresh binary translation on the next run.
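
As an aside, a process can actually ask at runtime whether it's being translated; a minimal Swift sketch using the documented sysctl.proc_translated key (the function name is mine):

```swift
import Foundation

// Returns true when the current process is an Intel binary being
// translated by Rosetta 2 on an Apple Silicon Mac.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // sysctl.proc_translated reads 1 under Rosetta, 0 when native;
    // the call fails on older systems that don't define the key.
    let status = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return status == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Translated (Rosetta 2)" : "Native arm64")
```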

I believe Rosetta does have a realtime translation mode, a bit like traditional emulation, which I think is designed to facilitate 80x86 code in virtualised environments, i.e. Intel code running inside an environment outside of macOS.

Unity and Unreal Engine still only have alpha/beta support for Apple Silicon on macOS, so it'll be interesting to see how performance improves there over time.

Thanks, I wasn't aware of this. I know that the support for Metal in both engines is pretty good because both are popular engines for games running on iDevices. The M1/Pro/Max isn't leaps and bounds different from Apple's phone APUs, and both Unreal and Unity have supported iDevice development for a long time. In less litigious times, Epic have been on stage at new iPhone reveals showing off Unreal running great on Metal. So while support for 'Apple Silicon' (the desktop variants) may be in beta, it's probably quite solid because Apple didn't up-end the apple cart and introduce a totally new ARM architecture for Macs.
 
The purpose of a good API is that the app/game developer shouldn't have to optimise for particular graphics hardware; that is the job of the engine and the API. In the real world this obviously does happen when you hit unexpected issues, and there will likely be more of these with a brand-new configuration of graphics hardware.

The driver can't do much to "fix" an engine written with immediate-mode (IM) GPUs in mind so that it runs fast on a TBDR GPU. This is definitely down to the developer. It's not hard to optimise, but it has to be done at the game software level.
 
The driver can't do much to "fix" an engine written with immediate-mode (IM) GPUs in mind so that it runs fast on a TBDR GPU. This is definitely down to the developer. It's not hard to optimise, but it has to be done at the game software level.
How many macOS games have this issue?
 
I'm talking about matters of fact, not arguing pointless theoreticals.
You argued Apple can clock it way higher. Unless you have an M1 that clocks significantly higher than 3.2GHz, I'm not the one arguing pointless theoreticals.

Apple has one overarching design goal with their products beyond any other, and that is efficiency; it was a choice to drive the silicon the way it is driven - again, this is not about how the silicon is able to be driven.
Wait! Wasn't that my point? Apple built the M1 for maximum efficiency, putting a hard upper limit on clocks through their microarchitectural choices. Choices that result in very high IPC, but a longer cycle time.

Cheers
 
Doesn't lowering clocks also help with yields on a cutting-edge process? I can't imagine yields are great for the M1 Max.
 
I'd be willing to say all of them :LOL:

And yet, there are now six Macs (MacBook Air, MacBook Pro 13", Mac Mini, iMac 24", and the 14" and 16" MacBook Pros) with Apple Silicon, and reports of games running seem to be positive. I've seen somebody play Witcher 3 virtualised on a MacBook Air and it... well...


This is a Windows game, running virtualised in macOS. DirectX to Metal API translation. In realtime. So utterly not-optimised for Apple Silicon or Metal.

On a MacBook Air! :-|
 
And yet, there are now six Macs (MacBook Air, MacBook Pro 13", Mac Mini, iMac 24", and the 14" and 16" MacBook Pros) with Apple Silicon, and reports of games running seem to be positive. I've seen somebody play Witcher 3 virtualised on a MacBook Air and it... well...


This is a Windows game, running virtualised in macOS. DirectX to Metal API translation. In realtime.

On a MacBook Air! :-|

Yeah, performance might be acceptable, even good! I'm mostly talking about what can be achieved when fully taking advantage of the feature set of these GPUs.
 
Yeah, performance might be acceptable, even good! I'm mostly talking about what can be achieved when fully taking advantage of the feature set of these GPUs.
This is not a Mac- or macOS-only problem. Nvidia and AMD take different approaches to rendering techniques, which is why some games/engines favour one architecture in performance.

At the end of the day, users want the game to run well. It's impossible to know or quantify how much better the game could have run with more work or a different engine. ¯\_(ツ)_/¯
 
At the end of the day, users want the game to run well. It's impossible to know or quantify how much better the game could have run with more work or a different engine. ¯\_(ツ)_/¯

I can't talk about every engine, but I've seen more than a 2x speedup (in raw GPU perf) optimizing a typical AAA deferred renderer to run on AS.

AMD and Nvidia might have some differences between them, but you simply can't do the same stuff there that you can on a TBDR (abusing tile memory with framebuffer fetch, programmable blending, etc.).
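
For anyone who hasn't seen it, this is the kind of thing meant by programmable blending via framebuffer fetch - a minimal Metal Shading Language sketch ([[color(0)]] is the real MSL attribute; the source colour and blend math are just an illustrative "over" operator):

```metal
#include <metal_stdlib>
using namespace metal;

// On Apple-family (TBDR) GPUs a fragment function can read the current
// value of the colour attachment straight out of tile memory via the
// [[color(n)]] attribute - no extra pass or texture round-trip needed.
fragment half4 custom_blend(float4 pos [[position]],
                            half4 dst  [[color(0)]])
{
    // Hypothetical translucent source colour; in a real shader this
    // would come from material/texture inputs.
    half4 src = half4(0.8h, 0.1h, 0.1h, 0.5h);

    // Classic "over" blend, done in-shader instead of in fixed-function
    // blend hardware - not expressible this way on immediate-mode GPUs.
    return half4(src.rgb * src.a + dst.rgb * (1.0h - src.a),
                 src.a + dst.a * (1.0h - src.a));
}
```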

Also, workload overlap opportunities are a bit different, as Apple GPUs can overlap vertex/fragment/compute work, so the frame ordering might need to be a bit different on these GPUs.
 
I can't talk about every engine, but I've seen more than a 2x speedup (in raw GPU perf) optimizing a typical AAA deferred renderer to run on AS.

I don't mean to sound rude, but where are you going with this? The point I wanted to flag, for people in this thread who may not be aware, is that Metal is actually pretty well-adopted after seven years (because it's the only modern option on macOS) and that game engines have supported it for years as well. Epic rolled up to WWDC in 2015 extolling how well optimised Unreal was for Metal in iOS 8. ¯\_(ツ)_/¯

Can APIs, engines and games get better? Sure. Will it take the engines aimed at desktop operating systems a while to be tuned for Apple's desktop APUs? Sure.
 
I don't mean to sound rude, but where are you going with this?

My point is that these GPUs are very likely underperforming in gaming workloads even when using Metal and building natively for AS.

If you want to get the 3080M-equivalent performance out of the M1 Max that we see in synthetics, you have to optimize for the architecture.

Your average Mac user won't care anyway; even without optimisations it's way better than the Intel offerings of the past few years.
 
If you want to get the 3080M-equivalent performance out of the M1 Max that we see in synthetics, you have to optimize for the architecture.
I don't see that Apple are chasing graphics performance. Much of their engineering resource - looking at where they have most improved Metal over time - goes into compute, because compute augments most of the pro apps that creatives use Macs for: Photoshop, Logic and Final Cut. Better graphics performance from the GPU is a side-bonus. It's compute and neural engines every time. :yep2:
 
I don't see that Apple are chasing graphics performance.

You'd be surprised; they have entire teams built around helping developers get the most out of AS, even writing an entire implementation from scratch given code access.

But the optics are a bit different; Apple doesn't care about gaming anywhere near as much as professional workloads.
 