Apple is an existential threat to the PC

Nothing is ever magic. But thinking the advanced node is the only reason why Apple's lead is so large is a serious mistake.
Yep, based purely on IPC it seems Apple is years ahead. The single-threaded tests seem limited by a 3.2GHz peak clock, versus 4.5GHz or more for Intel/AMD.
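As a rough sketch of what that per-clock comparison looks like (the scores below are made up purely for illustration, not measured results):

Code:
import Foundation

// Purely illustrative numbers: the point is the normalisation, not the scores.
let appleScore = 1700.0, appleClock = 3.2   // hypothetical ST score at a 3.2GHz peak
let x86Score   = 1650.0, x86Clock   = 5.0   // hypothetical ST score at a 5.0GHz boost

// Performance per clock ("IPC" in the loose benchmark sense):
let perClockRatio = (appleScore / appleClock) / (x86Score / x86Clock)
print(String(format: "~%.2fx the work per clock", perClockRatio))
// Similar scores at a much lower clock imply a large per-clock advantage
// (~1.6x with these made-up numbers).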
 
Indeed, it's running on an entirely different architecture as well.
That's another advantage they have, but again that's not enough to explain their lead :) They also have one of the best, if not the best, microarchitectures, and a rather large silicon area. The lead they have can't be reduced to a few things; it's multidimensional, and that's what makes it so impressive IMHO.
 
Obviously it's not the only reason, and they do have industry-leading engineers, but especially with the M1 Max we're now noticing that the more parallel execution units you have, the more complicated, power-hungry and area-expensive it gets to develop a fabric that maintains the necessary internal bandwidth and coherence to make things scale up.
At some point many people thought "if the M1 can do this, then a bigger M1 will break the market and bring alien technology jumps". Well, the bigger M1 is here, and other than Apple's very focused workloads (where the GPU is basically treated as an FP coprocessor), it's really not as perfect as some thought. Despite being a >400mm^2 5nm behemoth clocked at what are probably the best possible clocks for an ideal power-performance ratio and using many channels of the most expensive smartphone memory out there.
That's right, some people started making silly claims. But there are as many fanboys as there are haters. It's always funny to see x86 lovers trying to find excuses; for many years they were saying it was only a phone chip optimized to run Geekbench, that Apple would never ditch Intel, and that they'd better choose AMD. Some are still trying to make it look like it's nothing. As always, the truth is in between those extremes, though I find those in denial funnier to read.
 
Apple also have the big advantage of not having to deal with decades of legacy support for CPU compatibility, plus an SoC that can be tuned for their OS and vice versa. It's something few platform companies could possibly achieve.
 
Apple also have the big advantage of not having to deal with decades of legacy support for CPU compatibility, plus an SoC that can be tuned for their OS and vice versa. It's something few platform companies could possibly achieve.
Yes, another advantage. They could get rid of 32-bit application code within a few years, while Intel/AMD still have to support their 16-bit ISA and Windows still has to support IA-32.
 
Apple also have the big advantage of not having to deal with decades of legacy support for CPU compatibility, plus an SoC that can be tuned for their OS and vice versa. It's something few platform companies could possibly achieve.

It's a strategic choice. Apple forces devs (and partners and customers for that matter) regularly to move forward. Might be harder for other companies to pull off, but still, it's a choice.
 
That's the thing I can't understand: the M1 has been out nearly a year. Surely there's at least one game title built for it? (OK, I answered this later.)
There are a fair few games with native Apple Silicon code, the biggest probably being World of Warcraft, but most are not what you would call graphically-demanding games or games with Windows versions.

Yes, another advantage. They could get rid of 32-bit application code within a few years, while Intel/AMD still have to support their 16-bit ISA and Windows still has to support IA-32.

They did, in 2019. macOS Catalina, released that year, dropped support for 32-bit apps. There are obviously ways around it if you really need to, through virtualisation of an older 32-bit-capable macOS system.
 
Despite being a >400mm^2 5nm behemoth clocked at what are probably the best possible clocks for an ideal power-performance ratio

I think that's a luxury only Apple can afford at the moment, since they are fully vertically integrated and able to supply their own SoC. Technically, there is nothing stopping AMD and Nvidia from taking a big-die/conservative-clock approach with a much better performance/power ratio, but it wouldn't be economically viable for them.
 
Yep, based purely on IPC it seems Apple is years ahead. The single-threaded tests seem limited by a 3.2GHz peak clock, versus 4.5GHz or more for Intel/AMD.
Comparing IPC is pointless when the M1 is clearly designed around a much lower operating frequency.

Cheers
 
iPhones are immensely popular, Macs not so much.

iPhones, like Macs, don't have a market share lead over Android. Apple isn't interested in selling <$299 phones to the mass market, and those inevitably make up the vast majority of the market.

Where the iPhone dominates is in the enthusiast, premium, and flagship segments. As such, if you broke down smartphone revenue or profit share by manufacturer, Apple dominates everyone else.

People dismissing what Apple is doing, I'm not one of them at least,

...OK...

I think it's very impressive what they achieve in synthetic benchmarks and special use cases (video encoding/decoding, streaming, etc.).

I suggest you look at the benchmark numbers again for M1 and M1 Pro/Max.

The "special use" cases you refer to are encoding/decoding, editing, compiling, rendering, simulations... You know, the kind of stuff that people who rely on their computer to earn an income do.

The reality is that Apple Silicon is changing the paradigm: It dominates in the vast majority of casual use and productivity tasks, and really only struggles with games (which are clearly just bad ports or run in x86 emulation). Gaming is becoming the "special use" case where PCs retain an advantage.

But as per this topic (Apple PCs taking over Windows PCs), I just don't think it's all that impressive in a different light: you pay 3, what, 4 times more to equal a Windows laptop's performance? Probably 5 times.
The M1 MacBook Air pretty much decimated the important $900-1100 price range. The performance per watt, display and build quality, and battery life were unrivalled in that class. Unless you absolutely required Windows, there was basically no point in purchasing a Windows laptop.

And then you're matching 3080M laptop performance in benchmarks. In real-world use, why would the average Joe/Windows user opt for an M1 Pro/Max Mac other than lower power draw?

Windows laptops with a 3080M aren't exactly chump change.
A 14" MacBook Pro with a top-bin M1 Max (and requisite 32GB of RAM) is $3300. You're also getting a laptop with arguably one of the best displays (mini-LED) and sound systems in the industry. Not to mention battery life (17 hours video playback, 11 hours web surfing), 7.4GB/s SSD, and TB4 I/O.

I'm not sure where you're finding competing devices with a 3080M for $660, $825, or even $1100 as you suggested.
 
It turns out that tile-based rendering architectures weren't such a good fit for high-end graphics performance after all. The outcome could have been far worse if games had started using indirect rendering to push higher geometry density, because sorting earlier in the graphics pipeline, as tile-based GPUs do, would have exhibited significant negative consequences. Most high-end graphics developers don't care about optimizing their render passes, or will never deal with it, so it's no surprise the M1 Max does poorly at higher resolutions, especially in the presence of deferred renderers where the cost of the G-buffers is proportional to the resolution; not optimizing for render passes effectively means throwing away the advantage of on-chip tile memory. The problem is only going to get more dreadful as time goes on, since developers are planning to balloon the size of the G-buffer by storing more attributes to specialize/optimize their RT shaders ...

IMR architectures have been the industry standard for years now when it comes to high-end graphics, because it's easier for developers to extract more performance out of them in this case ...
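To put rough numbers on the "G-buffer cost is proportional to resolution" point, here's a back-of-envelope sketch; the assumed layout (four RGBA16Float targets plus 32-bit depth) is just an example, not any particular game's:

Code:
// Assumed G-buffer layout for illustration: four RGBA16Float colour targets
// (8 bytes/pixel each) plus a 32-bit depth buffer.
func gBufferBytes(width: Int, height: Int,
                  colourTargets: Int = 4, bytesPerTarget: Int = 8,
                  depthBytes: Int = 4) -> Int {
    width * height * (colourTargets * bytesPerTarget + depthBytes)
}

let mb = 1024.0 * 1024.0
print(Double(gBufferBytes(width: 1920, height: 1080)) / mb)  // ~71 MB per frame
print(Double(gBufferBytes(width: 3840, height: 2160)) / mb)  // ~285 MB, 4x the traffic at 4K
// On an IMR those megabytes get written out to and read back from DRAM every
// frame; on a TBDR they can stay in tile memory, but only if the render
// passes are structured to allow it.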
 
TBDR is still a very good fit for deferred rendering, as you can avoid writing/sampling the GBuffer back into system memory entirely. But it requires practically a different rendering path so it's unlikely we'll see such optimisations from most games ported to Mac.
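For anyone curious what that looks like in practice, here's a minimal Metal sketch (an assumed RGBA16Float albedo target, not code from any shipping renderer): a memoryless attachment with don't-care store actions stays in on-chip tile memory and is never written back to system memory.

Code:
import Metal

// Minimal sketch: keep a G-buffer attachment entirely in tile memory on a
// TBDR GPU by making it memoryless and never storing it out.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                    width: 3840, height: 2160,
                                                    mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless            // no system-memory backing at all
let albedo = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[1].texture = albedo
pass.colorAttachments[1].loadAction = .dontCare
pass.colorAttachments[1].storeAction = .dontCare   // never flushed to DRAM
// Later stages of the pass read the attachment via framebuffer fetch / tile
// shaders instead of sampling a texture that was written back to memory.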
 
no point in purchasing a Windows laptop.

Don't want to quote the whole post, but oh wow. Well let's agree then, Apple silicon (and software) is the best and Apple PCs are where it's at. So when do you expect the turnover, as per the topic, where Apple PCs will be what Windows/MS PCs are now in terms of market share? (For gaming as well.) With the release of the M1 laptop last year, the thing being on another level compared to similarly priced Windows laptops, why aren't people going for that instead of Windows-based laptops?

To clarify (as per your post), for me personally I wouldn't mind Apple instead of Microsoft. If the latter evaporates into thin air (Windows devices and probably consoles), then I'd guess that developers will port and focus their games on Apple PCs/hardware. In fact I'd be happy to have monstrous performance at a fraction of the power draw, which also means smaller-sized PCs, or that just a laptop would suffice.
It doesn't matter what's powering my boxes, be it Apple or something else; whoever's got the performance is king.
 
Comparing IPC is pointless when the M1 is clearly designed around a much lower operating frequency.
Milan Zen server processors top out at around 4GHz "boost" it seems, with 64-core processors having a maximum boost of 3675MHz (7713):

Epyc - Wikipedia

so "clearly" is not looking like a justifiable excuse.
 
Very underwhelming gaming performance. You get 3080M performance in synthetic benchmarks for over double the price, and not even half that performance in practice.
[attached benchmark chart]

This is an example of something that has been optimised for the M1.

Unfortunately nearly no games are optimised for macOS and Metal but productivity applications really shine and show the power of these SoCs.

Performance is the same whether plugged in or not, which is impressive. Can't say that for any of the competing products.

The potential is certainly there, but no game developer has gone the extra mile yet.
 
Unfortunately nearly no games are optimised for macOS and Metal but productivity applications really shine and show the power of these SoCs.
A lot of games are written in common engines that do a very good job of optimising for Apple's Metal API, like Unreal Engine and Unity. World of Warcraft was pretty much the first big game to support Apple's Metal API, and other Mac-friendly developers like Creative Assembly (Total War, Alien Isolation), Paradox (Stellaris, Crusader Kings, Hearts of Iron, Europa Universalis IV, Surviving Mars), and Firaxis (Civilization) embraced native Metal graphics engines years ago.

What do people think games are using to drive graphics on Mac? The deprecated OpenGL 4.1 driver code from July 2010? The first Metal API was introduced seven years ago. :yep2:
 
You don't know that. Apple purposefully avoids top frequency and higher voltages just to keep efficiency up.

But I do know that. The L1 cache's size and latency are a very clear indicator that the M1 cannot clock much higher.

As you say, this is a purposeful design choice.
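To make the arithmetic behind the L1 argument explicit (the latency and size figures are as publicly reported for the M1's big cores, so treat them as assumptions):

Code:
// Back-of-envelope: how much wall-clock time a 3-cycle load-to-use L1 has at
// different clocks. Figures are as publicly reported; treat as assumptions.
func nsPerCycle(_ ghz: Double) -> Double { 1.0 / ghz }
let l1LoadToUseCycles = 3.0

print(l1LoadToUseCycles * nsPerCycle(3.2))   // ~0.94 ns available at 3.2 GHz
print(l1LoadToUseCycles * nsPerCycle(5.0))   // ~0.60 ns would be needed at 5 GHz
// Hitting ~0.6 ns on a cache as large as the M1's (reported ~128 KB L1D) is a
// much harder circuit problem, so keeping the cache that big at 3 cycles
// implicitly caps the clock.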

For active logic, dynamic power dissipation rises roughly with the cube of the frequency (P ≈ C·V²·f, with voltage scaling alongside frequency), so building a slower and much wider CPU core increases efficiency while (at least!) maintaining performance. It also costs more, because silicon is not free. Some structures are less dependent on switching time and more on wire delays; these, like the ROB, schedulers and caches, can be enlarged without hitting the cycle-time limit.
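A quick back-of-envelope illustration of that cube law, with purely illustrative clocks:

Code:
import Foundation

// P ≈ C·V²·f, and V has to rise roughly with f, so P scales roughly with f^3.
func relativeDynamicPower(_ f: Double, baseline: Double) -> Double {
    pow(f / baseline, 3)
}

print(relativeDynamicPower(5.0, baseline: 3.2))   // ~3.8x the power for ~1.56x the clock
// A wider core that recovers that 1.56x through IPC instead lands at the same
// performance for a fraction of the power, at the cost of extra area.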

AMD's Zen/2/3 are very clearly engineered for high-end desktop. Intel has three large market segments: mobile, datacenter and desktop, and the latter two demand fast processors. I'm kind of surprised Intel hasn't done a mobile-centric design à la M1, with the kind of market share and resources they have. The big.LITTLE thing they have going on is stillborn IMO.

Cheers
 
Milan Zen server processors top out at around 4GHz "boost" it seems, with 64-core processors having a maximum boost of 3675MHz (7713):

You can probably find a low-power mobile U SKU that does even less. Doesn't change the fact that the same microarchitecture does 4.9GHz on an inferior process node.

Cheers
 