Apple is an existential threat to the PC

Does Apple make a desktop that is somehow "bigger" than the M3 Max?

No, the desktop with the fastest SoC is the Mac Studio, which is still in the M2 series.

Rumors are that it will skip the M3 SoCs and go to M4, but maybe not until mid-2025. They will first rev the MacBook Pros to M4, because that is where their volume is.

Mac desktops have often used mobile CPUs, like the Intel Core Duo. The main exceptions have been the Mac Pros, but those are updated very infrequently. With the advent of the Mac Studio a couple of years ago, they started making M-series Ultra chips, which aren't available in any of their laptops.

In any event, the M-series Max and Ultra SoCs aren't expected to compete against high-end discrete GPU cards.
 
In any event, the M-series Max and Ultra SoCs aren't expected to compete against high-end discrete GPU cards.
That's not according to Apple; they made silly claims about competing with the 3090 with previous M SoCs.
Could you please start using basic math concepts correctly? When you say "2.2 times faster" you actually mean "1.2 times faster" or "2.2 times as fast"; it's not 2.2 times faster.
No, it's more confusing this way, and not straightforward at all. If product A is doing 120 fps and product B is doing 60 fps, then it's much easier to say that A is 2x faster than B instead of saying A is 100% faster than B.

And if you slow the PC down to match the scores, the power consumption plummets a lot faster than performance does. It could be a close match or a win for either; we don't know till someone tests it.

Wouldn't the reasonable thing be to test laptops to laptops? 😅
The 4090 laptop was also tested; it is a severely cut-down and power-limited 4090 (150 W, PCIe x8). However, it is 40% faster than the M3 Max in Adobe Premiere Pro GPU Effects. It is also 40% faster in the DaVinci Resolve Studio GPU score, 2.3x faster in the AI score, 74% faster in the Cinebench GPU score, and 2x faster in the Blender GPU score. So at minimum it is 40% faster, and on average it's 70% faster.
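For reference, a quick sketch of how those margins average out (Python; this reads "2.3x faster" and "2x faster" as 2.3x and 2.0x as fast, i.e. +130% and +100%, which appears to be what was meant given the ~70% average quoted):

```python
import math

# Margins from the post: Premiere, Resolve, AI, Cinebench, Blender.
# "2.3x faster" and "2x faster" are read here as 2.3x / 2.0x as fast.
speedups = [1.40, 1.40, 2.30, 1.74, 2.00]

arith = sum(speedups) / len(speedups)             # arithmetic mean of the ratios
geo = math.prod(speedups) ** (1 / len(speedups))  # geometric mean of the ratios

print(f"arithmetic mean: {arith:.2f}x as fast ({(arith - 1) * 100:.0f}% faster)")
print(f"geometric mean:  {geo:.2f}x as fast ({(geo - 1) * 100:.0f}% faster)")
# arithmetic mean: 1.77x as fast (77% faster)
# geometric mean:  1.73x as fast (73% faster)
```

The geometric mean is the more appropriate average for ratios, and it lands close to the 70% figure quoted.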

If the PC is 3 times faster, then it will use 6 times the energy to accomplish that.
Nope, check the links.
 
That's not according to Apple; they made silly claims about competing with the 3090 with previous M SoCs.
Not to mention how damn big those things are. The M3 Max has over 3.25x the transistor count of GA102. And no, the CPU, NPU, etc. don't take up the majority of it.

No, it's more confusing this way, and not straightforward at all. If product A is doing 120 fps and product B is doing 60 fps, then it's much easier to say that A is 2x faster than B instead of saying A is 100% faster than B.
It's not a matter of "easier"; it's a matter of how math works. It's just as easy to say it's twice as fast, which is true for your example, while "2x faster" isn't. A being 2x faster than B would mean A is 180.
People using math wrong doesn't make the wrong math right.
 
Not to mention how damn big those things are. The M3 Max has over 3.25x the transistor count of GA102. And no, the CPU, NPU, etc. don't take up the majority of it.
100% agree here.

while "2x faster" isn't. A being 2x faster than B would mean A is 180.
I asked ChatGPT and Gemini and their answers agree with me, 120 is 2x or two times bigger than 60. Am I missing something here?

If A is "2x bigger than B", it means that A is twice the size of B, or that A equals 2B.

However, this phrase can sometimes be confusing or interpreted differently, so it's clearer to say:

"A is twice as large as B"
"A is 200% of B"
"A is 100% larger than B"

All of these mean that A is double the value of B.
 
I've heard that the process the M3 is built on hasn't been as successful as hoped.

Apple is looking to move the MacBook Pros to M4 this fall. I don't think the M4 will be that much faster than its M3 counterparts, though. Maybe more efficient, with better yields.
 
I asked ChatGPT and Gemini and their answers agree with me, 120 is 2x or two times bigger than 60. Am I missing something here?

What is 80% (or 0.8x) bigger than 60? We instinctively know that the math that we need to apply is (1+0.8)*60 = 108 and not 0.8*60 = 48. The latter would be absurd.
This intuitive definition of "bigger than" applies all the way to 100% (or 1.0x) bigger than 60, which is (1+1.0)*60 = 120.

We can't change the definition of "bigger than" the moment the ratio crosses 1.0x. Which is what we would have to do if 120 were to be 2x "bigger than" 60. And yet, I agree with you that this is what many people would casually use. Which is why ChatGPT and Gemini agree -- they've been trained on human patterns, warts and all.

"A is 2x as big as B" is simple, precise and consistent.
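The distinction can be made concrete with a tiny sketch (Python, using the 120 fps / 60 fps example from the thread):

```python
def times_as_fast(a: float, b: float) -> float:
    """Ratio: how many times as fast A is relative to B."""
    return a / b

def percent_faster(a: float, b: float) -> float:
    """Percentage increase: how much faster A is than B."""
    return (a / b - 1) * 100

a, b = 120.0, 60.0  # fps for products A and B
print(times_as_fast(a, b))   # 2.0   -> "A is 2x as fast as B"
print(percent_faster(a, b))  # 100.0 -> "A is 100% faster than B"

# Read literally, "A is 2x faster than B" is a 200% increase over B:
print(b * (1 + 2.0))         # 180.0
```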
 
I asked ChatGPT and Gemini and their answers agree with me, 120 is 2x or two times bigger than 60. Am I missing something here?
Yeah, that generative AI isn't too bright and can use math wrong too (because much of its training material does).
(Sorry for the roundabout share; the scrolling screenshot seems to be a bit bugged.)

1000013740.jpg

Edit: also what neckthrough said above
 
The 4090 laptop was also tested; it is a severely cut-down and power-limited 4090 (150 W, PCIe x8). However, it is 40% faster than the M3 Max in Adobe Premiere Pro GPU Effects. It is also 40% faster in the DaVinci Resolve Studio GPU score, 2.3x faster in the AI score, 74% faster in the Cinebench GPU score, and 2x faster in the Blender GPU score. So at minimum it is 40% faster, and on average it's 70% faster.
150 W, you say?

“So at minimum it is 40%, and on average it's 70% faster.”
Nope, check the links.

I don’t think you are right.
For parity with the mobile 150 W 4090 at the same performance per watt, the Apple silicon GPU would need to consume about 90 watts.
I am saying it will use less than 45 watts.

If it scales linearly, then a 150 W Apple silicon GPU should offer 100% faster performance than the 4090 at the very minimum.
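A minimal sketch of the arithmetic behind that claim; the 150 W, 45 W, ~1.7x, and linear-scaling figures are the post's assumptions, not measurements:

```python
pc_power_w = 150.0   # mobile 4090 power limit (figure from the post)
pc_speedup = 1.70    # mobile 4090 ~1.7x as fast as the M3 Max (the post's average)

# Power the M3 Max GPU would need at the 4090's performance per watt to match it:
parity_power = pc_power_w / pc_speedup
print(f"parity at ~{parity_power:.0f} W")  # ~88 W, the "90 watts" in the post

# If the M3 Max GPU actually draws ~45 W (the poster's assumption) and
# performance scaled linearly with power, a hypothetical 150 W Apple GPU would be:
apple_power_w = 45.0
scaled = (pc_power_w / apple_power_w) / pc_speedup
print(f"~{scaled:.2f}x the mobile 4090")  # ~1.96x, i.e. roughly "100% faster"
```

Whether GPU performance actually scales anywhere near linearly with power is exactly the open question raised earlier in the thread.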
 
I've heard that the process the M3 is built on hasn't been as successful as hoped.

Apple is looking to move the MacBook Pros to M4 this fall. I don't think the M4 will be that much faster than its M3 counterparts, though. Maybe more efficient, with better yields.
M1, M2, and M3 are basically the same, with M4 being the 'successor'. M4 has already existed in the iPad Pro for months, btw.
 
That's not according to Apple; they made silly claims about competing with the 3090 with previous M SoCs.
We don't even know what "relative performance" means or what was tested to get these scores. The graph is misleading, I'll agree, and Apple was also ridiculed for this weird comparison.

I can't find the "fine print" to go with this graph, unfortunately.

No one but Apple would compare their integrated graphics with the highest-end discrete GPUs on the market 😅 Worth noting is that Apple hasn't made such a comparison since, if I recall correctly.

Apple really needs to downplay the performance per watt for their desktop parts, as the GPU is very conservatively clocked compared to other GPUs (the M4 GPU is clocked at 1.47 GHz, for example, and the M2 Max GPU at 1.39 GHz).

Apple_M1_Ultra_gpu_performance_01.jpg


The performance per watt is silly, especially in something like the desktop Mac Studio.

Here is an example when maxing out my resources.

M2Max-PerformacePerWatt.png
 
Yeah, that generative AI isn't too bright and can use math wrong too (because much of its training material does).
I agree with you that this is what many people would casually use
Even German sites like ComputerBase and PCGH use my methodology.

Here the 4090 is doing 61 fps at 4K path tracing vs. 14 fps for the 7900 XTX; ComputerBase shows the 4090 with a 418% advantage, which is the same as 4.2x as fast as the 7900 XTX.


Here PCGH has the 4090 being 400% faster than the 7900 XTX at 4K, which corresponds well with the 4090 doing 40 fps vs. 10 fps for the 7900 XTX.

 
M1, M2, and M3 are basically the same, with M4 being the 'successor'. M4 has already existed in the iPad Pro for months, btw.
Unfortunately, just due to the nature of PC review sites, we have very little good evidence for examining the IPC increases of the M1, M2, and M3. If only reviewers could set aside their teenage angst and actually benchmark Apple products, it would do everyone a lot of good.

EDIT: From what I have seen, though, we have seen decent 5-10% increases in CPU performance each generation. M4 does look promising. How much of that is due to architectural changes and how much is due to the much-improved TSMC process over the M3's, I'm not sure.
 
Unfortunately, just due to the nature of PC review sites, we have very little good evidence for examining the IPC increases of the M1, M2, and M3. If only reviewers could set aside their teenage angst and actually benchmark Apple products, it would do everyone a lot of good.

EDIT: From what I have seen, though, we have seen decent 5-10% increases in CPU performance each generation. M4 does look promising. How much of that is due to architectural changes and how much is due to the much-improved TSMC process over the M3's, I'm not sure.
Again, the differences between M1 and M3 are not really due to a new architecture. The new architecture is M4. AnandTech or Ars Technica already warned PC fans years ago, when they were compiling server benchmarks to run on iPhones, which predicted x86's demise.
 
Again, the differences between M1 and M3 are not really due to a new architecture. The new architecture is M4. AnandTech or Ars Technica already warned PC fans years ago, when they were compiling server benchmarks to run on iPhones, which predicted x86's demise.
Sure, technically we can say that the Apple M1 is based on the ARMv8.5 ISA, the M2 and M3 on the ARMv8.6 ISA, and the M4 on the ARMv9.2a ISA.

That doesn't mean there weren't any architectural differences between them. It wasn't just the node that increased IPC.
 