Apple M1 SoC

That said, I'm not seeing any exceptional "Apple magic" here. It may be doing this at a very low power draw, but it seems to be achieving that through a massive node/density advantage, resulting in enormous but low-frequency chips. And of course Raptor Lake/Zen 4 and Ada/RDNA 3 are going to drastically change the equation in the next three months.
RDNA 3/Ada will definitely tilt GPU performance well back in the PC's favour, no doubt - albeit Apple is still rumoured to be introducing a true Power Mac replacement, and who knows how many M1 Ultras will be stitched together for that. I doubt it will be price competitive with Nvidia/AMD on GPU performance alone when it arrives, though it will probably take up less space than a 4090 by the sounds of it. :)

Perhaps it's because I haven't kept up that closely with Zen 4/Raptor Lake development, but are they reported to have big perf/watt gains? Everything I've read about Zen 4 indicates the main performance increases will come from MHz, and that if anything comparable chips will actually have their wattage increased vs. Zen 3. Raptor Lake's improvements sound very marginal - certainly nothing that will 'drastically' change the equation in a few months. What am I missing?
 
No idea on Raptor Lake, but Zen 4 is going to be a 10-20% single-core perf increase from a combination of clock speed and architecture improvements, according to AMD.
 
Well, yeah, that's pretty much what I'm hearing, so I just don't get the implication that this is going to be any kind of sea change. The numbers I'm seeing also indicate that to get that performance increase there's going to be a power increase as well. If that's what they're getting by moving to 5nm, then if anything it further undercuts the argument that Apple's main advantage is being on a more advanced process node.
 
AMD is reporting a >25% performance per watt gain for Zen 4.


Raptor Lake is delivering 30-40% more multicore performance on the same process, so I wouldn't call the improvement "marginal", though we don't have performance-per-watt figures AFAIK. Anyway, Zen 4 will probably deliver close to 5950X performance in a 60W laptop chip, so we would hope Intel could match that with Meteor Lake.
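To put that perf/watt figure in concrete terms: a >25% perf-per-watt gain can be cashed in as more performance at the same power, the same performance at less power, or anything in between. A tiny sketch of the two extremes (abstract ratios, not vendor benchmark numbers):

```python
# What a 25% perf/watt improvement buys, taken at the two extremes.
# These are abstract ratios, not vendor benchmark numbers.

PERF_PER_WATT_GAIN = 1.25

# Spend it all on speed: same power budget -> 1.25x performance.
same_power_perf = 1.0 * PERF_PER_WATT_GAIN

# Spend it all on efficiency: same performance -> 1/1.25 = 0.8x power.
same_perf_power = 1.0 / PERF_PER_WATT_GAIN

print(f"same power: {same_power_perf:.2f}x performance")
print(f"same perf:  {same_perf_power:.2f}x power (a 20% cut)")
```

Which end of that range a given SKU targets is exactly the open question in the posts above.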
 
The M1 Max/Ultra also costs much more than an Alder Lake 12900K does, all the while the 12900K is still the better all-around pick performance-wise.
Also, production costs should be much, much higher. All those transistors... this APU is really, really big. You could pair a Threadripper + RTX 3090 and still have used fewer transistors.
Apple did that because they can. They don't offer lower-end CPUs and charge premium prices, so they can use designs that cost much more.
 
I really think you’re doing Apple engineers a disservice.
 

A TR/3090 would also cost less for the consumer while giving much more performance all-around - even for creators, aside from where the media encoders do their thing. Then again, Intel CPUs for example also have hardware-accelerated units built in for such workloads.
As the DF video concluded, there's no reason for AMD/Intel etc. to go the ARM route. And yeah, the M1 Max/Ultra die is very large; you'd be looking at server/pro-grade stuff in both size and cost if you want a comparison. Anyway, normal consumer equipment exceeds it comfortably in most cases, sometimes just trading blows.

I don't really see the point of this DF video to begin with; it's not really targeting their audience, who are primarily gamers.
 
I really think you’re doing Apple engineers a disservice.

What makes you think that, given the same conditions of a better node and an enormous transistor budget, Intel or AMD engineers couldn't have achieved something similar?

It seems to me the PC CPUs are simply making different trade-offs, achieving similar or better performance using a much smaller and cheaper chip that consumes much more power on an inferior node.

I expect Raptor Lake to change the landscape primarily because of its greater core count. Single-threaded performance should open the gap between it and the M1 up a little, but with 24 cores vs the 12900K's 16, it should push multicore performance ahead of the M1 Ultra's 20 cores in those scenarios where the M1 can currently beat the 12900K.
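The multicore expectation here can be roughed out from core counts alone. If Raptor Lake moves from the 12900K's 8P+8E to 8P+16E, the uplift depends on how much throughput an E-core contributes relative to a P-core. A back-of-envelope sketch; the 0.55 E-to-P throughput ratio is my assumption for illustration, not a measured figure:

```python
# Back-of-envelope multicore throughput from core counts.
# ASSUMPTION: one E-core contributes ~0.55x the multithreaded
# throughput of one P-core (illustrative guess, not measured data).

E_PER_P = 0.55

def mt_throughput(p_cores: int, e_cores: int, ratio: float = E_PER_P) -> float:
    """Relative multicore throughput, in P-core equivalents."""
    return p_cores + e_cores * ratio

adl = mt_throughput(8, 8)    # 12900K: 8 P-cores + 8 E-cores
rpl = mt_throughput(8, 16)   # Raptor Lake: 8 P-cores + 16 E-cores

print(f"12900K:      {adl:.1f} P-core equivalents")
print(f"Raptor Lake: {rpl:.1f} P-core equivalents")
print(f"uplift from core count alone: {rpl / adl - 1:.0%}")
```

With that assumed ratio, the extra E-cores alone give roughly a 35% uplift, before any clock or IPC gains.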
 
Do we have data on how many transistors are dedicated to the CPU as opposed to the GPU and the high-speed fabric? I think Intel's previous 7-8 years of engineering say more than I possibly could. They are one node behind Apple, and a single node jump doesn't bring a 12900K from 250+ watts down to 60.
 
You don't need 250+ watts to beat the M1.
 
12900Ks are always benched at their default highest power setting AFAIK. That enables them to trade blows with the M1 Ultra. They also run at near 100 degrees with the largest air cooler on the market. How fast would the M1 ultra be if Apple just completely ignored power draw and thermal output?

BTW, what is the transistor count of the 12900K? I can't seem to find it in any reviews.
 
*At a much lower cost. M1 CPUs run quite toasty too when the media encoders aren't the only units working. How fast could the M1 Ultra have been? We'll never know; my guess is that efficiency goes out the window at that point, while still not really competing with Intel or AMD products.
 
It already is competing with Intel's best. Given 4x+ the power budget, it's silly to think it wouldn't trounce it. And it's doing so with substantially less cooling than the massive Noctua tower or water cooling the 12900K basically requires.

Can you provide data on the cost of production for the CPU portion of the M1 Ultra and the 12900k?
 
Do we have data on how many transistors are dedicated to the CPU as opposed to the GPU and the high-speed fabric? I think Intel's previous 7-8 years of engineering say more than I possibly could. They are one node behind Apple, and a single node jump doesn't bring a 12900K from 250+ watts down to 60.

No, Apple clearly has a big lead in power draw, but as I say, Intel seems to be targeting different priorities. A node change won't get them to Apple's energy efficiency, but it will allow for an even bigger performance lead and/or an even cheaper chip.

I don't know the CPU/GPU split of the M1 Ultra, but if you take an Alder Lake + GA102 you are still far smaller than the M1 Ultra with quite a bit more performance, albeit with much higher power draw.

Comparing to discrete components is probably the wrong approach though.

Look at what AMD has achieved with SoCs in the consoles in terms of energy efficiency. Now imagine they were releasing a console today based on Zen 4/RDNA 3 on Apple's node. I'd wager that would be competitive with the M1 Ultra on all fronts.
 
It already is competing with Intel's best. Given 4x+ the power budget, it's silly to think it wouldn't trounce it. And it's doing so with substantially less cooling than the massive Noctua tower or water cooling the 12900K basically requires.

Can you provide data on the cost of production for the CPU portion of the M1 Ultra and the 12900k?

Competing in certain workloads, with CPUs that cost substantially less.

Edit: there is AMD/x86 hardware in the consoles, may I remind you. You're barking up the wrong tree.
 
Or look at what AMD hardware does in performance laptops. Quite impressive how far we've come.
 
One thing to remember about AMD is the purchase of Xilinx. Integration of their technology should prove very beneficial to AMD; I believe Zen 5 is when most of us expect it to show up.

Furthermore, it will be interesting to see what AMD's 5nm line-up does in terms of competing with Apple's 5nm line-up.
 
Do we have data on how many transistors are dedicated to the CPU as opposed to the GPU and the high-speed fabric? I think Intel's previous 7-8 years of engineering say more than I possibly could. They are one node behind Apple, and a single node jump doesn't bring a 12900K from 250+ watts down to 60.
Power consumption increases non-linearly with clock speed (dynamic power scales roughly with voltage squared times frequency, and higher clocks require higher voltage), and the 12900K is at the limit of its clock speed envelope. A competitor to the M1 Ultra would probably be an 8 P-core, 32 E-core CPU clocked under 5 GHz on Intel 4.
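That scaling can be sketched with the standard dynamic-power relation P ≈ C·V²·f. Because hitting higher clocks also means raising voltage, power grows much faster than frequency. A rough illustration; the voltage/frequency pairs below are made-up placeholder values, not measured Alder Lake data:

```python
# Dynamic CPU power scales roughly as P = C * V^2 * f.
# Higher clocks need higher voltage, so power rises super-linearly.
# NOTE: the voltage/frequency points are illustrative placeholders,
# not measured Alder Lake values.

def dynamic_power(capacitance: float, voltage: float, freq_ghz: float) -> float:
    """Relative dynamic power: C * V^2 * f."""
    return capacitance * voltage**2 * freq_ghz

C = 1.0  # arbitrary constant; we only care about ratios
base = dynamic_power(C, 1.00, 3.5)   # e.g. 3.5 GHz at 1.00 V
boost = dynamic_power(C, 1.35, 5.2)  # e.g. 5.2 GHz at 1.35 V

# In this sketch, ~49% more frequency costs ~171% more power.
print(f"frequency ratio: {5.2 / 3.5:.2f}x")
print(f"power ratio:     {boost / base:.2f}x")
```

That is why backing off the top of the clock envelope buys so much efficiency.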
 
Well, when you limit a 12900K to 75 watts it loses over half of its performance. When have CPUs ever seen a 100+% perf/watt increase in a single node jump?

Looking at the 12900HK at 45 and 75 watts, you get 13413 and 16700 in Cinebench multicore; 15413 would seem a reasonable estimate for a theoretical 60-watt limit. Reaching the M1 Ultra's 23908 is not a realistic expectation for their next release.
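The 60-watt estimate can be sanity-checked with a straight-line interpolation between those two quoted 12900HK points. (Performance vs. power is actually sublinear, with diminishing returns at the top end, so the real 60 W score would likely land a little above the straight line, consistent with the ~15413 estimate.)

```python
# Straight-line interpolation between the two quoted Cinebench
# multicore results for the 12900HK: 13413 @ 45 W, 16700 @ 75 W.

def lerp(x0: float, y0: float, x1: float, y1: float, x: float) -> float:
    """Linearly interpolate y at x between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

score_60w = lerp(45, 13413, 75, 16700, 60)
print(f"linear estimate at 60 W: {score_60w:.1f}")  # 15056.5

# Perf/power curves flatten at higher power, so the true 60 W
# figure likely sits slightly above this line -- still nowhere
# near the M1 Ultra's 23908.
```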
 