Apple is an existential threat to the PC

Yeah, not having discrete GPU support is a liability, but I guess Apple decided the market of people who need that isn't going to offer them the ROI.

They know what their Mac Pro sales were. Probably not enough to justify iterating regularly.
 

Very interesting results, and these are the first AI/LLM benchmarks I've seen.
The M4 Max actually manages to outperform the RTX 4090 once model sizes become too big for the 4090's VRAM pool.
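A rough back-of-the-envelope sketch of why that crossover happens (the numbers and the 20% overhead factor are illustrative assumptions, not from the benchmark): once the quantized weights plus overhead no longer fit in the 4090's 24 GB of VRAM, the model has to spill, while a maxed-out M4 Max can hold it in up to 128 GB of unified memory.

```python
# Illustrative estimate, not a benchmark: approximate memory needed by a
# quantized model and check whether it fits in 24 GB of VRAM (RTX 4090)
# vs. 128 GB of unified memory (maxed-out M4 Max).

def model_memory_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Weight memory in GB, plus ~20% for KV cache and activations (rough assumption)."""
    return params_b * 1e9 * (bits_per_weight / 8) * overhead / 1e9

for name, params_b, bits in [("8B @ ~4.5 bpw", 8, 4.5), ("70B @ ~4.5 bpw", 70, 4.5), ("70B @ ~8.5 bpw", 70, 8.5)]:
    need = model_memory_gb(params_b, bits)
    print(f"{name}: ~{need:.0f} GB  fits 24 GB 4090: {need <= 24}  fits 128 GB M4 Max: {need <= 128}")
```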
 
That's what tensor parallelism is for in AI/LLM ... just plug in more RTX 3090s (you don't really need a 4090 at low batch sizes). Even over PCIe, tensor parallelism works well enough.
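For reference, a minimal sketch of what that setup looks like with vLLM (the model name is illustrative, not from the post; larger models would need quantization or more GPUs): tensor_parallel_size shards each layer's weight matrices across the visible GPUs, e.g. two RTX 3090s.

```python
from vllm import LLM, SamplingParams

# Shard the model across 2 GPUs; vLLM splits each layer's weights between them.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative choice
    tensor_parallel_size=2,                    # number of GPUs to shard across
    dtype="float16",
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain tensor parallelism in one paragraph."], sampling)
print(outputs[0].outputs[0].text)
```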

Though unless it's for porn, you can just do it in the cloud too.
 
It's interesting to see the E cores in the M4 nearing the P cores of the M1: faster in some areas and still slower in others.

[Attached chart: m134coreperf4.png]


More observations here: Inside M4 chips: CPU core performance (by Dr. Howard Oakley at The Eclectic Light Company).
 
Seems Valve is having another go at steam machines, they have a "powered by steamos" mark now.

Will they actually spend some money on it and handle it like Google handles Chromebooks, with Valve handling all software including firmware updates, or will it be another failure?
 
Seems Valve is having another go at steam machines, they have a "powered by steamos" mark now.

Will they actually spend some money on it and handle it like Google handles Chromebooks, with Valve handling all software including firmware updates, or will it be another failure?
They've been adding features for competing handheld-class PCs for a while now, but there are no signs so far that it's getting ready for a desktop release.
 
Even for other handhelds, if they don't commit to handling the full update responsibility (including firmware and BIOS) and QA their releases on them, it will be a clusterfuck.

If they put their mark on it, they need to make sure they can deliver the same quality as on their own hardware. Otherwise it will just hurt the brand.
 
Seems Valve is having another go at steam machines, they have a "powered by steamos" mark now.

Will they actually spend some money on it and handle it like Google handles Chromebooks, with Valve handling all software including firmware updates, or will it be another failure?

The infrastructure for doing all this is way more advanced these days compared to when the Steam machines were a thing. Now we even have things like fwupd.
 
That's really the least of it. Taking on the responsibility of testing across all the devices they certify, and the long-term responsibility of software support and bugfixing for those devices just as for the Deck, is where the costs are.

Companies want to shirk responsibility, and Valve, with its clique-like organizational structure, probably more than most. You can't build a really solid brand (or ecosystem) like that, though.
 
The M3 Ultra runs Assassin’s Creed Shadows at 45fps using 4K medium settings and upscaling at Performance (meaning 1080p).


Is that even good? That’s a $4000 computer before any upgrades?

How much performance can you get with a $2500 to $3000 PC, including whatever GPU you can fit into that price range?
 
Given the clickbaity thumbnail graphic, I feel like they're trying to suggest this is "unexpectedly bad" performance. I'm not really interested in watching the video one way or another; however, for a purely ARM processor to run any x86 game at those speeds still seems like a pretty decent feat to me.

I know the M-series silicon is highly performant on native workloads; I'm not convinced anyone buys a modern Apple device primarily for Steam gaming. Although, given the sorts of folks I would expect to buy these, a bit of gaming capability will certainly be appreciated. And to that end, even with the clickbait, I personally feel that's an acceptable score.
 
I know the M-series silicon is highly performant on native workloads
I only posted this score because Assassin's Creed Shadows is a native Apple Silicon port.

Is that even good? That’s a $4000 computer before any upgrades?
This is beyond bad. The M4 Max scores worse than the M3 Ultra, by the way. This is how other GPUs perform at max settings.

[Attached chart: performance-rt-3840-2160.png]


Is there some bigger compilation of Mx GPU performance in games to see how it compares to AMD/Intel/NVIDIA?
Unfortunately there is none. However, you can do a little digging and find out. There are basically 3 other AAA games with native Apple Silicon implementations: Resident Evil 8, Resident Evil 4 Remake, and Death Stranding.

Resident Evil 4 Remake runs at 45 to 60 fps on the M4 Max at native 4K high settings (so essentially medium); at this level of performance the M4 Max is below a 3060.

Death Stranding runs at 30 to 40 fps on the M3 Max at upscaled 4K max settings; at this level of performance the M3 Max is basically at 2060 level.

Resident Evil 8 runs at ~90 fps on the M2 Ultra at upscaled 4K high settings (~medium), again maybe performing at the level of a 3060.
 
I found out that there are other AAA titles that also have native Apple Silicon releases.
  • Resident Evil 2 Remake runs anywhere between 30 and 60 fps (depending on the area) on the M3 Max at native 4K with max settings, which makes it slower than an RTX 2080.
  • Resident Evil 3 Remake runs anywhere between 55 and 80 fps (depending on the area) on the M3 Max at native 4K with max settings, which makes it slower than an RTX 2080 Ti.
  • Lies of P runs anywhere between 45 and 75 fps (depending on the area) on the M4 Max at native 1440p with max settings, which is definitely slower than a 3060.
  • Stray runs at ~60 fps on the M3 Max at native 4K high settings, which is about the performance of a 3070.
  • Baldur's Gate 3 runs under 30 fps on the M4 Max at native 1800p with Ultra settings, which is way slower than a 3060.
All in all, even when the games are optimized for Apple Silicon/Metal, the results are mixed: the M3 Max/M4 Max GPUs land around the level of a 3060, maybe a 3070/2080 Ti if they are lucky. 4K remains very hard on Apple GPUs for unknown reasons; performance falls off a cliff there compared to 1440p. Also, people have frequently observed that M3 Max and M4 Max laptops throttle when gaming for extended periods, so a Mac Studio is preferable for long gaming sessions.
 
I guess it's common to underestimate the value of driver optimizations. The huge efforts made by both NVIDIA and AMD over the years are not trivial and can't be easily replicated. Intel's Arc is a good example (and they are already doing pretty well).
When doing simpler workloads such as running an LLM, an M3 Ultra is roughly equivalent to a 3090 or 3090 Ti (the much larger memory is of course invaluable), so I guess it's not unreasonable to expect an M3 Max (which is half of an M3 Ultra) to be something in the range of a 3060 Ti or 3070.
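For what it's worth, those rough equivalences usually come from simple tokens-per-second measurements. A minimal sketch with llama-cpp-python (the model path and prompt are placeholders), which runs the same way against the Metal build on a Mac and the CUDA build on a PC:

```python
import time
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # placeholder path to a quantized GGUF model
    n_gpu_layers=-1,                 # offload all layers to the GPU (Metal or CUDA)
    n_ctx=4096,
    verbose=False,
)

prompt = "Write a short paragraph about unified memory."
start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

tokens = out["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```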
 
I guess it's common to underestimate the value of driver optimizations. The huge efforts made by both NVIDIA and AMD over the years are not trivial and can't be easily replicated. Intel's Arc is a good example (and they are already doing pretty well).
When doing simpler workloads such as running an LLM, an M3 Ultra is roughly equivalent to a 3090 or 3090 Ti (the much larger memory is of course invaluable), so I guess it's not unreasonable to expect an M3 Max (which is half of an M3 Ultra) to be something in the range of a 3060 Ti or 3070.
In the era of "explicit" graphics APIs, per-application driver optimizations aren't really sufficient to explain a performance gap of more than 2x. The gap is most likely down to Apple's hardware design being suboptimal for AAA PC/console games, since their graphics division prioritizes its resources towards mobile games and productivity applications rather than optimizing the hardware around PC/console graphics technology ...
 
In the era of "explicit" graphics APIs, per-application driver optimizations aren't really sufficient to explain a performance gap of more than 2x. The gap is most likely down to Apple's hardware design being suboptimal for AAA PC/console games, since their graphics division prioritizes its resources towards mobile games and productivity applications rather than optimizing the hardware around PC/console graphics technology ...

Even with those "low-level" APIs, driver optimizations are still important. The vendor needs to know which use cases are most common and where the major bottlenecks are. For example, if a commonly used operation is slow in some way, the hardware won't perform to its full potential. This is not necessarily a hardware limitation, but knowing which operations are the most important to optimize is critical in both hardware and software design.
 