Apple is an existential threat to the PC

CPU scaling is excellent, GPU scaling not so much, oh and the 3080Ti is several times faster than the M1 Ultra.

What does that mean? That if you pair it with a fast CPU and system it's several times faster than a Mac Studio?

Or faster in GPU tasks?

Or benchmarks like FLOPS?
 
Hopefully Apple will make some improvements on the GPU side, and provide better optimization information to developers. They seem to be doing a lot better on the CPU side, and the GPU is behind.
They will, but you can be certain that most of Apple's near-future GPU improvements will be focused on pro app performance. Apple are not architecting M1 (and probably not M2) to run DirectX, OpenGL or Vulkan better.

That's not to say they aren't great for a variety of games. I much prefer to play games like Stellaris, Civilization and Rome Remastered on the Mac simply because it's much easier to game, sleep, wake, resume and switch to other apps on macOS than on Windows. And my Windows laptop is an ASUS Zephyrus G14. It's just the ease with which Apple allow sliding between apps in their OS, and their OS seems much "happier" sleeping and resuming games than Windows, where that is really hit and miss for reasons I don't understand.
 
Hopefully Apple will make some improvements on the GPU side, and provide better optimization information to developers. They seem to be doing a lot better on the CPU side, and the GPU is behind.

Isn't that a matter of committing a lot of silicon?

The 3080 Ti is a $1200 card which is using more die area and transistors than whatever cores are in the M1 Ultra?

Seems like the market for the Mac Studio would be far different from the market for GPUs like the 3080 Ti, which are for hardcore gaming, crypto, some high-end rendering, etc.?
 
Isn't that a matter of committing a lot of silicon?

The 3080 Ti is a $1200 card which is using more die area and transistors than whatever cores are in the M1 Ultra?

Seems like the market for the Mac Studio would be far different from the market for GPUs like the 3080 Ti, which are for hardcore gaming, crypto, some high-end rendering, etc.?

I'm not necessarily saying that they should try to compete with a 3090 Ti, though they do have a Mac Pro coming and it will only be successful if they have very strong GPU performance. What they do need is better scaling of GPU performance when you jump between the M1, M1 Pro and M1 Ultra.
 
I'm not necessarily saying that they should try to compete with a 3090 Ti, though they do have a Mac Pro coming and it will only be successful if they have very strong GPU performance. What they do need is better scaling of GPU performance when you jump between the M1, M1 Pro and M1 Ultra.
I agree, but Apple aren't Nvidia or AMD, and they don't need to provide an array of GPU performance options that are sometimes questionable and arguably unnecessary.

E.g. there is considerable performance bleed across Nvidia's many variations of xx50, xx60, xx70, xx80 and xx90 GPUs, with some lower-numbered configurations outperforming higher-numbered ones, which does little to add clarity and can depend on the CPU driving the GPU, the bus configuration, the amount of local RAM and the clock speeds. Some of this is more about branding than technology. It's all a bit crap.

Apple's situation is a long way from perfect but it could be a lot worse.
 
That's not to say they aren't great for a variety of games. I much prefer to play games like Stellaris, Civilization and Rome Remastered on the Mac simply because it's much easier to game, sleep, wake, resume and switch to other apps on macOS than on Windows. And my Windows laptop is an ASUS Zephyrus G14. It's just the ease with which Apple allow sliding between apps in their OS, and their OS seems much "happier" sleeping and resuming games than Windows, where that is really hit and miss for reasons I don't understand.

I have a G15 with a 3070m and compared it to a family member's M1 Pro 16. I wouldn't want to trade it for gaming purposes, that's for sure, not even light gaming. Windows 11 with its latest updates is doing fantastic too.

Seems like the market for the Mac Studio would be far different from the market for GPUs like the 3080 Ti, which are for hardcore gaming, crypto, some high-end rendering, etc.?

A 3080/Alder Lake combo is a very competent creator setup too.
 
Looks like we'll have another fully native M1 game to compare performance against the PC soon. Today's WWDC revealed that RE:Village is coming to the Mac later this year, utilizing the Metal 3 API and taking advantage of Apple's version of something akin to FSR 2.0, so that an M1 Air can get 60fps at "1080p" using temporal upscaling, and a Mac Studio can run native 4K at 60+. It's still somewhat annoying that there is no DLSS/FSR 2.0 in RE:Village on PC except for the interlaced mode, which, while certainly better than past implementations, is pretty weak; it would be interesting to compare how Apple's implementation fares in the same game otherwise.

Metal 3 also apparently has something akin to DirectStorage. It was just a presentation though, so no real details atm.
 
Looks like we'll have another fully native M1 game to compare performance against the PC soon. Today's WWDC revealed that RE:Village is coming to the Mac later this year, utilizing the Metal 3 API and taking advantage of Apple's version of something akin to FSR 2.0, so that an M1 Air can get 60fps at "1080p" using temporal upscaling, and a Mac Studio can run native 4K at 60+. It's still somewhat annoying that there is no DLSS/FSR 2.0 in RE:Village on PC except for the interlaced mode, which, while certainly better than past implementations, is pretty weak; it would be interesting to compare how Apple's implementation fares in the same game otherwise.

Metal 3 also apparently has something akin to DirectStorage. It was just a presentation though, so no real details atm.
I don't think they talked about the performance whatsoever. They only talked about resolutions and their upscaling tech. From what I saw, the clips they showed of No Man's Sky and RE8 were really bad and the image quality was not good tbh. RE8 looked like it was running on lower settings. Their upscaling tech is a spatial tech just like DLSS 1.0 and FSR 1.0.
 
I'm going to guess it will remain mostly a total shitshow until Apple releases its own console; that's the most surefire way to get good support.

I think we're close to the point where treating a tiler as an immediate mode renderer will become untenable. Engines will have to go tile native.
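
For concreteness, "going tile native" on Apple's GPUs mostly means keeping intermediate render targets in on-chip tile memory instead of round-tripping them through system RAM. Here's a rough Swift sketch of the idea, assuming a deferred-style renderer; the helper names are illustrative, not any real engine API:

```swift
import Metal

// Rough sketch (illustrative names, not production code): on a tile-based
// GPU like Apple's, intermediate G-buffer attachments can live entirely in
// on-chip tile memory by using .memoryless storage, so they never touch
// system RAM.
func makeTileOnlyAttachment(device: MTLDevice,
                            format: MTLPixelFormat,
                            width: Int, height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: format,
                                                        width: width,
                                                        height: height,
                                                        mipmapped: false)
    desc.usage = .renderTarget       // only read/written inside the render pass
    desc.storageMode = .memoryless   // tile memory only; no backing allocation
    return device.makeTexture(descriptor: desc)
}

func makeDeferredPass(albedo: MTLTexture, normal: MTLTexture,
                      finalColor: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()

    // G-buffer attachments: produced and consumed within the same pass,
    // so nothing is stored back to memory when the pass ends.
    pass.colorAttachments[1].texture = albedo
    pass.colorAttachments[1].loadAction = .clear
    pass.colorAttachments[1].storeAction = .dontCare
    pass.colorAttachments[2].texture = normal
    pass.colorAttachments[2].loadAction = .clear
    pass.colorAttachments[2].storeAction = .dontCare

    // Only the final lit colour leaves the tile.
    pass.colorAttachments[0].texture = finalColor
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    return pass
}
```

An engine written for immediate-mode GPUs tends to split the frame into many small passes with load/store round trips between them, which throws that tile-memory advantage away, hence the "go tile native" point.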
 
They only talked about resolutions and their upscaling tech. From what I saw, the clips they showed of No Man's Sky and RE8 were really bad and the image quality was not good tbh.

I really wouldn't judge image quality from a 1080p webstream; it's a little premature to judge what settings it's using atm. We also have no idea what system they were running it on.

I'm just saying it's good that we'll have a modern, native title to compare gaming performance outside of Shadow of the Tomb Raider (which isn't fully native).

Their upscaling tech is a spatial tech just like DLSS 1.0 and FSR 1.0.

Yeah, listening to it again it's not entirely clear - the presenter says it works by applying 'spatial upscaling and temporal anti-aliasing'. That could mean just spatial upscaling on top of the game's existing AA, but it's a little odd that they would mention TAA as part of it - a spatial upscaler doesn't care about the AA method. And if it were purely spatial, why would it need developer-specific support?

And DLSS 1.0 was spatial?
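
On the developer-support point, the difference is easiest to see in what each kind of upscaler needs from the engine: a purely spatial upscaler only needs the finished colour frame, so it could in principle be bolted on from outside, while a temporal one needs motion vectors, depth and the camera jitter every frame, which only the engine can provide. A hypothetical Swift sketch of the two input sets (these types are illustrative, not Apple's actual API):

```swift
import Metal

// Hypothetical illustration, not Apple's actual API: the data each kind
// of upscaler has to be fed by the game engine.

// Spatial: works on the finished frame alone, so no engine integration
// is strictly required.
struct SpatialUpscaleInputs {
    var lowResColor: MTLTexture        // the rendered frame, nothing else
}

// Temporal: reuses previous frames, so the engine has to hand over
// per-pixel motion, depth and the sub-pixel camera jitter it applied,
// which is why it needs per-game support.
struct TemporalUpscaleInputs {
    var lowResColor: MTLTexture        // current frame at render resolution
    var depth: MTLTexture              // for disocclusion / history rejection
    var motionVectors: MTLTexture      // per-pixel screen-space motion
    var jitter: SIMD2<Float>           // camera jitter used this frame
    var resetHistory: Bool             // e.g. after a hard camera cut
}
```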
 
M2 is supposedly 18% faster than M1, though no idea on what workloads. The iPhone 7 and below won't be getting iOS 16, so I guess it's A11 (Bionic) and up from now on, and you need an A12 to get the newer features. Though strangely enough, iPadOS 16 will support the A9 and A10 SoCs.
 
DLSS 1 was a temporal solution, not spatial.
No. It was a spatial solution, and that's why Nvidia changed it in 2.0 to a temporal solution.

For GPU junkies, many of you will recognize this as a similar strategy to how NVIDIA designed DLSS 1.0, which was all about spatial upscaling by using pre-trained, game-specific neural network models. DLSS 1.0 was ultimately a failure – it couldn’t consistently produce acceptable results and temporal artifacting was all too common. It wasn’t until NVIDIA introduced DLSS 2.0, a significantly expanded version of the technology that integrated motion vector data (essentially creating Temporal AA on steroids), that they finally got DLSS as we know it in working order.
 
No. It was a spatial solution, and that's why Nvidia changed it in 2.0 to a temporal solution.
Completely different from what my mind remembers, as I always thought of it as a TAA replacement, including the temporal component.
 
You can see the upscaling and high-res TAA as parts of one combination, so it was always temporal.

It just wasn't motion-compensated, it seems. The motion compensation is the big deal.
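
To make "the motion compensation is the big deal" concrete: without it, the history is blended at the same pixel position, so anything that moves smears; with it, the history sample is fetched from where the surface was last frame. A toy Swift sketch with illustrative names (grayscale, CPU-side; real implementations run on the GPU and add history clamping and rejection):

```swift
// Toy sketch, illustrative only: temporal accumulation with and without
// motion compensation.
struct Frame {
    var width: Int
    var height: Int
    var color: [Float]                 // grayscale for simplicity
    var motion: [SIMD2<Float>]         // per-pixel motion in pixels, current -> previous

    // Edge-clamped lookup.
    func sample(_ x: Int, _ y: Int) -> Float {
        let cx = min(max(x, 0), width - 1)
        let cy = min(max(y, 0), height - 1)
        return color[cy * width + cx]
    }
}

func temporalBlend(current: Frame, history: Frame,
                   motionCompensated: Bool, alpha: Float = 0.1) -> [Float] {
    var out = [Float](repeating: 0, count: current.width * current.height)
    for y in 0..<current.height {
        for x in 0..<current.width {
            let i = y * current.width + x
            // Without motion compensation the history is read at the same
            // pixel, so moving content ghosts; with it, the sample follows
            // the motion vector back to where the surface was last frame.
            let m = motionCompensated ? current.motion[i] : SIMD2<Float>(0, 0)
            let prev = history.sample(x + Int(m.x.rounded()),
                                      y + Int(m.y.rounded()))
            out[i] = alpha * current.color[i] + (1 - alpha) * prev
        }
    }
    return out
}
```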
 
The way they described it in their video is different from the documentation. There they said spatial upscaling with TAA, with no mention of temporal upscaling.
The way they describe it in the documentation is more expansive than a keynote video, as one would expect. Like I said, there would be no reason to mention TAA as a requirement for spatial upscaling at all. It's entirely reasonable to say "Well, we'll see when this is explained in more detail, I guess" rather than steadfastly declaring what the implementation is based on a promotional video intended for laymen.
 