DavidGraham
Veteran
CPU scaling is excellent, GPU scaling not so much, oh and the 3080Ti is several times faster than the M1 Ultra.
Hopefully Apple will make some improvements on the GPU side, and provide better optimization information to developers. They seem to be doing a lot better on the CPU side, and the GPU is behind.
They will, but you can be certain that most of Apple's near-future GPU improvements will be focused on pro app performance. Apple are not architecting M1 (and probably not M2) to run DirectX, OpenGL, or Vulkan better.
Isn't that a matter of committing a lot of silicon?
The 3080 Ti is a $1,200 card using more die area and transistors than whatever cores are in the M1 Ultra?
Seems like the market for the Mac Studio would be far different from the market for GPUs like the 3080 Ti, which is aimed at hardcore gaming, crypto, some high-end rendering, etc.?
I'm not necessarily saying they should try to compete with a 3090 Ti, though they do have a Mac Pro coming, and it will only be successful if it has very strong GPU performance. What they do need is better scaling of GPU performance as you jump between M1, M1 Pro, and M1 Ultra.
I agree, but Apple aren't Nvidia or AMD, and they don't need to provide an array of GPU performance options, some of them arguably unnecessary.
That's not to say Macs aren't great for a variety of games. I much prefer to play games like Stellaris, Civilization, and Rome Remastered on the Mac, simply because it's much easier to game, sleep, wake, resume, and switch to other apps on macOS than on Windows, and my Windows laptop is an ASUS Zephyrus G14. It's the ease with which Apple lets you slide between apps, and macOS seems much "happier" sleeping and resuming games than Windows, where that is really hit and miss for reasons I don't understand.
Looks like we'll have another fully native M1 game to compare performance against the PC soon. Today's WWDC revealed that RE:Village is coming to the Mac later this year, utilizing the Metal 3 API and taking advantage of Apple's version of something akin to FSR 2.0, so that an M1 Air can get 60 fps at "1080p" using temporal upscaling, and the Mac Studio can run native 4K at 60+. Still somewhat annoying that there is no DLSS/FSR 2.0 in RE:Village on PC except for the Interlaced mode, which, while certainly better than past implementations, is pretty weak; it would be interesting to compare how Apple's implementation fares in the same game otherwise.
I don't think they talked about the performance whatsoever. They only talked about resolutions and their upscaling tech. From what I saw, the clips they showed of No Man's Sky and RE8 were really bad, and the image quality was not good, tbh. RE8 looked like it was running on lower settings. Their upscaling tech is a spatial tech just like DLSS 1.0 and FSR 1.0.
Metal 3 also apparently has something akin to DirectStorage. It was just a presentation, though, so no real details atm.
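For anyone curious, Apple's docs do describe the feature: a "fast resource loading" API built around the MTLIO objects, which streams asset data from storage straight into GPU resources. A minimal sketch going by the documentation; the file URL and size are placeholders, and real code would overlap loads with rendering instead of blocking:

```swift
import Metal
import Foundation

enum LoadError: Error { case allocationFailed }

// Sketch of Metal 3 fast resource loading (the DirectStorage-like feature).
// API names follow Apple's MTLIO documentation; everything else here
// (paths, sizes, blocking wait) is just for illustration.
func loadAssetToGPU(device: MTLDevice, url: URL, byteCount: Int) throws -> MTLBuffer {
    // A dedicated I/O queue moves data from storage to GPU resources
    // without staging through application memory.
    let queueDesc = MTLIOCommandQueueDescriptor()
    queueDesc.type = .concurrent
    let ioQueue = try device.makeIOCommandQueue(descriptor: queueDesc)

    // Open the asset file through Metal's I/O layer.
    let fileHandle = try device.makeIOHandle(url: url)

    // Destination buffer lives in GPU-private memory.
    guard let buffer = device.makeBuffer(length: byteCount, options: .storageModePrivate) else {
        throw LoadError.allocationFailed
    }

    // Encode the load and block on it; a real renderer would instead keep
    // many loads in flight and use completion handlers.
    let ioCommands = ioQueue.makeCommandBuffer()
    ioCommands.load(buffer, offset: 0, size: byteCount,
                    sourceHandle: fileHandle, sourceHandleOffset: 0)
    ioCommands.commit()
    ioCommands.waitUntilCompleted()
    return buffer
}
```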
They only talked about resolutions and their upscaling tech. From what I saw, the clips they showed of No Man's Sky and RE8 were really bad, and the image quality was not good, tbh.
Their upscaling tech is a spatial tech just like DLSS 1.0 and FSR 1.0.
DLSS 1 was a temporal solution, not spatial.
No, it was a spatial solution, and that's why Nvidia changed it in 2.0 to a temporal solution.
For GPU junkies, many of you will recognize this as a similar strategy to how NVIDIA designed DLSS 1.0, which was all about spatial upscaling by using pre-trained, game-specific neural network models. DLSS 1.0 was ultimately a failure – it couldn’t consistently produce acceptable results and temporal artifacting was all too common. It wasn’t until NVIDIA introduced DLSS 2.0, a significantly expanded version of the technology that integrated motion vector data (essentially creating Temporal AA on steroids), that they finally got DLSS as we know it in working order.
Completely different from what my mind remembers; I always thought of it as a TAA replacement, including the temporal component.
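The distinction being argued here is easy to make concrete. A toy sketch of my own (not NVIDIA's or Apple's actual algorithm): a spatial upscaler only ever sees the current frame, while a temporal one also reprojects the previous output along per-pixel motion vectors and blends it in, accumulating detail across frames. The resolution change itself is omitted to keep it short:

```swift
// Toy temporal accumulation, the core idea behind DLSS 2.0 / FSR 2.0-style
// upscalers. Frame layout and blend factor are illustrative choices only.
struct Frame {
    var color: [SIMD3<Float>]   // flattened pixel colors, row-major
    var motion: [SIMD2<Float>]  // per-pixel motion vectors, in pixels
    let width: Int
    let height: Int
}

func temporalBlend(current: Frame, history: Frame, alpha: Float = 0.1) -> [SIMD3<Float>] {
    var output = current.color
    for y in 0..<current.height {
        for x in 0..<current.width {
            let i = y * current.width + x
            // Follow the motion vector back to this pixel's previous position.
            let px = min(max(Int(Float(x) - current.motion[i].x), 0), history.width - 1)
            let py = min(max(Int(Float(y) - current.motion[i].y), 0), history.height - 1)
            let reprojected = history.color[py * history.width + px]
            // Exponential blend: mostly accumulated history, a little new sample.
            // A spatial upscaler has no history term at all; that is the
            // whole difference under discussion.
            output[i] = alpha * current.color[i] + (1 - alpha) * reprojected
        }
    }
    return output
}
```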
Their upscaling tech is a spatial tech
Ehm, the API documentation says... both?
The way they described it in their video is different from the documentation. There they said spatial upscaling with TAA, with no mention of temporal upscaling.
The way they describe it in the documentation is more expansive than a keynote video, as one would expect. Like I said, there would be no reason to mention TAA as a requirement for spatial upscaling at all. It's entirely reasonable to say "well, we'll see when this is explained in more detail," rather than steadfastly declaring what the implementation is based on a promotional video intended for a layman.
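For what it's worth, the MetalFX documentation does expose both paths as separate objects. A sketch with placeholder resolutions (not a full renderer; the temporal path additionally needs a jittered projection and per-frame texture bindings before encoding):

```swift
import Metal
import MetalFX

// Sketch based on the MetalFX docs (macOS 13+): the framework offers BOTH
// a spatial scaler and a temporal scaler. Resolutions are placeholders.
func makeScalers(device: MTLDevice) -> (MTLFXSpatialScaler?, MTLFXTemporalScaler?) {
    // Spatial path: consumes only the current color frame (FSR 1.0-style).
    let spatialDesc = MTLFXSpatialScalerDescriptor()
    spatialDesc.inputWidth = 1920
    spatialDesc.inputHeight = 1080
    spatialDesc.outputWidth = 3840
    spatialDesc.outputHeight = 2160
    spatialDesc.colorTextureFormat = .rgba16Float
    spatialDesc.outputTextureFormat = .rgba16Float

    // Temporal path: additionally requires depth and motion vectors,
    // which is what makes it comparable to DLSS 2.0 / FSR 2.0.
    let temporalDesc = MTLFXTemporalScalerDescriptor()
    temporalDesc.inputWidth = 1920
    temporalDesc.inputHeight = 1080
    temporalDesc.outputWidth = 3840
    temporalDesc.outputHeight = 2160
    temporalDesc.colorTextureFormat = .rgba16Float
    temporalDesc.depthTextureFormat = .depth32Float
    temporalDesc.motionTextureFormat = .rg16Float
    temporalDesc.outputTextureFormat = .rgba16Float

    return (spatialDesc.makeSpatialScaler(device: device),
            temporalDesc.makeTemporalScaler(device: device))
}
```

Per the docs, each frame you then bind colorTexture/outputTexture (plus depthTexture, motionTexture, and the jitter offsets on the temporal scaler) and call encode(commandBuffer:) on the scaler.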