Apple is an existential threat to the PC

They may be the most disliked, but they also have the strongest cult following to balance it out

This reminds me of the '90s. When reviewing Apple hardware, anything that doesn't use a benchmark showing Apple hardware in the absolute best light should be ignored. Because, you know, people never use software or business applications that aren't optimized for the latest Apple hardware. :p

Thankfully, at least the M1 and macOS aren't as bad now as Apple's CPUs and Mac OS were back in the '90s. But the attitude that only the benchmarks that perform best on Apple hardware should be used still appears to be there. :p

It's still a relatively new CPU, so it's fine if sites benchmark applications that haven't been ported and optimized for the M1 yet; there are still people who use those applications and will continue to use them until an M1-optimized version exists.

Obviously, as applications get ported and optimized, the earlier reviews will become less relevant, but it would be a disservice for sites to hold off on showing performance in some applications just because a ported and optimized version might exist at some nebulous point in the future.

Regards,
SB

Reasonable benchmarks can and should* include things Apple chips have problems with.
*Those things do exist in apps people use; thus, they should be included.

These are all good points.

We should also include widely used apps that are not particularly well optimised for macOS/M1 in reviews of the M1. But the motivation for including them, and a disclaimer that performance tanks due to a lack of software optimisation, should be stated clearly. Conversely, in reviews of x86/Nvidia/AMD products, we should include widely used apps that are typically more performant on macOS/M1. Consistency requires that the treatment be applied in both directions.

Secondly, my main issue with the LTT review is the lack of distinction between hardware limitations and software limitations. If an RTX 3080 performs like an RTX 3050 due to crappy drivers or unoptimised software, then a review should clearly state "due to software bottlenecks, the RTX 3080 and 3050 perform identically". Making a sweeping generalised statement like "The RTX 3080 is equivalent in hardware performance to an RTX 3050 because our benchmarks say so" is factually incorrect. Proper review sites like LTT, GN, HUB, DF, and so on have the tools and indeed the expertise to make this distinction, and so they should. I shouldn't have to refer to an Apple fanboy channel like MaxTech for some basic facts and clarity when it comes to hardware performance.
 
Blender (alpha build) shows a 5x improvement on the M1 Max, bringing performance to parity with a 95W RTX 3080 Mobile.
Lame testing: they are using Blender on both the GPU and the CPU together, so it's no longer a GPU vs GPU comparison. Also, he didn't test the RTX 3080M with OptiX, which accelerates rendering through the RT cores, depriving the 3080M of a huge performance boost.
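
For what it's worth, here's a minimal sketch of how you'd pin Cycles to the GPU only and switch the backend to OptiX from Blender's Python console, so the comparison isn't muddied by CPU assistance. This is written against the 3.x API; treat the exact property names as an assumption, since they can shift between versions:

```python
import bpy

# Select the Cycles compute backend: "OPTIX" to use the RT cores, "CUDA" otherwise.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"
prefs.get_devices()  # refresh the device list

# Enable only GPU devices so the CPU doesn't contribute to the render.
for dev in prefs.devices:
    dev.use = (dev.type != "CPU")

# Tell the scene to render on the GPU and kick off a still render.
bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)
```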
 
Lame testing: they are using Blender on both the GPU and the CPU together, so it's no longer a GPU vs GPU comparison.

So it's an M1 Max vs i7-11800 + 3080M (230W and 95W) comparison. Not lame at all, and actually quite interesting.

Also, he didn't test the RTX 3080M with OptiX, which accelerates rendering through the RT cores, depriving the 3080M of a huge performance boost.

If only there were channels dedicated to testing with all these options, exploring the full capabilities of the underlying hardware instead of kneecapping performance with crap software settings. :-?

Actually, could you post what kind of performance uplift the 3080M gets when it uses OptiX in Blender? I'm quite curious.
 
Actually, could you post what kind of performance uplift the 3080M gets when it uses OptiX in Blender? I'm quite curious.
It seems to be quite significant, a little less than 2x: https://techgage.com/article/nvidia-geforce-rtx-3080-rendering-performance/
[Chart: Blender 2.90 Classroom render time, CUDA vs OptiX, RTX 3080]


But did the guy mention he didn't use OptiX?

What I find more concerning is that he compared Blender 3.1 vs 2.9.
 
It seems to be quite significant, a little less than 2x: https://techgage.com/article/nvidia-geforce-rtx-3080-rendering-performance/
Actually, those results have no data for the 3080M. The mobile variant has notably fewer CUDA/RT cores than a "real" 3080, at notably lower clock speeds. Also, in a lot of cases, the 3080M performs worse than the 3070M thanks to thermal limits, obviously depending on the laptop design and what TDP the manufacturer chose to configure.

I only now realized there's a 3080 Max-Q variant, which is basically the 3070 desktop card at lower clocks, and then there's a 3080M which has fewer CUDA/RT cores than a desktop 3070 but at higher clocks. Hmm...
 
Wait, did someone use Blender 3.x on one platform and 2.9x on the other?
3.x is leaps and bounds faster on GPUs no matter the brand or API (except for OpenCL), IIRC.
Here's one example:
https://www.phoronix.com/scan.php?page=article&item=blender-30&num=2
How did you end up quoting the wrong person? :)

As for the OptiX vs CUDA question, the version was given in the comments and I'm unable to tell the difference from looking at the video. Anyway, the comparison looks dubious.

Here is one interesting comparison by LTT (did I say I would never watch them again? :D).
It looks like larger render models do much better against the 3060M (IIRC that's the one in the Zephyrus M16).
 
Actually, those results have no data for the 3080M. The mobile variant has notably fewer CUDA/RT cores than a "real" 3080, at notably lower clock speeds. Also, in a lot of cases, the 3080M performs worse than the 3070M thanks to thermal limits, obviously depending on the laptop design and what TDP the manufacturer chose to configure.

I only now realized there's a 3080 Max-Q variant, which is basically the 3070 desktop card at lower clocks, and then there's a 3080M which has fewer CUDA/RT cores than a desktop 3070 but at higher clocks. Hmm...
I was not trying to infer results for the 3080M from the 3080 results, merely trying to get a hint of CUDA vs OptiX, but that might indeed be meaningless.
 
Actually, could you post what kind of performance uplift the 3080M gets when it uses OptiX in Blender? I'm quite curious.
A 2060 can double its performance by using OptiX compared to CUDA, and a 3060 can do three times better, so it's significant.

[Chart: Blender 3.0.0 Cycles OptiX render performance, Still Life scene]


Also funny to discover he is using version 2.9 for the 3080M while using version 3 on the Mac; the improvements from version 3 alone would make the 3080M twice as fast, not to mention OptiX.

Here the 3060 is twice as fast in version 3 compared to 2.9.

[Chart: Blender 3.0.0 vs 2.93 Cycles OptiX performance, BMW scene]


https://techgage.com/article/blender-3-0-gpu-performance/

Just shows how most of these YouTubers are amateurs and don't really know what they are doing.
 
I only now realized there's a 3080 Max-Q variant, which is basically the 3070 desktop card at lower clocks, and then there's a 3080M which has fewer CUDA/RT cores than a desktop 3070 but at higher clocks. Hmm...
The 3080M is often limited by two things: an 80W power limit and thermals. Its performance profile varies wildly according to laptop design, and it can drop below 2060-level performance in many cases, so it's not that aspirational a figure to begin with.
 
A 2060 can double its performance by using OptiX compared to CUDA, and a 3060 can do three times better, so it's significant.

Also funny to discover he is using version 2.9 for the 3080M while using version 3 on the Mac; the improvements from version 3 alone would make the 3080M twice as fast, not to mention OptiX.

Here the 3060 is twice as fast in version 3 compared to 2.9.

https://techgage.com/article/blender-3-0-gpu-performance/

Just shows how most of these YouTubers are amateurs and don't really know what they are doing.

Good info.

The Max Tech review with the 3.1 alpha build clocks the M1 Max at approximately 42 seconds for the BMW Blender render. On version 3.0 it was 3 minutes 20 seconds.

It sounds like we need a proper tech journalist to do some benchmarks once version 3.1 is out of alpha/beta stage.
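
Until then, for anyone who wants to sanity-check this at home, here's a rough sketch of how one could time the same .blend file across two Blender builds and both backends from a script. The build paths and the bmw27_gpu.blend filename are placeholders, it assumes the --cycles-device override that recent Blender builds accept on the command line, and the timings include startup and scene load, so treat them as rough wall-clock numbers:

```python
import pathlib
import subprocess
import time

# Placeholder paths: point these at the builds and scene you want to compare.
BLENDER_BUILDS = {
    "2.93": "/opt/blender-2.93/blender",
    "3.1":  "/opt/blender-3.1/blender",
}
SCENE = pathlib.Path("bmw27_gpu.blend")  # the standard BMW demo scene

for version, binary in BLENDER_BUILDS.items():
    for device in ("CUDA", "OPTIX"):
        start = time.perf_counter()
        # -b = headless, -f 1 = render frame 1; args after "--" are read by Cycles.
        subprocess.run(
            [binary, "-b", str(SCENE), "-f", "1", "--", "--cycles-device", device],
            check=True,
            capture_output=True,
        )
        elapsed = time.perf_counter() - start
        print(f"Blender {version} / {device}: {elapsed:.1f} s")
```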
 
The Max Tech review with the 3.1 alpha build clocks the M1 Max at approximately 42 seconds for the BMW Blender render. On version 3.0 it was 3 minutes 20 seconds.
The 3080M scored 30 seconds in that benchmark.

He is running the projects at a very small render resolution, likely 720p, as the final render occupies a small part of the screen. For comparison, running the BMW scene at 1440p, the desktop 3060 went from 241 seconds in version 2.9 with CUDA, to 177 seconds in version 3 with CUDA, to 93 seconds with OptiX. The desktop 3080 went from 122 seconds in version 2.9 with CUDA, to 87 seconds in version 3 with CUDA, to 56 seconds with OptiX.
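
Working those numbers out (my arithmetic from the figures above): for the 3060 that's roughly 241/177 ≈ 1.4x from the version bump alone and 177/93 ≈ 1.9x more from OptiX, about 2.6x overall; for the 3080, 122/87 ≈ 1.4x and 87/56 ≈ 1.6x, about 2.2x overall.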

It's an informative article, but I don't agree with the synthetic benchmark remark; it's just one very flawed test!
 
A 2060 can double its performance by using OptiX compared to CUDA, and a 3060 can do three times better, so it's significant.

Also funny to discover he is using version 2.9 for the 3080M while using version 3 on the Mac; the improvements from version 3 alone would make the 3080M twice as fast, not to mention OptiX.

Here the 3060 is twice as fast in version 3 compared to 2.9.

https://techgage.com/article/blender-3-0-gpu-performance/

Just shows how most of these YouTubers are amateurs and don't really know what they are doing.
As @Albuquerque rightly pointed out, you can't directly translate these results to the mobile variants. Surely 3.x and OptiX bring a speedup versus 2.9 CUDA, but you can't quantify it using these tests.

I provided what I think is a better comparison (vs 3060M) just above.
 
Apple sell more high-end laptops ... than Nvidia sell high-end graphics cards.
Yes.
(probably to content creators)
No ... god no. They don't have the numbers. There are orders of magnitude more college kids with an Apple laptop than professional content creators.
I guess this is the reason why Apple continues to develop pro software like Logic, Final Cut and Motion, and also why Apple felt compelled to develop the Afterburner card for the Mac Pro, which the new M1 Macs far exceed in terms of performance.

Pros are willing to spend a lot on hardware and software because they can pass the cost on to their customers.
They also ignored the Mac Pro for ages ... pros will make do when forced to for a while; switching workflows is too much of a headache.

Pros are for halo, normies are for revenue.
 
Can we have some insight into how on earth Apple achieved this? I mean, Intel/AMD could go wider as well, but they would be nowhere close to what Apple is doing.

Combine it with N5 and it's very close ... but the combination of just buying die area for lower power consumption and using the cutting-edge node would mean they'd have to be able to sell at the same margins as Apple, and they can't (not for consumer products anyway).

In large part because Microsoft has been a poor steward of the PC ecosystem.
 