Apple is an existential threat to the PC

The M1 Ultra seems to start at 50,500 kr (over $5,000) here in Sweden; for that you get 64 GB of RAM, a 48-core GPU, and a 1 TB SSD. In the US it's about $4,000 for the base M1 Ultra Mac Studio model. For the maxed-out 64-core GPU variant we have to pay close to $10,000, and that is supposedly going to match RTX 3080 performance. If I were willing to spend over $10,000 on a rendering machine, I wouldn't know what hardware would be a better fit; it probably depends on what you're going to do.

Also, the SE 2022/SE 3 was announced: it's the iPhone 8/SE 2020 with an A15 chip (no other differences) at $399 USD; in Sweden it's 5,700 kr, which translates to about $600.
 
I just sold my old Mac Pro in anticipation of this. I'll be honest, I have no use for the M1 Ultra and might just get the baseline model with the M1 Max.

Did you have a 5,1 or the new cheesegrater?


The M1 Ultra seems to start at 50,500 kr (over $5,000) here in Sweden; for that you get 64 GB of RAM, a 48-core GPU, and a 1 TB SSD. In the US it's about $4,000 for the base M1 Ultra Mac Studio model. For the maxed-out 64-core GPU variant we have to pay close to $10,000, and that is supposedly going to match RTX 3080 performance. If I were willing to spend over $10,000 on a rendering machine, I wouldn't know what hardware would be a better fit; it probably depends on what you're going to do.

That's if you max out the storage, which is always foolish (unless you absolutely need it).

To get the maximum 7.4 GB/s of storage performance, you only need the 4 TB SSD. Maxing everything and keeping storage at 4 TB is $6,800 US. Otherwise, save the money, stick with 1 TB and just buy some external storage; heck, anyone who needs more than 1 TB of data should be storing things on a RAID array or a NAS. A maxed M1 Ultra config with 1 TB of storage is $5,800 US. Oh, and it seems like you can apply the Education Discount to shave 10% off the price.
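For what it's worth, here's the arithmetic on those two configurations with the 10% education discount applied; this is just a quick sketch using the figures above, not an official price list.

```python
# Quick arithmetic on the Mac Studio (M1 Ultra) prices quoted above.
# The flat 10% education discount is an assumption, not confirmed pricing.
EDU_DISCOUNT = 0.10

configs = {
    "maxed out, 4 TB SSD": 6800,  # USD, figure from the paragraph above
    "maxed out, 1 TB SSD": 5800,  # USD, figure from the paragraph above
}

for name, list_price in configs.items():
    edu_price = list_price * (1 - EDU_DISCOUNT)
    print(f"{name}: ${list_price:,} list, ~${edu_price:,.0f} with the EDU discount")
```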

The new iPad Air looks bonkers too. M1 and Centre Stage are both good upgrades.
 




That's if you max out the storage, which is always foolish (unless you absolutely need it).

To get the maximum 7.4 GB/s of storage performance, you only need the 4 TB SSD. Maxing everything and keeping storage at 4 TB is $6,800 US. Otherwise, save the money, stick with 1 TB and just buy some external storage; heck, anyone who needs more than 1 TB of data should be storing things on a RAID array or a NAS. A maxed M1 Ultra config with 1 TB of storage is $5,800 US. Oh, and it seems like you can apply the Education Discount to shave 10% off the price.

The new iPad Air looks bonkers too. M1 and Centre Stage are both good upgrades.

That's more sane, but still $7,000, or about 85,000 kr (over $8,000), for the 1 TB, 64-core GPU Ultra. Nothing for the ordinary computer user to consider, not any of those models. Server-grade hardware at lower wattage, perhaps.

The iPad Air 2022, I missed that one; it's an 8-core SoC it seems, with 8 GB of RAM and a high-resolution IPS panel. 600 euro for the base model, much better than the $600 SE 2022, which amounts to a scam.
 
I love what Apple is doing. I don't particularly care where they fall short on marketing claims. Really curious to see how well the MCM design scales; it won't be 2x the M1 Max. Their pricing is still pretty nuts for certain things like storage, but I'd love to see someone play around with a decked-out $10K Mac Studio that has the full 128 GB of RAM. The GPU having full access to all of that memory with 800 GB/s of bandwidth is pretty cool. Really curious to see when they adopt ray tracing, because I think it'll be necessary for any professional rendering/engineering/architecture apps soon, if it isn't already.
 
Did you have a 5,1 or the new cheesegrater?

The old 5,1 with VEGA FE.


The M1 Ultra seems to start at 50,500 kr (over $5,000) here in Sweden; for that you get 64 GB of RAM, a 48-core GPU, and a 1 TB SSD. In the US it's about $4,000 for the base M1 Ultra Mac Studio model. For the maxed-out 64-core GPU variant we have to pay close to $10,000, and that is supposedly going to match RTX 3080 performance. If I were willing to spend over $10,000 on a rendering machine, I wouldn't know what hardware would be a better fit; it probably depends on what you're going to do.

Also, the SE 2022/SE 3 was announced: it's the iPhone 8/SE 2020 with an A15 chip (no other differences) at $399 USD; in Sweden it's 5,700 kr, which translates to about $600.

Remember US prices are without VAT.

Would you ever consider buying anything Apple related though?
 
@Pressure is saying that SSD performance scales up with storage capacity on the Apple T2, M1, and M1 Pro/Max chips (and all Apple A-series SoCs). You would need either a 4 TB or 8 TB MacBook Pro to hit peak NAND storage performance on M1 Pro/Max.
Just to be clear, that's how current-gen SLC/MLC/TLC/QLC multichannel controller-based flash storage works on every platform. An individual flash chip by itself can't sustain those speeds; running multiple of them in parallel behind a sufficiently capable flash controller is what makes that throughput possible.
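As a rough illustration of why capacity and sequential throughput track each other on controller-based flash: the controller stripes data across however many NAND dies that capacity point ships with, so peak sequential speed is roughly the per-die speed times the number of dies, capped by the host link. The per-die figure and die counts below are made-up illustration values, not the actual layout of Apple's (or anyone's) SSDs.

```python
# Toy model of sequential throughput for a multichannel flash controller.
# Per-die throughput and die counts are illustrative assumptions only.
PER_DIE_MB_S = 950        # assumed sequential throughput of one NAND die/package
HOST_LINK_MB_S = 7_400    # the ~7.4 GB/s ceiling quoted earlier in the thread

def peak_sequential_mb_s(num_dies: int) -> int:
    """Throughput is limited by whichever is slower: the striped dies or the host link."""
    return min(num_dies * PER_DIE_MB_S, HOST_LINK_MB_S)

for capacity_tb, dies in [(1, 2), (2, 4), (4, 8), (8, 8)]:
    print(f"{capacity_tb} TB ({dies} dies): ~{peak_sequential_mb_s(dies):,} MB/s")
```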

Example for the Sammy 980Pro: Samsung 980 PRO PCIe 4.0 NVMe SSD Review - StorageReview.com
[Chart: Samsung 980 Pro Intelligent TurboWrite write performance, via StorageReview]


Also keep in mind: reads on traditional consumer flash are always faster than writes. Did Apple specifically state whether that throughput number was for reads or writes? The chart above is very specifically about write speeds and how they are affected by caching, i.e. using a write-combining method to reserve a block of multi-level-cell flash and treat it as single-level cell, where all available layers store a single charge bit-state.
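To make the pSLC-cache point concrete, here's a minimal model: writes land at full speed while the SLC-mode region has room, then fall back to native TLC speed once it fills. The cache size and speeds are invented illustration values, not the 980 Pro's real figures.

```python
# Minimal model of a pSLC write cache on a TLC drive.
# Cache size and speeds are invented illustration values, not measurements.
SLC_CACHE_GB = 100       # assumed dynamic pSLC cache
SLC_SPEED_MB_S = 5_000   # assumed write speed while the cache has room
TLC_SPEED_MB_S = 1_500   # assumed native TLC write speed once the cache is full

def sustained_write_time_s(total_gb: float) -> float:
    """Time to write total_gb sequentially, ignoring background cache flushing."""
    cached_gb = min(total_gb, SLC_CACHE_GB)
    direct_gb = total_gb - cached_gb
    return cached_gb * 1024 / SLC_SPEED_MB_S + direct_gb * 1024 / TLC_SPEED_MB_S

for gb in (50, 100, 500):
    t = sustained_write_time_s(gb)
    print(f"{gb} GB write: {t:.0f} s, effective {gb * 1024 / t:,.0f} MB/s")
```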

For what it's worth, the Sammy 980 Pro NVMe 1 TB drive on a PC platform could already exceed 7 GB/s back when it was released in 2020. Way to catch up, Apple ;)
 
For what it's worth, the Sammy 980 Pro NVMe 1 TB drive on a PC platform could already exceed 7 GB/s back when it was released in 2020. Way to catch up, Apple ;)
Yeah, one has to be careful when reading benchmarketing from companies. Many of the comparisons Apple (and Apple-specialized sites too) make are against previous Intel-based Apple products, many of which were lagging behind the competition long before M1-based Macs were released.
 
Apple won't be any danger if their hardware runs MS Windows natively. Just another hardware vendor in the PC space.
They wouldn't be PCs; they would be Apple Windows Macs.

It was never that I thought, or think, that Apple or Arm has some magnificent lead in performance or efficiency that they didn't buy with process and mm². Windows running on M1 is irrelevant to me.

Apple's delivery of a more user-friendly, more reliable, and more all-encompassing ecosystem, one not held back by three decades of backwards compatibility, together with vertical integration, will increase their market and margin share yet further and bleed PC hardware of investment.
 

They wouldn't be PCs; they would be Apple Windows Macs.

Apple computers are PCs: Personal Computers. If Apple hardware can run Windows natively, one could consider it for specific purposes, maybe even regular Windows games, if they ever catch up in the GPU space and in raw CPU performance that doesn't rely on DSPs and other hardware acceleration. It doesn't matter to me who is providing the hardware, be it NV, Apple, evil Intel, or AMD, etc. I wouldn't mind Apple enabling native Windows support on their Arm hardware; it would be nice for long-battery-life laptops, perhaps.
 
Yup. The ability to easily interconnect two semiconductors and have a flat 2x boost - with pretty much no overhead
The overhead is likely cost, but that simply doesn't matter to Apple. They have the volume and margins to sell it anyway. For well-programmed, embarrassingly parallel workloads, the interconnect really isn't that important, and there are limits to how much people will pay to run bad software faster on a PC, limits Apple can break.

Large silicon interposers are plain expensive, and small silicon interposers thrown on top to stitch the edges of dies together would require even more expensive extra processing steps. Too rich for non-Apple users' blood.

PS. The problem with scaling parallel rendering has always been a bad-software problem. I think it's likely Apple will solve it more with better software than with just the interconnect bandwidth.
 
The more I'm thinking about the SSD thing, the more it confuses me.

What storage provider / device must they be using to only get "full bandwidth" at the 4TB size? Apple doesn't usually stumble blindly into these decisions; what "upside" do you suppose they prioritized here to take the slower SSD speeds?
 
The more I'm thinking about the SSD thing, the more it confuses me.

What storage provider / device must they be using to only get "full bandwidth" at the 4TB size? Apple doesn't usually stumble blindly into these decisions; what "upside" do you suppose they prioritized here to take the slower SSD speeds?
They don’t use all available channels on lower capacities I guess.
 
They don’t use all available channels on lower capacities I guess.
Right, that would be a decision made by the SSD manufacturer, unless of course Apple is building their own? Even then, I would suspect they're using an existing (e.g. non-Apple) flash controller chip. I dunno, it just seems a strange decision to me. I guess it's another way to upsell the size?
 
The overhead is likely cost, but that simply doesn't matter to Apple.
Performance overhead. Previously, no amount of money could 2x any two processing units and get anywhere near 2x the performance, because there was always a heavy performance overhead.

UltraFusion's interposer has 10,000 signal interconnects and provides 2.5 TB/s of low-latency, inter-processor bandwidth. That's new. Why hasn't AMD, Intel, or anybody else done this? No idea; ask them.

The barrier has never been one of cost; it's been about the loss of efficiency as you parallelize operations.

PS. The problem with scaling parallel rendering has always been a bad-software problem. I think it's likely Apple will solve it more with better software than with just the interconnect bandwidth.
It's not a software problem; it's 100% a hardware problem. Software has been able to effectively abstract and parallelize for decades; it's the hit induced by hardware, as the distinct processing cells work together inefficiently, that has been the issue.
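Just to put the UltraFusion figures quoted above into perspective, here's the per-signal arithmetic. It assumes decimal units and that the 2.5 TB/s is the aggregate across all 10,000 signals; Apple hasn't published that breakdown.

```python
# Back-of-the-envelope arithmetic on the UltraFusion figures quoted above
# (10,000 signals, 2.5 TB/s). Assumes the 2.5 TB/s is aggregate across all
# signals and uses decimal units; Apple hasn't published the breakdown.
AGGREGATE_BYTES_PER_S = 2.5e12
SIGNAL_COUNT = 10_000

per_signal_gbit_s = AGGREGATE_BYTES_PER_S * 8 / SIGNAL_COUNT / 1e9
print(f"~{per_signal_gbit_s:.0f} Gbit/s per signal")  # ~2 Gbit/s

# For comparison, a single PCIe 4.0 lane signals at 16 GT/s; the per-wire rate
# here is modest, and the bandwidth comes from the sheer number of wires the
# silicon interposer makes routable between the two dies.
```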
 
Previously, no amount of money could 2x any two processing units and get anywhere near 2x the performance, because there was always a heavy performance overhead.

If you went to NVIDIA and paid them a billion dollars to put some reticle-busting GPUs on a wafer-size silicon interposer, they'd have called you insane and taken your money. AMD uses an organic substrate instead of interposers because they can't justify the cost of interposers.

Hell, if you pay enough money to a fab they will just let you use the entire wafer and let you expose parts of the wafer at reduced density so you can stitch stuff together that way. See Cerebras: 20 petabytes/sec of interconnect. Hell, maybe that is what Apple has done too; leaks can be wrong. It's expensive either way.
It's not a software problem; it's 100% a hardware problem.

Sort-middle at the transformed-triangle level is bad software. Sort-first at the hierarchical scene description level is good software.

Even ignoring the dual GPUs, the good way to implement something like Nanite on a tiled GPU is to render tile by tile inside the engine; not doing that is bad. Once you do it tile by tile, you can parallelize it across multiple tilers with minimal interconnect bandwidth needed.
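A sketch of the tile-by-tile split described above: each GPU gets a disjoint set of screen tiles, so the only cross-GPU traffic left is compositing the finished regions. The frame size, tile size, and round-robin assignment are illustrative, not how any particular engine or API distributes work.

```python
# Sketch of splitting a frame's screen-space tiles across two GPUs.
# Frame size, tile size, and the round-robin assignment are illustrative only.
FRAME_W, FRAME_H = 3840, 2160
TILE = 32  # assumed tile size in pixels

def tiles(width: int, height: int, tile: int):
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y)

def split_across_gpus(num_gpus: int = 2):
    """Give each GPU a disjoint set of tiles; the only cross-GPU traffic left
    is compositing the independently rendered regions."""
    buckets = [[] for _ in range(num_gpus)]
    for i, tile_origin in enumerate(tiles(FRAME_W, FRAME_H, TILE)):
        buckets[i % num_gpus].append(tile_origin)
    return buckets

gpu_work = split_across_gpus()
print([len(b) for b in gpu_work])  # roughly equal tile counts per GPU
```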
 
If you went to NVIDIA and paid them a billion dollars to put some reticle-busting GPUs on a wafer-size silicon interposer, they'd have called you insane and taken your money. AMD uses an organic substrate instead of interposers because they can't justify the cost of interposers.

Both AMD and Nvidia make components for server farms, costing almost as much as the most expensive Mac Studio configuration, that do not offer this. So it's not about money, or about the willingness of customers to pay more for performance.

Apple are putting this in a product that isn't for server budgets; this is for professionals for whom this expense is very accessible.

Even ignoring the dual GPUs, the good way to implement something like Nanite on a tiled GPU is to render tile by tile inside the engine; not doing that is bad. Once you do it tile by tile, you can parallelize it across multiple tilers with minimal interconnect bandwidth needed.

The software has to be able to parallelize, and software has been competent at this almost since the introduction of the Core 2 Duo. The barrier to linearly scalable performance has been combining different elements together and having the performance boost scale proportionately: not adding a second processing element and only getting a 40% improvement because there is no low-latency, high-bandwidth mechanism for the distinct pieces of hardware to communicate quickly enough.
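To put a number on that "2x the hardware, only ~1.4x the speed" point, here's a minimal scaling model where each chip stalls on cross-chip communication for some fraction of its compute time. The overhead fractions are illustrative only, not measured values.

```python
# Minimal model of dual-chip scaling: doubling the hardware only helps as much
# as the chips avoid stalling on the inter-chip link. Overheads are illustrative.
def dual_chip_speedup(comm_overhead: float) -> float:
    """Speedup over one chip when each chip spends an extra `comm_overhead`
    fraction of its compute time waiting on the cross-chip link."""
    return 2.0 / (1.0 + comm_overhead)

for overhead in (0.0, 0.2, 0.45):
    print(f"{overhead:.0%} link stall -> {dual_chip_speedup(overhead):.2f}x")
```

With roughly 45% stall time you land right around the 1.4x described above; as the interconnect gets fast enough, the overhead term shrinks toward zero and you approach the full 2x.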
 