Apple M1 SoC

That's strange. Isn't the A15 already strong enough for sustained 60 fps Genshin? Like on the iPad mini?

Hmm, I wonder whether a future iPad mini with the M1 will be worth it, or whether it'll be just a small jump from the A15 due to the limited TDP setup.

The iPad is running it at a higher resolution, so that's something to consider (it impacts performance). There are no mobile devices that can run Genshin Impact at a true locked 60 fps, not even before throttling starts doing its thing. I tried Genshin Impact on a 13 Pro (not mine); after around 7 to 10 minutes throttling starts and it's basically unplayable when set to 60 fps, aside from the screen dimming.
It's the same for Android devices, unless you have one of those gaming-oriented phones with an active cooling fan and all that.

Still, we need to keep in mind that these phones are tiny and cooling will always be a problem. It's impressive for what it is.
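If anyone wants to reproduce this instead of taking my word for it, here's a rough sketch of how you'd catch throttling on any device that can run a script: run a fixed workload in a loop and log how long each pass takes. The workload here is just hypothetical busywork standing in for a game's frame load; when the per-pass time starts creeping up after a few minutes, that's the throttle kicking in.

```python
import time

def workload(n=2_000_000):
    # CPU-bound busywork standing in for a game's per-frame load
    s = 0
    for i in range(n):
        s += i * i
    return s

start = time.time()
while time.time() - start < 20 * 60:           # probe for 20 minutes
    t0 = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - t0
    # A rising 'pass took' number over the run is throttling in action.
    print(f"{time.time() - start:7.1f}s  pass took {elapsed:.3f}s")
    time.sleep(1)                               # small gap between passes
```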
 
Whoa! Thanks for the info. I was in the middle of deliberating between getting a 64GB or 256GB iPad mini with the A15 (since it has no microSD card slot), waiting for a new M1 mini, or getting the M1 Air.

I guess I need to wait for the Lenovo Legion Y700 to come to my region.
(I will mostly use the tablet for reading manga, but I would like the ability to play games when my cousins come to visit, etc. Currently I'm running Android on a Switch and... yes... very long load times. The performance is still good though.)
 

If you go Apple, then go for the Air with the M1 chip... I think it's more capable for gaming. Though if mobile gaming is your primary thing, then you probably want a gaming-oriented tablet/device. Performance is very good for most tablets for the first 10 or so minutes, then it falls off a steep cliff. Peak performance has little bearing on real-world gaming; no one intends to play these games for just a few minutes.
I have no idea about that Lenovo Y700 (didn't even know it existed). From that video it seems capable enough; you will always have to compromise on resolution on any mobile device if you play longer than 10-minute sessions.

If you have the budget, or want to spend that much on a mobile device, then maybe the Asus Flow Z13: it's a Windows tablet with a 3050 Ti and an Alder Lake Intel i9. Probably the best performance-specced tablet, but it's rather large (13''). It can act as an ultra-portable laptop as well. I would rather go for a laptop at that point, but OK.

For Play Store/App Store-only gaming it won't matter much between the Air and the Lenovo Y700; just pick your poison, I guess. Where it really comes down to for me personally is emulators (basically an unlimited number of games, and graphically up there), and then you want Android/Windows as the OS.

Edit: that Y700 is just 469 USD. It's equipped with the Snapdragon 870, which is known for its very good gaming capabilities. For that price I wouldn't consider any of the other options listed above :p

Also I want to say (my personal thoughts) that for mobile devices at least, it seems strategic to go for the mid-range products, for example that Y700. It may not be the fastest right now, or last as long performance-wise, but at almost half the initial cost you can enjoy it for some years and then buy a new mid-ranger when the time comes. This way you keep up with the more demanding tech/games that may come.
Yes, you can buy the more expensive/best right now for longevity, but the price is (much) higher as well. Saving almost half the money essentially gives you the same longevity, as you can just opt for a new tablet in a few years. The same effect, but you're on a totally new device: new battery, new screen, and all the other tech that tends to move on (Bluetooth, Wi-Fi, added features, etc.).
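Rough math behind that (the 469 USD figure is the Y700's price; the flagship price and the future mid-ranger price are just my assumptions):

```python
# Back-of-the-envelope math for the mid-ranger argument.
midranger_now = 469      # Y700 today
midranger_later = 469    # assume a similar mid-ranger in ~3 years
flagship = 900           # rough high-end tablet price today (assumption)

two_midrangers = midranger_now + midranger_later
print(f"Two mid-rangers over ~6 years: {two_midrangers} USD")
print(f"One flagship over ~6 years:    {flagship} USD")
# Similar total spend, but the second device brings a fresh battery,
# screen, radios, and an SoC that's current for the games of that time.
```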

Judging from the video you linked, the Y700 currently gives very good performance in the heaviest games at the highest settings (minus resolution), so I think it will provide you with enough performance for the coming years for that money.
 
@PSman1700 The Lenovo Y700 actually has a higher resolution than the iPad Air M1. I wonder whether the throttling of the M1 iPad Air is because Apple optimized it for burst performance instead of long sustained performance. Ideally the performance governor would automatically understand the workload and set the clock speeds and voltages accordingly, but I have never found a tablet or phone that properly switches performance governors automatically (other than "cheating" by switching to the maximum allowed thermal limit for benchmarks...).
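For what it's worth, on the Linux/Android side the governor is at least visible. A minimal sketch, assuming a rooted device with Python available, using the standard cpufreq sysfs nodes (iPadOS exposes nothing like this, which is why reviewers can only infer the behavior):

```python
from pathlib import Path

# Standard Linux cpufreq sysfs nodes; reading them on Android usually
# needs root, and writing always does.
cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

print("available:", (cpu0 / "scaling_available_governors").read_text().split())
print("current:  ", (cpu0 / "scaling_governor").read_text().strip())

# Switching to the 'performance' governor pins the CPU at max frequency --
# the "cheating" some vendors do when they detect a benchmark:
# (cpu0 / "scaling_governor").write_text("performance")   # needs root
```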

Does the iPad M1 have a performance profile option? IIRC I've read somewhere that on the M1 MacBook, Apple provides a performance profile option, but I can't find anything about it for the iPad.
 
I wonder whether the throttling of the M1 iPad Air is because Apple optimized it for burst performance instead of long sustained performance.

That'd be logical, I think. It is not a gaming device in the sense that Apple designed it around long, intensive gaming sessions, even if their chips could absolutely do it. That's why I say: if you want a device for gaming, get a device that's made for gaming :p

Does the iPad M1 have a performance profile option? IIRC I've read somewhere that on the M1 MacBook, Apple provides a performance profile option, but I can't find anything about it for the iPad.

No idea, probably not; not a single reviewer has ever mentioned it. Golden Reviewer usually does quality in-depth analysis; he would have found it if it existed.

For you, I'd probably go for the Y700 if it is as cheap for you as it is elsewhere. It's a bargain for the performance you get, and with the money you save over a more expensive tablet you could get another one in a few years if needed.
 

Very good performance, the highest of any device he has tested so far (Genshin Impact). It doesn't even run too hot, and power consumption is quite good. The 8100 is the upper mid-range part from MTK.
 
Is the M1 GPU still based on IMG patents/tech? Or are they done with them and have a licence just for backward-compatibility stuff?
 
It is still a TBDR GPU at its core, so most likely it is bound to IMG patent licensing. Otherwise, there isn't much public info to say how far Apple has gone down the road of a custom architecture.
 
One thing of particular interest is whether they have in some way adapted IMG's "multi-core GPU" IP (in the A/B-series) for the 2-die M1 Ultra (and allegedly the 4-die Jade-4C for the Mac Pro).
 
It is still a TBDR GPU at its core, so most likely it is bound to IMG patent licensing. Otherwise, there isn't much public info to say how far Apple has gone down the road of a custom architecture.
Base TBDR patents have expired by now. As you say, it's really difficult to tell from the API and testing just how the silicon is organized and whether it would infringe on any current patents.
It's a bit of a moot point given that Apple does have a patent licensing agreement with ImgTech. It may be that this is just to avoid having to spend resources on future legal battles, but it's difficult to know, much less what Apple's future plans might be.
From a tech geek's point of view, I'm thrilled that there is someone with deep pockets pushing a TBDR GPU architecture forward. Yes, there are points to standardisation, but total homogeneity is also a bit boring. Apple's GPUs seem to do very well running all kinds of code and do really well on code that explicitly targets them. It would be a different situation if they pushed something that sucked just for the sake of being proprietary.
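For anyone following along who hasn't dug into why TBDR matters, here's a toy sketch of the binning-plus-deferred-shading idea. It's a deliberate simplification, nothing like Apple's real pipeline, and the "fragments" and shade function are made up for illustration:

```python
# Toy TBDR: fragments are binned into screen tiles, each tile resolves
# visibility in fast on-chip memory first, and only the surviving
# fragment per pixel is ever shaded or written out to DRAM.
TILE = 32  # tile edge in pixels

def tbdr_draw(fragments, shade):
    """fragments: iterable of (x, y, depth, material); shade: material -> color."""
    # Binning pass: sort fragments by the tile they land in.
    tiles = {}
    for x, y, z, mat in fragments:
        tiles.setdefault((x // TILE, y // TILE), []).append((x, y, z, mat))

    framebuffer = {}
    for tile_id, frags in tiles.items():
        # Hidden-surface removal entirely in "on-chip" tile memory:
        nearest = {}
        for x, y, z, mat in frags:
            if (x, y) not in nearest or z < nearest[(x, y)][0]:
                nearest[(x, y)] = (z, mat)
        # Deferred shading: exactly one shade per pixel, so overdraw costs
        # neither shader work nor framebuffer bandwidth.
        for (x, y), (z, mat) in nearest.items():
            framebuffer[(x, y)] = shade(mat)
    return framebuffer

# Two overlapping fragments at one pixel: only the nearer one gets shaded.
fb = tbdr_draw([(5, 5, 0.9, "red"), (5, 5, 0.2, "blue")], shade=str.upper)
print(fb[(5, 5)])  # -> BLUE
```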
 
One thing of particular interest is whether they have in some way adapted IMG's "multi-core GPU" IP (in the A/B-series) for the 2-die M1 Ultra (and allegedly the 4-die Jade-4C for the Mac Pro).

I severely doubt it has anything to do with the Albiorix architecture; the last I heard, Apple hadn't licensed any architecture above Rogue, and they've re-written a healthy portion of that base architecture (mostly the ALUs?). Unless I'm misreading their GPU data, each cluster or "core" still consists of 16 SIMD lanes capable of 2 FMACs each, 2 TMUs and 1 ROP/RBE. Anything Albiorix-based would have to have 128 single-FMAC SIMD lanes per cluster.

Base TBDR patents have expired by now. As you say, it's really difficult to tell from the API and testing just how the silicon is organized and whether it would infringe on any current patents.
It's a bit of a moot point given that Apple does have a patent licensing agreement with ImgTech. It may be that this is just to avoid having to spend resources on future legal battles, but it's difficult to know, much less what Apple's future plans might be.
From a tech geek's point of view, I'm thrilled that there is someone with deep pockets pushing a TBDR GPU architecture forward. Yes, there are points to standardisation, but total homogeneity is also a bit boring. Apple's GPUs seem to do very well running all kinds of code and do really well on code that explicitly targets them. It would be a different situation if they pushed something that sucked just for the sake of being proprietary.

IHVs tend, one way or the other, to renew or extend old patents. As for the last paragraph, I'd love to see Apple pushing for some sort of geometry tiling, as mfa suggested here in the forums at one point. You can search for JohnH's last conversations here with a developer, where it's obvious that Apple tends to handle quite a few cases differently than IMG, and not always for the better.
 
IHVs tend, one way or the other, to renew or extend old patents. As for the last paragraph, I'd love to see Apple pushing for some sort of geometry tiling, as mfa suggested here in the forums at one point. You can search for JohnH's last conversations here with a developer, where it's obvious that Apple tends to handle quite a few cases differently than IMG, and not always for the better.
Yes, this was one of my main concerns when Apple started recruiting with the expressed purpose of creating "their own world class graphics IP", and then followed up by not only not buying ImgTech, but saying they wouldn't license their IP. Just how many hoops would Apple have to jump through to dodge patent infringements, and what would the costs ultimately be in terms of die area/efficiency/power draw? I want the architecture to be as efficient as it can possibly be at a given time, without the constraints of tiptoeing through a patent minefield. Apple does have a license agreement though, and I hope it is broad and long-lasting enough that such concerns are unfounded.

As an aside, not only do I have a soft spot for TBDRs since the Kyro days, but various folks working at ImgTech have contributed a lot to these forums over the years, and always in a very constructive manner. When someone like JohnH speaks about his field, one shuts up and listens, because he knows whereof he speaks. And it is appreciated that he takes the time to do so.
 
Here's to say that John himself admitted here in the forum that, if possible, he'd rather have the advantages of both worlds (IMR vs. TBDR), and I can't say I disagree. My layman's instinct tells me that we still have years ahead before GPUs move to almost entirely programmable architectures. In the meantime I'd be delighted to see architectures that can deliver the benefits of both sides without inheriting each side's disadvantages. Unfortunately, public details on how Qualcomm's Adreno GPUs handle things are scarce, but I think their GPU drivers attempt something along that line. Now, if it's something that's mostly sw/driver-limited, I'd love to see it expand into supporting hw/algorithms too. That said, I have no idea if today's Adrenos already have elements in that direction.
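To make the idea concrete, here's my guess at the rough shape of such a per-render-pass heuristic. The thresholds and field names are made up for illustration, and this is certainly not Qualcomm's actual logic:

```python
# Toy sketch: a driver picking between direct (IMR-style) and binned
# (tiled) rendering per render pass, based on what each mode is good at.
def choose_mode(pass_info):
    # Heavy geometry makes the binning pass itself expensive.
    if pass_info["triangles"] > 2_000_000:
        return "direct"
    # Lots of overdraw is where tiling pays off: visibility is resolved
    # in on-chip tile memory instead of thrashing DRAM.
    if pass_info["estimated_overdraw"] > 2.0:
        return "binned"
    # Mid-pass framebuffer reads defeat tiling, which wants to touch
    # DRAM once per tile at the end of the pass.
    if pass_info["reads_framebuffer_mid_pass"]:
        return "direct"
    return "binned"

print(choose_mode({"triangles": 500_000,
                   "estimated_overdraw": 3.1,
                   "reads_framebuffer_mid_pass": False}))  # -> binned
```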

Unfortunately, Apple has the advantage of having its own API, where the GPU driver in the majority of cases behaves as expected; on the Android side, developers have a constant headache of unexpected/unwanted behavior.
 

Really interesting and well-balanced article. I've wondered how Apple's best really fares against PCs, since there's a lot of hyperbole out there around the M1 Max/Ultra. Turns out I'm quite impressed: it seems to trade blows with the 12900K on the CPU front (albeit with more cores), although it falls short of high-end desktop Amperes.

That said, I'm not seeing any exceptional "Apple magic" here. It may be doing this at a very low power draw, but it seems to be achieving that through a massive node/density advantage resulting in enormous but low-frequency chips. And of course Raptor Lake/Zen 4 and Ada/RDNA 3 are going to drastically change the equation in the next three months.
 
The M1 Max/Ultra also cost much more than an Alder Lake 12900K does... all the while the 12900K is still the better all-around pick performance-wise.
 
Do people think 12900K-level performance will drop down to the 60-watt tier from its current 250+ when Intel moves to the same node as the M1 Ultra? I don't. Apple's engineers aren't magic, but they have certainly been doing much better work than Intel's. So has AMD.
 

Yeah, this struck me as a little dismissive as well; the node advantage isn't that large. I think AnandTech's articles on the M1 architecture are a better resource for understanding exactly how Apple has managed this. There are definite architectural advantages that, for example, actually allow a single core to saturate 100GB/s of the available bandwidth if it needs to. There is no guarantee Intel could move to 5nm with on-package RAM and get this perf/watt. It's definitely worth noting Apple's node advantage and the complexity of these chips, which are really only viable because Apple is selling a platform and doesn't have to market a chip architecture that fits a wide variety of OEM needs. But I'm just skeptical when the implication seems to be that x86 will get similar efficiency without any significant changes to the x86 architecture.

Also, as I mentioned on another forum, the video encoding hardware on the M1, while not improving much from the M1 Max to the Ultra, is actually a huge boon when rendering video, largely because it can handle one of the most CPU-intensive tasks (outside of gaming) that your typical PC owner will engage in without really engaging the CPU at all. I can render in Final Cut on my M2 MacBook and the CPU will barely be touched, so the responsiveness (and noise/heat) of my machine is basically unchanged from simple web browsing during the task. Whereas my 12400 doing something similar in DaVinci will have all cores pegged to 100% and limit what I can do during that export without inducing noticeable stutter/delays in the foreground task.

Give me a powerful media encoding block on my next CPU that allows me to play games or edit another video while I'm rendering a video for two hours, please, Intel/AMD!
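To illustrate, this is roughly how you'd push an export through the media engine instead of the CPU with ffmpeg, assuming a build with VideoToolbox support on the Mac side (the file names and bitrate are placeholders):

```python
import subprocess

SRC, OUT = "input.mov", "output.mp4"   # placeholder file names

# h264_videotoolbox drives Apple's hardware encoder; libx264 runs on the CPU.
hw = ["ffmpeg", "-i", SRC, "-c:v", "h264_videotoolbox", "-b:v", "8M", OUT]
sw = ["ffmpeg", "-i", SRC, "-c:v", "libx264", "-preset", "medium", OUT]

# Run the hardware path; the CPU cores stay nearly idle, which is exactly
# the Final Cut effect described above.
subprocess.run(hw, check=True)
```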
 