Apple A15 SoC

Apple has announced the A15 SoC, which is present in the new iPhone 13 lineup and the redesigned iPad mini.

Apple compared the A15 in the 6th generation iPad mini to the A12 in the previous generation iPad mini:
Apple said:
The 6-core CPU delivers a 40 percent jump in performance, and the 5-core GPU delivers an 80 percent leap in graphics performance compared to the previous generation of iPad mini.

If that 40% sounds familiar, it might be because Apple gave the same number last year when comparing the A14 in the 4th generation iPad Air to the A12 in the 3rd generation iPad Air.
Apple said:
This latest-generation A-series chip features a new 6-core design for a 40 percent boost in CPU performance, and a new 4-core graphics architecture for a 30 percent improvement in graphics.

While I do not assume that the measurements used to obtain these percentages are the same from year to year (so concluding that A14 → A15 is 40% − 40% = 0% is not valid; the iPad Air and iPad mini are also different products), it doesn't seem like we are looking at anything more than a small CPU performance increase from the A14 to the A15.
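To make the arithmetic explicit, here is a minimal sketch (Python) of how the two claims would combine if, hypothetically, they were measured against the same A12 baseline and workload; the 1.40 factors are Apple's marketing numbers, and the shared-baseline assumption is exactly what we can't verify:

# Apple's claimed uplifts, both relative to an A12-class device (marketing figures)
a14_over_a12 = 1.40  # A14 vs. A12 (iPad Air 4 vs. iPad Air 3)
a15_over_a12 = 1.40  # A15 vs. A12 (iPad mini 6 vs. iPad mini 5)

# Generational gains compose multiplicatively, so the implied A14 -> A15 step is a ratio, not a difference
a15_over_a14 = a15_over_a12 / a14_over_a12
print(f"Implied A15 over A14: {a15_over_a14:.2f}x")  # 1.00x, i.e. roughly flat

Since the baselines are different products and the workloads are unknown, that 1.00x is only suggestive, which is the point above.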

AnandTech has a breakdown of the specs and technical details as part of an article on the new iPhones.
Andrei Frumusanu said:
Here, they’re claiming that the new A15 will be +50% better than the next-best competitor. The next-best competitor is Qualcomm’s Snapdragon 888 – if we look up our benchmark result set, we can see that the A14 is +41% more performant than the Snapdragon 888 in SPECint2017 – for the A15 to grow that gap to 50% it really would only need to be roughly 6% faster than the A14, which is indeed not a very large upgrade.
Andrei Frumusanu said:
For the lower performance 4-core GPU model, Apple again was weird with their performance predictions as they focused on the competition, and not the generational gains. The improvements here over the currently best performing competitor is said to be +30%. Taking GFXBench Aztec as a baseline, we see the A14 was around +18% faster than the Snapdragon 888. The slower A15 would need to be +10% faster than the A14 to get to that margin.

The faster 5-core A15 is advertised as being +50% faster than the competition, this would actually be a more sizeable +28% performance improvement over the A14 and would be more in line with Apple’s generational gains over the last few years.

Of course, all these figures are just speculation for now, as we don’t know exactly what workloads Apple references to, and there are quite larger variations that can be measured. We’ll have to verify things on actual devices.
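For reference, a quick sketch of the back-of-the-envelope math behind those estimates; the 1.41x (SPECint2017) and 1.18x (GFXBench Aztec) A14-over-Snapdragon 888 leads come from the quoted article, the 1.50x/1.30x/1.50x figures are Apple's claims, and the assumption is that "the competition" means the Snapdragon 888:

# Measured A14 leads over the Snapdragon 888 (from AnandTech's result set)
a14_cpu_lead = 1.41   # SPECint2017
a14_gpu_lead = 1.18   # GFXBench Aztec

# Apple's A15 claims versus "the competition"
a15_cpu_claim  = 1.50
a15_gpu4_claim = 1.30  # 4-core GPU
a15_gpu5_claim = 1.50  # 5-core GPU

print(f"CPU:        A15 needs ~{a15_cpu_claim / a14_cpu_lead - 1:.0%} over the A14")   # ~6%
print(f"GPU 4-core: A15 needs ~{a15_gpu4_claim / a14_gpu_lead - 1:.0%} over the A14")  # ~10%
print(f"GPU 5-core: A15 needs ~{a15_gpu5_claim / a14_gpu_lead - 1:.0%} over the A14")  # ~27%, in line with the +28% above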
 
They also increased the system level cache to 32MB.

I wouldn't put it past them to have focused on power efficiency instead of pure performance, and the numbers seem to back it up: upwards of 2.5 hours more battery life. They are still on TSMC's 5nm node.

The Neural Engine also had a nice improvement from 11 trillion to 15.8 trillion operations per second.
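For what it's worth, that works out to roughly a 44% uplift in peak throughput; a trivial check on the quoted figures (which says nothing about real-workload gains):

# Quoted peak Neural Engine throughput, in trillions of operations per second
a14_neural_tops = 11.0
a15_neural_tops = 15.8
print(f"Neural Engine uplift: ~{a15_neural_tops / a14_neural_tops - 1:.0%}")  # ~44%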

Remember that Apple is off-loading a lot of computational work to dedicated hardware and probably has one of the best hardware video encode/decode blocks in the industry, now with support for ProRes!

The emphasis was clearly on better photography and power efficiency.
 
I guess since Apple now has a different product line (M1), they can focus on power efficiency in the phone SoC while focusing more on performance in the "desktop" SoC. This is reflected in the decision to use the M1 in the iPad Pro, as the iPad Pro is probably now considered by Apple to be a "computer."
 
I will say -- I was a bit pessimistic about Apple's performance claims around the M1 product line before it came out, yet Apple delivered. I'm genuinely interested to see what they have coming with the A15. If they can keep the same performance in a (much?) smaller power envelope, that's a pretty damned good iPad Pro offering.
 
I guess since Apple now has a different product line (M1), they can focus on power efficiency in the phone SoC while focusing more on performance in the "desktop" SoC. This is reflected in the decision to use the M1 in the iPad Pro, as the iPad Pro is probably now considered by Apple to be a "computer."
Yup, I reckon Apple spent most of their effort this last year making changes that will benefit the configurations that will end up in future Macs, rather than ramping up the performance of the configurations that appear in smaller iDevices where the performance is already good.
 
I guess since Apple now has a different product line (M1), they can focus on power efficiency in the phone SoC while focusing more on performance in the "desktop" SoC. This is reflected in the decision to use the M1 in the iPad Pro, as the iPad Pro is probably now considered by Apple to be a "computer."
Has it been different, though? The top-of-the-line iPad has had its own SoC family since the beginning. Putting the order of product announcements aside, the M1 appears to be no more than a continuation of this family, now with an expanded presence in fanless and low-power Mac products (a logical move given the similar power envelope).

It does not seem like the M1 has otherwise diverged in CPU microarchitecture from the A14 "phone" SoC. Moreover, the M1's max clock advantage over the A14 does not seem substantial enough to be considered a different physical implementation.
 
Was it supposed to be?

Not really, and it's not like any other ULP SoC GPU has one yet either. IF they should license the relevant IP from IMG, I think they'll have RT integrated into the GPU pipeline starting from the C Series architecture, which is to be announced before this year runs out. Even then it would take at least a year from IP announcement to SoC availability.
 
Not really, and it's not like any other ULP SoC GPU has one yet either. IF they should license the relevant IP from IMG, I think they'll have RT integrated into the GPU pipeline starting from the C Series architecture, which is to be announced before this year runs out. Even then it would take at least a year from IP announcement to SoC availability.
And of course, even if implemented, it would actually have to be used. And if used, it would actually have to produce a substantial benefit for tiny-screen gaming. And that's before we even get into the power draw aspects.
I wouldn't recommend holding your breath, honestly, despite the software hooks provided.

Andrei's preliminary analysis (SoC focused) is available at AnandTech. It is up to his regular standards, so well worth a read.
 
And of course, even if implemented, it would actually have to be used. And if used, it would actually have to produce a substantial benefit for tiny-screen gaming. And that's before we even get into the power draw aspects.
I wouldn't recommend holding your breath, honestly, despite the software hooks provided.

Andrei's preliminary analysis (SoC focused) is available at AnandTech. It is up to his regular standards, so well worth a read.

Gaming hasn't so far been among Apple's priorities (despite their focus on GPU performance), and I can think of quite a few AR/VR-related scenarios where efficient RT hw would make far more sense, but yes, it all sounds like material for the less foreseeable future.

In the meantime we'll have to enjoy the marketing fireworks from each side: https://www.fudzilla.com/news/graphics/53653-exynos-2200-mobile-chip-will-support-ray-tracing
 
Gaming hasn't so far been among Apple's priorities (despite their focus on GPU performance), and I can think of quite a few AR/VR-related scenarios where efficient RT hw would make far more sense, but yes, it all sounds like material for the less foreseeable future.
Yes, you are probably right that the justification would be AR/VR.
At least, that's the area of the software demos.
In the meantime we'll have to enjoy the marketing fireworks from each side: https://www.fudzilla.com/news/graphics/53653-exynos-2200-mobile-chip-will-support-ray-tracing
I think it is a sign of a maturing market in general when manufacturers try to draw attention to minor niche features. People are basically happy with what they have, manufacturers don't get much attention for small incremental improvements/refinements, so you've gotta hit those buzzwords.
There is an argument to be made for that, from a manufacturer standpoint. Still, the utility per transistor for the average user has to be monumentally atrocious for dedicated RT hw on phones. It's hard to envision that it would be a wise way to spend your SoC transistor budget.
 
I think it is a sign of a maturing market in general when manufacturers try to draw attention to minor niche features. People are basically happy with what they have, manufacturers don't get much attention for small incremental improvements/refinements, so you've gotta hit those buzzwords.
There is an argument to be made for that, from a manufacturer standpoint. Still, the utility per transistor for the average user has to be monumentally atrocious for dedicated RT hw on phones. It's hard to envision that it would be a wise way to spend your SoC transistor budget.

Here's where Andrei's write-up comes in handy: https://www.anandtech.com/show/16983/the-apple-a15-soc-performance-review-faster-more-efficient/3

The comparison between Android phones and iPhones gets even more complicated in that even with the same game setting, the iPhones still have slightly higher resolution, and visual effects that are just outright missing from the Android variant of the game. The visual fidelity of the game is just much higher on Apple’s devices due to the superior shading and features.

In general, this is one reason why I’m apprehensive of publishing real game benchmarks as it’s just a false comparison and can lead to misleading conclusions. We use specifically designed benchmarks to achieve a “ground truth” in terms of performance, especially in the context of SoCs, GPUs, and architectures.

...and the next-best read, this one for RT specifically:

https://www.imaginationtech.com/whitepapers/ray-tracing-levels-system/

When I asked Rys here in public a while back whether their past Rogue 6xxx, which had additional dedicated RT hw, spent a sizeable amount of transistors on it, the answer was negative. Granted, for any future IMG GPU IP I'd expect the RT hw to be far more complicated than that aforementioned early implementation, but keep in mind that any IP beyond today's Series B will have RT incorporated into the pipeline itself rather than as dedicated RT hw. How they have solved it remains to be seen, but I doubt the hw overhead is as big as many would expect.

Honest question, as I really don't know: how high is the percentage of hw/transistors taken up by the Tensor cores in NV's last-generation GPU chips (irrespective of their capabilities)? Are we talking a single- or double-digit percentage?
 
Here's where Andrei's write-up comes in handy:

Apple obviously has superior SoCs compared to any Android SoC for sure, but in the above context he's only talking about Genshin Impact. Do all games have improved fidelity on iOS compared to their Android versions?
 
Apple obviously has superior SoCs compared to any Android SoC for sure, but in the above context he's only talking about Genshin Impact. Do all games have improved fidelity on iOS compared to their Android versions?

I'm sure that if Andrei (who is an experienced developer, for the record) had seen a representative mobile game he could use for cross-platform gaming comparisons, he would have used it. The point here is elsewhere in the post above: having ray tracing in future IP doesn't necessarily mean that all implementations will have the same capabilities. If that's the case, then the hw overhead for the implementation is obviously not the same, along with a fair list of other differences. At the end of the day you won't have apples-to-apples comparisons here either (pun intended).

As for Apple having the superior SoC compared to any Android counterpart on a hw level, I don't necessarily agree. IF Qualcomm had the luxury of its own OS, I'd guess that things would be quite different.
 
As for Apple having the superior SoC compared to any Android counterpart on a hw level, I don't necessarily agree. IF Qualcomm had the luxury of its own OS, I'd guess that things would be quite different.
How do you know Qualcomm are not already tuning their SoCs - the majority of which end up running Android OS - for that OS? If Qualcomm aren't already doing this, it feels like a massive failure. They know who their customers are and Google would certainly be co-operative outside of the semi-open development of AndroidOS for device manufacturers.
 