Apple A10X SoC

TechInsights has confirmed TSMC 10 nm for the A10X.

a9x-a10x.jpg
 
That's not how it's laid out. Here:
...
A10 on the left, A9 on the right.

The distance to the cores isn't what matters, the L2 banks connect to the L2 arbitration in the middle.

Thanks for the clarification, I should have assumed some kind of arbitration on a shared cache...
But still, I guess in this case what matters is the (worst case) distance between cache and arbitration, arbitration delay and distance to the execution units. And partitioning the cache like that does at least reduce the first part. However I have no idea how relevant that is here.

Which brings me to an interesting point. I attempted to identify some of the easy blocks and noticed the cache is actually offset toward the bottom two cores. You can clearly see the cores based on their L1 caches, and the top core begins where the cache arbitration ends.
Now I don't think the top core has a higher cache latency, and on a 10 nm mobile chip, wire resistance may matter more to Apple than wire delay, so this might be a case of Apple optimizing the common case, i.e. moving the bottom two cores closer to the arbitration unit.

Screen Shot 2017-07-01 at 6.16.08 PM.png

Some other interesting observations:
-the 4 MB cache blocks are ~2.1 mm² each; both of them together are still smaller than the 4.8 mm² 4 MB cache on the A8
-the cores are ~2.7 mm² each
-I can't see any separate L1 for the little cores, although this might be due to image quality. I assume the dark areas near the cache are the little cores.
-with all of these differences, and the geekbench regression in some sub-tests, I am pretty sure this isn't a 10nm Hurricane.
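The density comparison above can be sanity-checked with some quick arithmetic. All areas here are the rough die-shot estimates from this post (and the common assumption that the A8 was fabbed on TSMC 20 nm), not official figures:

```python
# Back-of-the-envelope SRAM density check, using the area estimates from this post.
a10x_block_mb = 4        # one A10X L2 slice, MB (assumed from the die shot)
a10x_block_mm2 = 2.1     # estimated area of one slice
a8_cache_mb = 4          # A8 L2, MB
a8_cache_mm2 = 4.8       # estimated A8 L2 area

a10x_density = a10x_block_mb / a10x_block_mm2   # MB per mm² on TSMC 10 nm
a8_density = a8_cache_mb / a8_cache_mm2         # MB per mm² on TSMC 20 nm

print(f"A10X SRAM density: {a10x_density:.2f} MB/mm^2")
print(f"A8 SRAM density:   {a8_density:.2f} MB/mm^2")
print(f"Scaling factor:    {a10x_density / a8_density:.2f}x")
```

If these estimates are in the right ballpark, that's roughly a 2.3x SRAM density improvement over two node generations, which is plausible given that SRAM tends to shrink better than logic.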
 
The L1D for both the small and big cores sits next to the L2 slices, since that's where they have to be relative to the L2 arbitration. The cache you marked as L1 is the new Hurricane cache next to the front-end.

The small cores are towards the outer edge of the L2, but the picture isn't high-resolution enough to delineate them clearly.

iVpq5lF.png
 
After comparing to your A10 depictions, I agree; on your image you can even see the L shape of the L1D. And comparing with the A10, I'll also agree on the little cores and the new cache.
Would really like better images though.

Any idea what that 'new cache' is good for? Some kind of decoded µop cache? Maybe even caching dependencies for energy efficiency? Seems awfully large.
 
I think I saw a Digitimes report that the A11 is in production on the TSMC 10nm process.

Meanwhile, Intel apparently just announced its roadmap for 10 nm next year:

https://arstechnica.com/gadgets/201...ip-plans-ice-lake-and-a-slow-10nm-transition/

Assuming no other problems that is, since 10 nm was originally due in 2016.

Has Intel's fab prowess been permanently surpassed? Maybe they didn't have the resources to keep their 14 nm and 10 nm on schedule, while the volumes and money from mobile have allowed TSMC (and presumably Samsung) to get to 10 nm before Intel?
 
My impression was that everyone lagged Intel by a node, and that the naming was all PR/marketing fluff.

i.e.

22nm Intel ~= "16nm/14nmFF" (20nm w/FF)
14nm Intel ~= "10nmFF" GF/SS/TSMC
10nm Intel ~= "7nmFF"
 
Hmm, I thought the part of Samsung that fabs chips was already making more money than Intel.

So if they haven't passed Intel yet in process, they will soon enough?
 

A large part of Samsung's fab profits are from memory chips (DRAM and NAND) though. They are very different from logic chips.
However, it probably matters much less now, as the advantages from a node shrink are not as much as they used to be (for logic chips, memory chips still love density). Everyone's hitting a brick wall and it's just a matter of who hits the wall sooner.
 
Well, yes and no.
As there are increasing issues with current approaches, focus shifts to packaging, alternative materials, the tool chains... There are ways to squeeze more into the envelope even if we can't shrink what we stuff into it much more, as well as making the process more accessible and thus migrating more of the industry to finer lithography, even when the bleeding edge can't move that fast.
But yes, I'll very probably live to see the practical end of geometrical scaling. It's been a ride! :)
 
So one of the rumors about the event tomorrow is that the new Apple TV which will support 4K and HDR will be powered by an A10X.

That would be quite a jump from the A8 in the Apple TV 4th generation.

Also, the price point starts at $149.

Would they really put the same SoC as the one in their current iPad Pro, which starts at $649? Maybe they could raise the price of the Apple TV, but it's already priced high relative to competing streaming set-top boxes.
 
Would they really put the same SOC as the one in their current iPad Pro, which starts at $649?
Sure, why not. No fancypants wide-color-gamut multitouch LCD panel, no battery, no quad-speaker system, and so on. More forgiving form factor too, probably making it easier and cheaper to build.

And then there's the console factor as well. You want your box in as many homes as possible to maximize software (and movie) sales. And streaming subscription fee cuts and all that other nickel-and-dime bullshit that Apple and other box manufacturers are up to these days. That means an attractive price point. Besides, at 150 bucks you can bet they're taking home big margins anyway.
 
A <100 mm² chip should help mitigate cost per chip, even at 10nmFF. Potentially, "worse"-TDP chips may still be OK for the Apple TV bin, since cooling can be less of an issue.

4GB LPDDR4 may not be too expensive?
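The cost-per-chip argument can be made a bit more concrete with the standard gross-dies-per-wafer approximation (wafer area over die area, minus an edge-loss term). The ~100 mm² die area is just the thread's "<100 mm²" premise, and this ignores yield entirely:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common approximation: usable wafer area / die area, minus edge loss."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Assuming a ~100 mm² die (per the thread), a 300 mm wafer yields ~640 gross candidates.
print(gross_dies_per_wafer(100.0))
```

Even with 10 nm wafers being expensive, several hundred candidate dies per wafer is why a sub-100 mm² chip is much more tolerable in a $149 box than a big die would be.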
 
I see no reason why they wouldn't stay with 3GB LPDDR3.
There's 4GB in the iPad Pros using the same processor. Cutting to 3GB would increase hardware fragmentation. Besides, with the Apple TV's partial focus on gaming, the more memory you offer the better. They've had 3GB for a while now, and tech tends to move on, not stay the same. :p

But, then again, Apple is Apple. They'll do whatever the hell they want. Maybe you're right.
 

The iPad Pro is also marketed for productivity and not just media consumption, where the Apple TV will only be about the latter. Sure, I would like 4GB of LPDDR4 RAM, but it's always better to expect less (like with Nintendo, but that is a whole other hornets' nest). Besides, the SoC in the Apple TV has always been a crippled, last-generation part.

They could be moving it away from hobby status though... we will find out soon enough :)
 
Would they really put the same SOC as the one in their current iPad Pro, which starts at $649?

I know it was some time ago... actually two years ago that the last Apple TV was released... and don't forget that the 4th-gen Apple TV was announced with an A8 the same year the A9 debuted. I remember being surprised that they put what was effectively an iPhone 6 chip in the Apple TV that year.

2 years on...yes the A10x was only just announced a few months ago, but with the A11 in the new iPhone...I see no problem or surprise with the A10X being in the new AppleTV.

Yes, it's very powerful for what is effectively a set-top box....but it should help it be current for quite some time...and hopefully they start getting some proper games on it now.
 
I'll submit that if they do elect to go with the A10X as opposed to the iPhone chips, then it is probably due to the additional graphics performance.
And if so, it wouldn't surprise me if gaming was pointed out during the presentation. Apple hamstrung the gaming market by initially insisting that all games be playable with the stock TV remote. That limitation is no longer there, so they may make an effort to at least call attention to the gaming capabilities. The A10X has very respectable graphics capabilities; downports from the XB1 or upports from the Switch would be well within the realm of the possible, although publishers already active within the iOS ecosystem are probably more likely to produce apps for the new Apple TV.
If indeed it has an A10x. We'll know in five hours.
 