In other news, Chipworks verified 3 GB of RAM for the A10 in the 7 Plus but only 2 GB of RAM in the A10 in the 7.

Surely they'll launch an iPad with an A10 this fall?
 
It could be the result of a memory bandwidth limit.

One interesting side note is that Apple mentioned it's a six-cluster design, but the 7XT in the A9 has no announced successor, much less one with 50% more arithmetic throughput. Oh, and this GPU consumes 2/3 of the A9 GPU's power on the same process.
 
Chipworks has a die picture of the A10.

[Image: trans-die-level.jpg]


Chipworks said:
The A10 die size is ~125 sq. mm with a reported 3.3 billion transistors.

We have confirmed the process to be TSMC 16FF-based, so this means that Apple has basically been in the same 20/16nm technology for the last 3 generations, and it took 2 iterations on FinFET for Apple to get the A10 back to the gate densities we saw in the A8, optimized on a planar process.

A notable difference from the A9 to the A10 is much tighter SoC-level die utilization, which is more in line with the A8. This, along with the tighter 9-track and 7.5-track libraries of a 16FFC process, is expected to have kept the die from bloating to the ~150 sq. mm level that we were expecting from a straight scale of the A9 to the A10, in terms of transistor count.
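As a rough sanity check on that density comparison, here is a back-of-the-envelope calculation. The A10 figures are from the Chipworks quote above; the A8 figures (2 billion transistors on a ~89 sq. mm die) are the commonly reported numbers, not from this thread, so treat them as assumptions:

```python
# Back-of-the-envelope transistor density, in millions of transistors per sq. mm.
chips = {
    "A8":  (2_000, 89.0),   # (million transistors, die area in sq. mm) - widely reported
    "A10": (3_300, 125.0),  # from the Chipworks quote above
}
for name, (mtr, area) in chips.items():
    print(f"{name}: {mtr / area:.1f} MTr / sq. mm")
```

That puts the A10 around 26 MTr/sq. mm versus roughly 22 for the A8, consistent with Chipworks' point that it took two FinFET iterations to get back to (and slightly past) A8-class density. Note this is overall transistor density, not the gate density Chipworks measures directly.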
 
One interesting side note is that Apple mentioned it's a six-cluster design, but the 7XT in the A9 has no announced successor, much less one with 50% more arithmetic throughput. Oh, and this GPU consumes 2/3 of the A9 GPU's power on the same process.

The 8XT is not announced, but it is being actively licensed. The 7XT was quoted as providing a 60% performance increase over the 6XT, so the generational performance increase would be consistent.
 
That's what I've heard.

The 8XE was announced some time ago. Recall that the 7XE and 7XT came quite close together, so in a pure timing sense if nothing else, we can assume the 8XT exists. The availability of IMG IP and its announcement are often out of step. Sometimes IMG holds back announcements to coincide with a major show like CES or MWC, or until they have a licensee that is happy to announce (which may be long after they have licensed it).
 
The performance on multiple graphics benchmarks, including GFXBench's most strenuous all-around test, 1440p Manhattan 3.1.1 off-screen, supports the claim of 50% improvement. In some test and graphics API combinations, iOS 10.0.1 itself actually degrades benchmark performance relative to iOS 9 (system navigation, response, and scrolling are all excellent in iOS 10, though, even on the older phones).

I think the A10's graphics part is an unannounced six-cluster GPU that achieves its gains mostly through architectural improvements, with some coming from a slightly higher GPU clock rate than the 6s Plus's.
 
The performance on multiple graphics benchmarks, including GFXBench's most strenuous all-around test, 1440p Manhattan 3.1.1 off-screen, supports the claim of 50% improvement.

1440p shows around a 12% improvement compared to the iPhone 6s?
 
The GFXBench comparison with the A9 in the 6s is very interesting.
https://gfxbench.com/compare.jsp?be...S&api2=metal&hwtype2=GPU&hwname2=Apple+A9+GPU

Manhattan 3.1 offscreen: A10 = 1631, A9 = 1521 (+7%)
Manhattan offscreen: A10 = 3036, A9 = 2184 (+39%)
T-Rex offscreen: A10 = 4973, A9 = 4139 (+20%)

Texturing offscreen: A10 = 8473, A9 = 5972 (+42%)
ALU offscreen: A10 = 5406, A9 = 3823 (+41%)

Long-term performance (Manhattan): A10 = 3553, A9 = 1804 (+97%)

Big variations in improvement. Manhattan 3.1 barely shows any improvement at all, and T-Rex is only around 20%, but ALU and texturing come closest to the 50% that was stated in Apple's presentation.

You always have to wait a bit until results pile up from different sources to get a clearer picture. This link https://gfxbench.com/device.jsp?benchmark=gfx40&os=iOS&api=metal&cpu-arch=ARM&hwtype=GPU&hwname=Apple A10 GPU&did=28447322&D=Apple iPhone 7 gives a completely different picture than the results you've picked, so either that link is closer to reality or we'll have to wait and see where the average settles.

I would guess that the A10 GPU's frequency is somewhere in the ≥630 MHz ballpark, which would be at least 40% higher than the A9 GPU's. One interesting but probably meaningless detail no one seems to have noticed is that this time the GPU is listed both as "A10 GPU" and "G9 GPU" (G for "generation"?): https://gfxbench.com/device.jsp?benchmark=gfx40&os=iOS&api=metal&D=Apple+iPhone+7&testgroup=info

One interesting side note is that Apple mentioned it's a six-cluster design, but the 7XT in the A9 has no announced successor, much less one with 50% more arithmetic throughput. Oh, and this GPU consumes 2/3 of the A9 GPU's power on the same process.

According to IMG itself, the 7XT is 60% faster than the 6XT variants, and that was clock for clock and cluster for cluster, without any stream processors added to the 6XT clusters.

The A10 should contain some IMG GPU (otherwise they'd hardly use IMG extensions for it, among other things), and if it's true that the A10 GPU is clocked at least 40% higher than the A9 GPU, then it isn't really much of a surprise where the majority of the performance increase, and at the same time the lower power consumption, could come from.

Before anyone asks: yes, the 630-ish frequency is merely an assumption based on the A10 GPU and A9 GPU results. The GFXBench 3.x fillrate tests are usually misleading, so it's no use multiplying N TMUs by clock speed and comparing against the listed megatexels. However, since both GPUs should have 12 TMUs (2 per cluster), I assume the frequency increase is in the ~40% ballpark.
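A minimal sketch of that reasoning, assuming an A9 GPU clock of ~450 MHz (implied by the 630 MHz and +40% figures above; neither clock is a confirmed number):

```python
# With the TMU count fixed at 12 on both GPUs (2 per cluster, 6 clusters),
# a ratio between fillrate-limited scores maps directly onto a clock ratio,
# since peak texels/s = TMUs * clock.
tmus = 12                 # assumption: 2 TMUs per cluster, 6 clusters
a9_clock_mhz = 450.0      # assumed A9 GPU clock
score_ratio = 1.40        # ~40% higher fillrate-limited A10 results

a10_clock_mhz = a9_clock_mhz * score_ratio
a9_mtexels = tmus * a9_clock_mhz     # theoretical peak, megatexels/s
a10_mtexels = tmus * a10_clock_mhz

print(f"A10 clock estimate: ~{a10_clock_mhz:.0f} MHz")
print(f"Theoretical peak fillrate: A9 {a9_mtexels:.0f} -> A10 {a10_mtexels:.0f} MT/s")
```

These theoretical peaks won't match GFXBench's reported megatexel numbers (as the post notes, those tests are misleading in absolute terms); only the ratio between the two GPUs is meaningful here.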

1440p shows around a 12% improvement compared to the iPhone 6s?

More like +40% if you go by the other listed peak iPhone 7 results. Current peak results for all Apple SoCs:

Manhattan 3.0 1080p offscreen:
https://gfxbench.com/result.jsp?ben...dx=0&os-check-Windows_gl=0&ff-check-desktop=0
Manhattan 3.1 1080p offscreen:
https://gfxbench.com/result.jsp?ben...dx=0&os-check-Windows_gl=0&ff-check-desktop=0
Manhattan 3.1.1 1440p offscreen:
https://gfxbench.com/result.jsp?ben...dx=0&os-check-Windows_gl=0&ff-check-desktop=0
 
Chipworks has a die picture of the A10.
It looks like the little cores are right beside the big ones.

[Image: Revised_A10_die.png]


Chipworks said:
Update: We have revised our first A10 floorplan with help from our friends at AnandTech in the search for the small, high-efficiency cores. Our combined guess is that they are likely integrated within the CPU cluster, next to the big, high-performance cores. This makes sense given the distinct colour of the small cores, indicating a different digital library, and the position of the big-core L1. Thanks to our friends over at AnandTech for reviewing the floorplan with us and providing input on where these blocks might be located!
 
Wouldn't having the small cores "embedded" into the bigger one make it much more difficult to have different clock (and potentially power) domains?
 
Wouldn't having the small cores "embedded" into the bigger one make it much more difficult to have different clock (and potentially power) domains?
I crudely made that diagram. I have a much more detailed one I'll keep for the review.

The small cores are probably on the same clock plane as the big cores; however, that doesn't matter, as supposedly only one core within each pair is powered on at a time.
 
One has to wonder what all the other unmarked blocks are for; there is enough space there to double the CPU + GPU.
Just a very crude copy paste:
[Image: doubleCPUGPU.jpg]
 
If you want a dumb SoC with no multimedia, connectivity, or imaging features :)

Sure, all of that is there, if somebody can attempt to label it. :)
The CPU and GPU take a lot of space due to massive parallelism and large caches.
It's hard to imagine the same kind of resources are needed for video de/encoding etc.
(GPUs have fixed-function blocks for that which are barely visible.)
 
If you want a dumb SoC with no multimedia, connectivity or imagining features :)
In addition to that, could losing some of those areas keep it from powering up or running?
It would be like taking a diagram of the brain and realizing there could be so much more visual cortex if a chunk were merely copied over that boring brain stem.

Is there some idea on the role of the yellow and blue structures on the left, top, and bottom edges of the L3 interface?
The upper L3 array appears to be subdivided differently from the lower as well.
 
In addition to that, could losing some of those areas keep it from powering up or running?
It would be like taking a diagram of the brain and realizing there could be so much more visual cortex if a chunk were merely copied over that boring brain stem.

Is there some idea on the role of the yellow and blue structures on the left, top, and bottom edges of the L3 interface?
The upper L3 array appears to be subdivided differently from the lower as well.
Not sure what those structures are; probably related to the NoC connectivity. The L3 array is divided into 8+8 columns, with the top and bottom columns just laid out slightly differently.
 
The Motley Fool speculates, based on this job description, that the A10X ("next generation iPad GPU"?) will use 10 nm.

[Image: apple-gpu-a10x_large.png]


Interestingly, 10 nm for the A10X was predicted by Ming-Chi Kuo over a year and a half ago (also mentioned in the article), although to be fair a number of other predictions in the table turned out to be incorrect.
 