Apple A8 and A8X

The GPU block in the A8 looks like 4.5x4 mm, or 18 mm2; the SRAM is 2.2x2.0 mm, or 4.4 mm2; and the CPU (I'm including what look like the two separate L2 caches on the left side) is 3.6x3.2 mm, or 11.5 mm2.

Actually, my out-of-the-blue layman's estimate was around 17mm2, based on the rather simplistic reasoning that the G6430's 22mm2 is ~1/5th of the A7's 102mm2 total die area, and that the GPU would take up a similar share of the A8.

Further layman's speculation would put the G6430 at >=280 million transistors and the GX6450 at >=390 million transistors.

I don't think Chipworks ever gave the dimensions of the A7, but based on the 102 mm2 area and the proportions of the chip I'm using 10.5x9.7 mm for my calculations. That would make the GPU 4.3x6 mm or 25.8 mm2, SRAM 2.8x2.6 mm or 7.3 mm2, and the CPU 5.3x3.7 mm or 19.6 mm2.

So relative to the A7, the A8 GPU is 70% the size, the SRAM is 60%, and the CPU is 59%. Someone else can work out the transistor count changes, but it looks like the SRAM and CPU were pretty much straight shrinks, while the GPU shrank less than a straight shrink would suggest, probably due to the change from G6430 to GX6450. So the question is still open: where did the new transistors go? Previously the CPU, GPU, and SRAM made up 52% of the die in the A7, but now they are only 38% of the A8.
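For anyone who wants to check the arithmetic, here's a quick sketch. The block areas are my die-shot estimates from above; the ~89mm2 A8 die size is Chipworks' published figure:

```swift
// Block sizes measured off the die shots (mm^2); rough estimates only.
let a7 = (gpu: 25.8, sram: 7.3, cpu: 19.6, die: 102.0)
let a8 = (gpu: 18.0, sram: 4.4, cpu: 11.5, die: 89.0)

// How much each block shrank from the A7 to the A8.
print(a8.gpu / a7.gpu)   // ~0.70 -> GPU is 70% the size
print(a8.sram / a7.sram) // ~0.60
print(a8.cpu / a7.cpu)   // ~0.59

// Share of each die covered by CPU + GPU + SRAM.
print((a7.gpu + a7.sram + a7.cpu) / a7.die) // ~0.52
print((a8.gpu + a8.sram + a8.cpu) / a8.die) // ~0.38
```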

EDIT: Is the secure element on the CPU? Could Apple be setting aside a larger portion of the die for the secure element/enclave holding fingerprint data and credit cards, possibly anticipating other types of secure data like health records?

Since there isn't a dramatic increase in L3 cache, I guess I'm back to suggesting the possibility of data compression in memory (and possibly flash) with a hardware accelerator in the A8.
I can't help you much with the rest, but assuming my transistor density figures are close to reality, the GPU numbers above should be in the right ballpark.

And in case anyone is wondering, I am speculating a density of ~13 million transistors/mm2 for the A7 and ~22 million/mm2 for the A8.
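To make the arithmetic behind those transistor guesses explicit, here's a sketch using only the assumed densities and area estimates from this thread (nothing confirmed):

```swift
// Assumed logic densities, in million transistors per mm^2 (speculation).
let a7Density = 13.0  // A7 (28nm)
let a8Density = 22.0  // A8 (20nm)

// GPU area estimates from earlier in the thread (mm^2).
let g6430Area = 22.0   // G6430 in the A7 (rough 1/5th-of-die estimate)
let gx6450Area = 18.0  // GX6450 in the A8 (measured off the die shot)

print(g6430Area * a7Density)  // ~286 -> the ">=280 million" figure
print(gx6450Area * a8Density) // ~396 -> the ">=390 million" figure
```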
 
It appears that the iPhone 6 Plus is rendering at only 1136x640 (i.e. the same resolution the iPhone 6 renders at, rather than the 6 Plus' higher native resolution of 1920x1080) in GFXBench's on-screen tests.

iPhone 6 is 1334x750. If both phones are rendering at 1136x640 then GFXBench is simply hardcoded to the 5/5s screen size and will need to be updated.

I also don't know whether the physical 1080p display of the 6 Plus can be targeted for rendering directly, without the intermediate 2208x1242 buffer and downscale.
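To put those resolutions in perspective, a quick pixel-count sketch (the 2208x1242 figure is the 6 Plus' internal @3x buffer that gets downsampled to the panel):

```swift
// Raw pixel counts for the render targets in question.
let iphone5s  = 1136 * 640   //   727,040 px (what GFXBench appears to use)
let iphone6   = 1334 * 750   // 1,000,500 px (iPhone 6 native panel)
let plusPanel = 1920 * 1080  // 2,073,600 px (6 Plus native panel)
let plusBuf   = 2208 * 1242  // 2,742,336 px (6 Plus internal @3x buffer)

// Rendering at the 5s resolution on the 6 Plus pushes ~3.8x fewer
// pixels than the internal buffer would demand.
print(Double(plusBuf) / Double(iphone5s)) // ~3.77
```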
 
Yuck, if that's true and it's a trend that carries over to mobile games as well. Whenever I was able to drop below native resolution, the aliasing side effects were more than just nasty. Why the heck did they even bother clocking the GPU roughly 10% higher in the Plus, then? It's not like they'll run out of texel fillrate with 8 TMUs any time soon either *argh* :rolleyes:

Please ignore the above, yet another blonde moment on my part. If developers explicitly target a given resolution, there should be no aliasing.
 
GFXBench is simply hardcoded to the 5/5s screen size and will need to be updated.
I don't see how this can be. Surely any programmer would have queried the device's resolution and used that, rather than looking it up in a database of model names/numbers. Perhaps on iOS you could get away with it (though it's a lot more work), but on Android, with thousands of devices, that would be madness.
 
"Hardcoded" was the wrong term: unless an app explicitly opts-in to the new sizes or adaptive UI, iOS will lie to it about the screen resolution.
 
"Hardcoded" was the wrong term: unless an app explicitly opts-in to the new sizes or adaptive UI, iOS will lie to it about the screen resolution.
I haven't dabbled with iOS programming in ages, but when I switched from the 3GS to an iPhone 4, my code (simple OpenGL ES) just worked. For the view frustum setup, I retrieved the screen sizes through an API call and they indeed came back as those of the 3GS. Made everything very easy. ;)
 
"Hardcoded" was the wrong term: unless an app explicitly opts-in to the new sizes or adaptive UI, iOS will lie to it about the screen resolution.
Sorry, I'm without internet for up to a week until my ISP sorts out the problem, so I can't Google things easily, but I've done quite a bit of iOS programming and IIRC what you say only applies to 2D apps; 3D apps like this benchmark should just work.
 
If that is the case, then the question remains open as to why the benchmark appears to be scaled like a legacy app.
 
I don't have an iPhone 6 yet, but I think the situation is a bit different from iPhone 4 vs. 3GS: the iPhone 4's resolution is exactly twice that of the 3GS, so it was easy for the OpenGL runtime to render at double resolution. The iPhone 6's resolution, however, is not a simple multiple of the iPhone 5's (although the aspect ratio is almost the same), so if an app does not declare support for the iPhone 6, the system renders everything at the iPhone 5's resolution and then upscales it to the iPhone 6's screen. I just tried it in the simulator, and it works like this even for OpenGL ES.
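For anyone who wants to poke at this themselves, a minimal Swift/UIKit sketch of what the system reports (the comments describe what I'd expect based on the behavior above, not verified output on every device):

```swift
import UIKit

// What UIKit reports depends on whether the app has opted in to the new
// screen sizes (e.g. by providing the right launch images/storyboard).
let screen = UIScreen.main

// In points: a legacy app on an iPhone 6/6 Plus should see the iPhone 5's
// 320x568 here and get upscaled; an opted-in app sees the real size.
print(screen.bounds.size)

// In pixels, plus the scale factors applied to get there.
print(screen.nativeBounds.size) // physical panel, e.g. 1080x1920 on a 6 Plus
print(screen.scale)             // UI scale factor (2.0, or 3.0 on the 6 Plus)
print(screen.nativeScale)       // ~2.61 on a 6 Plus (2208x1242 -> 1080p)
```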
 
Could just be a bug on Kishonti's site, because the detail view for the Plus shows the correct resolution.
 
Yeah. When the database first started displaying results, the entry was only labeled iPhone 6 yet had a mix of results from both the 6 and 6 Plus. The database may not all be straightened out yet.
 
Is it just me, or do the A8 GPUs show only a fraction of the driver extensions in the Kishonti database compared to the A7?
 
The extensions that you can see are not present in all drivers. Kishonti display a union of the sets of all extensions they've ever seen, across all drivers, without any sensible way of checking the extensions per submission unless you've made the submissions yourself as a registered user.

So extensions have probably come and gone. That's my quick-because-I-can't-be-bothered-to-actually-check explanation anyway.
 
John Poole noted on Twitter that when running Geekbench's in-development Metal compute tests, the A8 is 50% faster when compute bound and 300-400% faster when memory bound, both relative to the A7.
 
DailyTech believe that Chipworks' die shot of the A8 highlights a 6-cluster GPU. I'm not so sure; only 4 of the clusters appear to be analogues of each other. Any opinions?

http://www.dailytech.com/Die+Shots+Confirm+A8+Packs+Six+PowerVR+Rogue+GPU+Clusters/article36586.htm

[Image: A8_Chipworks_Stripped_Editted_Wide.png (Chipworks die shot)]

[Image: A8_Chipworks_Stripped_Editted_Analysis_Wide.png (DailyTech annotations)]
 
John Poole noted on Twitter that when running Geekbench's in-development Metal compute tests, the A8 is 50% faster when compute bound and 300-400% faster when memory bound. This is against the A7.

I can't speak for the memory bound tests, but 50% faster when compute bound makes sense given that the GPU in the A8 has ~50% higher graphics compute performance compared to the A7.
 
DailyTech believe that Chipworks' die shot of the A8 highlights a 6-cluster GPU. I'm not so sure; only 4 of the clusters appear to be analogues of each other. Any opinions?

I think they want to say it's a 6-cluster part because that was AnandTech's assumption on the day of the iPhone 6 announcement. The fact that they obviously could not identify six similar-looking areas does not seem to have dissuaded them from marking six equal-sized blocks and calling it a GX6650!
 
I can't speak for the memory bound tests, but 50% faster when compute bound makes sense given that the GPU in the A8 has ~50% higher graphics compute performance compared to the A7.

I dunno. The numbers are still confusing, because the current assumption here is that the 50% extra GFLOPS figure takes the FP16 units into account. FP16 is fine for gaming but bizarre for compute, and I wouldn't expect anyone to make a compute test that contrived.
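To illustrate why the FP16 question matters, here's a toy peak-throughput calculation. Every number in it is a placeholder assumption for the sake of the arithmetic, not a confirmed spec of either GPU:

```swift
// Peak GFLOPS = clusters x FLOPS per cluster per clock x clock (GHz).
func peakGFLOPS(clusters: Int, flopsPerClusterPerClock: Int, ghz: Double) -> Double {
    return Double(clusters * flopsPerClusterPerClock) * ghz
}

// With the cluster count and clock roughly unchanged, a ~50% GFLOPS jump
// only falls out if per-clock throughput grew ~50%, e.g. via FP16 units.
let oldPeak = peakGFLOPS(clusters: 4, flopsPerClusterPerClock: 128, ghz: 0.45)
let newPeak = peakGFLOPS(clusters: 4, flopsPerClusterPerClock: 192, ghz: 0.45)
print(oldPeak, newPeak, newPeak / oldPeak) // 230.4, 345.6, 1.5
```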
 