Apple A8 and A8X

Actually it is. If you look at the die shot, there's a strip of shared logic running right down the middle of the GPU, clearly different from the 8 USCs. I have to assume the various back ends and front ends (ROPs, geometry, etc.) were scaled up along with the number of USCs.

I don't have a clue exactly what Apple's engineers did, but a quick glance at the Kishonti database gives:

Fill rate offscreen (MTexels/s):
iPad Air 2: 7607
iPhone 6 Plus: 3761

T-Rex offscreen (fps):
iPad Air 2: 71.9
iPhone 6 Plus: 45.3

Manhattan offscreen (fps):
iPad Air 2: 33.0
iPhone 6 Plus: 19.3

From the fill rate result I'd assume both GPUs run at a comparable frequency. If so, then in T-Rex the Air 2 scales up by only 59% and in Manhattan by 71%. Of course there could be quite a few other factors influencing performance scaling, but at first glance I'm wondering whether they actually duplicated everything in the end.
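
A quick sketch of that scaling math, using only the GFXBench (Kishonti) offscreen scores quoted above; nothing here beyond the numbers in this post:

```python
# Scaling comparison between the quoted iPad Air 2 and iPhone 6 Plus
# GFXBench offscreen scores.
scores = {
    "Fill (MTexels/s)": (7607, 3761),
    "T-Rex (fps)": (71.9, 45.3),
    "Manhattan (fps)": (33.0, 19.3),
}

for test, (ipad_air2, iphone6_plus) in scores.items():
    gain = ipad_air2 / iphone6_plus - 1
    print(f"{test}: iPad Air 2 leads the iPhone 6 Plus by {gain:.0%}")

# Fill ~102% (roughly 2x, which is what doubled TMUs at a similar clock
# would suggest), T-Rex ~59%, Manhattan ~71%.
```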
 
That CPU layout and the placement of the L2 cache really make it look like there should have been a 4th CPU core that was chopped off at the L2 port. The L2 cache may be unified, but are there bandwidth, latency or residency benefits to each core using its immediate L2 slice? I.e., is the CPU core with the double L2 cache slice going to be the slightly better performer? If so, I wonder if you can target that core for a program's main thread, although CPU scheduling optimization on the A8X obviously wouldn't be as important as choosing what goes on core 1 in the Wii U Espresso.
 
Damn, at this point is it safe to assume that the A8X platform with 2 GB of RAM, using Metal, should significantly out-perform the PS3, X360 and Wii U if it were to aim for the same resolution?
 

I'd estimate the A8X GPU frequency at ≥500 MHz, which is roughly on par with the GPU frequencies of the PS3 and Xbox 360.
That would give 256 GFLOPS FP32 or 512 GFLOPS FP16, and 8.0 GTexels/s without overdraw. The answer should be yes, but I have no idea whether it's significant or by how much exactly. What speaks for the A8X GPU are its "scalar" ALUs; what might speak against it, up to a point, is geometry throughput.
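
A rough back-of-the-envelope sketch of where those figures come from, assuming (none of this confirmed by Apple or IMG) 8 Series6XT clusters with 32 FP32 lanes each, one FMA per lane per clock, double-rate FP16, 16 TMUs and a 500 MHz clock:

```python
# Back-of-the-envelope numbers for the A8X GPU estimate above.
# Assumptions: 8 clusters x 32 FP32 ALU lanes, 1 FMA (2 FLOPs) per lane
# per clock, 2x-rate FP16, 16 TMUs, 500 MHz. None of this is confirmed.
clusters = 8
lanes_per_cluster = 32
flops_per_lane_per_clock = 2      # one FMA counts as 2 FLOPs
fp16_rate_multiplier = 2          # Series6XT-style double-rate FP16
tmus = 16
clock_hz = 500e6

fp32_gflops = clusters * lanes_per_cluster * flops_per_lane_per_clock * clock_hz / 1e9
fp16_gflops = fp32_gflops * fp16_rate_multiplier
fill_gtexels_s = tmus * clock_hz / 1e9

print(f"FP32: {fp32_gflops:.0f} GFLOPS")        # 256
print(f"FP16: {fp16_gflops:.0f} GFLOPS")        # 512
print(f"Fill: {fill_gtexels_s:.1f} GTexels/s")  # 8.0
```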
 
The GPU is almost exactly mirrored, but if you look closely there are some areas which aren't (just above/below and to the right of the "8-core GPU" label).
 
Overlooking the relevant product information for PowerVR's cores keeps leading some commentators to strange rationalizations about Apple's GPU decisions and unnecessary narratives about their ultimate ambitions.

Two cases in point in a row now:

One: while most, including myself, initially expected the iPhone 6 devices to go with a wider six-cluster configuration, the claim of 50% better performance over the four-cluster iPhone 5s was used as "confirmation" of a six-cluster A8 GPU, even after the revelation of ASTC support had people agreeing on an architecture upgrade to Series6XT. Imagination had clearly stated that the jump from Series6 to 6XT was, by itself, quite specifically a 50% bump in the common mobile graphics benchmarks.

Two: for A8X, a consensus existed for some that it still had to be the 12-TMU standard GX6650 core, despite the fill rate results, which screamed for a reasonable explanation, having been known for quite a while. And now that the inescapable conclusion of a 16-TMU core and an eight-cluster configuration logically follows, a rationalization is being forced out about how Apple themselves must have taken the PowerVR four-cluster core floor plan and doubled it, when Imagination had specified scalability to eight clusters with Series6 (just not publicly productized, which is not always preferable depending on the project/partner).

Now, amid the rationalizations for incorrectly guessing the number of clusters Apple picked for the A8 and A8X GPUs, the narrative that Apple's obvious need for its own GPU engineers to implement new PowerVR cores so surprisingly fast must also signify a definite intent to replace PowerVR is stretching the speculation too far at this point.
 
Anything is possible (Apple certainly has access to the talent). But I would be very surprised if Apple stopped using PowerVR designs anytime soon. IMG has been a very good partner for Apple, and while I won't say that Apple can't do better, IMG has always delivered exactly what Apple seems to need.

Plus the more architectural stability they can offer the better it is for Metal.
 

Intel can't do better in terms of perf/mW for ULP SoC GPUs right now, yet it seems they're switching to their own GenX designs even for smartphones in the foreseeable future. Their perf/W ratios have improved, and it would sound silly if they continued to use 3rd-party IP.

IMG has been a very good partner for Apple, and Apple has been a very good partner for IMG. The latter's dependency is too large, and I don't see IMG being able to come even remotely close to covering the gap if Apple decides to use something else in the future. For the next few years the co-operation between the two seems to be set, at least. After that, either the MIPS business will have had to catch up seriously, or they might be walking a thin line after all.
 
The benchmark results and my own scores for the 6 Plus show it performing better in on-screen GFXBench Manhattan, with its 1080p display output, than in the 1080p offscreen test. It actually tops all other phones at that resolution.
 
The on-screen test has less overhead than the off-screen, both in memory usage and raw rendering performance required for the workload. It's marginal, but certainly measurable and non-negligible.
 
Not really following this thread lately, but I finally got my standard i6 64GB on Monday night, upgrading from a 4S. Really, really liking it so far :)
I think it looks very nice and sleek (except for the vertical bands on the back, but I don't usually see them so who cares), feels very good in my hand, good shape and solid build.
Really like the camera stuff and the display, and it's lightning fast compared to the 4S (captain obvious I guess). Love the finger print unlock, still experimenting with the rest.

It's also amazing how the display response works when switching to slo-mo video. Updates at something like 60Hz at least, super smooth, somewhat surreal even.

I wonder what else they can add for the upcoming generations, it starts to feel like there's little else to add...
 
More RAM and 4K video wouldn't hurt a future iPhone 6S, that's for sure. The rest is good enough for a year or two, at least.
 
Not too much 4k content, at least from streaming sources, worth playing on a phone.

New iOS versions always run slower on older hardware, so the latest CPU and GPU help.

For design, they could get rid of the camera bump and reduce the bezels, esp. for the Plus.

But form factor changes will be two iterations from now, if they go by the historic pattern.
 
remove the huge bezels
waterproofing
remove the button (though I assume that's part of the look so they won't; then again the iPod look did have that ring & now look at it)
 
Not too much 4k content, at least from streaming sources, worth playing on a phone.
It's a chicken-egg problem: you'll care about 4K once your devices start recording it. So 4K camera + 4K display + 4K playback + other video stuff is still something phones could use (and something probably more than one phone generation ahead).
 