OK, now the iPhone 8s have reached reviewers' hands, but as of yet we have seen basically nothing about the GPU, not so much as a single run through GfxBench!
The ImgTech notice this spring spoke of Apple not using their IP in new products within a 15-month to two-year time frame. The A11, though Apple states that the GPU is designed by them, would mark an earlier appearance of their own design than many of us would have expected from that statement.
Now, is there any way that someone with their hands on the iPhone 8 could conclusively demonstrate whether the GPU still uses ImgTech proprietary IP?
I doubt the regular review sites will take a look, given that they can't even be bothered to benchmark the thing; some kind of dedicated/community effort will probably be necessary.
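For what it's worth, one crude first check would be to dump the extension string on-device (via glGetString(GL_EXTENSIONS) in a GLES context) and look for IMG's vendor prefix. A toy sketch of just the matching step, with a made-up extension list for illustration:

```python
def img_extensions(extension_string):
    """Return the extensions carrying IMG's GL_IMG_ vendor prefix."""
    return [ext for ext in extension_string.split()
            if ext.startswith("GL_IMG_")]

# Made-up example string; a real one would come from glGetString(GL_EXTENSIONS):
exts = "GL_OES_depth_texture GL_IMG_texture_compression_pvrtc GL_EXT_sRGB"
print(img_extensions(exts))  # ['GL_IMG_texture_compression_pvrtc']
```

Of course, leftover vendor extensions only hint at lineage; they wouldn't conclusively prove whose IP is in the silicon.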
 
From TechInsights: "Apple iPhone 8 Plus Teardown".

[Image: a11-3.jpg]

The die size is 89.23 mm², representing a 30% die shrink compared to the A10.
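That 30% figure roughly checks out if you take TechInsights' earlier A10 measurement of about 125 mm² (the A10 number is my assumption here, not from the teardown above):

```python
# Rough check of the quoted die shrink.
# a10_mm2 (~125 mm^2, TechInsights) is an assumed value; a11_mm2 is from the teardown.
a10_mm2 = 125.0
a11_mm2 = 89.23
shrink = 1 - a11_mm2 / a10_mm2
print(f"{shrink:.1%}")  # 28.6%, i.e. roughly a 30% shrink
```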
 
https://gfxbench.com/device.jsp?ben...pi=metal&D=Apple+iPhone+8+Plus&testgroup=info

Not that I know anything, but both the performance characteristics and the remaining IMG extensions smell like IMG IP is still in there. Results are too fresh, but if it's throttling the way the first early results show, they might have pushed frequency even higher compared to the A10. Nothing spectacular to see here, other than that Apple still doesn't seem willing to go higher than OpenGL ES 3.0.

Manhattan 3.1 is the newest test it can run, and despite an increase of less than 30% compared to the A10 GPU, it's still ahead of other smartphone SoC GPUs by a sizeable amount.
 
Apple seems to really promote Metal (2) over OpenGL. They are still listed on the Khronos page as a highest-level member, though.
 
It also takes the top camera score on DxOMark, with improved low-light capture, partly due to the image processing of the A11.
 
I haven't the slightest clue what Metal supports, but it obviously still can't handle something like the Car Chase test in GfxBench. Considering that IMG's Marlowe (Helio X30) gets 12.5 fps in that one (4 clusters @ 800 MHz), something like the A11 GPU (which could be something like an 8-cluster config @ 900 MHz or higher, or equivalent) would land somewhere in the Adreno 540 ballpark performance-wise.
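The back-of-the-envelope scaling behind that ballpark guess, assuming roughly linear scaling with cluster count and clock (which a real GPU won't fully reach due to bandwidth and other limits; the 8-cluster count and 900 MHz clock are guesses, not measured values):

```python
# Naive linear scaling from the Helio X30's Car Chase result to a
# hypothetical A11 config. Cluster count and clock for A11 are guesses.
x30_fps, x30_clusters, x30_mhz = 12.5, 4, 800
a11_clusters, a11_mhz = 8, 900  # hypothetical
a11_fps_est = x30_fps * (a11_clusters / x30_clusters) * (a11_mhz / x30_mhz)
print(round(a11_fps_est, 1))  # 28.1, broadly Adreno 540 territory
```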

The handful of long-term performance/3.1 results still shows a sub-60% throttling trend, which seems quite high for Apple; at a time when others (QCOM) have decided that sustained performance is more important than peak short-term results, is Apple now taking a 180-degree turn, or what?
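To put a number on "throttling trend": the figure usually quoted is sustained over peak fps from GfxBench's long-term test. With made-up illustrative values (not actual iPhone 8 results):

```python
# Sustained-to-peak ratio, as reported by GfxBench's long-term test.
# The fps values below are illustrative only.
peak_fps = 60.0
sustained_fps = 35.0
retention = sustained_fps / peak_fps
print(f"retains {retention:.0%} of peak")  # under 60% in this example
```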
 
Uhm, Kishonti hasn’t updated Metal GfxBench in well over a year. It’s their choice not to implement the new stuff under iOS.

Regarding throttling, we have to look at both the absolute scores and the percentage drops between competing devices. Both are relevant.
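A quick illustration of why both numbers matter, with made-up figures for two hypothetical devices: the one that drops by a larger percentage can still end up with the higher sustained score.

```python
# Made-up (peak, sustained) fps pairs for two hypothetical devices.
devices = {"Device A": (60.0, 38.0), "Device B": (40.0, 34.0)}
for name, (peak, sustained) in devices.items():
    drop = 1 - sustained / peak
    print(f"{name}: sustained {sustained} fps, drop {drop:.0%}")
# Device A drops more (~37% vs 15%) yet still sustains the higher fps.
```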
 
Which I didn't know, but it sounds weird that Kishonti wouldn't follow up with a vendor with such a huge market presence.

I'm still cautious since early results can contain pitfalls.
 
For sure.
And I'd really prefer to see data from serious reviewers, as opposed to users with God knows what running on the phone and Messenger demanding attention twice during the run... :)
 
I'm surprised Apple isn't doing any heat-pipe stuff or similar to cool their chip, maybe a small vapor chamber attached to the chassis back. A number of Android phones had heat pipes back when their early-gen 64-bit SoCs ran super hot. Maybe some of them still do.
 
Considering the A11 has a full node jump (10 nm) plus a supposedly "new in-house" GPU design, colour me not that impressed.
That said, what we are looking at is ultrabook-class graphics performance in a smartphone: in absolute terms still pretty good, just not compared to its predecessor and the new process technology.
The CPU part is basically Intel Kaby Lake 15 W territory, very good indeed.
 
In absolute terms the GPU (and CPU) is faster than that of any other phone. And according to Apple, at significantly lower GPU power draw than its predecessor. Plus, as you point out, it roughly matches Intel's best (15 W) in terms of CPU and absolutely crushes it in terms of GPU at a small fraction of the power. So, I'm impressed.
Also, it bears remembering that the tools we have to compare GPUs with are coded to a lowest common denominator. Is maximum OpenGL performance at minimum power draw the design goal? Or did Apple have other ambitions that the legacy graphics benchmarks cannot demonstrate? It's hard to tell. Apple may well never give an architectural overview of the GPU, or make explicit statements about design goals or evolutionary plans.
I really wish Apple was more forthcoming in terms of their proprietary technology.
 
With the Microsoft/Qualcomm partnership bringing full Windows to ARM with near-native x86 emulation, I fully expect Apple to migrate to in-house chips on the Mac.

Even though we've been post Wintel for a while, this really does seem to mark the end of an era. Can't wait to see what bakeoffs will look like when Apple controls hardware and software for all their products.
 
Seems like the A11 is PowerVR after all. This is alluded to in the IMG takeover documentation prepared by Clifford Chance, which highlights Apple no longer paying royalties for new designs after 30th June 2018.
 
It’s not that binary.
Apple pays for IMG IP use. But in what shape? Is it a full design, is it parts of a design with some blocks replaced, is it a fully custom design that uses some IMG IP, is it...?
If I were to guess, the GPU in the A11 is an Apple design (just as they say) that still uses some IMG IP, or has grey areas that are due to be replaced/circumvented/deprecated in the software stack (or whatever) over the course of the upcoming year-plus. The transition is not complete yet.
As opposed to ”A11 is Furian”.
 