Anyone know what the Chinese glyphs for the 17k and 16k A11 scores stand for?
Ok, now the iPhone 8s have reached reviewers' hands, but as of yet we have seen basically nothing about the GPU, not so much as a single run through GfxBench!

Maybe someone with knowledge can explain the architecture of the A11 GPU (still TBDR?).
https://developer.apple.com/videos/play/fall2017/602/
https://developer.apple.com/documentation/metal/about_gpu_family_4
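For anyone who wants to poke at a device directly, here's a minimal sketch (assuming iOS 11 and the public Metal API) that checks whether the GPU reports the new Family 4 feature set those links describe:

```swift
import Metal

// Minimal sketch: ask the device whether it advertises the
// iOS GPU Family 4 feature set (the A11 family, new with Metal 2 on iOS 11).
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal not supported on this device")
}
print("GPU:", device.name)
if device.supportsFeatureSet(.iOS_GPUFamily4_v1) {
    print("GPU Family 4 (A11-class) feature set supported")
} else {
    print("Pre-A11 GPU family")
}
```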
The die size is 89.23 mm², representing a roughly 30% die shrink compared to the A10.
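For reference, the A10 was measured at roughly 125 mm², so (125 − 89.23) / 125 ≈ 28.6%; the ~30% figure checks out.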
Ok, now the iPhone 8s have reached reviewers' hands, but as of yet we have seen basically nothing about the GPU, not so much as a single run through GfxBench!
The ImgTech notice this spring spoke of Apple no longer using their IP in new products within a 15-month to two-year time frame. Though Apple states the GPU is its own design, the A11 would mark an earlier appearance of that design than many of us would have expected from that statement.
Now, is there any way that someone with their hands on the iPhone 8 could conclusively demonstrate whether the GPU still uses ImgTech proprietary IP?
I doubt the regular review sites will take a look, given that they can't even be bothered to benchmark the thing; some kind of dedicated/community effort will probably be necessary.
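One data point anyone with a device could grab themselves: dump the GL ES extension string and look for GL_IMG_* entries. A minimal sketch (iOS/Swift, using the stock OpenGL ES API); leftover IMG extensions would hint at PowerVR heritage, though strictly speaking they can't conclusively prove whose IP is in the hardware:

```swift
import OpenGLES

// Minimal sketch: create a GL ES context, print the renderer string,
// and list any IMG-specific extensions the driver still exposes
// (e.g. GL_IMG_texture_compression_pvrtc).
func dumpIMGExtensions() {
    guard let context = EAGLContext(api: .openGLES3) else { return }
    _ = EAGLContext.setCurrent(context)
    print("Renderer:", String(cString: glGetString(GLenum(GL_RENDERER))))
    let all = String(cString: glGetString(GLenum(GL_EXTENSIONS))).split(separator: " ")
    all.filter { $0.hasPrefix("GL_IMG") }.forEach { print($0) }
}
```

On A10 and earlier, PowerVR-based devices report a list full of GL_IMG entries; what an iPhone 8 reports is the interesting bit.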
Apple seems to really promote Metal (2) over OpenGL. They are still listed on the Khronos page as a highest-level member though.
https://gfxbench.com/device.jsp?ben...pi=metal&D=Apple+iPhone+8+Plus&testgroup=info
Not that I know anything, but both the performance characteristics and the remaining IMG extensions smell like it's still IMG IP. Results are too fresh, but if it's throttling as the first early results show, they might have pushed the frequency even higher compared to the A10. Nothing spectacular to see here, other than that Apple still doesn't seem willing to go higher than OGL_ES3.0.
Manhattan 3.1 is the newest test it can run, and despite an increase of just under 30% over the A10 GPU, it's still ahead of other smartphone SoC GPUs by a sizeable amount.
Apple seems to really promote Metal (2) over OpenGL. They are still listed on the Khronos page as a highest-level member though.
Uhm, Kishonti hasn't updated Metal GfxBench in well over a year. It's their choice not to implement the new stuff under iOS.

I haven't the slightest clue what Metal supports, but it obviously still can't handle something like the Car Chase test in GfxBench. Considering that IMG's Marlowe (Helio X30) gets 12.5 fps in that one (4 clusters @ 800 MHz), the A11 GPU (which could be something like an 8-cluster config at ≥900 MHz, or equivalent) would land somewhere in the Adreno 540 ballpark.
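To spell out that back-of-envelope estimate: it assumes naive linear scaling with cluster count and clock, which real GPUs never achieve, so treat it as an optimistic upper bound:

```swift
// Back-of-envelope from Marlowe's Car Chase result, assuming (naively)
// perfect linear scaling with cluster count and frequency.
let marloweFps = 12.5            // Helio X30: 4 clusters @ 800 MHz
let clusterRatio = 8.0 / 4.0     // hypothetical 8-cluster config
let clockRatio = 900.0 / 800.0   // hypothetical >= 900 MHz clock
print("Estimated Car Chase fps:", marloweFps * clusterRatio * clockRatio)  // ~28 fps
```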
The handful of long-term performance/3.1 results still show a throttling trend of just under 60%, which seems quite high for Apple; at a time when others (QCOM) have decided that sustained performance matters more than peak short-term results, is Apple now taking a 180-degree turn, or what?
Uhm, Kishonti hasn’t updated Metal GfxBench in well over a year. It’s their choice not to implement the new stuff under iOS.
Regarding throttling, we have to look at both the absolute scores and the percentage drops between competing devices. Both are relevant.
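For anyone tabulating long-term results, both numbers fall out of a series of consecutive runs; a quick sketch (the scores below are placeholders, not real iPhone 8 data):

```swift
import Foundation

// Sketch: derive peak score, sustained score, and the percentage drop
// from consecutive benchmark iterations. Sample values are made up.
let runs = [62.0, 58.0, 51.0, 44.0, 39.0, 38.5]      // fps per iteration
let peak = runs.max()!
let sustained = runs.suffix(3).reduce(0, +) / 3.0    // average of the last runs
let drop = (1.0 - sustained / peak) * 100.0
print(String(format: "peak %.1f fps, sustained %.1f fps, drop %.1f%%", peak, sustained, drop))
```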
For sure. I'm still cautious, since early results can contain pitfalls.
The handful of long-term performance/3.1 results still show a throttling trend of just under 60%, which seems quite high for Apple.

I'm surprised Apple isn't doing any heatpipe stuff or similar to cool their chip, maybe a small vapor chamber attached to the chassis back. A number of Androids had heatpipes back when their early-gen 64-bit SoCs ran super hot. Maybe some of them still do.
Considering that IMG's Marlowe (Helio X30) gets 12.5 fps in that one (4 clusters @ 800 MHz), the A11 GPU (which could be something like an 8-cluster config at ≥900 MHz, or equivalent) would land somewhere in the Adreno 540 ballpark.

The A11's GPU has 3 cores.
The A11's GPU has 3 cores.
Considering the A11 has a full node jump (10 nm) plus a supposedly 'new, in-house designed' GPU, colour me not that impressed. That said, what we are looking at is ultrabook-class graphics performance in a smartphone: in absolute terms still pretty good, just not relative to its predecessor and the new technology. The CPU part is basically Intel Kaby Lake 15 W class; very good indeed.

In absolute terms the GPU (and CPU) is faster than that of any other phone. And according to Apple, at significantly lower GPU power draw than its predecessor. Plus, as you point out, it roughly matches Intel's (15 W) best in terms of CPU and absolutely crushes it in terms of GPU at a small fraction of the power. So - I'm impressed.