Apple A9 SoC

Point taken on the fill rate test. When my phone comes in the mail today, I'll run the on-screen version to have a better look at what it's doing. Z/stencil performance is always strong with PowerVR; Series 7 probably even more so.

I like Apple's approach of building out the memory pools and interfaces for the CPU and GPU to this degree. They really have strong implementations of the cores they use and/or design.

GPU clock seems identical between the Plus and the regular variant this time around. The GL ES driver has seen some decent efficiency improvements recently, and the gap between GL ES and Metal in the Manhattan test, which wouldn't normally be expected to show a very significant difference given how GPU- and shader-bound it tries to be, is quite minimal, especially on the new iPhones compared with the GL ES vs Metal gap on the Air 2.
 
Nice to have a preliminary confirmation of 8 MB L3. I really look forward to a latency analysis. L2 has increased by a factor of three, L3 presumably by a factor of two, while clocks have seen a significant boost. Have they been able to maintain latency in terms of cycles?
Also, since the A8X had twice the L2 of the A8, could it be that the A9X has some extra enhancements in store for us, above and beyond what we see in the A9?
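As a rough sanity check on what that would mean in absolute terms, here's a tiny sketch. Only the ~1.4 GHz (A8) and ~1.85 GHz (A9) clocks are real; the 20-cycle L2 latency is a made-up placeholder, since that's exactly the number we're waiting on.

Code:
# Hypothetical sketch: convert a cache latency in cycles to nanoseconds.
# The clocks are the published A8/A9 CPU clocks; the cycle count is assumed,
# not a measured value.
def latency_ns(cycles, clock_ghz):
    return cycles / clock_ghz  # 1 cycle at 1 GHz = 1 ns

a8_clock_ghz, a9_clock_ghz = 1.4, 1.85
l2_cycles = 20  # placeholder: same cycle latency assumed on both chips

print(f"A8 L2: {latency_ns(l2_cycles, a8_clock_ghz):.1f} ns")  # ~14.3 ns
print(f"A9 L2: {latency_ns(l2_cycles, a9_clock_ghz):.1f} ns")  # ~10.8 ns

So even if the bigger L2/L3 cost a few extra cycles, absolute latency in nanoseconds could end up roughly flat or better.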
 
Heh, iOS 9.1b2 iPhone 5s vs 6s in Basemark Metal:

2ok6BgQ.png


xV6FmJx.png
 
Found these on the AnandTech forums. Kraken and SunSpider are in ms (lower is better); Octane is a score (higher is better).

Device ----------- Kraken / SunSpider / Octane
----------------------------------------------------
iPad Air 2 (9.0) -- 2420 ms / 288 ms / 10700
iPhone 6s --------- 1719.2 ms / 215.1 ms / 16429

Does anybody know if the iPhone 6s uses an eMMC or UFS 2.0 NAND interface?
 
Heh, iOS 9.1b2 iPhone 5s vs 6s in Basemark Metal:
I don't understand these bar graphs. Which device is supposed to correspond to which screenshot? And is a higher score better or worse? I ask because the device with bars at 100-53-100-100 apparently ends up with an overall score of only 152, while the one at 65-84-92-81 gets a whopping 917... :???:
 
The top one is probably the iPhone 5s and the bottom one the iPhone 6s, since the image resolutions (1136x640 and 1334x750 respectively) match those devices' display resolutions.
 
In Basemark, the Overall score scales linearly with the offscreen fps. Higher is better.

The Baseline scores are poorly named: they measure the device's onscreen scores against itself with certain features enabled or disabled. Baseline scores are essentially meaningless across devices, unless you're a game developer deciding whether to introduce new effects (and your FPS is already low).
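To make the distinction concrete, here's a minimal sketch. The 3.5x scale factor and the fps inputs are invented (not Basemark's actual formula), chosen only so the outputs land near the 152 and 917 overall scores mentioned above; the point is that each Baseline bar compares a device against its own reference run, so the bars were never meant to add up to the Overall.

Code:
# Illustrative only: the scale factor and fps numbers are assumptions,
# not Basemark's real scoring formula.
OVERALL_SCALE = 3.5  # assumed constant: Overall = offscreen_fps * scale

def overall_score(offscreen_fps):
    return offscreen_fps * OVERALL_SCALE

def baseline_pct(onscreen_fps, own_reference_fps):
    # each Baseline bar: this device's run vs. its *own* reference run
    return 100.0 * onscreen_fps / own_reference_fps

print(round(overall_score(43.4)))    # ~152 (slower phone)
print(round(overall_score(262.0)))   # ~917 (faster phone)
print(baseline_pct(20.0, 20.0))      # 100.0 -> a "full" bar even on a slow device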
 
With all that pixel-pushing power, it's a shame it's stuck at 1080p.
My S6 has a 1440p AMOLED panel, almost twice the pixel count.
You should see them side by side and take your pick.
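For what it's worth, the raw arithmetic on the panels being compared (published resolutions, simple pixel counts):

Code:
# Published panel resolutions; the ratios are just arithmetic.
panels = {
    "Galaxy S6 (2560x1440)":      2560 * 1440,  # 3,686,400 px
    "iPhone 6s Plus (1920x1080)": 1920 * 1080,  # 2,073,600 px
    "iPhone 6s (1334x750)":       1334 * 750,   # 1,000,500 px
}
base = panels["iPhone 6s Plus (1920x1080)"]
for name, px in panels.items():
    print(f"{name}: {px:,} px ({px / base:.2f}x of 1080p)")

So the S6 pushes about 1.78x the pixels of a 1080p panel, and roughly 3.7x those of the smaller 6s.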
 
With all that pixel-pushing power, it's a shame it's stuck at 1080p.
My S6 has a 1440p AMOLED panel, almost twice the pixel count.
You should see them side by side and take your pick.

Well, I doubt those in the Apple ecosystem buy solely on hardware specifications.
 
The NAND interface was upgraded from PCIe v1 to PCIe v2 in the A9.
To be honest, I don't know what that means; do you have any corroborating evidence for that statement?

I understand the rudimentary concepts of inter-chip interfaces in modern SoCs, things like SDIO, HSIC, D-PHY and the newer M-PHY, but I'm far from an expert, to say the least! I understand that the latest SoCs have moved from SDIO to PCIe connections for Wi-Fi chips. MIPI M-PHY allows for a low-power M-PCIe interface, so in theory Apple could bypass the JEDEC-standard eMMC or UFS 2.0 and use some form of PCIe NAND interface. But what type of controller would it use, SATA or perhaps NVMe? Those controllers use either FPGAs or ASICs that I don't believe are low-power enough for a smartphone.

The NAND found in the iPhone 6s during the iFixit teardown is a 16 GB Toshiba part (THGBX5G7D2KLFXG). According to Toshiba's product catalog, all items beginning with "THGB" belong to the "flash memories with integrated controller" lineup and are either eMMC or UFS 2.0 products. I'd love to be wrong and for Apple to have created a unique NAND storage interface, as that would be interesting, but I still think it's regular JEDEC eMMC or UFS 2.0.
0606_LPLowdown_F1.gif
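For rough context on why the interface question matters, here are approximate theoretical ceilings for the candidates being discussed. These are ballpark spec-level figures, not measurements of this device (which is exactly what's still unknown here):

Code:
# Approximate theoretical interface bandwidth ceilings (spec-level, rounded).
interface_mb_s = {
    "eMMC 5.0 (HS400)":          400,   # 8-bit DDR @ 200 MHz
    "UFS 2.0 (HS-G3, 2 lanes)": 1160,   # ~5.8 Gbps/lane raw, 8b/10b coding
    "PCIe 2.0 x1":               500,   # 5 GT/s, 8b/10b coding
    "PCIe 2.0 x2":              1000,
}
for name, mb_s in interface_mb_s.items():
    print(f"{name}: ~{mb_s} MB/s")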
 
A user on AnandTech forums ran GFXBench 3.0.38 on an iPhone 6s while it was hot and then while it was cold.

Code:
                      Cold score  Hot score  Hot/Cold ratio
Manhattan             56.4 FPS    54.1 FPS   0.959
Manhattan offscreen   39.2 FPS    32.2 FPS   0.822
    T-Rex             59.7 FPS    58.8 FPS   0.986
    T-Rex offscreen   78.8 FPS    65.6 FPS   0.833

I'm not very familiar with the GFXBench tests, so why are the onscreen ratios close to 1 even though the offscreen ratios are much lower?
 
A user on AnandTech forums ran GFXBench 3.0.38 on an iPhone 6s while it was hot and then while it was cold.

Code:
                      Cold score  Hot score  Hot/Cold ratio
Manhattan             56.4 FPS    54.1 FPS   0.959
Manhattan offscreen   39.2 FPS    32.2 FPS   0.822
    T-Rex             59.7 FPS    58.8 FPS   0.986
    T-Rex offscreen   78.8 FPS    65.6 FPS   0.833

I'm not very familiar with the GFXBench tests, so why are the onscreen ratios close to 1 even though the offscreen ratios are much lower?

IIRC, the onscreen tests are capped at 60 fps, so given the lower-than-1080p resolution of the iPhone 6s it's easily hitting that cap in T-Rex and probably bumping against it in Manhattan.
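A quick sketch of why the cap hides throttling. The "uncapped" onscreen rates below are invented, simply assuming the onscreen workload throttles by a similar ~17% as the offscreen runs:

Code:
# Rough illustration: a 60 fps vsync cap clips whatever headroom the GPU has,
# so throttling only shows up once the uncapped rate falls below 60.
VSYNC_CAP = 60.0

def reported_onscreen(uncapped_fps):
    return min(uncapped_fps, VSYNC_CAP)

# Hypothetical uncapped onscreen rates (not measured): plenty of headroom cold,
# ~17% lower hot, mirroring the offscreen ratio.
cold_uncapped, hot_uncapped = 90.0, 75.0
print(reported_onscreen(cold_uncapped))  # 60.0
print(reported_onscreen(hot_uncapped))   # 60.0 -> hot/cold ratio stays ~1.0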
 
A user on AnandTech forums ran GFXBench 3.0.38 on an iPhone 6s while it was hot and then while it was cold.

Code:
                      Cold score  Hot score  Hot/Cold ratio
Manhattan             56.4 FPS    54.1 FPS   0.959
Manhattan offscreen   39.2 FPS    32.2 FPS   0.822
    T-Rex             59.7 FPS    58.8 FPS   0.986
    T-Rex offscreen   78.8 FPS    65.6 FPS   0.833

I'm not very familiar with the GFXBench tests, so why are the onscreen ratios close to 1 even though the offscreen ratios are much lower?
Onscreen results are capped at 60 fps, so the GPU isn't being fully utilized in either the cold or the hot scenario for those scores.

EDIT: Turbotab beat me to it.
 
A user on AnandTech forums ran GFXBench 3.0.38 on an iPhone 6s while it was hot and then while it was cold.

Code:
                      Cold score  Hot score  Hot/Cold ratio
Manhattan             56.4 FPS    54.1 FPS   0.959
Manhattan offscreen   39.2 FPS    32.2 FPS   0.822
    T-Rex             59.7 FPS    58.8 FPS   0.986
    T-Rex offscreen   78.8 FPS    65.6 FPS   0.833

I'm not very familiar with the GFXBench tests, so why are the onscreen ratios close to 1 even though the offscreen ratios are much lower?
Vsync cap. On-screen does less work than off-screen.

Edit: Tripppple-combo!
 
https://twitter.com/jfpoole/status/647653497350037504

As an informative illustration of how benchmarks like Geekbench can be compiler-dependent, they found that performance improves by 8% just from using a compiler that is two years newer.

Things will be even more in flux going forward now that Bitcode is enabled by default, since the App Store can rebuild apps with the latest compiler when a user downloads or updates them.
 