Apple A8 and A8X

Looking at Apple's design cadence, it's consistent that the CPU/GPU improvements in the A8 are more modest; after all, it's the "s" iPhone refreshes that focus on speed. The iPhone 3G used the same SoC as the original iPhone, with no performance improvements. The iPhone 4's A4 used the same CPU/GPU architecture as the iPhone 3GS and was only up to 50% faster despite driving 4x the pixels, so it's probably the most comparable. The iPhone 5's A6 is the exception with a 2x faster CPU and GPU, but the GPU was the same PowerVR Series 5XT architecture as the iPhone 4s's A5, so no new features, just faster. The A8 does move to a different architecture, Series 6XT with new features, a transition that previously only occurred in iPhone "s" refreshes, but the raw performance increase is a more modest 50%.

FWIW, Apple had been claiming a 2x CPU improvement every generation, from the A4 to the A5 and onward, until now. The A5 did it with dual cores and OoO execution. The A6 did it with a big clock boost and a wider design. The A7 did it with an even wider design and an ISA jump. This time they simply couldn't go wider efficiently, and a clock boost seems out of character with their historical power/speed balance.

Is it possible that all A8s for the iPhone are manufactured by TSMC and the A8s claimed to be made by Samsung are actually for the rumored iPad Air update later this year, and are somewhat different from the iPhone A8s? 40% seems too much for an iPad-only chip though.

It helps in terms of schedule by a few months at most, so I don't see it as a huge advantage. It will be interesting to see if they have the same GPU as seen in A7, though.
 
Even though this is Apple, they still need to hit certain targets with respect to die size, yield, and power consumption, especially as fab process node shrinks become fewer and farther between. And let's not forget that historically, for two full generations before the A7, the iPhone had a more conservative GPU and memory controller design compared to the iPad. My guess is that we will see a six-cluster GX6650 in a 12" iPad "Pro" within the next three to six months.

If a 12" iPad Pro with an active digitizer comes out, I'd like it.
 
I also wonder whether, once you get to 14nm, if an IMG GPU saves you 20% power compared to an Intel GPU for a particular task, the absolute power difference is less significant than it was at 45nm or 22nm, for example.

No.

The SoC TDP remains the same; hence the GPU power budget remains the same, roughly speaking. Thus any power-efficiency difference converts directly into a performance difference.
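
To put toy numbers on that (completely made up, just to show the arithmetic):

```swift
// Toy numbers only, to illustrate the fixed-budget argument above.
let gpuBudgetWatts = 2.0            // assumed GPU slice of the SoC TDP
let joulesPerFrameEfficient = 0.040 // hypothetical GPU that is 20% cheaper per frame
let joulesPerFrameBaseline  = 0.050 // hypothetical baseline GPU

let fpsEfficient = gpuBudgetWatts / joulesPerFrameEfficient // 50 fps
let fpsBaseline  = gpuBudgetWatts / joulesPerFrameBaseline  // 40 fps
print(fpsEfficient / fpsBaseline)   // 1.25 — a 20% power saving becomes ~25% more performance at the same power
```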
 
If a 12" iPad Pro with an active digitizer comes out, I'd like it.
I wouldn't hold my breath about anything involving a pen from Apple.
I'm hopeful the force-sensitive touchscreen tech from the iWatch will eventually make it into the iPad. Apple may not release their own stylus, but it would enable pressure-sensitive third-party pens. Seeing as the iWatch isn't even released yet, I'm guessing there are no details on how its force sensitivity works? Is it like the NVIDIA Tegra Note?

This discussion might be getting off-topic for this thread though.
 
AFAIK, VoLTE should be handled entirely in the modem/transceiver/baseband and not in the apps processor, except as a software solution.

I could be wrong about this.

You most likely aren't wrong; if I read it twice from different folks in the same thread, it's hardly a coincidence. Oh, and blah to the software solution :rolleyes:
 
Why would they say FaceTime over cellular? Unless the coding is software-based and they use it for bandwidth reduction only.

Probably because MPEG-4 Part 2 doesn't need in-loop deblocking or CABAC and uses much less power per pixel. (Edit: I just noticed patsu linked to the compare page, which doesn't detail the codec discrepancy between Wi-Fi and cellular.)

I think it's more reasonable to assume that they use MPEG-4 when convenient (e.g. when bandwidth is plentiful and power-cheap). Otherwise, it's H.264 when paired with older devices and HEVC when paired with A8 devices. The modem can then relax and offset the encoder/decoder's increased power profile.

Though even ignoring which codec they use and don't use, real-time software encoding with a brand-new codec, with limited time for computational and rate-distortion optimizations, is ridiculously punishing on a mobile platform.
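
Just to make my guess above concrete, here's a rough sketch of the selection logic I have in mind; the names and inputs are entirely made up, and this isn't Apple's actual behaviour:

```swift
// Hypothetical sketch of the codec-selection heuristic described above.
// None of these names come from Apple; they're for illustration only.
enum VideoCodec { case mpeg4Part2, h264, hevc }

func pickFaceTimeCodec(bandwidthIsPlentiful: Bool, peerSupportsHEVC: Bool) -> VideoCodec {
    if bandwidthIsPlentiful {
        // Plenty of bandwidth: the cheap-to-encode codec is good enough.
        return .mpeg4Part2
    }
    // Constrained link (e.g. cellular): spend encoder/decoder power to save
    // modem power, picking HEVC only when both ends are A8-class.
    return peerSupportsHEVC ? .hevc : .h264
}

// Example: an A8 device calling another A8 device over a constrained cellular link.
print(pickFaceTimeCodec(bandwidthIsPlentiful: false, peerSupportsHEVC: true)) // hevc
```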
 
I wonder when we will hear what the difference is in Geekbench and GFXBench results when they are tailored to run using Metal rather than OpenGL. I'll be interested to see if the framerates go up considerably.
 
I wonder when we will hear what the difference is in Geekbench and GFXBench results when they are tailored to run using Metal rather than OpenGL. I'll be interested to see if the framerates go up considerably.

The whole point of these tests is to test performance with standard APIs.

There is no way they are going to do Metal versions of these, IMO.
 
The whole point of these tests is to test performance with standard APIs.

There is no way they are going to do Metal versions of these, IMO.

I know the basis is to compare using standard APIs across multiple platforms; I just thought it would be interesting to see how the performance changes when using Metal compared to OpenGL. I see that Geekbench have been developing tools to check it out.

https://twitter.com/jfpoole/status/513850325649072129
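
Just to make the idea concrete, this is roughly the shape of a Metal-side micro-timing loop. It's my own sketch (it only times empty command-buffer submission, i.e. API/driver overhead), not anything from Geekbench or GFXBench:

```swift
import Metal
import QuartzCore   // for CACurrentMediaTime()

// Minimal sketch: measure how quickly empty command buffers can be
// submitted and drained on a single queue. A real benchmark would
// obviously encode actual render or compute work instead.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this device")
}

let iterations = 1_000
let start = CACurrentMediaTime()
for _ in 0..<iterations {
    queue.makeCommandBuffer()?.commit()
}
// Command buffers on one queue execute in order, so waiting on a final
// buffer guarantees everything before it has finished.
if let fence = queue.makeCommandBuffer() {
    fence.commit()
    fence.waitUntilCompleted()
}
let elapsedMs = (CACurrentMediaTime() - start) * 1000
print("\(iterations) command buffers in \(elapsedMs) ms")
```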
 
I know the basis is to compare using standard APIs across multiple platforms; I just thought it would be interesting to see how the performance changes when using Metal compared to OpenGL. I see that Geekbench have been developing tools to check it out.

https://twitter.com/jfpoole/status/513850325649072129

I totally agree that it would be interesting to see how Metal improves performance over GL ES 3.0 when doing the same tasks, which would give a good indication of the performance improvements that developers might see if designing for Metal.

However, glbench isn't about comparing relative API performance; it's about comparing devices running the same applications using the same APIs. I know next to nothing about graphics programming, but the little I've read suggests that it is far from straightforward to switch from ES 3.0 to Metal, and it certainly wouldn't be on the radar of the benchmark people, IMO.
 
I totally agree that it would be interesting to see how Metal improves performance over GL ES 3.0 when doing the same tasks, which would give a good indication of the performance improvements that developers might see if designing for Metal.

However, glbench isn't about comparing relative API performance; it's about comparing devices running the same applications using the same APIs. I know next to nothing about graphics programming, but the little I've read suggests that it is far from straightforward to switch from ES 3.0 to Metal, and it certainly wouldn't be on the radar of the benchmark people, IMO.

Yeah, maybe not, but I don't think it will be long before we see real-world differences in games using Metal. We can already see the changes in games like Asphalt and Plunder Pirates, and these have been reworked to use Metal rather than OpenGL in only a few months.

What will be interesting is whether we start seeing an influx of Metal-enhanced games and whether it will have a positive effect for iOS gaming over Android gaming.
 
I know next to nothing about graphics programming, but the little I've read suggests that it is far from straightforward to switch from ES 3.0 to Metal, and it certainly wouldn't be on the radar of the benchmark people, IMO.

If you use a game engine (such as Unity 3D), then it can be relatively straightforward to switch to Metal, if the engine supports it.
 