Next-Gen iPhone & iPhone Nano Speculation

In the keynote they said "iPad mini" and "full-sized iPad" quite a lot before introducing the iPad Air. I think "iPad Air" definitely sounds better than that... and I think it makes sense. Having one device name be a subset of another device name is rarely a good idea, since it can lead to confusion.
Now we just need an iPad Pro :) 12"-13" would be great, as long as Apple finds a way to keep the thickness and weight at or below that of an iPad 4. A4 paper size would be great too (not impossible, but probably quite expensive; spring might be too early for that).

I'm not too concerned about the A7 that's in the iPad Air. There's no reason to doubt that it will be noticeably faster than the A6X. My one wish would be for 2 GB of RAM: since switching between apps is so easy on the iPad, it becomes really annoying when things slow down because stuff got kicked out of RAM, which happened way more often on my iPad 4 than on my iPhone 5 (both have 1 GB of RAM).
 
Especially with iOS 7's new multitasking, which keeps more apps updated in the background than prior iOS versions did, more RAM would probably do as much for the experience as better flash storage or a better SoC right now.
 
Well, it will be interesting to read Anand's review of the iPad Air. He usually digs into the gritty details.

But I think we can conclude that they just ditched the X in their naming scheme going forward.
 
At a guess, based on the information revealed, it looks like it is the same chip, but with a slight improvement in CPU speed and maybe a 50% increase in GPU clock.

This or something similar is a possibility. The iPads' higher power budget would certainly allow the graphics to be clocked significantly higher than in the iPhone. However, they indicated twice the performance both in their graph and in writing, so while a GPU clock hike would seem the most straightforward guess, it would be really nice if it were at least accompanied by some improvements to the memory subsystem.

Mika11 said:
Now we just need an iPad Pro. 12"-13" would be great, as long as Apple finds a way to keep the thickness and weight at or below that of an iPad 4. A4 paper size would be great too (not impossible, but probably quite expensive; spring might be too early for that).
Such a pad would be an interesting attempt to take market share directly from notebooks. Coupled with Apple making some of its in-house productivity apps free, such a thrust would make sense. But maybe that is just my personal desires speaking. A large, light iPad with an attachable keyboard would sell to me, that's for sure. It would cannibalize their own OS X notebooks a bit, but the far larger share would come from Windows systems. There have been rumors of such a device with a higher-resolution display. Whatever the truth of those rumors, it seems Apple is inclined to improve the productivity pedigree of its larger iOS devices.
 
It seems that Apple has decided to make one main SoC for this round and is relying on the A7 to be the chip for all; presumably it's cheaper to make loads of A7s and then just bump the clock speed to suit.

One thing is for sure: seeing as the A6X is more powerful than the A6 (it's clocked higher and has a different GPU), the A7 in both the iPad Air and the Retina iPad mini is almost certainly clocked higher, although I'm not sure by how much.

Going by the main differences in the progression from A5 to A5X to A6X, and now to the A7, I'm pretty positive we will see a significant bump over the iPhone 5S's A7, by a minimum of 50%.

We could end up with an exceptionally powerful (graphically) tablet that will be head and shoulders above anything else on the market.
 
I've had some thoughts on the lack of an A7X and would like to run them past you guys.

My theory is that the X chips were driven by the need for more bandwidth rather than just more performance. If you remember, the A5 at the time had 6.4 GB/s of bandwidth. Retina displays are bandwidth hungry; for a 2048×1536 display the frame buffer alone could take something approaching 2 GB/s, if we take into account some overdraw. 6.4 GB/s was clearly not enough, and the only way they could achieve higher bandwidth at the time was by moving to a 128-bit memory interface. That requires a lot of pads, which in turn requires a large die.

Now that we have fast LPDDR3 memory, you can achieve very high bandwidth by using higher-clocked chips on a 64-bit bus (the Exynos 5420 has a bandwidth of 15 GB/s on a 2×32-bit memory bus, for example). Thus the big die is no longer required, and performance can be increased by clocking the GPU higher.

In short, the X chips were a short-term measure while they waited for memory technology to catch up with what's needed to drive a Retina display.
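
To put rough numbers on that claim (a back-of-the-envelope sketch; the 32-bit color, 60 fps, and 2x overdraw figures are my assumptions, not anything measured):

```python
# Rough estimate of the display traffic behind the "approaching 2 GB/s" figure.
# Assumed: 32-bit color, 60 fps, and ~2x overdraw; none of these are confirmed.

width, height = 2048, 1536     # iPad Retina resolution
bytes_per_pixel = 4            # 32-bit RGBA (assumption)
fps = 60                       # assumed refresh target
overdraw = 2.0                 # each pixel touched ~twice per frame (assumption)

frame_bytes = width * height * bytes_per_pixel   # ~12 MiB per frame
traffic = frame_bytes * fps * overdraw           # bytes per second

print(f"frame buffer: {frame_bytes / 2**20:.1f} MiB")
print(f"display traffic: {traffic / 1e9:.2f} GB/s of the A5's 6.4 GB/s total")
# -> 12.0 MiB per frame, ~1.51 GB/s of traffic
```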
 
LPDDR3 doesn't seem fast enough to allow a 64-bit memory interface to replace a 128-bit one. The A6X used LPDDR2-1066 with a 128-bit memory interface for 17 GB/s of memory bandwidth. The fastest available LPDDR3-1600 on a 64-bit interface only provides about 12.8 GB/s. DDR3L-1866, as you note, can provide 15 GB/s, but that is still below what the A6X has, and it is more power hungry than LPDDR3. Even if Apple has early access to faster LPDDR3 or DDR3L variants, they may be able to match A6X bandwidth on a 64-bit bus, but they really ought to be aiming to increase memory bandwidth over the previous generation to feed the 2x faster CPU and GPU.

Chipworks showed the A7 has 2 large memory bus pads and 2 small memory bus pads. I don't think anyone has yet come up with a good explanation for what they represent, but that would seem to be important to explaining the new iPad's memory bandwidth situation.
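
For anyone who wants to check the maths, all these peak figures fall out of one line, transfer rate times bus width in bytes; a quick sketch with the configurations mentioned above:

```python
# Peak DRAM bandwidth = transfers/s * bytes per transfer (bus width / 8),
# reported in decimal GB/s. Configurations are the ones discussed above.

def peak_gb_s(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

configs = [
    ("LPDDR2-1066, 128-bit (A6X)", 1066, 128),
    ("LPDDR3-1600, 64-bit",        1600, 64),
    ("DDR3L-1866, 64-bit",         1866, 64),
]

for name, rate, bits in configs:
    print(f"{name}: {peak_gb_s(rate, bits):.1f} GB/s")
# -> 17.1, 12.8 and 14.9 GB/s respectively
```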
 
Yes, it will be a regression in bandwidth, but they might be willing to take the hit this generation; 15 GB/s vs 17 GB/s is close enough. The benefits are obvious: one design across all their new iOS devices and a lower cost per chip.

Soon we will have faster memory modules, and technologies like stacked DRAM will make the issue of pad constraints on smaller dies go away. They will be back on course, increasing bandwidth every generation, in no time.
 
Any guesses on which baseband they're using?

They highlighted support for more LTE networks and the specs page lists 14 bands, more than any of the iPhone 5S SKUs.

I know Qualcomm was talking about a "global" LTE baseband.
https://twitter.com/nerdtalker/status/392862150521126913

Brian Klug said:
Yeah new iPads are essentially confirmed MDM9615+WTR1605L based on some digging
It's still the MDM9615, based on firmware version information in the settings of the hands-on units and IPSW analysis.

Well, it will be interesting to read Anand's review of the iPad Air. He usually digs into the gritty details.

But I think we can conclude that they just ditched the X in their naming scheme going forward.
https://twitter.com/nerdtalker/status/392862612582457345

Brian Klug said:
Also references to S5L8960x SoC still, which is the A7, so there’s no funny business about it being different silicon, it’s the same
From IPSW analysis it looks like the new iPads are using the same S5L8960X A7 silicon as the iPhone 5S, and not some silent variant. So the performance differences would seem to come down to different clock speeds, enabled functional blocks, and the RAM they are paired with.

Yes, it will be a regression in bandwidth, but they might be willing to take the hit this generation; 15 GB/s vs 17 GB/s is close enough. The benefits are obvious: one design across all their new iOS devices and a lower cost per chip.

Soon we will have faster memory modules, and technologies like stacked DRAM will make the issue of pad constraints on smaller dies go away. They will be back on course, increasing bandwidth every generation, in no time.
Apple likes their profit margin, but I just don't think they're likely to compromise or regress on the experience of their product just to pad it. Aggressive GPU performance, backed by a lot of memory bandwidth from the larger 128-bit memory bus, has been an important differentiator over competitors: it's what makes the Retina display in the iPad usable at native resolution without paring back graphical effects or complexity to accommodate the higher pixel count.

I wonder if a 2×32-bit + 2×16-bit memory interface, for an effective 96-bit memory bus, is feasible? It could still be a unified memory system; the CPU would just preferentially use the memory attached to the 2×16-bit interfaces that bracket it, for latency benefits, while the GPU preferentially uses the memory attached to the 2×32-bit interfaces. LPDDR3-1600 on a 96-bit memory bus gives 19.2 GB/s of memory bandwidth, which is still a modest improvement over the 17 GB/s of the A6X. The 2×16-bit interfaces could go unused in the iPhone 5S to save power and avoid stacking 2 extra DRAMs, since the extra memory bandwidth isn't required there.
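
Running the same peak-bandwidth arithmetic on that hypothetical split bus (to be clear, the 96-bit configuration is pure speculation on my part):

```python
# Speculative 96-bit aggregate bus (2x32 + 2x16) with LPDDR3-1600;
# nothing here is confirmed, it's just the peak-bandwidth arithmetic.
bw = 1600e6 * (96 / 8) / 1e9   # transfers/s * bytes per transfer
print(f"96-bit LPDDR3-1600: {bw:.1f} GB/s vs ~17.1 GB/s for the A6X")
# -> 19.2 GB/s, a modest step up instead of a regression
```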
 
Even if the A7 silicon is largely the same between the iPad Air and the iPhone 5S, the GPU and/or CPU operating frequencies are surely higher on the iPad Air (and enough so that performance is significantly ahead of the iPad 4).

Anyway, it's a pretty smart move by Apple to use this new form factor with the iPad Air, because it opens the door to larger-screen and higher-performance iPad variants in the future (at a higher cost, of course) in a physical size similar to many high-end Android/Windows tablets today.
 
LPDDR3 doesn't seem fast enough to allow a 64-bit memory interface to replace a 128-bit one. The A6X used LPDDR2-1066 with a 128-bit memory interface for 17 GB/s of memory bandwidth. The fastest available LPDDR3-1600 on a 64-bit interface only provides about 12.8 GB/s. DDR3L-1866, as you note, can provide 15 GB/s, but that is still below what the A6X has, and it is more power hungry than LPDDR3. Even if Apple has early access to faster LPDDR3 or DDR3L variants, they may be able to match A6X bandwidth on a 64-bit bus, but they really ought to be aiming to increase memory bandwidth over the previous generation to feed the 2x faster CPU and GPU.

Chipworks showed the A7 has 2 large memory bus pads and 2 small memory bus pads. I don't think anyone has yet come up with a good explanation for what they represent, but that would seem to be important to explaining the new iPad's memory bandwidth situation.

The ~4 MB of SRAM sandwiched between the GPU block and one of the memory interfaces could somewhat compensate for a loss of memory bandwidth compared to the A6X, depending on how it is utilised.
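
How much it compensates depends entirely on the hit rate. A toy model, with made-up hit rate and SRAM bandwidth figures since neither is public:

```python
# Toy model: if a fraction of memory traffic hits a 4 MB on-die SRAM, the
# rest goes to DRAM. Serving times add, so effective bandwidth works out to
# a weighted harmonic mean. Hit rate and SRAM bandwidth are placeholders.

dram_bw = 12.8   # GB/s, e.g. LPDDR3-1600 on a 64-bit bus
sram_bw = 50.0   # GB/s, placeholder for a wide on-die SRAM
hit_rate = 0.3   # fraction of traffic served from SRAM (pure assumption)

effective = 1.0 / (hit_rate / sram_bw + (1 - hit_rate) / dram_bw)
print(f"effective bandwidth: {effective:.1f} GB/s")  # ~16.5 GB/s here
```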
 
When Apple finally increases the size of the iPhone display next year, they are going to have to up the resolution or else it won't be Retina anymore. There won't be much reason to have separate A-series chips then.
 
It's still the MDM9615, based on firmware version information in the settings of the hands-on units and IPSW analysis.

Disappointing. Qualcomm touted the RF360 in February, and it looks like it hasn't made it into any device yet.

http://www.qualcomm.com/chipsets/gobi/rf-solutions/qualcomm-rf360-front-end


Maybe Qualcomm wants a premium, or can't produce enough of the new chipsets, which are also supposed to be more efficient.

Would be nice to see it get into a device this year. Maybe the Nexus 5, but I'm not holding my breath.
 
Unlikely; the leaked Nexus 5 (LG D821) service manual listed the inclusion of the QFE1100 envelope tracker, but not the entire RF360 stack.
 
That's really interesting! I didn't know that Apple updated hardware drivers like that in between major OS releases...
 
Apple have updated the GPU drivers in 7.0.3, which has resulted in a decent ~10% boost to both T-Rex and Egypt HD scores. The drivers were listed as 27.10 and are now 27.11.4, but there's no mention of OpenGL ES 3.0.

http://gfxbench.com/device.jsp?benchmark=gfx27&D=Apple iPhone 5S

OpenGL ES 2.0 Apple A7 GPU - 27.11.4

The interesting thing is that in the low-level tests, the only ones that got a boost from the driver change are the on-screen textured tests (up by about 25%); the off-screen ones are identical to the previous driver.

What driver improvement would result in the on-screen low-level tests getting a 25% improvement but the off-screen tests being identical? Is there a chance that the GLBenchmark site is showing old results for the low-level off-screen tests?

Overall, the 5S with the new drivers gets an 8% improvement off-screen in the 2.7 bench, and around 5% on-screen.

It got over a 12% improvement in the 2.5 off-screen bench too.
 
Perhaps no one has submitted new off-screen sub-test results yet.

Perhaps the ~50% improvement in graphics for the new iPads' A7 versus the 5S's A7 isn't a bump all the way in peak clock rate, but rather a combination of a bump in average GPU clock rate as well as peak clock. In other words, when the workload fluctuates on the new iPads, the thresholds at which the chip decides to ramp down in frequency/voltage are relaxed compared to the 5S, so it sits at a higher performance level more of the time.

Recall that Apple claimed the A5 in the iPad 2 was 9x better in graphics than the A4 in the iPad, yet the A5 in the iPhone 4S was only claimed to be 7x the graphics of the iPhone 4's A4. Fill rate results in benchmarks and other performance correlations (if I'm recalling correctly and not just filling in my own blanks with what suits my theory here) seemed to support the idea that the iPads' GPUs shared a 250 MHz clock speed and the iPhones' GPUs shared a 200 MHz clock speed. Yet the benchmark game tests (again, assuming I'm not misremembering) did back up Apple's claim that the "overall" performance gain between the iPhones was somewhat less than the gain between the iPads.

The 9x figure makes sense (72 FLOPs/clock for the 543MP2 versus 8 FLOPs/clock for the SGX535), but that would've held true for both the iPhones and the iPads, assuming the GPU clock rates were indeed kept the same between generations within each product line. The lower clock speed of the iPhones' CPUs, though just a proportional decrease from the iPads' in line with the 20% reduction on the GPU side, might've been limiting the gain on the graphics side between the iPhone generations. But my theory has been that the larger form factor of the iPad also allowed it to be less aggressive than the iPhone 4S in ramping down voltages on its SoC to conserve power when moving to the high-power configuration of the 543MP2. So maybe the situation will be the same between the A7s of the newest iPhone and iPads.
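
For what it's worth, here is the arithmetic behind that, using the speculated FLOPs-per-clock and clock figures from above (none of which Apple has confirmed):

```python
# FLOPs-per-clock ratio of SGX543MP2 vs SGX535, scaled by the speculated
# clocks above (250 MHz in the iPads, 200 MHz in the iPhones). If the GPU
# clock is unchanged within each product line, it cancels out entirely.

flops_543mp2, flops_535 = 72, 8   # FLOPs per clock, as cited above

for device, mhz in [("iPad line", 250), ("iPhone line", 200)]:
    ratio = (flops_543mp2 * mhz) / (flops_535 * mhz)
    print(f"{device}: {ratio:.0f}x")
# Both print 9x, so raw FLOPs alone can't produce the 7x iPhone claim;
# something else (CPU limits, DVFS behaviour) has to account for the gap.
```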
 