...And the thumbprint reader from the 5S. Having to tap in a 4-digit code to unlock is SO 1960s!

My one wish would be for 2GB of RAM.
At a guess, based on the information revealed, it looks like it is the same chip, but with a slight improvement in CPU speed and maybe a 50% increase in GPU clock.
Mika11 said:
Now we just need an iPad Pro. 12"-13" would be great, as long as Apple finds a way to keep the thickness and weight below or equal to an iPad 4. A4 paper size would be great (not impossible, but probably quite expensive; although spring might be too early for that).

Such a pad would be an interesting attempt to cut market share from notebooks directly. Coupled with Apple making some in-house productivity apps free, such a thrust would make sense. But maybe that is just my personal desires speaking. A large, light iPad with an attachable keyboard would sell to me, that's for sure. It would cannibalize their own OS X notebooks a bit, but the far larger share would come from Windows systems. There have been rumors of such a device with a higher-resolution display. Whatever the truth of those rumors, Apple seems inclined to improve the productivity pedigree of their larger iOS devices.
LPDDR3 doesn't seem fast enough to allow a 64-bit memory interface to replace a 128-bit one. The A6X used LPDDR2-1066 on a 128-bit interface for 17 GB/s of memory bandwidth. The fastest available LPDDR3-1600 on a 64-bit interface provides only 13 GB/s. DDR3L-1866, as you note, can provide 15 GB/s, but that is still below what the A6X has, and it is more power hungry than LPDDR3. Even if Apple has early access to faster LPDDR3 or DDR3L variants, they may be able to match A6X bandwidth on a 64-bit bus, but they really ought to be aiming to increase memory bandwidth over the previous generation to feed the 2x faster CPU and GPU.

I've had some thoughts on the lack of an A7X and would like to run it past you guys.
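As a sanity check on those numbers, peak bandwidth is just bus width times transfer rate. A quick sketch (decimal GB, standard JEDEC transfer rates; nothing here is Apple-specific or insider info):

```python
def peak_bw_gbs(bus_bits, mts):
    """Peak memory bandwidth in GB/s: bus width in bytes x transfers/sec."""
    return bus_bits / 8 * mts * 1e6 / 1e9

print(peak_bw_gbs(128, 1066))  # A6X: LPDDR2-1066, 128-bit -> ~17 GB/s
print(peak_bw_gbs(64, 1600))   # LPDDR3-1600, 64-bit      -> 12.8 GB/s
print(peak_bw_gbs(64, 1866))   # DDR3L-1866, 64-bit       -> ~14.9 GB/s
```

The same function also reproduces the Exynos 5420 figure mentioned below (2x32-bit LPDDR3-1866 works out to ~14.9 GB/s).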
My theory is that the X chips were driven by the need for more bandwidth rather than just more performance. The A5 at the time had 6.4 GB/s of bandwidth. Retina displays are bandwidth hungry; for a 2048×1536 display, the frame buffer alone could take something approaching 2 GB/s once you account for some overdraw. 6.4 GB/s was clearly not enough, and the only way they could achieve higher bandwidth at the time was by moving to a 128-bit memory interface. That requires a lot of pads, which in turn requires a large die.
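A rough back-of-the-envelope for that frame-buffer figure (the overdraw/compositing factor is my own guess, not a measured value):

```python
# Rough retina frame-buffer traffic estimate.
px = 2048 * 1536      # iPad retina resolution
bytes_per_px = 4      # 32-bit RGBA
fps = 60
overdraw = 2.5        # assumed: average overdraw plus compositing reads

gbs = px * bytes_per_px * fps * overdraw / 1e9
print(round(gbs, 2))  # ~1.89 GB/s, in line with "approaching 2 GB/s"
```

Even at overdraw of 1 (a single write per pixel) that's ~0.75 GB/s, a big slice of the A5's 6.4 GB/s before the GPU reads a single texture.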
Now that we have fast LPDDR3 memory, you can achieve very high bandwidth by using higher clocked chips on a 64 bit bus (the Exynos 5420 has a bandwidth of 15GB/s on 2×32bit memory bus for example). Thus the big die is no longer required and performance can be increased by clocking the GPU higher.
In short, the X chips were a short term measure while they waited for memory technology to catch up with what’s needed to drive a retina display.
https://twitter.com/nerdtalker/status/392862150521126913

Any guesses on which baseband they're using?
They highlighted support for more LTE networks and the specs page lists 14 bands, more than any of the iPhone 5S SKUs.
I know Qualcomm was talking about a "global" LTE baseband.
Brian Klug said:
Yeah new iPads are essentially confirmed MDM9615+WTR1605L based on some digging

Still the MDM9615, based on firmware version information in the settings of the hands-on units and IPSW analysis.
https://twitter.com/nerdtalker/status/392862612582457345

Well, it will be interesting to read Anand's review of the iPad Air. He usually gets the gritty details.
But I think we can conclude that they just ditched the X in their forward naming scheme.
Brian Klug said:
Also references to S5L8960x SoC still, which is the A7, so there's no funny business about it being different silicon, it's the same

From IPSW analysis it looks like the new iPads are using the same S5L8960X A7 silicon as the iPhone 5S, not some silent variant. So the performance differences would have to come from different clock speeds, enabled functional blocks, and the RAM they are paired with.
Apple likes their profit margin, but I just don't think they're likely to compromise or regress on the experience of their product just to pad it. Aggressive GPU performance, backed by the memory bandwidth of the larger 128-bit bus, has been an important differentiator over competitors: it makes the retina display in the iPad usable at native resolution and avoids the need to pare back graphical effects or complexity to accommodate the higher pixel count.

Yes, it will be a regression in bandwidth, but they might be willing to take the hit this generation; 15 GB/s vs 17 GB/s is close enough. The benefits are obvious: one design across all their new iOS devices and a lower cost per chip.
Soon we will have faster memory modules, and technologies like stacked DRAM will make the issue of pad constraints on smaller dies go away. They will be back on course, increasing bandwidth every generation, in no time.
Chipworks showed the A7 has 2 large memory bus pads and 2 small memory bus pads. I don't think anyone has yet come up with a good explanation for what they represent, but that would seem to be important to explaining the new iPad's memory bandwidth situation.
Disappointing. Qualcomm touted the RF360 back in February, and it looks like it hasn't made it into any device yet.
http://www.qualcomm.com/chipsets/gobi/rf-solutions/qualcomm-rf360-front-end
Maybe Qualcomm wants a premium or can't produce enough of the new chipsets, which are also supposed to be more efficient.
Would be nice to see it get into a device this year. Maybe the Nexus 5, but I'm not holding my breath.
Apple have updated the GPU drivers with iOS 7.0.3, which has resulted in a decent ~10% boost to both T-Rex and Egypt HD scores. The drivers were listed as 27.10 and are now 27.11.4, but still no mention of OpenGL ES 3.0.
http://gfxbench.com/device.jsp?benchmark=gfx27&D=Apple iPhone 5S
OpenGL ES 2.0 Apple A7 GPU - 27.11.4