Next-Gen iPhone & iPhone Nano Speculation

Isn't being able to pick whichever mobile phone you like exactly what freedom of choice means? The argument would only hold true if, for example, Apple were the only one providing you with a mobile phone to choose from. Which really isn't the case.

Now, back on topic.

I think we can expect the co-processor to gain some oomph and new features in the next Apple chip, especially if the rumors are true that they are going into the home automation market.
 
With Android, you have the freedom to decide your own magnetic north! (http://gameovenstudios.com/bounden-on-android-delayed/) Take that Apple!

Crappy developers apparently don't know about the embedded gyroscope calibration function, which is unfortunately not very helpful on iPhones, leading to this:
I ran a few (rather unscientific) tests in my office that brought my iPhone 5s and 5 to within a few degrees of each other, but both iPhone models displayed a heading a good 40 degrees south of what it should have been.



Nice try at trolling, though.
 
I think we can expect the co-processor to gain some oomph and new features in the next Apple chip, especially if the rumors are true that they are going into the home automation market.

In a sense, there already is (the M7). Of course, right now nothing other than Apple's own apps can access the M7 directly, but even so Apple could open up more functions (though the M7 could be a little too underpowered). For example, right now apps can ask the GPS to perform in "accumulation mode", which lets the M7 accumulate distance data while the app is sitting in the background. This is useful for runner apps, for example.
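To make the "accumulation mode" idea concrete, here's a minimal sketch of the division of labor being described: a low-power sensor hub keeps a running total while the app sleeps, and the app only reads the accumulated value when it wakes up. All names here (`MotionCoprocessor`, `RunnerApp`, etc.) are invented for illustration; this is not the real CoreMotion API.

```python
class MotionCoprocessor:
    """Stands in for an M7-style always-on sensor hub (hypothetical)."""
    def __init__(self):
        self._distance_m = 0.0

    def feed_gps_delta(self, meters):
        # Called continuously by the (simulated) GPS pipeline,
        # even while the app process is suspended.
        self._distance_m += meters

    def accumulated_distance(self):
        return self._distance_m


class RunnerApp:
    """The app only sees the total when it is woken up."""
    def __init__(self, hub):
        self.hub = hub

    def on_wake(self):
        return self.hub.accumulated_distance()


hub = MotionCoprocessor()
app = RunnerApp(hub)
# Distance accrues while the app is "in the background"...
for delta in [3, 3, 4]:
    hub.feed_gps_delta(delta)
print(app.on_wake())  # → 10.0
```

The point of the split is that the expensive part (the app, and the CPU it runs on) never has to be awake while the cheap part (the hub) does the logging.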
 
Why would you think the M7 is underpowered? All it ever does (AFAIK, anyway) is read hardware sensor values. As slow as it may be, it surely meets or exceeds most any 1990s-era CPU, and you could actually do quite a bit with those. A lot more than just reading a compass, gyro, accelerometer, and GPS receiver a bunch of times every second.
 
Why would you think the M7 is underpowered? All it ever does (AFAIK, anyway) is read hardware sensor values. As slow as it may be, it surely meets or exceeds most any 1990s-era CPU, and you could actually do quite a bit with those. A lot more than just reading a compass, gyro, accelerometer, and GPS receiver a bunch of times every second.

Of course, right now there's very little public data on the M7, other than that it's basically a microcontroller made by NXP. So it could be powerful enough. Note that it's not just reading from a bunch of sensors; it has to interpret that data and do some work (e.g. to know whether you are walking, cycling, or driving).

However, if Apple never intended the M7 to be directly used by 3rd-party apps, then it must be just powerful enough, or it'd be consuming too much energy. For example, it's possible that the M7 has only a limited channel to main memory (because reading from high-performance DRAM is expensive) and runs everything in its own small SRAM. That could limit the kinds of work it can be used for.
 
I was under the impression the M7 just reads the sensors (and, presumably, temporarily accumulates the data), while the A7 does the actual interpreting when it is woken up out of sleep.

If you're saying M7 also interprets, then you have better sources than I do. :)
 
I was under the impression the M7 just reads the sensors (and, presumably, temporarily accumulates the data), while the A7 does the actual interpreting when it is woken up out of sleep.

If you're saying M7 also interprets, then you have better sources than I do. :)

I think it has to interpret because it also provides functions such as a step counter (running in the background, supposedly without having to wake up the main CPU). Of course, that's not really a large amount of work, but if everyone wants a piece of it, it could put quite a strain on a small microcontroller :)
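To illustrate why step counting involves interpretation rather than just logging: steps have to be recognized from structure in the accelerometer signal. Here's a deliberately naive sketch (a threshold-crossing detector over synthetic data); real pedometer algorithms are considerably more sophisticated, and nothing here reflects how the M7 actually does it.

```python
def count_steps(magnitudes, threshold=1.2):
    """Count upward crossings of a g-force threshold.

    Each time the signal rises above `threshold` after having been
    below it, we call that one step. Very naive, but it shows that
    the raw samples have to be *interpreted*, not just stored.
    """
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps


# Synthetic accelerometer magnitudes (in g): three "step" spikes.
samples = [1.0, 1.3, 1.0, 0.9, 1.4, 1.0, 1.1, 1.5, 1.0]
print(count_steps(samples))  # → 3
```

Even this toy version has to look at every sample as it arrives, which is exactly the kind of continuous light work you'd want on a low-power controller rather than the main CPU.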
 
I think it has to interpret because it also provides functions such as a step counter (running in the background, supposedly without having to wake up the main CPU). Of course, that's not really a large amount of work, but if everyone wants a piece of it, it could put quite a strain on a small microcontroller :)
I thought it simply records the data in a buffer, then off-loads it to the main SoC for interpretation when it wakes up. So, simple deferred evaluation.
 
I'd assume interpretation would be better (and also more power-efficiently) handled by the main CPU cores; modern, powerful, high-clocked CPUs are apparently preferable in that situation to a slow, low-power chip, because you can race through the calculations quickly, then hit sleep mode again to conserve power... or so the boffins say, anyway. :p
 
I thought it simply records the data in a buffer, then off-loads it to the main SoC for interpretation when it wakes up. So, simple deferred evaluation.

It works for some applications, but not all. For example, if all you want is to record the user's running distance, you can do it this way. However, if you want your app to be notified when the distance has reached a preset value (IIRC CoreMotion does provide this function), then unless you want to wake the main CPU up periodically (which might drain the battery unnecessarily), it's probably better to do some of the computation in the controller.
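The trade-off being described can be sketched numerically: if the controller can compare the running total against the preset threshold itself, the main CPU gets woken exactly once, instead of once per polling interval. This is a toy model with made-up numbers, not a claim about what the M7 actually does.

```python
def wakeups_with_polling(deltas, threshold, poll_every):
    """CPU wakes every `poll_every` samples to check progress itself."""
    total, wakeups = 0.0, 0
    for i, d in enumerate(deltas, 1):
        total += d
        if i % poll_every == 0:
            wakeups += 1              # CPU wake-up just to check
            if total >= threshold:
                return wakeups
    return wakeups


def wakeups_with_controller(deltas, threshold):
    """Controller checks the threshold; CPU is woken once, at the end."""
    total = 0.0
    for d in deltas:
        total += d
        if total >= threshold:
            return 1                  # single wake-up notification
    return 0


deltas = [10.0] * 100                 # 100 samples of 10 m each
print(wakeups_with_polling(deltas, 500.0, 5))   # → 10
print(wakeups_with_controller(deltas, 500.0))   # → 1
```

Same notification, a tenth of the wake-ups, which is the whole argument for pushing that one comparison down into the controller.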
 
I'd assume interpretation would be better (and also more power-efficiently) handled by the main CPU cores; modern, powerful, high-clocked CPUs are apparently preferable in that situation to a slow, low-power chip, because you can race through the calculations quickly, then hit sleep mode again to conserve power... or so the boffins say, anyway. :p

The problem is that the cost of waking up a CPU is actually quite high, sometimes higher than the computation itself. That's why both Windows and Mac OS X now try to "consolidate" background timers (making all apps' timer handlers run in the same time slice) in order to reduce power consumption.
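The timer-consolidation idea can be sketched like this: give each timer a tolerance, and let timers whose deadlines fall close together piggyback on a single wake-up. This mirrors the general concept behind timer coalescing in modern OSes, not any real scheduler's implementation.

```python
def wakeup_count(deadlines, tolerance=0.0):
    """Group timer deadlines into CPU wake-ups.

    A timer fires on an earlier wake-up if its deadline is within
    `tolerance` seconds of it; otherwise it needs its own wake-up.
    """
    wakeups = []
    for t in sorted(deadlines):
        if not wakeups or t - wakeups[-1] > tolerance:
            wakeups.append(t)
    return len(wakeups)


# Six timers from (hypothetically) three different background apps:
timers = [0.10, 0.12, 0.50, 0.55, 1.00, 1.04]
print(wakeup_count(timers))                 # → 6 (no coalescing)
print(wakeup_count(timers, tolerance=0.1))  # → 3 (coalesced)
```

Halving the wake-up count pays for the slight imprecision in when each handler runs, which is exactly the bargain the OS vendors are making.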
 
Well, didn't we already establish that interpreting these calculations could potentially be rather involved? Also, since we stored up a bunch of it, that'd make the computation load comparatively larger, yes?

Also, the M7 wouldn't be running the actual app (or any of the phone's standard background tasks), so you'd need to wake up the CPU every now and then anyway, by which point it could deal with the queued-up sensor data...

Anyway, this is a theoretical discussion, and you could very well be right in that the M7 also interprets sensor data. I really have no idea, I just like to speculate. :)

Anyway - conference in less than three days now. Exciting, isn't it?!
 
Well, didn't we already establish that interpreting these calculations could potentially be rather involved? Also, since we stored up a bunch of it, that'd make the computation load comparatively larger, yes?

Well, that depends on what you need to compute. Step counting is easy. Note that the M7 is supposed to record everything even if all apps are sleeping: for example, there's an API to get the step count between any two points in time, regardless of whether any app actually requested such data during that time period.
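A "steps between any two points in time" query only needs the coprocessor to keep a timestamped log of step events; the range query can be answered later, whether or not any app was listening when the steps happened. A minimal sketch, with hypothetical names throughout:

```python
import bisect


class StepLog:
    """Timestamped step log with range queries (illustrative only)."""
    def __init__(self):
        self._times = []  # sorted timestamps, one per detected step

    def record_step(self, t):
        # Kept sorted so range queries are O(log n).
        bisect.insort(self._times, t)

    def steps_between(self, start, end):
        """Number of steps recorded in the interval [start, end]."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return hi - lo


log = StepLog()
for t in [1.0, 2.5, 3.0, 7.5, 9.0]:
    log.record_step(t)
print(log.steps_between(2.0, 8.0))  # → 3
```

Note the storage cost is one timestamp per step, which fits comfortably in a microcontroller-sized buffer for a few days of data.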

Also, the M7 wouldn't be running the actual app (or any of the phone's standard background tasks), so you'd need to wake up the CPU every now and then anyway, by which point it could deal with the queued-up sensor data...

But you need to know when to wake up the CPU. Of course, you can wake the CPU periodically. Actually, the CPU has to be woken periodically anyway (e.g. to know whether to raise a timed alarm), but you probably don't want to wake it too frequently. I mean, every second is probably fine, but even every 100 ms would increase the idle power consumption of the CPU tenfold.
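The tenfold figure falls straight out of the arithmetic: if each wake-up has a roughly fixed energy cost, the idle overhead scales with the wake-up rate. The per-wake-up cost below is an assumed placeholder, not a measured number.

```python
WAKE_COST_UJ = 50.0  # assumed energy per CPU wake-up, in microjoules


def wakeup_energy_per_second(interval_s):
    """Idle-time wake-up overhead: wake-ups per second x cost each."""
    return (1.0 / interval_s) * WAKE_COST_UJ


every_second = wakeup_energy_per_second(1.0)   # 1 wake-up/s
every_100ms = wakeup_energy_per_second(0.1)    # 10 wake-ups/s
print(every_100ms / every_second)              # → 10.0
```

So going from a 1 s to a 100 ms polling interval multiplies the wake-up overhead by ten regardless of what the per-wake-up cost actually is.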

Anyway, going back to the original topic, I think unless a "little" core is vastly more efficient in terms of power consumption, it's probably better to just use the main CPU (with reduced frequency and voltage) to handle complex background work. The simple, fixed tasks can be handled by the M7, though.
 
It works for some applications, but not all. For example, if all you want is to record the user's running distance, you can do it this way. However, if you want your app to be notified when the distance has reached a preset value (IIRC CoreMotion does provide this function), then unless you want to wake the main CPU up periodically (which might drain the battery unnecessarily), it's probably better to do some of the computation in the controller.
OK. The M7 has a Cortex-M3 in it. That should indeed be more than sufficient to make these kinds of calculations.
 
So any GPU related speculation/expectations for iOS 8 tomorrow?

OpenGL ES 3.1: I'm guessing this is pretty much a shoo-in.

Other major GL extensions/features: OES doesn't have geometry shaders in core because some vendors might want to save the transistors, but in the case of the Series6, which has hardware geometry shader support, is there a reason for Apple not to expose it? There's a new EXT extension defining geometry shaders on top of OpenGL ES 3.1, so the groundwork is set.

Texture formats: I guess the reason it's called the A7 GPU is not because they customized the Series6 to add ASTC support. Will Apple add PVRTC2 support, though? With iOS 8 and intensive games requiring A5 devices and up, PVRTC2 could become the new standard texture format. Are there technical reasons why Apple wouldn't support PVRTC2, like it being slower on older Series5XT hardware compared to PVRTC, or the image-quality improvement not being significant? Or would lack of support more likely be related to not wanting to license and maintain support for another texture format, especially with ASTC coming soon?

OpenCL: Apple is a big OpenCL supporter in computers, but has been silent on mobile. Have they just been waiting on performance and market maturity, such that the timing may now be right with the A7 and iOS 8? Or do they have some other objections that are currently unresolvable, like stability, security, power consumption, or wanting Full Profile support? Or perhaps they think compute shaders in OES 3.1 will be sufficient for iOS 8 as an initial step toward GPGPU?

CPU/GPU integration: Will there be moves toward hUMA? Clarifying and exposing the L3 cache to developers?

Older hardware: Is there anything major left to enable for Series5XT GPUs? Gokhan Avkarogullari previously said they aren't going to implement MRTs or OpenCL on the Series5XT, since they don't consider the hardware to have native support for them, and I guess they don't like how others are implementing it. They are still missing multisample render-to-texture and full NPOT support. ImgTec also has an implementation of UBOs for Series5XT, although they haven't published a spec for it.

Hints for future hardware: How hard Apple pushes Auto Layout and resolution independence, and how they talk about it, could give hints about future hardware. Whether they focus on screen space/resolution, or also talk about dynamic minimum tap-target sizes with changing DPI, would suggest whether new devices will move away from 326 DPI in addition to the expected screen size/resolution increases. When people sift through the SDK, if they find references to ASTC, that could suggest the A8 will be moving to something other than just a high-clocked G6630, be it Series6XT or something else.
 
Hard-coding everything for specific, fixed screen resolutions, like Apple has done up until now, isn't going to be feasible in the long run, not if they want to offer a broader range of device sizes. It will become a friggin' mess for devs, and probably a non-ideal situation for users too.

I hope moves towards device-independent, retargetable resolutions/graphics will be made this year.
 
I would love Apple to create a "proximity zooming" function, where, while you're using the browser, your finger's proximity to the screen triggers a variable zoom, i.e. the closer your finger is to the screen, the higher the zoom factor...

I wonder if they've already patented something like this?

Also panning when zoomed in using proximity as well...
 
How would such a feature know when not to zoom as you approach the screen to touch it for other functions? I guess a slight delay may work, but I'm not sure if the calibration could be set fine enough for a feature like that to not constantly annoy someone by zooming when not desired.
 
Did the Samsung eye tracking thing ever take off?

Seems like this would be in the same category, something to tout but maybe not used widely.
 