Next-Gen iPhone & iPhone Nano Speculation

I was thinking that, given the iPhone 5's expected late September/October release, Apple would want a sizeable GPU performance increase to meet potential challengers: the Exynos 5 series' Mali-T604, Qualcomm S4's Adreno 320, and the similar SGX544MP2 in OMAP5, all of which will presumably be available later this year. That's assuming Apple continues to aim for top GPU performance at release, of course. They may well be aiming for a more incremental performance increase and focusing their attention on the case redesign as the key product differentiator this year.

Let's see first if we see SoCs like OMAP5 and co in actual devices before this year runs out. And even then, in what kind of quantities?

As you note in another post, Apple's headache for the next iPhone might not be only on the GPU side but also on the CPU side, since dual A15s at high frequencies (such as in OMAP5 at >=2.0GHz) are going to be a tough cookie for quad A9s. Of course there's always the software side of things too, and since Apple has its own OS, hardware is obviously not the entire story.

I also missed your 554MP2 question somewhere above; such a config is IMHO ideal if you have a workload with very complex shaders by today's small form factor standards. In any other case, where shader complexity in applications is still modest, the advantage over a 544MP2 at the same frequency could be a lot smaller than the theoretical difference in FLOP counts between the two might indicate (exactly because all other GPU aspects, like texel, pixel, and Z fillrates, remain the same, among others).
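To put rough numbers on that, here's a minimal Python sketch; the clock and per-pipe FLOP rate below are placeholder assumptions rather than official IMG figures, the only solid input being that the 554 doubles the 544's USSE2 pipe count per core:

```python
# Napkin math for 544MP2 vs. 554MP2 at the same clock. CLOCK_MHZ and
# FLOPS_PER_PIPE are assumptions for illustration; the ratio is the
# point, since the 554 doubles the USSE2 ALU count while leaving the
# texturing/raster side of the core unchanged.
CLOCK_MHZ = 300        # hypothetical shared clock
FLOPS_PER_PIPE = 8     # assumed FLOPs per USSE2 pipe per cycle

def gflops(cores, pipes_per_core):
    return cores * pipes_per_core * FLOPS_PER_PIPE * CLOCK_MHZ / 1000.0

print(f"544MP2: {gflops(2, 4):.1f} GFLOPS")   # 4 USSE2 pipes per core
print(f"554MP2: {gflops(2, 8):.1f} GFLOPS")   # 8 USSE2 pipes per core
# Texel/pixel/Z fillrates scale with core count and clock only, so
# both parts are identical there -- a fillrate- or bandwidth-bound
# app sees far less than the theoretical 2x ALU difference.
```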

Rogue on the other hand is a totally different beast (as is obvious for any new architectural generation); it doesn't only raise FLOP rates by a lot compared to Series5XT, but apparently also raises triangle rates and texel fillrates by a high degree, among other things.

Remember also that you get a crapload of twisted marketing material from different sides; how long ago was it that a Freescale document presented the Vivante GC2000 as being faster by a good margin than the MP2 in the iPad 2 (while they'd obviously compared non-vsynced vs. vsynced results :rolleyes: )? And now that the Huawei U9510 with a GC4000 (twice the "cores" of the GC2000) has made it to a product, it's suddenly even slightly below the iPhone 4S in Egypt.

Again, I've no idea what Apple's intentions are, but it seems to me that if their next iPhone lands, in terms of GPU performance, somewhere between the iPad 2 and 3, it could very well be sufficient for them to stay competitive, always in a relative sense. Yes, competing solutions might beat it within timeframe N, but then again the iPhone after that might beat those in turn later on. I doubt that either Apple or other players in the small form factor market will go for a <1 year launch cycle, and when a new product launches N months later than another, I'd expect the newer product to be better.
 
While I can't be 100% certain that Siri relies only upon server-side processing for its voice recognition, I've tried plenty of other apps that do, which work on even the original iPhone, and have accuracy subjectively similar to Siri. Maybe the article's comment was referring to the GPU compositing of the search results to the display, as unnecessary as such a comment would be.

Other phone manufacturers, including Apple on non-Siri equipped iOS devices, have shown that voice recognition for voice activated dialing and other voice controls can be processed quite well locally on the phone. That's without even tapping the GPU, and while tackling the challenge of deciphering the names/surnames in a person's phone book, which is harder than simple dictionary words. When giving commands for playing music to pre-Siri iOS devices, the voice recognition was actually very impressive at understanding the names of songs and artists, and I believe they even had that working on smaller iPods. The major limitation was the range of commands Apple let the application handle; even so, it allowed a fair amount of flexibility in how a person could phrase the commands it did support (though not nearly as flexible as Siri, admittedly).

Tapping the GPU for pattern recognition for even more sophisticated voice controls should give software designers more than enough performance at this point. If the voice command involves performing a search on the Internet and thus server side processing anyway, the whole argument is moot since the voice recognition might as well be done there for even more accuracy.
 
Your comments focus on processing capability; however, IMG in its recently published video demos (were they at MWC? or GDC? or some such) placed at least as much, if not more, emphasis on GPU compute being able to do certain tasks using significantly fewer mWatts than the CPU, and hence improving battery life, so maybe that is the reason (assuming the article is correct).
 
Voice recognition in the cloud has a lot more advantages than just processing power. It also has access to a much larger data set to base its decisions on.
 
The other important advantage to it is Apple's ability to update that database on their end whenever they want, not once every few months when they make an iOS update. Keeping Siri's vocabulary and range of responses as current as possible helps maintain the perception of an intelligent assistant.

If Siri's voice recognition were processed locally, GPGPU would certainly be the way to go; I'm just doubting that they're actually doing that and believe the article was inaccurate/misleading. I'd have a hard time imagining Apple taking good advantage of GPGPU for Siri yet not for iPhoto.
 
Looks like the bigger screen iPhone is finally coming.

Not sure what the next big feature would be. If it's a bigger screen/new form factor, that might mean a modest change in the SoC, and of course LTE (supposedly the US carriers are going to ditch the grandfathered unlimited data plans in anticipation of LTE support in the iPhone).
 
Based on the recent articles about Siri's answer to "What is the best smartphone in the world" (for some, it reported the Nokia Lumia 900), I'd say it's definitely a cloud-based search. It pretty much crawls through the web based on the Wolfram Alpha engine and collects keywords, categories, and phrases. That's not something you can do on a smartphone.
 
I think the question is whether any part of the voice recognition is done using GPU, as opposed to the actual search that results from it.
 
There are chips by Audience that clean up the audio using noise canceling mics before it's sent to the cloud, but no, I don't think the smartphone really does much besides act as a mic.
 
Indeed, my original post was regarding the BBC article that stated that "Siri was driven by the GPU", and whether that was an indication of GPU compute or just a plain wrong statement.

We know that the info returned is cloud-sourced (except for stuff that resides on the phone, like alarms), and we know they have circuitry to tidy up the audio in. What we DON'T know for sure is whether the GPU is doing anything for Siri other than display. It would make sense for the audio data to be voice-recognised by cloud servers, so likely the article was just wrong?
 
I see the latest word on the street (well, Reuters) has confirmed that the WSJ's reports about the next iPhone receiving a larger screen are correct:

http://www.dailytech.com/article.aspx?newsid=24714

Seems likely to me. I do hope that Apple provides a thumb stretching device for those who believe any phone with a screen over 3.5" is too unwieldy to use. ;)

If true, what would be the likely resolution of an iPhone with a larger screen, I wonder? Apple would surely want to keep the whole 'Retina' thing going as a good sales pitch, and I'd imagine they will want to keep their unusual 1.5:1 screen ratio. Surely not 1440x960? Something like 1152x768, perhaps?
 
One limitation on resolution changes, besides the usual concerns about remaining a "Retina Display" and avoiding ugly non-2x scaling, is whether they still intend to allow iPhone-only apps to run 2x on the iPad. If they want to keep this compatibility feature, then their maximum iPhone resolution has to fit within 1024x768. For example, a switch to 16:10 for 1024x640 on a 4" display would result in a still-Retina 302 dpi while still being able to run 2x on the new iPad. Of course, after all this time Apple may just drop the 2x feature and encourage developers to focus on iPad-optimized/Universal apps. That would free up their resolution choices.
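A quick sanity check on those constraints, in Python; the 4.0" diagonal is an assumed figure for illustration:

```python
# Check candidate resolutions for aspect ratio, ppi at an assumed
# 4.0" diagonal, and whether a pixel-doubled iPhone app would still
# fit the iPad's grid (i.e. the resolution is within 1024x768).
from math import hypot

DIAGONAL_INCHES = 4.0  # assumed screen size

for w, h in [(960, 640), (1152, 768), (1440, 960), (1024, 640)]:
    ppi = hypot(w, h) / DIAGONAL_INCHES
    fits_2x = w <= 1024 and h <= 768
    print(f"{w}x{h}: {w / h:.2f}:1, {ppi:.0f} ppi, 2x on iPad: {fits_2x}")
```

That reproduces the 302 ppi figure for 1024x640 and shows why 1152x768 and 1440x960, while keeping the 1.5:1 ratio, would break the 2x feature.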
 
I don't see why they just don't release various sized models:
3.0", 3.5", 4.0", 4.5", same resolution but different DPI.
OK, this is not the Apple way (one size rules them all), but still.
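Rough numbers for that, assuming they kept the current 960x640 panel:

```python
# PPI for a fixed 960x640 resolution across the proposed sizes.
from math import hypot

for diag in (3.0, 3.5, 4.0, 4.5):
    print(f'{diag}": {hypot(960, 640) / diag:.0f} ppi')
# 3.0": 385 ppi, 3.5": 330 ppi, 4.0": 288 ppi, 4.5": 256 ppi
```

Only the two smaller sizes would stay above the ~300 ppi 'Retina' line, which may be part of why Apple wouldn't do it.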
 
There's a certain mythos that goes along with the iStuff. It works for marketing and (as is apparent) works for recognition as a premium brand. Having many different models confuses and dilutes that.
 
Having only a few components also maximizes their buying power, allowing very low unit prices. That's probably one of the reasons competitors haven't been able to significantly undercut Apple on price for similarly specced devices without much smaller profit margins. If Steve Jobs was loath to introduce too many models because it conflicted with the ideal of simplicity, then even if Tim Cook doesn't share that reason as deeply, he'd still likely be against it based on his background in operations and supply chain.
 
What I'm proposing aren't different models in the traditional sense, e.g. Samsung Y is different from S or whatever. I'm proposing the same specs in all phones (same CPU/GPU/memory/resolution), just a different screen size (& perhaps battery).

Now, will Apple patent the idea, since no one could possibly think of it?
 
It seems like an effort to leverage economies of scale by producing a high volume of the same components.

They differentiate their Macs by screen size but of course, they only have to produce a fraction of iOS volumes.


Isn't there some work, not by Apple, to produce little projectors that could be used on mobile devices to project larger images? For viewing content, obviously, not so much as a bigger screen for input.

The 4 and 5-finger gestures of the iPad improve the usability (better flow, not having to press the home button all the time) but can't be applied to screens on smaller, pocketable devices.
 
When you think about it, since certain parts of the iPhone haven't changed in 1-2 years, they are extremely cheap to make now. For example, the Retina display of the iPhone 4/4S has probably been produced in the hundred-million range already. The equipment has more than paid for itself by now.
 