Next-Gen iPhone & iPhone Nano Speculation

What makes you think any manufacturer is interested in "future-proofing?"

Ideally for them, you buy a new device from them every year.

Anyways, by the end of 2012, what percentage of smart phones sold will be 4G capable?

And of those, what percentage of them are actually used for 4G a substantial portion of the time? I've heard anecdotes about people turning off 4G on their SGII to preserve battery.

But the bigger issue is, Verizon's network alone doesn't justify manufacturing tens of millions of iPhone5 with LTE if only a fraction of them will end up on Verizon.
 
Eventually tablets will go to ultra-high resolutions, and that's exactly why GPU IP vendors like Vivante, ARM and IMG (and soon Qualcomm as well, albeit not as an IP vendor) all went for multi-core GPU IP. Just for the record, the Lenovo LePad K2 (Tegra 3) is already at 1920×1152: http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro21&D=Lenovo+LePad+K2&testgroup=system
Shouldn't that be a 1920×1200 resolution? The benchmark only sees 1920×1152 pixels because of the permanent 'taskbar' Android uses on tablets.
What makes you think any manufacturer is interested in "future-proofing?"

Ideally for them, you buy a new device from them every year.

Anyways, by the end of 2012, what percentage of smart phones sold will be 4G capable?

And of those, what percentage of them are actually used for 4G a substantial portion of the time? I've heard anecdotes about people turning off 4G on their SGII to preserve battery.

But the bigger issue is, Verizon's network alone doesn't justify manufacturing tens of millions of iPhone5 with LTE if only a fraction of them will end up on Verizon.
I agree. Another factor to consider is that LTE will be using many more different frequency bands than 3G does. You need a pentaband UMTS radio to be able to use it on any network, but I read somewhere that there will be more than 30 different frequency combinations used for LTE. Shouldn't that pose a challenge for Apple?

http://www.engadget.com/2011/12/11/wherever-i-wander-wherever-i-roam-lte-probably-wont-work/ According to that, the most common spectrum range of 700 to 900 MHz will probably cover only 16% of the total LTE market.
 
Is Apple concerned about the absence of LTE? External storage? NFC? Video calls? Direct HDMI-out? Direct USB host? A comfortably larger screen? File transfer between devices? Multitasking? The ability to download files from the browser?

Given how much importance game developers have given the SGX543MP2 so far, I'd say any of the above is more important than the upgraded GPU.
The iPhone 4, iPhone 4S, iPad 2, and 4th gen Touch lack video calling? Admittedly FaceTime is WiFi only, but third parties like Skype offer 3G video calling.

In any case, Apple seems to make their GPU choices with a longer term view than just one SoC generation. To date, Apple retains the same CPU and GPU architecture for 2 SoC generations and so often seems to go overkill on their GPU choice compared to the competition in the 1st year of that cycle.

1st gen: 90nm 400MHz ARM11 + MBX Lite
2nd gen: 65nm 533MHz ARM11 + MBX Lite
3rd gen: 65nm 600MHz Cortex A8 + SGX535
4th gen: 45nm 1GHz/800MHz Cortex A8 + SGX535
5th gen: 45nm 1GHz/800MHz dual core Cortex A9 + SGX543MP2

If I'm not mistaken, 3D graphics and smartphone gaming weren't the phenomenon they are today until the iPhone came along, so having a dedicated GPU like the MBX Lite was a strong choice when it was introduced. Similarly, the SGX535 was an aggressive choice at a time when Anand was speculating Apple would stick with an SGX520, TI was using the SGX530, and Qualcomm had the Adreno 200.

Now on the 5th gen SoC, or 3rd major architecture, the SGX543MP2 can be considered ahead of its time. However, if the pattern holds, we'd expect Apple to stick with the SGX543MP2 for the Apple A6, which in the 2012 timeframe will be more in line with the competition. Had Apple gone with an SGX540 in the A5, it would look far too dated to reuse in the A6. Keeping the same CPU and GPU architecture for 2 generations of SoC makes sense for Apple since it amortizes the silicon design/layout and driver development costs. Notably, it means Apple introduces new architectures on known silicon processes and shrinks existing architectures on new silicon processes, Intel Tick/Tock style.

Personally, I'm expecting the Apple A6 to be a 1.6GHz (iPad 3)/1.2GHz (iPhone 5) dual core Cortex A9 with SGX543MP2 (clocked presumably at 1/4 the CPU speed) on either Samsung 32nm or TSMC 28nm. Hardly a shocking guess. If Apple feels they need more power for the Retina display in the iPad 3, presumably they'd go with the SGX543MP4 to maintain as much commonality with the A5 as possible, rather than going with, say, an SGX554MP or Rogue based design.
 
What makes you think any manufacturer is interested in "future-proofing?"

Ideally for them, you buy a new device from them every year.

Anyways, by the end of 2012, what percentage of smart phones sold will be 4G capable?

And of those, what percentage of them are actually used for 4G a substantial portion of the time? I've heard anecdotes about people turning off 4G on their SGII to preserve battery.

But the bigger issue is, Verizon's network alone doesn't justify manufacturing tens of millions of iPhone5 with LTE if only a fraction of them will end up on Verizon.

With Qualcomm's new LTE chip, it will be an integrated all-in-one solution. So Apple is looking at a couple of bucks at most in increased cost for this radio versus the current Gobi chip they are using from Qualcomm. The only downsides are accommodating the extra power (which is zero if the user isn't using 4G) and accommodating an extra antenna, which does cost design effort, but is not a huge cost. Given these products have to compete for a year, by the time they're in their later life, AT&T's LTE network is going to be getting some legs too.

Verizon's LTE deployment has been very rapid. My hometown of 35,000 people is next to a town of ~20,000 and another of 30,000. To get to a town with any larger population, you have to drive at least 50 miles, yet my hometown has LTE.

And Apple is looking to also launch with China Mobile, whose customer base is 350 million and which is rolling out its own LTE network. In my opinion, it's worth the effort.
 
I'll go even further: GPU processing power will continue to scale at a higher rate in the future than CPU processing power will.

Isn't that pretty much a given though?
We're already at a point where we've gone to dual core CPUs. Going beyond dual cores is of dubious value even on the desktop for the majority of uses, and even Intel has stopped adding cores beyond the four we saw even three lithographic steps back from Ivy Bridge. Even at Intel pricing, the technical justification just isn't there; the utilization is too damn low. (Marketing has no limits, however, so we'll see what the future brings.) So if you have cheap die area, it makes sense to spend it on something that does scale well with parallel resources, such as graphics, particularly as nice graphics is a sellable visual feature. Up to a point.
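To put a rough number on that low-utilization point, here's Amdahl's law for a workload that is only partly parallel (the 50% parallel fraction below is just an illustration, not a measurement):

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A hypothetical workload that is only 50% parallelizable:
for n in (2, 4, 8, 16):
    print(n, "cores:", round(speedup(0.5, n), 2))
# -> 2 cores: 1.33, 4 cores: 1.6, 8 cores: 1.78, 16 cores: 1.88
# Doubling cores past four barely moves the needle, which is why cheap die area
# is better spent on things that do scale with parallel resources, like graphics.
```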

As I have indicated before, I don't believe CPU power on mobile devices will scale all that quickly from 20nm on. At that point, we will be clearly beyond the performance/feature level where desktop consumer computing power needs started to stagnate, and no killer app ever showed up to change that. Couple that with the ability to access cloud computing resources and you're left with marketing, rather than consumer, needs as a technology driver. It will still happen, the industry will still need new stuff to advertise and sell, but the gold rush days are likely to be over after that. At least, that's what I can see from my armchair. Still, that leaves us with a few interesting years ahead in the mobile space. There will be things to talk about. :)
 
As I have indicated before, I don't believe CPU power on mobile devices will scale all that quickly from 20nm on. At that point, we will be clearly beyond the performance/feature level where desktop consumer computing power needs started to stagnate, and no killer app ever showed up to change that. Couple that with the ability to access cloud computing resources and you're left with marketing, rather than consumer, needs as a technology driver. It will still happen, the industry will still need new stuff to advertise and sell, but the gold rush days are likely to be over after that. At least, that's what I can see from my armchair. Still, that leaves us with a few interesting years ahead in the mobile space. There will be things to talk about. :)
Mostly agreed but there are two important points that are worth pointing out:
1) Quad-core is cheap in terms of transistors. On 40nm a Cortex-A9 is about 2mm². The Cortex-A15 is about twice as big, which means it's very roughly the same die size on 28nm as an A9 on 40nm. As we move to 20nm it won't be much more than 1mm² per additional core (even if we pessimistically assume a 50% higher effective wafer price, it's still quite a bit cheaper than an A9 on 40nm).
2) While handheld CPUs don't take much area, they have extremely high power consumption compared to most other subsystems. And while doubling the number of cores will double your theoretical TDP, it will never increase your power consumption for the same performance target (thanks to power gating) and will actually significantly reduce power consumption for workloads with more than two threads (undervolting: 4 cores at half clock are more efficient than 2 cores at full clock). You can probably keep the same TDP by limiting clock rates, with 4 cores at ~70%(?) of your maximum dual-core clock rate.
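As a rough sketch of the arithmetic behind point 2, using the usual dynamic-power relation P ≈ C·V²·f and assuming that halving the clock permits a lower voltage (all numbers are illustrative, not measured):

```python
# Dynamic power ~ cores * C * V^2 * f (ignoring leakage, shared caches, etc.).
def dyn_power(cores, freq_ghz, volts, c=1.0):
    return cores * c * volts ** 2 * freq_ghz   # arbitrary units

# Same aggregate throughput for a 4-thread workload: 2 cores at 2.0 GHz vs 4 cores at 1.0 GHz.
p_dual = dyn_power(2, 2.0, 1.10)   # the higher clock needs the higher voltage
p_quad = dyn_power(4, 1.0, 0.90)   # half the clock allows undervolting

print(round(p_dual, 2), round(p_quad, 2))
# -> 4.84 vs 3.24: same throughput, roughly a third less dynamic power on four cores
```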

This isn't like the PC market where CPU cores have historically been very big. So when you combine these two facts you get a clear conclusion: even if the performance benefit isn't very large, it's potentially still worth going quad-core on technical merits alone. And when marketing enters the picture, it should be an obvious decision, especially in the 20nm generation. However that does NOT mean it's the single most important factor - obviously a very strong dual-core solution like the MSM8960 is preferable to an average quad-core one like Tegra 3.

So if you compare the A9600 and a similar 20nm SoC, I'd expect the number of both CPU and GPU cores to double. However GPU performance might still increase more than CPU performance because CPU clock rates will be much more limited when all cores are enabled. Obviously everyone else's ratio is going to eventually increase up to that point in the ultra-high-end so for everyone else there's a lot of catching up to do. I agree that the amount of die size dedicated to the CPU will go down on 14nm but how much depends on what ARM does with their first (likely incremental) and second (SMT at the very least) generation 64-bit cores...
 
That would've made sense if the first device with an A5 had launched last week.
It didn't. The iPad 2 has been available for 9 months, and during that time its impact on games or anything 3D accelerated has been almost nil.
...
So the seeds have been sown. There's no excuse there.
One would think that developing complex games with lots of advanced 3D content takes more than 9 months?

It's also beside the point I was making: even if you could make such games right now, you still want to cater to the iPad 1 installed base. Give it another year, when the iPad 1 percentage becomes low enough to ignore, and then you can focus on the iPad 2 and iPad 3.

It's the same dynamic for software targeting iOS releases. Most new apps have an iOS 4 minimum requirement nowadays, but that's only fairly recent.

That's the point: it makes no difference. It made absolutely no difference until now.
It would only make a difference if those competitors had shown any sign of wanting to catch up. They haven't so far. Samsung has manufactured the A5 from day one; they could have made a "clone" of the SoC for the Galaxy S2 or the later HD/LTE versions, yet they didn't. They just didn't care.
Maybe all the others were simply caught off guard? Are you suggesting that Apple should have lowballed GPU performance because the others would too? Why? What's the upside? A die size that's 10% smaller? Apple doesn't have the fabless middleman that all the others have, so they're much less cost sensitive: from an integrator's point of view, their silicon cost is probably ~40% cheaper than the others'.
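A crude sketch of where a number like that ~40% could come from (the margin figure below is hypothetical, not any vendor's actual pricing):

```python
# A merchant SoC vendor resells foundry silicon at a gross margin; an integrator that
# designs its own SoC pays roughly the manufacturing cost plus its own (amortized) NRE.
foundry_cost_per_chip = 10.00        # hypothetical manufacturing cost per chip, $
merchant_gross_margin = 0.40         # hypothetical 40% gross margin at the chip vendor

merchant_price = foundry_cost_per_chip / (1 - merchant_gross_margin)
saving = 1 - foundry_cost_per_chip / merchant_price

print(round(merchant_price, 2), f"{saving:.0%}")
# -> 16.67, 40%: buying at cost instead of at the merchant price saves roughly the margin
```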

And still, the Galaxy S2 is considered by most to be the best smartphone available, even after the iPhone 4S was released.
Sure, just like some adamantly claim that most people use laptops for gaming. :D

Is Apple concerned about the absence of LTE? External storage? NFC? Video calls? Direct HDMI-out? Direct USB host? A comfortably larger screen? File transfer between devices? Multitasking? The ability to download files from the browser?
Again, all your workstation feature checkmarks that seem to be make-or-break features to you, but that most people couldn't care less about, except maybe the larger screen for some.
(Do we agree that the Galaxy Nexus is the best Android phone on the market? You do know that it did away with external storage, right?)

What history of Android tablets do you have to support that statement?
Pretty much every non-Chinese/ultra-low-cost Android tablet is receiving the upgrade to Ice Cream Sandwich.
That's most excellent news for the original Galaxy Tab users, who didn't even get Honeycomb! What about the Dell Streak?

Anyway, this will be a welcome and refreshing break from past Android practices.

But they also did well in not supporting the next-gen broadband communication standard in their mobile phone because it won't be widely available during the next year?

So what's important here is not to sell people an actual future-proof product, right?
It's to sell people products that are obviously not future-proof while making them think they are.
If implemented correctly, the additional GPU has negligible impact on what's most important to a lot of phone users: battery life. LTE is a different story. It will be there on the iPhone 5, don't worry. Lower power chips are on their way.

I don't think the LTE argument holds for tablets anyway. AFAIK, the majority of users are not interested in yet another data plan. Tablet use happens mostly at home, at Starbucks or in airport lounges, where Wi-Fi is freely available.
 
Lack of other high end 3D games could be the economics of mobile games. Maybe there isn't money in it when game prices are under $10.

Or how many have licensed that Unreal mobile engine? Anyone using it for games other than Infinity Blade?
 
wco81 said:
Lack of other high end 3D games could be the economics of mobile games. Maybe there isn't money in it when game prices are under $10.
Yes, possible. I've been wondering if they use the GPU for photo image processing?
 
Mostly agreed but there are two important points that are worth pointing out:
1) Quad-core is cheap in terms of transistors. On 40nm a Cortex-A9 is about 2mm². The Cortex-A15 is about twice as big, which means it's very roughly the same die size on 28nm as an A9 on 40nm. As we move to 20nm it won't be much more than 1mm² per additional core (even if we pessimistically assume a 50% higher effective wafer price, it's still quite a bit cheaper than an A9 on 40nm).
2) While handheld CPUs don't take much area, they have extremely high power consumption compared to most other subsystems. And while doubling the number of cores will double your theoretical TDP, it will never increase your power consumption for the same performance target (thanks to power gating) and will actually significantly reduce power consumption for workloads with more than two threads (undervolting: 4 cores at half clock are more efficient than 2 cores at full clock). You can probably keep the same TDP by limiting clock rates, with 4 cores at ~70%(?) of your maximum dual-core clock rate.

This isn't like the PC market where CPU cores have historically been very big. So when you combine these two facts you get a clear conclusion: even if the performance benefit isn't very large, it's potentially still worth going quad-core on technical merits alone. And when marketing enters the picture, it should be an obvious decision, especially in the 20nm generation. However that does NOT mean it's the single most important factor - obviously a very strong dual-core solution like the MSM8960 is preferable to an average quad-core one like Tegra 3.

So if you compare the A9600 and a similar 20nm SoC, I'd expect the number of both CPU and GPU cores to double. However GPU performance might still increase more than CPU performance because CPU clock rates will be much more limited when all cores are enabled. Obviously everyone else's ratio is going to eventually increase up to that point in the ultra-high-end so for everyone else there's a lot of catching up to do. I agree that the amount of die size dedicated to the CPU will go down on 14nm but how much depends on what ARM does with their first (likely incremental) and second (SMT at the very least) generation 64-bit cores...

Insightful post. I never thought about mobile workloads vs. die area spent like that. I always assumed it was better to stay dual core vs. quad core unless you specifically knew your CPU would have all cores loaded often.
 
Lack of other high end 3D games could be the economics of mobile games. Maybe there isn't money in it when game prices are under $10.

Or how many have licensed that Unreal mobile engine? Anyone using it for games other than Infinity Blade?
Epic Games has a pretty good deal with the Unreal Development Kit (UDK), which is free to use. Once you actually want to sell a game that you made with it, you'll have to pay $99, and if/once you start making more than $50,000 you'll have to pay Epic 25% of the game's income above that threshold. You won't have to pay Epic anything if you release your game for free. Seems like a pretty good deal if you're not sure whether you'll ever make more than $50,000 on your game. If you are pretty sure that you'll break the $50,000 barrier, then you can talk to Epic about a different licensing agreement.
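As a quick sketch of how those terms work out in practice (this just encodes the figures quoted above; check Epic's actual license text for the fine print):

```python
# UDK cost for a commercial release: $99 up front, plus 25% of revenue beyond the first $50,000.
def udk_cost(gross_revenue):
    if gross_revenue <= 0:
        return 0.0                                   # free releases owe Epic nothing
    return 99.0 + 0.25 * max(0.0, gross_revenue - 50_000)

for revenue in (0, 20_000, 50_000, 200_000):
    print(revenue, udk_cost(revenue))
# -> 0 0.0, 20000 99.0, 50000 99.0, 200000 37599.0
```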

BTW, the UDK currently allows you to build games for iOS, Mac OS X and Windows. There might be more platforms added in the future.
 
Lack of other high end 3D games could be the economics of mobile games. Maybe there isn't money in it when game prices are under $10.

Sub-$10 games are obviously many times cheaper to develop than a full handheld console game, which costs many times more by comparison. The key is the volume of downloads. It's fairly easy to find out how often something like Infinity Blade has been downloaded, for example.
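For a sense of the volumes a sub-$10 price implies, here's a hypothetical break-even calculation (the development budget is made up; the 30% store cut is Apple's standard App Store share):

```python
price = 5.99                 # hypothetical selling price, $
store_cut = 0.30             # Apple's standard 30% App Store share
dev_cost = 500_000           # hypothetical development budget, $

net_per_copy = price * (1 - store_cut)
print(round(dev_cost / net_per_copy))   # -> roughly 119,000 copies just to break even
```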
 
Insightful post. I never thought about mobile workloads vs. die area spent like that. I always assumed it was better to stay dual core vs. quad core unless you specifically knew your CPU would have all cores loaded often.

Well, another message you can draw from Arun's post (and one I agree with) is that in the future a much bigger share of the performance increase for SoCs will come from the GPU side. GPUs are obviously lower clocked, but can host a constantly increasing number of ALUs.

All SoC manufacturers need to do on the software side is concentrate on GPGPU APIs like OpenCL.
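As a minimal illustration of what that looks like from the software side, here's a trivial OpenCL kernel driven through the PyOpenCL bindings (this assumes a platform with an OpenCL driver; OpenCL isn't publicly exposed on iOS, so treat it purely as a sketch of the API model):

```python
import numpy as np
import pyopencl as cl

# Host data: two vectors to add on the GPU.
a = np.random.rand(100_000).astype(np.float32)
b = np.random.rand(100_000).astype(np.float32)

ctx = cl.create_some_context()        # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One work-item per element; the GPU's ALU count decides how many run in parallel.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```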
 
Well, another message you can draw from Arun's post (and one I agree with) is that in the future a much bigger share of the performance increase for SoCs will come from the GPU side. GPUs are obviously lower clocked, but can host a constantly increasing number of ALUs.

All SoC manufacturers need to do on the software side is concentrate on GPGPU APIs like OpenCL.

I would hope they leverage that, given their rapid pace of growth and the DIY nature of licensing core IP in general. Arguably their position is even better than Nvidia's, because they don't have to build something and tell the customer that's what they need for GPGPU. They just have to build a capability, and the customer decides precisely how much of it they need. Apple says they are committed to it on the x86 side, and perhaps that is the next step for iOS, given its near feature-completeness against the laundry list of features consumers have wanted ever since iOS 1. They've added all the visual features and functions a la Leopard; now it's time to enhance the back end a la Snow Leopard. I would argue that their direct control of their software and hardware puts them in the best position to do just that.

Perhaps that would put them in a good position moving forward with the iPad 3, allowing them to stream and decode, say, heavily encoded 1080p to keep file sizes lean?
 
Bigger changes will come, IMHO, with the advent of 28nm. For high-end embedded GPUs in 2012 I'd expect something in the 35-70 GFLOP region; in 2013 I'd expect that rate to exceed the 200 GFLOP mark (or roughly Xbox 360 Xenos level, disregarding efficiency per FLOP, since that can be for better or for worse).
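For anyone who wants to sanity-check figures like that, the back-of-the-envelope arithmetic is just ALU count × FP32 ops per ALU per clock × clock. Xenos is a known quantity; the mobile part below is purely illustrative, not a real spec:

```python
def gflops(alus, flops_per_alu_per_clock, clock_ghz):
    return alus * flops_per_alu_per_clock * clock_ghz

# Xbox 360 Xenos: 48 ALUs, each a vec4 + scalar MADD (10 flops/clock), at 500 MHz.
print(gflops(48, 10, 0.5))       # -> 240.0 GFLOPS

# Hypothetical 28nm-class mobile GPU (illustrative numbers only):
print(gflops(256, 2, 0.4))       # -> 204.8 GFLOPS, i.e. roughly Xenos-class raw throughput
```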

Perhaps that would put them in a good position moving forward with the iPad 3, allowing them to stream and decode, say, heavily encoded 1080p to keep file sizes lean?
Unless I'm having a blond moment here, aren't you theoretically better off with dedicated fixed function encoding/decoding hw than a GPU?
 
Bigger changes will come, IMHO, with the advent of 28nm. For high-end embedded GPUs in 2012 I'd expect something in the 35-70 GFLOP region; in 2013 I'd expect that rate to exceed the 200 GFLOP mark (or roughly Xbox 360 Xenos level, disregarding efficiency per FLOP, since that can be for better or for worse).

Unless I'm having a blond moment here, aren't you theoretically better off with dedicated fixed function encoding/decoding hw than a GPU?

I'm assuming you don't want to spend the silicon. Is that a bad assumption? I don't know the relative size or engineering effort something like that would cost you.
 
I'm assuming you don't want to spend the silicon. Is that a bad assumption? I don't know the relative size or engineering effort something like that would cost you.

Here's a link to next generation decode and encode IP from IMG: http://imgtec.com/News/Release/index.asp?NewsID=597

It doesn't mention die area, but if you log in for the VXD392, it decodes H.264 HP@4.1 at 30Hz with a frequency under 100MHz. I personally wouldn't want to torture either a GPU at higher frequencies or a CPU at even higher ones for such functions. I'm not even sure there's any SoC available at the moment that doesn't have dedicated encoding/decoding hardware blocks.
 