Is the iPhone held back by Samsung's progress?

edepot

Banned
It seems Samsung is making all of the iPhone's CPUs (and integrated GPUs). Where before you could maybe improve the CPU or GPU individually by swapping one out for a better one, now they are coupled together. This will inevitably slow down the progress of technology. I am not sure, but I think they are on the same die. If they are, the SGX535 is a letdown in performance. At the least they could have put a better version of the SGX in the A4. Since it is OpenGL ES 2.0, there is no need to tie it to one particular chip; just swap it out for a better-performing chip that supports all the OpenGL ES 2.0 features at better performance. It could be that new iPhones will be dependent on how fast Samsung churns out chips for its own use (like the Galaxy S chip used in the iPad). Maybe someone should get Samsung to improve their GPU selections.

But this post is not mainly about today's limitations, it is about the future:

http://www.edepot.com/iphone.html#The_Future

There needs to be a better way to swap components so that mobile devices can stay with the times for at least 6 years. Why 6? Because that is the average console lifetime before new ones are released (like the PlayStation brand), and it is a good indicator of the market available for software on mass-produced devices. Too soon, and there isn't enough of an installed base to make a profit on the software; too late, and technology moves on and competitors will come out with a better device for the future software market. The iPhone is churning out so many variations that old users will get left out unless they can keep up (hardware-wise) by swapping old components for new ones. PCs had this: standardized CPU sockets, standardized RAM sockets; even ISA, EISA, and PCI Express allowed custom GPUs. It is time for this change. Even the PlayStation 3 allows hard drive upgrades as a standard feature. The iPhone is stuck with one size of flash, and a battery that dies after 2 years with no easy way to swap it out.

Even some governments run on this model (swapping things out). For example, the US was founded on the principle of revolution... you have the right to rebel (against the British) if the government is treating you unfairly. However, it is very difficult to keep a government going for long if the founding principles allow revolution as a centerpiece of its existence; eventually some major group will rebel. That is why multiple parties were created: they are needed. Each time an increasingly large part of the population doesn't like how things are run, they will emulate the founding fathers and start a revolution. So all major differences of opinion are shoehorned into the two major parties, and people are encouraged to concentrate on those two parties. That way no one is left out; their differing opinions or beliefs are integrated into the government, and they rebel against a party of the government, not against the government itself.

Ok, some may think it weird that I am talking about governments when this is about the iPhone, so let me piece it together. What Android is doing is trying to be an operating system for all devices, old and new technology alike. This causes fragmentation: the developer has to decide which population to cater to in order to make a profit. The iPhone is trying to solidify on one CPU instruction set (ARM) and improve the rest on a standard 1-year cycle. Each year things improve, and another generation of devices is left in the dark. The developer has to decide how far back to support to make enough profit. What are you going to do with those who feel left behind and want to go to the other side because it has a better CPU, GPU, whatever? You would need to be able to swap out (integrate their desire for a better CPU, GPU, or whatever) right in the device. Providing an upgrade path so people don't rebel and buy another device. What do you think?
 
There's probably a good reason why the knowledgeable posters responding haven't tackled this but I'll take the bait, I hope no one minds ;p

Having mobile devices with a single-package/single-die system on chip that contains the CPU(s), GPU, and several peripherals is industry standard, and for a very good reason. Having multiple chips (especially for your larger, hungrier components) where you could have one has a significant impact on power consumption, space, cost, board layout complexity, and even systems software: all things that are exactly what you're trying to minimize in a mobile device. These aren't desktops. The ability to upgrade core components is not sought after because it's impractical, a significant compromise, and not interesting to begin with given the overall low target cost of these devices.

Furthermore, components you're describing like GPU and CPU follow at least some upgrade path together. It makes more sense to upgrade both where possible than just one. I guarantee you that six years after purchasing a cutting edge handheld no one will think that it'll be cutting edge again after swapping the CPU or GPU. They'll want the latest display technology, the latest baseband technology, the most RAM, the fastest everything, the most preferred input options and the most in-style design. In other words, they'll want to replace everything or at least nearly everything.

You're really not going to find companies putting out a lot of discrete mobile graphics chips anymore anyway because there just isn't demand. It's much more lucrative for a company like Samsung to release a top of the line SoC that has the best CPU, GPU, and peripherals they can manage to put on it. And the market is certainly competitive enough to drive SoCs to be as cutting edge as possible (in all areas of course, not just being huge power draining performance kings).

Your comment about Samsung doesn't really make sense anyway - the A4 may have the SGX535, but Samsung is also shipping the S5PC110, which has the SGX540 (btw, that's the chip in the Galaxy S, which is NOT the A4). Why the A4 has the 535, or why the next iPhone is using the same A4 (if indeed it is), sounds more like an Apple problem than a Samsung one. It's true that other manufacturers have already taped out dual-core Cortex-A9s and Samsung hasn't announced such an SoC (probably trying to push their A8 implementation as much as possible), but this has nothing to do with them making SoCs.

Maybe you think that if two different manufacturers were packaging the SoC and GPU there'd be more opportunity to excel, since both parties could focus on improving their own designs. But as it stands SoCs are already conglomerations of several pieces of third-party IP; in the SoCs with Cortex-A8/A9 and SGX that means IP from ARM and IMGTech. Different chips might allow for a little better release granularity, but SoC releases have heated up and are at the very least way more aggressive than once every six years.
 
If Apple had somehow been able to license the PSP graphics architecture for the original iPhone, fitting it into the power consumption and die area constraints of the phone would've left it with a small fraction of the performance and features the real iPhone had.
 
There's no holding back in the SoC market. There seem to be a lot of vendors trying to leapfrog each other.

Certainly more competition than there ever was in the PC CPU market, and it seems the annual performance gains are at least equal to the gains in that market.


As an aside, Intel's Sandy Bridge seems to be integrating more components onto a die, kind of mimicking mobile SoCs? Supposed to deliver substantial gains over current chipsets.
 
Providing an upgrade path so people don't rebel and buy another device. What do you think?
Did somebody say rebellion? Venceremos!

Seriously, though. One day we'll be able to take our year-old SoC to the vendor, put the chip in a nano-recycler (a sort of sophisticated washing machine from the future), select the atomic-grid program, and get brand new silicon after the unlocking click of the door latch. That is, unless by then our civilization has imploded from the vacuum of its own stupidity. Either way, our SoC desires will be fulfilled.
 
In the US the iPhone is held back more by AT&T than Samsung's progress...
 
There's probably a good reason why the knowledgeable posters responding haven't tackled this but I'll take the bait, I hope no one minds ;p

I don't think anyone minds. I didn't bother reading the article either, though, because that's what this is really about.

Your comment about Samsung doesn't really make sense anyway - the A4 may have the SGX535, but Samsung is also shipping the S5PC110, which has the SGX540 (btw, that's the chip in the Galaxy S, which is NOT the A4). Why the A4 has the 535, or why the next iPhone is using the same A4 (if indeed it is), sounds more like an Apple problem than a Samsung one. It's true that other manufacturers have already taped out dual-core Cortex-A9s and Samsung hasn't announced such an SoC (probably trying to push their A8 implementation as much as possible), but this has nothing to do with them making SoCs.
My own KISS approach: Apple felt it primarily needed fill rate. Considering that (as has been noted time and time again) the SGX535 and 540 have the same number of TMUs, under that reasoning the 535 is by far the cheaper all-around solution. Less die area allows higher frequencies and thus more fill rate, and yes, of course re-using the same IP is cheaper too.
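As a toy illustration of that reasoning, here's a minimal Python sketch. The equal TMU count is taken from the post above; the clock frequencies are purely hypothetical:

```python
# Toy model: peak texel fill rate = TMUs * clock (one texel per TMU per clock).
# The equal TMU count comes from the post above; both clocks are hypothetical.
def fill_rate_mtexels_per_s(tmus: int, clock_mhz: int) -> int:
    return tmus * clock_mhz

# A smaller die (535) can plausibly clock higher in the same power budget,
# so with equal TMU counts it can come out ahead on pure fill rate.
print(fill_rate_mtexels_per_s(tmus=2, clock_mhz=250))  # hypothetical fast 535: 500
print(fill_rate_mtexels_per_s(tmus=2, clock_mhz=200))  # hypothetical 540: 400
```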

The real question in the end is whether any other smartphone or tablet available today or within the year uses anything more advanced or powerful as an embedded GPU than the 535. I personally don't see any, but I'd love to be convinced of the opposite.
 
The real question in the end is whether any other smartphone or tablet available today or within the year uses anything more advanced or powerful as an embedded GPU than the 535. I personally don't see any, but I'd love to be convinced of the opposite.

Depends what you mean by "out", I think. The Galaxy S has been reviewed and is said to be released "this summer." Other platforms with the S5PC110 include the Odroid-T/Odroid-S, which may also be available; it's hard for me to really tell. Maybe there are some other devices local to Korea using it.

OMAP4s are publicly sampling (i.e., you can buy the reference platform "now", arriving within 4-18 weeks if the site is honest - see here http://svtronics.com/market_omap), so it's not entirely impossible that we'll see an OMAP4 phone or tablet this year.
 
Depends what you mean by "out", I think. The Galaxy S has been reviewed and is said to be released "this summer." Other platforms with the S5PC110 include the Odroid-T/Odroid-S, which may also be available; it's hard for me to really tell. Maybe there are some other devices local to Korea using it.

OMAP4s are publicly sampling (i.e., you can buy the reference platform "now", arriving within 4-18 weeks if the site is honest - see here http://svtronics.com/market_omap), so it's not entirely impossible that we'll see an OMAP4 phone or tablet this year.

I meant outside of PowerVR graphics IP; of course the SGX540 is far more powerful. Compare the 535 to anything else outside of PowerVR IP and you'll see what I mean.

As for Apple themselves they and only they know what's on their future roadmap; however if they continue using PowerVR IP my gut feeling tells me that they might skip 540 entirely and go straight for a 543MP.

In any case Apple is a case of its own; despite the typical criticism left and right (some perfectly justified, some exaggerated), Apple knows how to design and market its software. When the time comes and we can compare a smartphone with an OMAP4 SoC against the iPhone 4, will we really be able to declare a real "winner", or will each device have its own advantages and disadvantages despite the underlying hw differences?
 
I meant outside of PowerVR graphics IP; of course the SGX540 is far more powerful. Compare the 535 to anything else outside of PowerVR IP and you'll see what I mean.

Oh, yeah, sure. I wonder if even Tegra 2 will be a non-starter for not being competitive enough in the mobile space against SGX540/543 which will be its most likely contemporaries.
 
Oh, yeah, sure. I wonder if even Tegra 2 will be a non-starter for not being competitive enough in the mobile space against SGX540/543 which will be its most likely contemporaries.

Depends on the perspective and time-frame; Tegra 2 doesn't sound like it'll have much of a chance in smartphones, and I don't see SoCs like OMAP4 appearing in tablets. If NV's claims so far about tablet design wins are close to reality, I wouldn't be surprised if T2-powered tablets made up a reasonable share of the market, though of course nowhere near what Apple can sell with its iPad alone.

2011 then will be a battle of its own. I recall reading at xbitlabs that NV made roughly over $30 million in revenue (gross revenue?) from the Tegra department in Q1. I'd suppose that's with everything GoForce that might still lurk in the pipeline included. Nothing to write home about IMO, but not bad at all either.

Long story short: IMO T2 is actually competing this year at least more with SGX535 than anything else.
 
Ars posted a piece about how the T2 tablets shown so far have sluggish performance.

When they brought that up, Nvidia kept pointing out that T2 is much faster than A4. But likely Android had not been optimized for it yet.
 
I meant outside of PowerVR graphics IP; of course the SGX540 is far more powerful. Compare the 535 to anything else outside of PowerVR IP and you'll see what I mean.
Depending on what you believe, the AMD Z430 / Qualcomm Adreno 200 could be quite a contender. It's short on pixel output, and triangle-wise it's about 6 Mtri/s short on paper, but in practice some sources report that, at least on iPhones, the SGX535 isn't getting anywhere near its theoretical numbers and falls even below 10 Mtri/s.

In practice, SGX540 is ~double the speed of Z430/Adreno 200, at least when measured with Neocore 3D benchmark ( http://asia.cnet.com/reviews/mobilephones/0,39050603,62200389,00.htm )

edit:
Here's some sort of spec-sheet-ish thing if you scroll down enough, and some talk about these too:
http://alienbabeltech.com/main/?p=17125
 
Depending on what you believe, the AMD Z430 / Qualcomm Adreno 200 could be quite a contender. It's short on pixel output, and triangle-wise it's about 6 Mtri/s short on paper, but in practice some sources report that, at least on iPhones, the SGX535 isn't getting anywhere near its theoretical numbers and falls even below 10 Mtri/s.

In practice, SGX540 is ~double the speed of Z430/Adreno 200, at least when measured with Neocore 3D benchmark ( http://asia.cnet.com/reviews/mobilephones/0,39050603,62200389,00.htm )

edit:
Here's some sort of spec-sheet-ish thing if you scroll down enough, and some talk about these too:
http://alienbabeltech.com/main/?p=17125
According to that spec-sheet the Samsung Galaxy S has 512 MB of L2 cache :LOL:
 
edit:
Here's some sort of spec-sheet-ish thing if you scroll down enough, and some talk about these too:
http://alienbabeltech.com/main/?p=17125

This writeup is full of questionable and IMO poorly drawn conclusions.

I guess I'll take it from the top...

- All of this talk about Hummingbird performing better than "stock Cortex-A8" per clock is popular on the internet but unsubstantiated. Cortex-A8 is not a modifiable design; to be a Cortex-A8 means you're using the same design and therefore have the same timings. If ARM had offered Intrinsity some kind of special joint venture letting them modify Cortex-A8, then I think Hummingbird would no longer even claim to be an A8. Here's Samsung's press release:

http://www.samsung.com/global/business/semiconductor/newsView.do?news_id=1030

Nowhere are claims made about faster functions per clock. The claim made is that the logic gates used to implement the Cortex-A8 design (supplied as net-lists) are 25-50% faster than the competition's, and this has been grossly misinterpreted. I have no idea where the "20% of functions" number comes from, but the 5-10% number appears to be extrapolated by combining it with the 25-50% number. That should give you a good idea of the dangerous line of reasoning at work. If there really is more to all of this, then some sources are desperately needed.
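For what it's worth, here's a sketch of how that extrapolation might have been produced; the inputs are the criticized write-up's numbers, not anything from Samsung or ARM:

```python
# Reconstruction of the (unsubstantiated) extrapolation called out above:
# scale "20% of functions" by the "25-50% faster gates" claim. Both inputs
# come from the criticized write-up, not from any vendor documentation.
affected_fraction = 0.20
speedup_low, speedup_high = 0.25, 0.50

print(f"{affected_fraction * speedup_low:.0%} "
      f"to {affected_fraction * speedup_high:.0%} 'per-clock gain'")
# -> 5% to 10%, matching the numbers floating around the internet
```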

- Lists Cortex-A9's improvements as a 25% improvement per clock AND out-of-order execution, as if these were mutually exclusive
- Uses theoretical geometry performance as an indicator for real-world benchmarks, even though the theoretical figures assume the technology's maximum clock speeds and the actual clock speeds are unknown
- Still hasn't gotten on the clue train that the iPhone 3GS is not S5PC100, when it's well documented that Apple is using an SGX and S5PC100 isn't
- Thinks memory bandwidth is the limiting factor in geometry rate. Okay, at some point it is, but if a vendor wants to push it they can define a vertex as something like 4 half-float x/y/z/w coordinates, a 10-bit x/y/z normal, and 16-bit u/v = 16 bytes per vertex (less if you take out texturing and lighting altogether). At 28 MTri/s (assuming 1 unique vertex per triangle, sent as triangle strips) we're looking at 0.448 GB/s, way under the 4.2 GB/s he says is necessary with no justification (quick sanity check below)
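A quick sanity check of that arithmetic (a minimal Python sketch; the vertex layout is the hypothetical one from the bullet above, not something any real driver mandates):

```python
# Bandwidth needed to stream geometry at the SGX535's 28 MTri/s paper spec,
# using the hypothetical 16-byte vertex from the bullet above.
pos_bytes = 4 * 2        # x/y/z/w as half floats
normal_bytes = 4         # packed 10-bit x/y/z normal (plus 2 pad bits)
uv_bytes = 2 * 2         # 16-bit u/v
bytes_per_vertex = pos_bytes + normal_bytes + uv_bytes   # 16 bytes

tri_rate = 28e6          # triangles per second
verts_per_tri = 1.0      # long triangle strips: ~1 unique vertex per triangle

bandwidth_gb_s = tri_rate * verts_per_tri * bytes_per_vertex / 1e9
print(f"{bandwidth_gb_s:.3f} GB/s")  # 0.448 GB/s, far below the claimed 4.2 GB/s
```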

So I dunno, I'd take everything this guy says with a grain of salt for the time being.
 
Depending on what you believe, the AMD Z430 / Qualcomm Adreno 200 could be quite a contender. It's short on pixel output, and triangle-wise it's about 6 Mtri/s short on paper, but in practice some sources report that, at least on iPhones, the SGX535 isn't getting anywhere near its theoretical numbers and falls even below 10 Mtri/s.

There's a HUGE difference between what each partner claims for solution X and what IMG claims for it. Samsung claims 89M tris for the Wave, while IMG claims 40M for the SGX540@200MHz. Now convince me that the 540 in the Samsung SoC is clocked beyond 200MHz.
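To put a number on that gap: if triangle rate scaled linearly with clock (my assumption for illustration, not an IMG statement), Samsung's figure would imply an SGX540 clock like this:

```python
# Implied SGX540 clock if Samsung's 89 MTri/s claim for the Wave were real
# and triangle rate scaled linearly with frequency (illustrative assumption).
img_mtris, img_clock_mhz = 40e6, 200   # IMG's own rating for SGX540
samsung_mtris = 89e6                   # Samsung's marketing claim

implied_clock_mhz = img_clock_mhz * samsung_mtris / img_mtris
print(f"~{implied_clock_mhz:.0f} MHz")  # ~445 MHz, not a plausible clock
```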

Further to that: if IMG rates its own 4-ALU 540 at 40M tris, what would a 2-ALU 535 rate at at 200MHz?
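Worked out under the assumption that peak triangle rate scales linearly with ALU count (the linearity is my assumption for illustration, not an IMG figure):

```python
# If peak triangle rate scaled linearly with ALU count (assumption for
# illustration only), a 2-ALU SGX535 at the same 200 MHz would rate at
# half the 4-ALU SGX540's 40 MTri/s.
sgx540_mtris, sgx540_alus = 40e6, 4
sgx535_alus = 2

sgx535_mtris = sgx540_mtris * sgx535_alus / sgx540_alus
print(f"~{sgx535_mtris / 1e6:.0f} MTri/s")  # ~20 MTri/s
```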

I'll make it easier: triangle rates in the embedded space are about as "telling" as GFLOPs are in today's standalone GPU market.

In practice, SGX540 is ~double the speed of Z430/Adreno 200, at least when measured with Neocore 3D benchmark ( http://asia.cnet.com/reviews/mobilephones/0,39050603,62200389,00.htm )
Ok and?

edit:
Here's some sort of spec-sheet-ish thing if you scroll down enough, and some talk about these too:
http://alienbabeltech.com/main/?p=17125
What a bunch of horseshit. Despite what Exophase already noted, I'll give you a quick one:

For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks.
I'm too tired to bother with the rest of the nonsense in that write-up, but GLBenchmark actually has a database of results. Here's the fastest Z430/Adreno listed, which ironically doesn't even manage to beat a first-generation OGL_ES1.1 MBX:

http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro11&D=Acer Liquid A1&testgroup=lowlevel

Now compare triangle rates or whatever else to that:

http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro11&D=Apple iPhone 3G S&testgroup=lowlevel

You were saying about triangle rates and what they achieve in real time? What is the Z430 rated at again? I'm pretty sure it isn't "just" 2M tris now, is it?
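To put the gap in perspective with the numbers already quoted in this thread (the 7 MTri/s GLBenchmark figure for the 3GS against the 28 MTri/s paper spec):

```python
# Measured vs. theoretical triangle throughput for the iPhone 3GS, using
# the figures quoted earlier in the thread (GLBenchmark vs. paper spec).
measured_mtris, theoretical_mtris = 7e6, 28e6
print(f"{measured_mtris / theoretical_mtris:.0%} of peak")  # 25%
```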
 