Implications of SGX543 in iPhone/Pod/Pad?

I don't think 3D performance is a priority yet.

It would take a blockbuster 3D game to make it so.

Photos at 4x the iPad's resolution would presumably display nicely, and maybe things like Google Earth would too.

The iPad chokes on 1080p rips but is okay with 720p rips. Then again, Apple isn't one to tout Blu-ray, and who knows if iTunes will ever offer 1080p movies.
 
How much power would an SGX543 (2 cores at 400 MHz) consume on a 32 nm low-power CMOS process, or even on 28 nm (which may be available in the second half of this year)? Thanks.

Ask me that question again when either manufacturing process is actually available for mass production at a given semiconductor manufacturer.
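
Nobody outside the fabs can answer that precisely, but for a rough feel of what a node shrink buys, here's a back-of-envelope sketch using the classic dynamic-power relation P ≈ C·V²·f. The per-node capacitance and voltage factors below are illustrative assumptions, not foundry data.

```python
# Back-of-envelope dynamic power scaling: P ~ C * V^2 * f.
# The per-node scaling factors below are illustrative assumptions,
# not published foundry numbers.

def relative_dynamic_power(cap_scale, volt_scale, freq_scale=1.0):
    """Dynamic power relative to the baseline node."""
    return cap_scale * (volt_scale ** 2) * freq_scale

# Baseline: a hypothetical 45 nm low-power process, same 400 MHz clock.
nodes = {
    "45nm LP": (1.00, 1.00),  # reference point
    "32nm LP": (0.75, 0.90),  # assumed ~25% lower C, ~10% lower Vdd
    "28nm LP": (0.65, 0.85),  # assumed ~35% lower C, ~15% lower Vdd
}

for name, (c, v) in nodes.items():
    p = relative_dynamic_power(c, v)
    print(f"{name}: ~{p:.2f}x the 45 nm dynamic power at the same clock")
```

Leakage, which this ignores, tends to get worse with each shrink, so the real saving would be smaller than the dynamic figure alone suggests.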
 
I don't think 3D performance is a priority yet.

It would take a blockbuster 3D game to make it so.

Photos at 4x the iPad's resolution would presumably display nicely, and maybe things like Google Earth would too.

The iPad chokes on 1080p rips but is okay with 720p rips. Then again, Apple isn't one to tout Blu-ray, and who knows if iTunes will ever offer 1080p movies.

Here we go again... what kind of applications get downloaded most from Apple's App Store? Yeah, right, sure, okay, it's all a coincidence :rolleyes:
 
We've always dismissed people who were speculating about multi-GPU cores for PCs, since GPUs were always highly parallel in the first place.

Why is this considered more plausible for mobile chips? I don't see any performance or area advantage in doing so over simply increasing the number of shaders, as big GPUs do.

IMG's multi-core relies on split frame rendering (SFR) and has specific scheduling logic for load balancing; they're promising almost linear scaling.

What am I missing?

(Can somebody point me to a comprehensive table with all the different PowerVR versions that doesn't require me to !#$!#$! create an account on their website?)

http://users.otenet.gr/~ailuros/IMG_MP_GPGPU.pdf

It's their own explanation for going multi-core.
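
The PDF spells out IMG's reasoning; as a purely illustrative aside, here's a minimal sketch of why tile-granularity SFR balances well: cores pull tiles of the same frame from a shared pool, so no core sits idle while work remains. The tile count and per-tile costs are invented, and real scheduling hardware is far more involved than a greedy software loop.

```python
# Toy model of split frame rendering (SFR) on a tiler: all cores share
# the tiles of a single frame, so balancing happens at tile granularity.
# Tile count and per-tile costs are invented for illustration.
import heapq
import random

def sfr_core_loads(tile_costs, num_cores):
    """Greedily assign each tile to the currently least-loaded core
    and return the resulting per-core loads."""
    heap = [(0.0, core) for core in range(num_cores)]  # (load, core id)
    heapq.heapify(heap)
    loads = [0.0] * num_cores
    for cost in tile_costs:
        load, core = heapq.heappop(heap)
        loads[core] = load + cost
        heapq.heappush(heap, (loads[core], core))
    return loads

random.seed(1)
tiles = [random.uniform(0.5, 2.0) for _ in range(768)]  # one frame's tiles
for n in (1, 2, 4, 8):
    loads = sfr_core_loads(tiles, n)
    print(f"{n} core(s): speedup ~{sum(tiles) / max(loads):.2f}x")
```

With many small tiles relative to the core count, the imbalance shrinks toward nothing, which is where the "almost linear" claim comes from; a frame-granularity scheme can't balance within a frame at all.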
 
They do, they have, they will.
Very interesting; I wonder if their 45nm Orion SoC is actually 40nm. They wouldn't be the first company that calls everything 45nm when nearly all of their chips are really 40nm... (*cough* Qualcomm *cough*)
 
We've always dismissed people who were speculating about multi-GPU cores for PCs, since GPUs were always highly parallel in the first place.

Why is this considered more plausible for mobile chips? I don't see any performance or area advantage in doing so over simply increasing the number of shaders, as big GPUs do.

What am I missing?

Imo there are two aspects to this:

1) Ability to genuinely scale performance
2) Ability to respond to the demand for an extremely wide range of performance points

In terms of (1), the desktop immediate-mode renderer (IMR) MP solutions simply don't scale particularly well imo, at least not without other significant compromises. If we'd stayed in the desktop market, I strongly suspect you'd have seen multi-core solutions from us, as tilers do scale well in MP configurations. Whether this would have appeared as multiple cores in multiple devices, devices with a larger number of cores, or a combination of both is unclear, as the practical aspects of the latter would need to be considered in more detail than they have been.

For (2), it's safe to say that delivering IP into a wide range of environments with very different performance points is made easier by being able to offer scalability from a single base core, and as the MP configurations are effectively viewed as a single GPU, this is a simpler solution than physically separate multi-core. Note there is still a compromise here w.r.t. redundant replication of resources, which affects the granularity at which you choose to go multi-core.

John.
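
To make the granularity compromise above concrete, here's a hedged toy model: every additional core replicates some resources that a monolithic design could share, so area carries a per-core overhead while performance scales almost linearly. The 15% overhead fraction and 95% scaling efficiency are invented purely for illustration; only the 8 mm² per-core figure appears later in this thread.

```python
# Toy model of the MP granularity trade-off: each extra core carries a
# copy of resources a monolithic design could share. Overhead fraction
# and scaling efficiency are invented; 8 mm^2 per SGX543 core at 65 nm
# is the figure quoted later in this thread.

CORE_AREA = 8.0            # mm^2 per core at 65 nm
REDUNDANT_FRACTION = 0.15  # assumed share of each core that is replicated logic
SCALING_EFF = 0.95         # assumed per-extra-core efficiency ("almost linear")

def mp_config(n_cores):
    area = n_cores * CORE_AREA
    redundant = (n_cores - 1) * CORE_AREA * REDUNDANT_FRACTION  # extra copies
    perf = 1.0 + (n_cores - 1) * SCALING_EFF
    return area, redundant, perf

for n in (1, 2, 4, 16):
    area, redundant, perf = mp_config(n)
    print(f"MP{n}: {area:.0f} mm^2 total, ~{redundant:.1f} mm^2 redundant, "
          f"~{perf:.2f}x single-core performance")
```

The coarser the base core, the more replicated area you pay per step; the finer the base core, the more configurations you can hit from one design, which is the granularity choice being described.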
 
What exactly is that supposed to serve as a statistic of? You yourself mentioned "blockbuster 3D games"; I could name a couple either shipping or in development on iOS, exactly because Apple supposedly doesn't care about 3D. Now try to find something comparable on other platforms.

Point is, nobody is buying iOS devices because of 3D games. Some people are buying iPod Touch mainly for games and other entertainment content.

But so far, no 3D game has by itself drawn people to iOS. I'm thinking along the lines of Halo getting people to buy Xbox.

Still, it's great that some developers are working on such games when the prevailing pricing can't be that encouraging for higher-cost games. Can the licensing costs for Unreal Engine or any other advanced 3D engine (not to mention other development costs) allow games to be priced at 99 cents, for instance?

You get the sense that Apple is putting in better and better GPUs mainly to enable certain GUI features rather than to provide a platform for console-like games. That doesn't mean such games can't be made for the platform, just that it's not their main focus.
 
Very useful. Thanks!

Kristof is always helpful; oh and nothing is for free in 3D ;)

Point is, nobody is buying iOS devices because of 3D games. Some people are buying iPod Touch mainly for games and other entertainment content.

No one said they aren't. But 3D still isn't some sort of neglected sideshow for Apple; au contraire. I'd even be so bold as to claim that iOS is more "3D oriented" as an OS than anything else out there in the embedded space. What people are actually buying are multi-function devices, but e.g. a smartphone is still a smartphone.

But so far, no 3D game has by itself drawn people to iOS. I'm thinking along the lines of Halo getting people to buy Xbox.

That must be the reason why Sony, in a relative sense, sees its handheld business threatened by Apple. Ironically, I've seen similar official comments from Nintendo too. You'll see which GPU cores the "PSP2" ends up with, and I'm willing to bet that it's not a coincidence either: as much as possible, they'll keep a reasonable distance from what Apple is cooking.

Still, it's great that some developers are working on such games when the prevailing pricing can't be that encouraging for higher-cost games. Can the licensing costs for Unreal Engine or any other advanced 3D engine (not to mention other development costs) allow games to be priced at 99 cents, for instance?

What would stop an "i-user" from downloading a AAA title at a much higher cost?

You get the sense that Apple is putting in better and better GPUs mainly to enable certain GUI features rather than to provide a platform for console-like games. That doesn't mean such games can't be made for the platform, just that it's not their main focus.

That's your personal opinion, and I can't point out often enough how wrong you've been in the past about anything 3D in the mobile space. Enjoy the ride in the meantime.
 
Apple just announced in its Q1/FY2011 financial results conference call that it "made a two-year, $3.9 billion deal with three suppliers to secure a 'very strategic' component for its products", and that "these payments consist of both prepayments and capital for processes and tooling, and similar to the flash agreement, they're focused in an area that we think is very strategic." By the end of Q2/FY2011 (likely the earliest release date for the iPad 2) they will already have made $1.7 billion in prepayments ($650 million in Q1/FY2011 and $1.05 billion in Q2/FY2011).

Compared to the famous flash deal from 2005, worth $1 billion, this is huge, and this time it's not about flash. IMHO that only leaves high-resolution displays... and with $1.7 billion in prepayments and capital for equipment and tooling (before the release of the iPad 2), a lot can be done. And you don't have to give display manufacturers this much money in the form of prepayments and capital for process equipment and tooling if you just want to buy 9.7" 1024x768 IPS displays (especially if you have already reached supply-demand balance for that part)...

http://www.engadget.com/2011/01/18/apples-invested-in-a-very-strategic-3-9b-component-supply-ag/
(Includes a 3-minute audio clip from the conference call with the statements from COO Tim Cook and CFO Peter Oppenheimer about these deals)

PS: Sure, there's also the possibility that these deals were made for some other component. Who knows, maybe even for something IMHO crazy like the rumored 3D display for the iPhone/iPod touch, a touchscreen with tactile feedback (à la Nokia's Haptikos), anti-glare coating for all MacBook Pro displays ;), or even a fuel cell (a guy can dream)... but my money is on high-density displays for the iPad 2.
 
Yeah, it would be displays or battery tech.
Yeah, IMHO battery tech would be the second most likely. Something like lithium-air or zinc-air maybe?

Or solid-state drives across their whole lineup.
I don't think so. IMHO Cook made it very clear that this time it's not about flash. And you don't need to invest $3.9 billion (across three suppliers) just for flash controllers.
 
Well, they made deals with Toshiba and Sharp, but I thought those were for factories that still had to be built?

What kind of manufacturing capacity could there be for specialized screens? They might not be able to produce enough units with the high-DPI screens.
 
Well, they made deals with Toshiba and Sharp, but I thought those were for factories that still had to be built?
Well, maybe these rumored Toshiba and Sharp deals are part of it. But as you said, won't these factories take around 1-1.5 years to be built and become mass-production ready (Toshiba in late 2011 and Sharp in late 2012)? And this deal runs for only two years and already started in Q3/FY2010. Also, these factories are rumored to cost around $1.2 billion each, and most likely Apple would only pay part of that (say half, which makes only $1.2 billion total).
 
We've always dismissed people who were speculating about multi-GPU cores for PCs, since GPUs were always highly parallel in the first place.

Why is this considered more plausible for mobile chips? I don't see any performance or area advantage in doing so over simply increasing the number of shaders, as big GPUs do.

What am I missing?

(Can somebody point me to a comprehensive table with all the different PowerVR versions that doesn't require me to !#$!#$! create an account on their website?)
Calling a GPU multi-core is just marketing, as there's no clear definition of multi-core like there is with CPUs. With a CPU, an extra core means an additional thread can run mostly unencumbered by the first thread. With GPUs you can already render multiple primitives and pixels in parallel, so the core distinction is not clear-cut.

Even on CPUs the core definition is being challenged, as a Bulldozer module might be called one or two cores depending on who's talking.
 
Calling a GPU multi-core is just marketing, as there's no clear definition of multi-core like there is with CPUs. With a CPU, an extra core means an additional thread can run mostly unencumbered by the first thread. With GPUs you can already render multiple primitives and pixels in parallel, so the core distinction is not clear-cut.

Even on CPUs the core definition is being challenged, as a Bulldozer module might be called one or two cores depending on who's talking.

In this case though, all multi-core GPU offerings so far have consistently been complete black-box GPU cores scalable to work in parallel, with the designer (or even end-user) being able to choose the scaling. This is similar to how multi-core CPUs originally were, but the lines have blurred as they've started sharing resources. I don't think multi-core GPUs are slated to have anything like that yet.
 
Calling a GPU multi-core is just marketing, as there's no clear definition of multi-core like there is with CPUs. With a CPU, an extra core means an additional thread can run mostly unencumbered by the first thread. With GPUs you can already render multiple primitives and pixels in parallel, so the core distinction is not clear-cut.

Even on CPUs the core definition is being challenged, as a Bulldozer module might be called one or two cores depending on who's talking.

But there's a specific amount of redundancy in IMG's MP. If they hypothetically designed a single core for a specific performance ballpark instead of an MP configuration, I'm pretty sure they'd save portions of die area. One easy example I can spot would be the z units, which TBDRs typically have in high amounts. Albeit probably an awkward point of comparison, their old KYRO/Series3 desktop GPU had 32 z/stencil units, while each SGX543 has 16. Now I can't know how many a large single-core GPU would need, but I'd say that the 256 z units you get in a theoretical 16MP are way over the top.

In a case where going multi-core means all components get doubled, tripled, etc., instead of scaling only the necessary parts within a specific block of die area, I don't see why the definition is off base. Each 543 core is about 8 mm² at 65 nm, so a hypothetical 16 cores would come to 128 mm² at 65 nm in total.
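
Spelling out the replication arithmetic from the figures above (16 z/stencil units and roughly 8 mm² at 65 nm per SGX543 core):

```python
# Replication arithmetic from the per-core figures quoted above.
Z_UNITS_PER_CORE = 16  # z/stencil units per SGX543 (the KYRO/Series3 had 32)
AREA_PER_CORE = 8.0    # mm^2 per SGX543 core at 65 nm

for n in (1, 2, 4, 16):
    print(f"SGX543MP{n}: {n * Z_UNITS_PER_CORE} z/stencil units, "
          f"{n * AREA_PER_CORE:.0f} mm^2 at 65 nm")
```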

In this case though, all multi-core GPU offerings so far have consistently been complete black-box GPU cores scalable to work in parallel, with the designer (or even end-user) being able to choose the scaling. This is similar to how multi-core CPUs originally were, but the lines have blurred as they've started sharing resources. I don't think multi-core GPUs are slated to have anything like that yet.

There's quite a difference between multiple GPU chips on a single PCB in the desktop space and a GPU block with multiple cores within an SoC. Apart from that, the former solutions are based on alternate frame rendering (AFR) (with SFR routines available for compatibility reasons), while IMG's MP relies on SFR. Unless I've missed something, I don't see any hardware support in the AFR solutions, while there is in the latter case. There's a reason why only Series5XT cores are capable of multi-core while Series5 cores aren't.
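
For anyone unfamiliar with the distinction being drawn here, a minimal illustrative sketch (frame and tile counts are invented): AFR hands whole alternating frames to each GPU, while SFR splits every single frame across all cores.

```python
# Illustrative AFR vs. SFR work assignment; counts are invented.

def afr_assign(num_frames, num_gpus):
    """Alternate frame rendering: whole frames round-robin across GPUs,
    so each GPU only ever works on every Nth frame."""
    return {frame: frame % num_gpus for frame in range(num_frames)}

def sfr_assign(num_tiles, num_cores):
    """Split frame rendering: the tiles of one frame are divided among
    all cores, so every core contributes to the current frame."""
    return {tile: tile % num_cores for tile in range(num_tiles)}

print("AFR, 6 frames on 2 GPUs:", afr_assign(6, 2))
print("SFR, 8 tiles of one frame on 2 cores:", sfr_assign(8, 2))
```

AFR buys parallelism by buffering an extra frame, which adds latency and breaks down with inter-frame dependencies; SFR keeps every core on the current frame, which is why it benefits from the dedicated scheduling hardware mentioned above.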
 