Implications of SGX543 in iPhone/Pod/Pad?

http://daringfireball.net/2011/01/cold_water_ipad_retina_display

Gruber has gotten information from his own sources claiming that resolution doubling and Retina Displays are not coming to the iPad 2. The remaining scenarios are an intermediate resolution like 1.5x, which he feels is unlikely due to the non-ideal scaling and the need to keep supporting it as a third resolution (since he believes a Retina Display will come sooner or later), or that the iPad will remain at 1024x768, which he feels is most likely. Even though 2048x1536 did seem outlandish, I must say, sticking to 1024x768 does seem disappointing now. Maybe 1920x1280 to standardize on a 3:2 ratio would be a good compromise, although even that is a high resolution for a ~10" screen.

Calling a GPU multi-core is just marketing, as there's no clear definition of multi-core for GPUs the way there is for CPUs. With a CPU, an extra core means an additional thread can run mostly unencumbered by the first thread. With GPUs you can already render multiple primitives and pixels in parallel, so the core distinction is not clear cut.

Even on CPUs the core definition is being challenged, as a Bulldozer module might be called 1 or 2 cores depending on who's talking.
So presumably OpenGL ES apps will see the SGX543MP2 as 1 GPU, but I wonder whether it'll appear to OpenCL apps as 2 devices so they can work on independent tasks? Or whether there's any interest in using 1 GPU core for graphics and the 2nd core for, say, an OpenCL physics engine?
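Purely as a sketch of what that could look like: assuming a hypothetical OpenCL driver on the device (Apple has never publicly exposed OpenCL on iOS, so this is illustrative only), standard device enumeration would answer the question, since an MP2 exposed as a single device would simply report more compute units instead of a second cl_device_id:

```c
/* Hypothetical sketch: enumerate GPU devices to see whether an
 * SGX543MP2 shows up as one OpenCL device or two. Illustrative
 * only; no public OpenCL exists on these devices. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[4];
    cl_uint ndev = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 4, devices, &ndev);
    printf("GPU devices visible to OpenCL: %u\n", ndev);

    for (cl_uint i = 0; i < ndev; ++i) {
        char name[128];
        cl_uint units;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof name, name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof units, &units, NULL);
        printf("  device %u: %s, %u compute units\n", i, name, units);
    }
    return 0;
}
```

Two devices would make the graphics-on-core-0, physics-on-core-1 split trivial to express; a single device would leave that partitioning to the driver.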
 
Well, they made deals with Toshiba and Sharp, but I thought those were for factories that still had to be built?

What kind of manufacturing capacity could there be for specialized screens? They might not be able to produce enough units with the high-DPI screens.

Higher DPI requires smaller and tighter geometries. It is likely that the investment is in fab capacity for the CPUs, fab capacity for the displays, or mineral rights/mining for the batteries. Considering China's saber rattling of late over various materials, it might make sense to secure a long-term supply.
 
Yeah, IMHO battery tech would be the second most likely. Something like lithium-air or zinc-air maybe?


I don't think so. IMHO Cook made it very clear that this time it's not about flash. And you don't need to invest $3.9 billion (and three suppliers) just for flash controllers.

I suppose a solid-state disk in the new form factor (mSATA) is something we can all expect from future designs in the notebook family.

Lithium-polymer is still the best available battery technology so far (not meant in a cheap way). Fuel cells still seem so futuristic.

Higher DPI requires smaller and tighter geometries. It is likely that the investment is in fab capacity for the CPUs, fab capacity for the displays, or mineral rights/mining for the batteries. Considering China's saber rattling of late over various materials, it might make sense to secure a long-term supply.

Yeah, China is restricting the export of lithium, for example.

If it isn't displays or solid-state disks, then the next logical step would be batteries. Although Apple has been known to do some funky stuff to rattle the industry before. I doubt they will introduce a new product category this year, though.
 
So presumably OpenGL ES apps will see the SGX543MP2 as 1 GPU, but I wonder whether it'll appear to OpenCL apps as 2 devices so they can work on independent tasks? Or whether there's any interest in using 1 GPU core for graphics and the 2nd core for, say, an OpenCL physics engine?

Distributing workload should be an exclusive GPU driver affair. Pages 8 & 9 picture in rough terms how tasks/threads are being scheduled/distributed: http://users.otenet.gr/~ailuros/IMG_MP_GPGPU.pdf

Since there are quite a few similarities between the META GPP core and the USSE (unified scalable shader engine), here's an explanation of how META roughly works: http://www.eetimes.com/design/audio...magination-Technologies-Meta-processor-Part-1

Note the multiple multi-threaded execution units under USSE in the block diagram:

[Image: POWERVR_SGXSeries5_lrg.gif (PowerVR SGX Series5 block diagram)]


...and the virtual processors marked in the following:

[Image: META_HTP_lrg.gif (META HTP virtual processors)]


which might help in understanding the whole thing.
 
As I said before, the iPhone 4's pixel density in the iPad doesn't even make sense, as no one will look at an iPad at the same distance they look at an iPhone.

1280x960, for example, would be a pretty good resolution for the new 9.7" screen (if they decide to maintain the same aspect ratio), without going "ridiculous".
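To put numbers behind that, here's a quick pixel-density helper (assuming the 3.5" iPhone and 9.7" iPad diagonals): it gives roughly 330 ppi for the iPhone 4, 132 ppi for the current iPad, 165 ppi for the proposed 1280x960, and 264 ppi for the doubled 2048x1536:

```c
/* Pixel density (ppi) from resolution and diagonal size.
 * Diagonals assumed: 3.5" iPhone 4, 9.7" iPad. */
#include <stdio.h>
#include <math.h>

static double ppi(int w, int h, double diag_in) {
    return sqrt((double)w * w + (double)h * h) / diag_in;
}

int main(void) {
    printf("iPhone 4  960x640   @ 3.5\": %.0f ppi\n", ppi(960, 640, 3.5));
    printf("iPad      1024x768  @ 9.7\": %.0f ppi\n", ppi(1024, 768, 9.7));
    printf("proposed  1280x960  @ 9.7\": %.0f ppi\n", ppi(1280, 960, 9.7));
    printf("doubled   2048x1536 @ 9.7\": %.0f ppi\n", ppi(2048, 1536, 9.7));
    return 0;
}
```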
 
http://daringfireball.net/2011/01/cold_water_ipad_retina_display

Gruber has gotten information from his own sources claiming that resolution doubling and Retina Displays are not coming to the iPad 2. The remaining scenarios are an intermediate resolution like 1.5x, which he feels is unlikely due to the non-ideal scaling and the need to keep supporting it as a third resolution (since he believes a Retina Display will come sooner or later), or that the iPad will remain at 1024x768, which he feels is most likely. Even though 2048x1536 did seem outlandish, I must say, sticking to 1024x768 does seem disappointing now. Maybe 1920x1280 to standardize on a 3:2 ratio would be a good compromise, although even that is a high resolution for a ~10" screen.
Gruber usually has very good sources, so this is disappointing news. But as wco81 mentioned, maybe the deals are about new display factories that are yet to be built. So maybe this "very strategic" component really is high-resolution displays, but for 2012?
 
I think it is worth considering that Apple has been quite conservative with hardware improvements with each iteration of the original iPhone (up until the release of the iPhone 4). I remember all sorts of expectations for first the 3G and then the 3GS, yet the hardware improvements in each of those devices were relatively minor. 800x480 displays were in use in other phones when the 3GS was released, yet they stayed with the low-res screen. Obviously this may have been part of Apple's strategy but, regardless, Apple rarely goes cutting edge hardware-wise (heck, the iPad only contains 256MB of RAM when my cheapo £90 Android phone has 512MB!) and it wouldn't surprise me to see the iPad 2 being a relatively minor upgrade. I'm guessing the same resolution screen, an increased amount of RAM and perhaps a dual-core CPU, but any other improvements in the spec may be minor.
 
I think it is worth considering that Apple has been quite conservative with hardware improvements with each iteration of the original iPhone (up until the release of the iPhone 4). I remember all sorts of expectations for first the 3G and then the 3GS, yet the hardware improvements in each of those devices were relatively minor. 800x480 displays were in use in other phones when the 3GS was released, yet they stayed with the low-res screen. Obviously this may have been part of Apple's strategy but, regardless, Apple rarely goes cutting edge hardware-wise (heck, the iPad only contains 256MB of RAM when my cheapo £90 Android phone has 512MB!) and it wouldn't surprise me to see the iPad 2 being a relatively minor upgrade. I'm guessing the same resolution screen, an increased amount of RAM and perhaps a dual-core CPU, but any other improvements in the spec may be minor.
At least in the past year Apple was cutting edge in terms of display technology; that's why a lot of people think that's where they'll make their next big move. They brought IPS into the spotlight for mobile devices and introduced a very high resolution smartphone display. In recent history Apple could also be considered cutting edge hardware-wise in terms of e.g. flash usage in their iPod products and capacitive touchscreens in smartphones etc. ... IMHO Apple cares most about using cutting edge hardware when it can directly influence a mainstream audience's first impression in a big way (unibody construction, capacitive touchscreen, IPS etc.). They care much less about impressing tech-savvy people with hardware specs (or confusing mainstream audiences with them).
 
I think it is worth considering that Apple has been quite conservative with hardware improvements with each iteration of the original iPhone (up until the release of the iPhone 4). I remember all sorts of expectations for first the 3G and then the 3GS, yet the hardware improvements in each of those devices were relatively minor. 800x480 displays were in use in other phones when the 3GS was released, yet they stayed with the low-res screen. Obviously this may have been part of Apple's strategy but, regardless, Apple rarely goes cutting edge hardware-wise (heck, the iPad only contains 256MB of RAM when my cheapo £90 Android phone has 512MB!) and it wouldn't surprise me to see the iPad 2 being a relatively minor upgrade. I'm guessing the same resolution screen, an increased amount of RAM and perhaps a dual-core CPU, but any other improvements in the spec may be minor.

What makes the iPad interesting is that it is finally a tablet done right (though it isn't a new market, just rethought into something usable), so they have to innovate faster than they are used to because competitors are breathing down their neck.

Just like Windows became mainstream back in the late '80s and early '90s, ahead of Mac System 6 and 7.

This is a mistake Apple will not make again.

At least in the past year Apple was cutting edge in terms of display technology; that's why a lot of people think that's where they'll make their next big move. They brought IPS into the spotlight for mobile devices and introduced a very high resolution smartphone display. In recent history Apple could also be considered cutting edge hardware-wise in terms of e.g. flash usage in their iPod products and capacitive touchscreens in smartphones etc. ...

Making USB and FireWire mainstream comes to mind. Dumping the floppy disk, and hopefully this year we will see them dump the optical drive.

Also introducing killer battery life to notebooks without sacrificing performance, and making the structurally great unibody enclosures.

No small feat but all these "small" things do differentiate them from the competition.
 
They brought IPS into the spotlight for mobile devices and introduced a very high resolution smartphone display.

IPS panels have been used by Motorola in all their 3.7" devices since the Droid/Milestone debut in Q4 2009.
Their brightness and viewing angles are about the same as the iPhone 4's, but their color reproduction accuracy is actually a lot better than Apple's device or any S-AMOLED screen. They're actually compared to professionally calibrated S-IPS monitors for designers/photographers.
http://www.displayblog.com/2010/09/28/displaymate-super-amoled-vs-retina-display/

Regarding the "very high resolution"... I honestly think that anything above 265-280 ppi is nothing but a marketing gimmick.
Sure, you'll notice *some* difference when looking at very small text, but what's the point? Text reflow was implemented for a reason.

Or are we supposed to use our cellphones with magnifying glasses, just because Apple launched a cellphone that you could use one with?
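To put numbers on the viewing-distance side of this (assuming the usual 1-arcminute visual acuity figure that the "Retina" marketing leans on), the distance beyond which the eye can no longer resolve individual pixels works out as follows:

```c
/* Distance at which a display out-resolves ~1 arcminute of visual
 * acuity. Assumption: the standard 20/20 acuity figure; real eyes
 * and content vary. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double acuity_rad = (1.0 / 60.0) * M_PI / 180.0; /* 1 arcmin */
    const int ppis[] = { 265, 280, 326 };
    for (int i = 0; i < 3; ++i) {
        double pixel_in = 1.0 / ppis[i];             /* pixel pitch, inches */
        double dist_in = pixel_in / tan(acuity_rad); /* threshold distance */
        printf("%d ppi is unresolvable beyond ~%.1f inches\n",
               ppis[i], dist_in);
    }
    return 0;
}
```

That yields roughly 13" for 265 ppi and 10.5" for 326 ppi, i.e. at typical phone viewing distances the density past ~265-280 ppi is already near the acuity limit.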
 
What makes the iPad interesting is that it is finally a tablet done right (though it isn't a new market, just rethought into something usable), so they have to innovate faster than they are used to because competitors are breathing down their neck.

Just like Windows became mainstream back in the late '80s and early '90s, ahead of Mac System 6 and 7.

This is a mistake Apple will not make again.



Making USB and FireWire mainstream comes to mind. Dumping the floppy disk, and hopefully this year we will see them dump the optical drive.

Also introducing killer battery life to notebooks without sacrificing performance, and making the structurally great unibody enclosures.

No small feat but all these "small" things do differentiate them from the competition.

Dropping the optical drive might be the ideal excuse to force people into iTunes for music and film/video, and the App Store for software. So yes, it's probably very near! :D

I'm glad Apple pushed for IPS in consumer devices. Do they use it in their MacBooks yet? I was very sad to only have old ThinkPad notebooks (select models of the T4x and T60 series) as an option for IPS for a long time.
 
Apple is definitely feeling competitive pressure. And given the rapid development of SoCs from multiple vendors, they should be able to take up the latest and greatest at about the same cost as the previous year, no?

Remember before the iPad was announced there were rumors about them using an OLED display and how it could really expand the market for OLED? Of course there probably wasn't enough OLED manufacturing capacity, especially of 9.7-inch displays.

Can money alone advance the manufacturing process for higher-DPI displays of that size? Since Steve Jobs said the 7-inch tablets are inferior to the 10-inch ones, it seems at some point, they will make bets to reinforce the superiority of the larger screens they've decided to go with.
 
Remember before the iPad was announced there were rumors about them using an OLED display and how it could really expand the market for OLED? Of course there probably wasn't enough OLED manufacturing capacity, especially of 9.7-inch displays.
I think there's still not enough manufacturing capacity for OLEDs even for the iPhone until the new factories are ready, AFAIK sometime in 2012. Android manufacturers had to go back to LCDs because of OLED shortages, and Samsung's Super AMOLED (Plus) is still exclusive to Samsung Mobile because there's just not enough to go around.

I think a Super AMOLED Plus (or something at least as good) 1280x960 display is even more unlikely than a 2048x1536 IPS display for the iPad 2, just from a manufacturing capacity and cost perspective.

Ah well, at least with an SGX543MP2 and just a 1024x768 resolution you can do a lot of impressive 3D stuff more easily.
 
Ah well, at least with an SGX543MP2 and just a 1024x768 resolution you can do a lot of impressive 3D stuff more easily.

To what kind of x86 GPU/IGP would an SGX543MP2 be comparable? A lower-clocked MCP78 (8 shaders + 8 TMUs, G86-derived)?
 
There's quite a difference between multiple GPU chips on a single PCB in the desktop space and a GPU block with multiple cores within an SoC. Apart from that, the former solutions are based on AFR (with SFR routines available for compatibility reasons), while IMG's MP relies on SFR. Unless I've missed something, I don't see any hw support in the AFR solutions, while there is in the latter case. There's a reason why only Series5XT cores are capable of multi-core and Series5 cores aren't.

There's certainly a difference in how they achieve load-balancing, but for the sake of this analogy they both share the modular/scalable distinction that a modern multi-core CPU wouldn't. They're both black boxes that can work independently and without sharing vital resources.

Good point on the additional hardware support, though.
 
It's worth noting that we're not doing SFR in the same way that modern desktop graphics does, and that command issue to Core1+ isn't just the responsibility of the driver (far from it in practice).
 
To what kind of x86 GPU/IGP would an SGX543MP2 be comparable? A lower-clocked MCP78 (8 shaders + 8 TMUs, G86-derived)?

I thought the MCP78 carries an 8200, which if memory serves has 16 SPs @ 1.2GHz, 4 TMUs @ 500MHz and 4 ROPs on a 64-bit bus?

If yes, then I'd suppose that for comparable performance the MP2 would need to be clocked at least at 400MHz (with 2 cores it has "32 SPs"), but you shouldn't forget that the 8-series chipsets are all DX10.

It's worth noting that we're not doing SFR in the same way that modern desktop graphics does, and that command issue to Core1+ isn't just the responsibility of the driver (far from it in practice).

I of course understand less than half of them, and I've posted these before, but those with a fair understanding of patents might get a better picture:

http://v3.espacenet.com/publication...P&FT=D&date=20100915&CC=EP&NR=2227781A1&KC=A1

http://v3.espacenet.com/publication...T=D&date=20090604&CC=WO&NR=2009068893A1&KC=A1

Given that load balancing seems to be assigned dynamically according to workload, and that a deferred renderer constantly has one frame's worth of data deferred, it sounds natural that single-frame rendering is far more efficient; it also tells me that something like AFR would be absolute nonsense on a DR (which isn't necessarily self-evident for everyone).

I don't know if you read my post above, but when I said that load balancing should be an exclusive driver affair I meant in comparison to any API employed. Given that I mentioned above that there's hw scheduling support, I know what you mean, but I obviously phrased the driver thing a tad awkwardly.

Good point on the additional hardware support, though.

Judging from the slides in the MP pdf above, I understand it as a hierarchical scheduling scheme on 3 different levels (MP --> cores --> ALUs). To get there I'd imagine a more advanced form of the hierarchical tiling they've used so far (macro/micro-tiling), where I wouldn't be surprised if macro tiles had dynamic sizes according to workload.

Overall I'd expect the whole load balancing to stay exclusively in the driver/hw realm, with any API simply seeing one GPU core (all of this of course stands open to correction). One more thing that distinguishes their method from AFR/mGPU desktop solutions is the absence of inter-frame latency, which leads to occasional micro-stutters on the latter configurations when the frame rate drops too much. Also, they don't need N times the memory for N cores. On a desktop mGPU with 2GB on the PCB, you effectively have only 1GB per chip per frame.
 
I don't know if you read my post above, but when I said that load balancing should be an exclusive driver affair I meant in comparison to any API employed. Given that I mentioned above that there's hw scheduling support, I know what you mean, but I obviously phrased the driver thing a tad awkwardly.
Ah, I didn't see the context before it. There's no API involved and, depending on the configuration of the driver and platform by the licensee, it can even be impossible to tell from application code that you're running on an SGX MP configuration.

Judging from the slides in the MP pdf above, I understand it as a hierarchical scheduling scheme on 3 different levels (MP --> cores --> ALUs). To get there I'd imagine a more advanced form of the hierarchical tiling they've used so far (macro/micro-tiling), where I wouldn't be surprised if macro tiles had dynamic sizes according to workload.
Who knows :runaway: Macro tiles will always be processed by the same core, MP or not.

One more thing that distinguishes their method from AFR/mGPU desktop solutions is the absence of inter-frame latency, which leads to occasional micro-stutters on the latter configurations when the frame rate drops too much.
Micro-stuttering as in GPU0's and GPU1's frame times being significantly different? We won't have presentation-time issues like that.
 
Couple of corrections...

Ah, I didn't see the context before it. There's no API involved and, depending on the configuration of the driver and platform by the licensee, it can even be impossible to tell from application code that you're running on an SGX MP configuration.
The application is _never_ aware that it is running on SGX multi-core; even the driver is mostly the same.
Who knows :runaway: Macro tiles will always be processed by the same core, MP or not.
Rasterisation workload division is on a tile-by-tile basis, so macro tiles will invariably be distributed across multiple cores.
Micro-stuttering as in GPU0's and GPU1's frame times being significantly different? We won't have presentation-time issues like that.
Correct, we parallelise within individual renders, so there is no extra latency and no issues with imbalance due to differing frame content.

John
 
The application is _never_ aware that it is running on SGX multi-core; even the driver is mostly the same.
That's what I said, I was just hinting at app detection via GL_RENDERER or something, which isn't guaranteed.
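For illustration, the kind of query I meant, which is exactly why it isn't guaranteed (nothing says the renderer string carries an MP suffix):

```c
/* Minimal GLES sketch: query the renderer string. Requires a
 * current EGL/GLES context; context setup omitted. The string
 * might read "PowerVR SGX 543" with no hint of an MP config. */
#include <stdio.h>
#include <GLES2/gl2.h>

void print_renderer(void) {
    const GLubyte *renderer = glGetString(GL_RENDERER);
    printf("GL_RENDERER: %s\n", (const char *)renderer);
}
```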

Rasterisation workload division is on a tile-by-tile basis, so macro tiles will invariably be distributed across multiple cores.
I thought a tile stayed on a core, so its macro tiles would too.
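If I now understand John correctly, the picture is something like the naive sketch below: rasterisation work is handed out per tile within a single frame, so the tiles making up a macro tile land on different cores. Purely illustrative; the real distribution is dynamic and hardware-assisted, not a static round-robin:

```c
/* Illustrative only: per-tile work division across 2 cores within
 * one render (SFR-within-a-frame), as opposed to AFR's whole
 * frames per GPU. Tile counts assume a 1024x768 target and
 * 32x32-px tiles; the actual SGX tile size may differ. */
#include <stdio.h>

#define CORES   2
#define TILES_X 32  /* 1024 / 32 */
#define TILES_Y 24  /*  768 / 32 */

int main(void) {
    int per_core[CORES] = { 0 };
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx) {
            int core = (ty * TILES_X + tx) % CORES; /* naive round-robin */
            per_core[core]++;
        }
    for (int c = 0; c < CORES; ++c)
        printf("core %d rasterises %d tiles of the frame\n", c, per_core[c]);
    return 0;
}
```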
 