How do iPad graphics compare to typical desktop graphics?

The current iPad 3 screen uses a lot of power. Newer display technology from Samsung could easily reduce this by 80%, which would free up a lot of power for other purposes. What I am most interested in is how much of that will then go toward improving performance versus making the battery smaller and the device lighter.
 
When I said 8-16x improvement, I was referring specifically to GPU GFLOPS throughput relative to the original iPad. The original iPad (with the SGX 535) had ~2 GFLOPS of GPU throughput (assuming the GPU was clocked at ~250MHz). The iPad 2 (with the SGX 543MP2) has ~16 GFLOPS, and the iPad 3 (with the SGX 543MP4) has ~32 GFLOPS. Of course, real-world performance differences (notwithstanding the effect of vsync) can be quite different from that: http://images.anandtech.com/graphs/graph4971/41966.png . Now, with respect to die size, there was indeed a dramatic increase in SoC die size going from the iPad to the iPad 2. The increase was not quite as dramatic going from the iPad 2 to the iPad 3, but it was still significant, because the majority of the die-size increase went towards the GPU.
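For context, here's a minimal sketch of how those theoretical figures fall out of cores x FLOPs-per-clock x clock; the per-core FLOPs-per-clock values and the ~250MHz clocks are my assumptions, not confirmed Apple specs:

```python
# Rough theoretical GPU throughput: GFLOPS = cores * FLOPs/clock/core * clock (GHz).
# Per-core FLOPs/clock and the ~250MHz clocks below are assumptions, not confirmed specs.
gpus = {
    "iPad 1 (SGX 535)":    {"cores": 1, "flops_per_clock": 8,  "clock_ghz": 0.25},
    "iPad 2 (SGX 543MP2)": {"cores": 2, "flops_per_clock": 32, "clock_ghz": 0.25},
    "iPad 3 (SGX 543MP4)": {"cores": 4, "flops_per_clock": 32, "clock_ghz": 0.25},
}

for name, g in gpus.items():
    gflops = g["cores"] * g["flops_per_clock"] * g["clock_ghz"]
    print(f"{name}: ~{gflops:.0f} GFLOPS")
```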

OK, most of us obviously missed that the original iPad was part of that comparison. In that light, no disagreement.

I'm hardly in the clear about Apple's real plans, but they might have wanted a higher-resolution screen for the iPad 2 and their plans didn't work out, or it was simply some sort of preliminary exercise for the iPad 3 and its 2048x1536 monster resolution.

The thing is that most small-form-factor GPUs have either just 1 or 2 TMUs per cluster or core. In that regard, one of the reasons that probably drove Apple to go for an MP4 in the iPad 3 is that the MP4 comes with a total of 8 TMUs vs. only 2 in the original iPad.
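As a rough back-of-the-envelope illustration (GPU clocks assumed at ~250MHz, which may not match Apple's actual settings), theoretical texel fillrate scales directly with the TMU count:

```python
# Theoretical texel fillrate = TMUs * core clock; the clocks here are assumed.
def mtexels_per_sec(tmus, clock_mhz):
    return tmus * clock_mhz

print("iPad 1 (SGX 535, 2 TMUs):   ", mtexels_per_sec(2, 250), "MTexels/s")
print("iPad 3 (SGX 543MP4, 8 TMUs):", mtexels_per_sec(8, 250), "MTexels/s")
```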

The iPad 1 scores relatively low in synthetic fillrate tests, probably strangled by the lack of bandwidth, amongst other things:

http://www.glbenchmark.com/compare....D1=Apple iPad 3&D2=Apple iPad 2&D3=Apple iPad

I'm not aware of how much system-level cache Apple dedicated to the SGX535, but I have my doubts that it's the maximum possible, i.e. 128KB, as it is per core in the A5 and A5X for their GPU blocks.

In any case, we all know that GFLOPs aren't the be-all and end-all for any sort of GPU; there are plenty of other units/factors that also play a role and contribute to the final real-time performance. It should also be noted that, besides the ALU differences, the 535 has a much weaker triangle setup than any of the 54x cores above it. Compared to a single 543, its peak real-world triangle rate should be close to 1/3rd, let alone the z/stencil units, of which there are only 8 in the 535 while the 543MP4 sports a total of 64 z units.
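To put those non-ALU differences into rough numbers (the unit counts are the ones quoted above; the triangle-setup estimate is just the ~1/3rd figure scaled up, so treat it as ballpark):

```python
# Non-ALU comparison, SGX535 vs. SGX543MP4, using the unit counts quoted above.
sgx535 = {"tmus": 2, "z_stencil_units": 8}
sgx543mp4 = {"tmus": 8, "z_stencil_units": 64}

for unit in sgx535:
    ratio = sgx543mp4[unit] / sgx535[unit]
    print(f"{unit}: {ratio:.0f}x advantage for the 543MP4")

# Triangle setup: roughly 3x per 543 core vs. the 535 (per the 1/3rd estimate above),
# so on the order of 12x for the full MP4 -- a very rough ballpark.
```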

Last but not least, I don't expect Rogue to depart from the 2 TMUs/cluster trend that the entire Series5 family (with the exception of the SGX530) followed.
 
The current iPad 3 screen uses a lot of power. Newer display technology from Samsung could easily reduce this by 80%, which would free up a lot of power for other purposes. What I am most interested in is how much of that will then go toward improving performance versus making the battery smaller and the device lighter.

It's Sharp that is currently mass-producing in-cell panels.

The screen is what uses the majority of the power in its current implementation.
 
Hi, where did you get this picture, please?

The theoretical maximum in terms of FLOPs for the SGX543MP4 @ 250MHz is about 36 GFLOPs; times 100, that gives 3.6 TFLOPs, which isn't out of line for a current GeForce 6xx desktop GPU. The only other difference would be that the Apple A5X is manufactured on 45nm while Kepler is on 28nm, and the former is DX9.0-class while the latter is DX11-class.

For a fairer comparison I'd suggest at least a 28nm small-form-factor GPU, and better yet a DX11-compliant one. A FLOPs/mm² comparison of some kind would also be helpful for something like the upcoming Rogue; even then, I'd still say the >=100x performance difference would immediately shrink to much more reasonable levels, and with better metrics you'd get an apples-to-apples comparison.

A quad-cluster Rogue GC6430 (DX11.1) at 28nm should deliver something above 210 GFLOPs, which, against a theoretical constant of 3.6 TFLOPs for a current desktop GPU, would give a difference of a factor of ~17x.
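A minimal sketch of that arithmetic, using only the rough figures quoted above (the 3.6 TFLOPs desktop reference and the 36/210 GFLOPs mobile numbers are paper specs, not measurements):

```python
# Paper-spec FLOPs ratios, using the rough figures quoted in this thread.
desktop_tflops = 3.6          # ~GeForce 6xx class desktop GPU
sgx543mp4_gflops = 36.0       # SGX543MP4 @ 250MHz (theoretical maximum)
rogue_gc6430_gflops = 210.0   # quad-cluster Rogue GC6430 estimate

print(f"Desktop vs. SGX543MP4:    ~{desktop_tflops * 1000 / sgx543mp4_gflops:.0f}x")
print(f"Desktop vs. Rogue GC6430: ~{desktop_tflops * 1000 / rogue_gc6430_gflops:.0f}x")
```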

[Image: 033_cpu_vs_gpu_GFLOPS.png — CPU vs. GPU GFLOPS comparison chart]
 
Fortunately, at Beyond3D most don't accept a single benchmark as proof of anything but superiority in that benchmark.
 
I wouldn't really consider the Radeon HD 7970 and GeForce GTX 680 to be your typical desktop graphics cards, heh.

The current Steam hardware survey says that most people use the GeForce GTX 560 Ti and the Intel HD 4000, both of which still handily beat the SGX554MP4 in the iPad 4.
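For a rough sense of scale, a sketch with approximate peak-FLOPs figures (ballpark numbers from memory, so treat all three as assumptions rather than official specs):

```python
# Approximate peak single-precision throughput; ballpark figures, not official specs.
peak_gflops = {
    "GeForce GTX 560 Ti":      1260.0,  # ~1.26 TFLOPS
    "Intel HD 4000":            290.0,  # ~0.29 TFLOPS
    "iPad 4 GPU (SGX554MP4)":    77.0,  # ~77 GFLOPS
}

ipad4 = peak_gflops["iPad 4 GPU (SGX554MP4)"]
for name, gflops in peak_gflops.items():
    print(f"{name}: ~{gflops:.0f} GFLOPS ({gflops / ipad4:.1f}x the iPad 4 GPU)")
```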
 