The condition I mentioned assumed that Apple will use the same frequency as the MP2, or close to it anyway.
The reason is simple: if they go for the MP4, the power usage when all 4 cores are stressed will double (twice the number of cores).
You're right that it can be offset by the fact that they'd use a smaller manufacturing process, plus maybe some small increase in MHz.
But going from an MP2 @ 250MHz to an MP4 @ 500MHz is something different:
MP2 @ 250MHz => x power load
MP4 @ 250MHz => 2x power load
MP4 @ 500MHz => ~4x power load (cores doubled and frequency doubled)
This comparison is a bit rudimentary, but it gets the point across.
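A minimal back-of-the-envelope sketch of that scaling, in Python, under the same simplification as the comparison above (dynamic power taken as linear in core count and frequency, voltage held fixed):

# Relative dynamic power vs. an MP2 @ 250MHz baseline.
# Assumes power scales linearly with cores and frequency (voltage held fixed).
def relative_power(cores, mhz, base_cores=2, base_mhz=250):
    return (cores / base_cores) * (mhz / base_mhz)

print(relative_power(2, 250))  # MP2 @ 250MHz -> 1.0x
print(relative_power(4, 250))  # MP4 @ 250MHz -> 2.0x
print(relative_power(4, 500))  # MP4 @ 500MHz -> 4.0x

In practice a higher clock usually needs a higher supply voltage too, so the real penalty at 500MHz would be worse than this linear model suggests.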
Too much detail for something that fits in a sentence. Just for the record's sake, the ULP GeForce in T20 (Tegra2 tablets) has 1 Vec4 PS ALU + 1 Vec4 VS ALU clocked at 333MHz, while the ULP GeForce in T30 (Tegra3 tablets) has 2 Vec4 PS ALUs + 1 Vec4 VS ALU clocked at 520MHz, and that on the very same TSMC 40nm manufacturing process. I admittedly haven't seen 3D power consumption measurements for either platform, but if it is somewhat higher on Tegra3, it obviously won't come from the GPU alone if all A9 CPU cores are utilized at their maximum frequencies.
The A5 was manufactured on Samsung's 45nm, and Apple's next SoC will most likely be manufactured on Samsung's 32nm. Now, I never suggested, nor will I suggest, that doubling the number of cores and at the same time doubling the frequency is feasible; but even if they did, it wouldn't increase power consumption by exactly 4x under full stress, precisely because a smaller manufacturing process is at play for the succeeding SoC.
Doubling each core's frequency also increases the power load on each core. If there is one thing we all know about any CPU/GPU, it's that the higher the frequency, the more the power requirements skyrocket.
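As a first-order aside (this is the standard CMOS dynamic power relation, nothing specific to the SGX):

P_{dyn} \approx \alpha \cdot C \cdot V^2 \cdot f

with activity factor \alpha, switched capacitance C, supply voltage V, and clock frequency f. Since higher frequencies typically demand a higher supply voltage, and V enters squared, power climbs much faster than linearly once the clock gets pushed.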
That's not a rule, even under the same manufacturing process. A GTX480 @ 700MHz has only 15 SMs enabled and a TDP of 250W, while its successor, the GTX580, on the very same TSMC 40G process, has all 16 SMs enabled, is clocked at 772MHz, and has a TDP of 244W. Hardware bugs like GF100's aren't all that typical, but in any case, if you overgeneralize things there is more than one trap you can step into.
Smaller manufacturing processes typically have a higher tolerance for higher frequencies. Samsung has announced that they managed to increase the frequencies of their Exynos 4xxx SoCs by 50% while going from 45 to 32nm, but that's without any additional chip complexity. The more additional units get added, the lower the chances for frequency increases that high. Note that I didn't claim an MP4@500MHz is likely, just that you failed to consider it in your former post.
Now, an MP4 @ 500MHz would give a theoretical performance increase by a factor of 4 (compared to an MP2 @ 250MHz), making it in theory even faster than the Rogue.
Not a single chance in hell would it be faster than Rogue.
SGX543 or 544MP4@500MHz =
72 GFLOPs, 4.0 GTexels/s, 332M Tris/s, DX9 (L3 for 544)
ST Ericsson Novathor A9600 Rogue (most likely being a 4 cluster G6400) =
>210 GFLOPs, >5.2 GTexels/s, >350M Tris/s, DX11.x
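For reference, a rough sketch of where those SGX543MP4 numbers come from (my assumptions: 4 Vec4+1 USSE2 ALUs and 2 TMUs per core, with a Vec4 MAD counted as 8 FLOPs plus 1 scalar FLOP per ALU per clock; the triangle rate doesn't fall out of a formula this simple):

# Theoretical throughput of an SGX543MP4 @ 500MHz.
cores = 4
alus_per_core = 4      # Vec4+1 USSE2 ALUs per core (assumed)
flops_per_alu = 9      # Vec4 MAD = 8 FLOPs, +1 scalar op per clock (assumed)
tmus_per_core = 2
clock_ghz = 0.5        # 500MHz

print(cores * alus_per_core * flops_per_alu * clock_ghz)  # 72.0 GFLOPs
print(cores * tmus_per_core * clock_ghz)                  # 4.0 GTexels/s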
The A5 is currently made at 45nm. Assume that the A6 is made at 28nm (20nm is still too far in the future).
If Apple's next SoC is manufactured at 28nm (which would most likely mean TSMC), then the step from 45 to 28nm is actually twice as big, since an entire full node (32nm) would have been skipped. In that case an MP4@500MHz would be much easier than under 32nm (but still not within the realm of possibility).
From my years of experience with PC hardware, a smaller manufacturing process does not magically allow the frequency to double. And that doesn't take into account going from MP2 to MP4.
IHVs invest the majority of the headroom granted by a new process into more units and only a relatively small portion into frequency. That doesn't mean, though, that if they went for an MP4 under 32nm (at least) they couldn't also slightly increase the frequency.
Just for the record's sake, the A4 under 65nm had its single-core SGX535 clocked at 200MHz, while the dual-core SGX543 in the A5 under 45nm is clocked at 250MHz:
SGX535 =
2 Vec2 ALUs
2 TMUs
8 z/stencil
SGX543MP2 =
8 Vec4+1 ALUs
4 TMUs
32 z/stencil
Look at how much bigger the A5 is under 45nm compared to the A4 under 65nm, and yet typical power consumption hasn't changed. However, if you stress both under 3D, power consumption should be quite a bit higher on the A5.
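To put rough numbers on "how much bigger", counting ALU lanes only (Vec2 = 2 lanes, Vec4+1 = 5 lanes, and ignoring every architectural difference between the SGX535 and SGX543 pipelines):

# ALU-lane throughput: A4's SGX535 @ 200MHz vs. A5's SGX543MP2 @ 250MHz.
sgx535 = (2 * 2) * 200      # 2 Vec2 ALUs * MHz
sgx543mp2 = (8 * 5) * 250   # 8 Vec4+1 ALUs * MHz

print(sgx543mp2 / sgx535)   # -> 12.5x the raw lane throughput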
But realistically? It doesn't sound very likely. An MP4 @ 300 or 350MHz I can still believe, but jumping to 500MHz... Even the Sony Vita's MP4 core speed is unknown, but it was rumored that they had heat problems with it in the past, and heat in general means high power consumption.
The PS Vita's SGX543MP4+ is clocked at 200MHz, manufactured on Samsung's 45nm. I NEVER SAID, CLAIMED OR IMPLIED THAT IT'LL CLOCK AT 500MHz. I merely pointed out that your former reasoning was flawed in not accounting for frequencies.
There are some rumors flying around that the iPad 3 is supposed to have a bigger battery. Maybe it's because of the beefed-up CPUs/GPUs, but my bet is that the Retina Display is sucking in most of the power. In any smartphone/tablet, the main power consumer is the screen (unless you run the CPU/GPU at full load all the time, of course).
That's a reasonable assumption.