Well, firstly, there has to be some funky marketing going on, as they are sure as hell referring to ALUs/shaders, not full GPU cores with TMUs/rasterizers etc.
Even if it is a proper quad GPU like the A5X, or even a 'compromised' quad like the T604, the results, whilst great, aren't that great... unless they have used a low-clocked variant to keep power consumption down?
The K3V2 seems to use an LP manufacturing process, whereas the Tegra 3 used a performance process, BUT used LP for its 5th companion CPU. This is why Huawei is claiming that their K3V2 uses 30% less power.
The funky marketing claims are probably based on situations where the Tegra 3 is running on its main CPUs, compared to the K3V2's main CPUs. Claim 30% less power. Bingo.
Now, when the Tegra 3 jumps to the 5th core, the K3V2 will still be running on one of its main CPUs.
So, in low-power situations, we can assume that the K3V2 will use more power on one of its main CPUs.
And all the other CPUs are probably power-gated when not needed. The question becomes whose strategy will be more power efficient.
For heavy loads, or anything that needs to run on the main CPUs, I assume the K3V2 will be more power efficient.
For light loads... your guess is as good as mine.
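To make the trade-off above concrete, here's a toy model of the two strategies. Every number in it is a hypothetical placeholder (neither NVIDIA nor Huawei has published per-core figures), so treat it as an illustration of the shape of the argument, not real data:

```python
# Toy model: Tegra 3's LP companion core vs the K3V2's all-LP main cores.
# ALL power numbers and hours below are hypothetical assumptions.

LIGHT_LOAD_HOURS = 5.0   # hypothetical hours/day at light load
HEAVY_LOAD_HOURS = 1.0   # hypothetical hours/day at heavy load

TEGRA3_MAIN_W = 0.9       # hypothetical: performance-process core, leakier
TEGRA3_COMPANION_W = 0.15 # hypothetical: LP-process companion core
K3V2_MAIN_W = 0.6         # hypothetical: LP-process main core ("30% less")

def daily_energy_wh(light_w, heavy_w):
    """Energy for one day: one core active at light load, one at heavy load."""
    return light_w * LIGHT_LOAD_HOURS + heavy_w * HEAVY_LOAD_HOURS

# Tegra 3 drops to the companion core at light load;
# the K3V2 stays on one of its main (LP) cores either way.
tegra3 = daily_energy_wh(TEGRA3_COMPANION_W, TEGRA3_MAIN_W)
k3v2 = daily_energy_wh(K3V2_MAIN_W, K3V2_MAIN_W)

print(f"Tegra 3: {tegra3:.2f} Wh/day, K3V2: {k3v2:.2f} Wh/day")
```

With these made-up numbers the K3V2 wins per hour of heavy load (0.6 W vs 0.9 W) but loses at light load (0.6 W vs 0.15 W), which is exactly the "whose strategy wins" question: the answer depends entirely on the light/heavy usage mix.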
How do we know for sure that the A5X is 45nm? I thought it was assumed to be on Sammy's 32nm process?
http://www.anandtech.com/show/5686/apples-a5x-floorplan
Their first gamble was based on a guess, and it looks like they were wrong.
Ditto with the K3V2: how do we know the manufacturing tech?
Because the manufacturer said so? It's 40nm... And to be honest, it's not a far stretch. If they had said 28nm, there would have been more doubts about it, as it seems 28nm has plenty of problems.
They said that in the future they are looking at an A15 @ 28nm, and the current K3V2 @ 28nm.
But it seems clear that 28nm is not ready for prime time yet. That might also explain some of those Krait numbers, and how we STILL have not gotten any power-usage indication for Krait. If the claims are correct that 28nm is leaking too much power, that might explain some things?
The workloads of some of these benchmarks, and the variability of some of the comparative scores, prevent me from buying some of the performance claims here.
HiSilicon says the texel fill rate is 1.3 billion per second, and it's presumably an IMR (especially with all of the SGI name-dropping), so this SoC wouldn't have made for a better A5X to drive the new iPad display.
Some of the other theoretical performance figures sound nice, though.
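A quick back-of-the-envelope check on that 1.3 Gtexel/s figure against the new iPad's 2048×1536 panel. The 60 fps target and the 4× average overdraw factor below are my own hypothetical assumptions, not anything HiSilicon or Apple has stated:

```python
# Rough fill-rate budget for driving the new iPad (Retina) display.
# 60 fps and the overdraw factor are hypothetical assumptions.

IPAD_W, IPAD_H = 2048, 1536   # new iPad resolution
FPS = 60
OVERDRAW = 4                  # hypothetical: average texture layers per pixel

pixels_per_frame = IPAD_W * IPAD_H              # 3,145,728 pixels
texels_per_second = pixels_per_frame * FPS * OVERDRAW

quoted_fill = 1.3e9           # HiSilicon's quoted texel fill rate

print(f"Needed: {texels_per_second / 1e9:.2f} Gtexel/s, "
      f"quoted: {quoted_fill / 1e9:.1f} Gtexel/s")
```

At these assumptions the display needs roughly 0.75 Gtexel/s, so 1.3 Gtexel/s covers it on paper, but with far less headroom for heavier overdraw or higher shading cost than a part with a bigger fill-rate budget would have.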
Immature drivers? We can assume that the drivers Apple uses are already extremely optimized by now.
The Vivante GC4000 seems to be a rather "unknown" asset up to now, so it's very possible that their drivers still leave performance untapped?
Who knows; it's just a gamble, but it could explain the difference between some of the results and the claims.
If we look at, for instance, Intel: they have been the king of problematic driver performance (for integrated graphics).