Let me repeat it once again:
The 3DMark tests are known for having very low bandwidth dependency.
1 - The difference in GPU power between the 35W A10-4600M and the 17W A6-4455M
is not 20%. The A6 runs at roughly 33% lower clock speeds and has 33% fewer shader units; if we go by the previous iGPU iterations, the ROPs are probably halved and there are also 33% fewer TMUs.
We're not talking about a 20% slower iGPU. The iGPU in the 17W A6-4455M should be at least 50% slower than the iGPU in the A10-4600M that was widely reviewed last week (A10-4600M performance x 0.66 for the lower clock speeds x 0.66 for the fewer shader units = ~45% of the original performance).
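For anyone who wants to check that estimate, here is a quick back-of-the-envelope sketch. The 0.66 factors are the ones quoted above; the assumption that performance scales linearly with clock speed and shader count is just my rough approximation:

```python
# Back-of-the-envelope estimate: 17W A6-4455M iGPU vs. 35W A10-4600M iGPU,
# assuming performance scales linearly with clock speed and shader count.
clock_ratio = 0.66    # ~33% lower clock speeds
shader_ratio = 0.66   # ~33% fewer shader units

relative = clock_ratio * shader_ratio
print(f"Estimated relative performance: {relative:.0%}")      # ~44% of the A10
print(f"Estimated performance deficit:  {1 - relative:.0%}")  # ~56% slower
```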
2 - Don't like 3DMark? Okay, here's some more:
Anno 2070
Llano A4-3300M (HD6480G @ 444MHz) - 32 FPS
Llano A8-3500M (HD6620G @ 444MHz) - 40 FPS
20% less performance
Starcraft 2
Llano A4-3300M (HD6480G @ 444MHz) - 20 FPS
Llano A8-3500M (HD6620G @ 444MHz) - 31 FPS
35% less performance
Risen
Llano A4-3300M (HD6480G @ 444MHz) - 31 FPS
Llano A8-3500M (HD6620G @ 444MHz) - 43 FPS
28% less performance.
So for a 40% decrease in shader units, a 40% decrease in TMUs, and a 50% decrease in ROPs, we get up to 35% less real-world performance (note: even though the A4 is only a dual-core and the A8 is a quad-core, the A4 is substantially higher clocked).
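For reference, those percentages come straight out of the quoted FPS numbers; a minimal sketch of the same arithmetic:

```python
# A4-3300M (HD 6480G) vs. A8-3500M (HD 6620G), FPS numbers as quoted above.
benchmarks = {
    "Anno 2070":   (32, 40),
    "Starcraft 2": (20, 31),
    "Risen":       (31, 43),
}

for game, (a4_fps, a8_fps) in benchmarks.items():
    deficit = 1 - a4_fps / a8_fps
    print(f"{game:12}: {deficit:.0%} less performance on the A4")
# Anno 2070   : 20% less performance on the A4
# Starcraft 2 : 35% less performance on the A4
# Risen       : 28% less performance on the A4
```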
Therefore, unless some magical drivers are coming for the 17W A6 Trinity, its 3D performance will be some 30 to 50% lower than that of the 35W A10 Trinity.
That said, the 17W Ivy Bridge will be at least comparable to, if not faster than, the 17W A6 in 3D gaming performance.
Check your math, that's a 40% drop, not 60%.
That 40% drop is for shader processing, along with 40% fewer texture units and 50% fewer ROPs.
I stand corrected on the semantic level. By "60% drop" I meant that the performance of the fastest model gets multiplied by 0.6: the slower model has 60% of the performance of the faster one, which works out to a 40% drop in performance.
Semantics aside, the math is correct, though.
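Just to spell the semantics out in numbers (nothing new, only restating the two phrasings):

```python
# "Has 60% of the performance" and "a 40% drop" describe the same ratio.
relative = 0.60          # slower model at 60% of the faster model's performance
drop = 1 - relative      # i.e., a 40% drop
print(f"relative: {relative:.0%}, drop: {drop:.0%}")  # relative: 60%, drop: 40%
```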