Intel Atom Z600

The problem is that the OS will again be different. But that should be a fairer comparison.

I'd actually think if anything, Win RT would be more efficient because it doesn't need to support anything legacy.

If you want smartphones and a modern SoC (Krait), it's compared in the iPhone 5 review: http://www.anandtech.com/show/6330/the-iphone-5-review/12

That's about as close a comparison as you can get.

You can see that it has an advantage in the actual battery life test too, with the WiFi on.

That was interesting, and it helps explain why Clover Trail has such a dramatic increase (more than doubling) in battery life compared to the original Atom. It would have been interesting to see the same comparison with an original Atom slate instead of a Tegra 3.
Haswell ULV 10W is a CPU running at 1.1 GHz (turbo unknown), so it is not obvious it will be as power efficient as the existing 17W IVB.

One of the biggest reasons Clover Trail does so much better in the battery life department than previous generations is that the platform supports Connected Standby, the Intel version being called S0ix.

While it's to be seen how much Haswell improves at near-TDP levels, the ULT (Ultrabook-class) platform getting Connected Standby and S0ix support will help nicely in most other scenarios.
 
[QUOTE]
I'd actually think if anything, Win RT would be more efficient because it doesn't need to support anything legacy.
[/QUOTE]
In theory you're perfectly right. But Windows RT on ARM is very young, so I wonder how well tuned it is. The fact that the T3's fifth core isn't supported makes me think something isn't working properly.

[QUOTE]
If you want smartphones and a modern SoC (Krait), it's compared in the iPhone 5 review: [B]http://www.anandtech.com/show/6330/the-iphone-5-review/12[/B]

That's about as close a comparison as you can get.

You can see that it has an advantage in the actual battery life test too, with the WiFi on.
[/QUOTE]
Interesting to see how we read things differently :smile: For me the last graph shows that the Atom CPU consumes more than the S4 and Swift. Of course one should take into account the time needed to complete the benchmark; against the S4 it's not obvious, but Swift definitely looks much more power efficient: I'd say Swift requires about 1.3 W for 230 ms, while Medfield needs 2 W for 180 ms and the S4 1.3 W for 290 ms.
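Taking those rough graph readings at face value, the energy each chip spends to finish the benchmark is just E = P × t. A quick sketch of that arithmetic (the power/time figures are eyeballed estimates from the review graph, not measurements):

```python
# Rough energy-to-completion comparison (E = P * t) using the
# approximate power/time readings estimated from the review graph.
chips = {
    "Swift":    {"power_w": 1.3, "time_ms": 230},
    "Medfield": {"power_w": 2.0, "time_ms": 180},
    "S4":       {"power_w": 1.3, "time_ms": 290},
}

for name, c in chips.items():
    # W * ms = mJ, since both scale linearly
    energy_mj = c["power_w"] * c["time_ms"]
    print(f"{name}: {energy_mj:.0f} mJ")
```

On those numbers Swift comes out lowest (~299 mJ vs ~360 mJ for Medfield and ~377 mJ for the S4), which matches the reading above that Swift looks the most power efficient despite Medfield finishing first.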

[QUOTE]
One of the biggest reasons Clover Trail does so much better in the battery life department than previous generations is that the platform supports Connected Standby, the Intel version being called S0ix.

While it's to be seen how much Haswell improves at near-TDP levels, the ULT (Ultrabook-class) platform getting Connected Standby and S0ix support will help nicely in most other scenarios.
[/QUOTE]
Hmm, I had indeed forgotten about S0ix, sorry. This might indeed help battery life a lot.
 
[QUOTE]
Interesting to see how we read things differently :smile: For me the last graph shows that the Atom CPU consumes more than the S4 and Swift. Of course one should take into account the time needed to complete the benchmark; against the S4 it's not obvious, but Swift definitely looks much more power efficient: I'd say Swift requires about 1.3 W for 230 ms, while Medfield needs 2 W for 180 ms and the S4 1.3 W for 290 ms.
[/QUOTE]
[QUOTE]
Hmm, I had indeed forgotten about S0ix, sorry. This might indeed help battery life a lot.
[/QUOTE]
You can't ignore the time it takes on a phone, though, considering the kinds of tasks that are run there. Also, Medfield has a lower idle floor.

Not having Turbo on the Atom would probably lower the load power, but it would also slow the benchmark down, so it may be intentional.
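The "race to idle" point can be sketched numerically: over a fixed window, a chip that finishes faster spends longer at its idle floor, which can offset a higher active power. The idle-floor values below are purely illustrative assumptions (the thread only claims Medfield's floor is lower), so only the shape of the trade-off matters, not the totals:

```python
# "Race to idle": over a fixed window, a faster finish means more time
# at the idle floor. Idle powers here are illustrative assumptions.
WINDOW_MS = 1000  # fixed observation window

def energy_mj(active_w, active_ms, idle_w, window_ms=WINDOW_MS):
    """Total energy (mJ) = active phase + idle remainder of the window."""
    return active_w * active_ms + idle_w * (window_ms - active_ms)

# Active figures are the rough graph readings; idle floors are made up,
# with Medfield's assumed lower as the post suggests.
medfield = energy_mj(active_w=2.0, active_ms=180, idle_w=0.05)
swift    = energy_mj(active_w=1.3, active_ms=230, idle_w=0.10)

print(f"Medfield: {medfield:.0f} mJ, Swift: {swift:.0f} mJ")
```

With these assumed floors the gap narrows compared to the pure E = P × t comparison, which is why the idle floor and task duration can't be ignored on a phone.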
 
[QUOTE]
Haswell ULV 10W is a CPU running at 1.1 GHz (turbo unknown), so it is not obvious it will be as power efficient as the existing 17W IVB.
[/QUOTE]
I also guess it will be expensive and will probably still need active cooling even in a tablet.

As usual with Haswell vs. Ivy Bridge comparisons, the TDP figure for the Haswell chip might include the VRM while the Ivy Bridge figure does not.
But it will of course be expensive.
 
[QUOTE]
As usual with Haswell vs. Ivy Bridge comparisons, the TDP figure for the Haswell chip might include the VRM while the Ivy Bridge figure does not.
But it will of course be expensive.
[/QUOTE]
It's not the VRM that makes the TDP higher. In the Nehalem generation, Intel increased the TDP of mobile processors by 7/10 W (ULV/SV). The logic is that it's equivalent to what they did with Northbridge integration.

But how do you explain a similar increase in TDP for quad-core Clarksfield, which lacked the graphics part, versus dual-core Arrandale parts that had it? Chipsets with the GPU integrated had a few watts higher TDP.

My guess is that they were eventually going to integrate the whole Northbridge anyway (which happened with Sandy Bridge). Long ago they also went from giving every single speed grade a different TDP to a unified TDP across the entire line.

In the future I assume Intel will integrate the IOH into the CPU as well. Haswell's 2 W increase is just preparing for that.

The truth is that they didn't need to increase the TDP. But the corresponding increase allows CPU performance not to drop with integration (going from 45 W + 10 W to a single 45 W budget would mean lower clock frequencies).
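The budget argument can be made concrete with a toy calculation. The 45 W / 10 W split is the example from the post above; treating power as roughly proportional to sustainable clocks is an illustrative simplification:

```python
# Toy model of TDP consolidation: a 45 W CPU absorbs a 10 W Northbridge.
# If the combined part were capped at the old 45 W CPU-only budget, the
# CPU logic itself would have to shed power (e.g. via lower clocks).
cpu_tdp_w = 45.0
northbridge_tdp_w = 10.0
old_budget_w = cpu_tdp_w  # cap kept at the old CPU-only 45 W

# Fraction of power the CPU side must cut to fit the old budget:
cut = 1 - (old_budget_w - northbridge_tdp_w) / cpu_tdp_w
print(f"CPU power must drop by {cut:.0%} to fit {old_budget_w:.0f} W")
```

That ~22% cut is why raising the headline TDP on integration keeps CPU performance from dropping, even though platform-level power hasn't actually gone up.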
 