Intel Atom Z600

Discussion in 'Mobile Devices and SoCs' started by liolio, May 6, 2010.

  1. DavidC

    DavidC Regular

    I'd actually think that, if anything, Windows RT would be more efficient because it doesn't need to support anything legacy.

    If you want a smartphone with a modern SoC (Krait), it's compared in the iPhone 5 review: http://www.anandtech.com/show/6330/the-iphone-5-review/12

    That's about as close a comparison as you can get.

    You can see that it has an advantage in the actual battery life test too, with the WiFi on.

    One of the biggest reasons Clover Trail does so much better in the battery life department compared to previous generations is that the platform supports Connected Standby, the Intel version of which is called S0ix.

    While it remains to be seen how much Haswell improves at near-TDP levels, the ULT (Ultrabook-class) platform gaining Connected Standby and S0ix support will help nicely in most other scenarios.
     
  2. Laurent06

    Laurent06 Veteran

    In theory you're perfectly right. But Windows RT on ARM is very young, so I wonder how well tuned it is. The fact that the T3's fifth core isn't supported makes me think something isn't working properly.

    Interesting to see how differently we read the same data :smile: For me the last graph shows that the Atom CPU consumes more than the S4 and Swift. Of course one should take into account the time needed to complete the benchmark; against the S4 it's not obvious, but Swift definitely looks much more power efficient: I'd say Swift requires about 1.3 W for 230 ms while Medfield needs 2 W for 180 ms and the S4 1.3 W for 290 ms.
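    To make the comparison concrete: energy per benchmark run is roughly average power times completion time. A quick sketch using my eyeballed figures above (these are my own rough readings from the graph, not measured data):

    ```python
    # Back-of-the-envelope energy-per-run comparison.
    # Power/time values are rough eyeball estimates from the graph, not measurements.
    readings = {
        "Swift":    {"power_w": 1.3, "time_s": 0.230},
        "Medfield": {"power_w": 2.0, "time_s": 0.180},
        "S4":       {"power_w": 1.3, "time_s": 0.290},
    }

    for chip, r in readings.items():
        energy_j = r["power_w"] * r["time_s"]  # energy (J) = avg power (W) x time (s)
        print(f"{chip}: {energy_j:.3f} J per run")
    ```

    By that estimate Swift lands around 0.30 J per run versus roughly 0.36 J for Medfield and 0.38 J for the S4, which is why I'd call Swift the most efficient even though its instantaneous power draw isn't the lowest.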

    Hmm, I had indeed forgotten about S0ix, sorry. That might indeed help battery life a lot.
     
  3. DavidC

    DavidC Regular

    You can't ignore the time it takes on a phone though, considering what kinds of tasks are run there. Also, Medfield has a lower idle floor.

    Not having Turbo on the Atom would probably lower the load power use, but it would also slow down the benchmark, so leaving it on may be intentional.
     
  4. Blazkowicz

    Blazkowicz Legend

    As usual with Haswell vs. Ivy Bridge comparisons, the TDP figure for the Haswell chip might include the VRM while the Ivy Bridge figure does not.
    But it will of course be expensive.
     
  5. DavidC

    DavidC Regular

    It's not the VRM that makes the TDP higher. In the Nehalem generation, Intel increased the TDP of mobile processors by 7/10 W (ULV/SV). The logic is that it's equivalent to what they did with Northbridge integration.

    But then how do you explain a similar increase in TDP for the quad-core Clarksfield, which lacked the graphics part, versus the dual-core Arrandale parts that had it? Chipsets with an integrated GPU had a TDP a few watts higher.

    My guess is that they were eventually going to integrate the whole Northbridge (which happened with Sandy Bridge). Long ago they went from giving every single speed grade a different TDP level to a unified TDP across the entire line.

    In the future I assume Intel will make the IOH part of the CPU as well. Haswell's 2 W increase is just preparation for that.

    The truth is that they didn't need to increase the TDP. But a corresponding increase allows CPU performance not to drop with integration (going from 45+10 W to 45 W would mean lower clock frequencies).
     