The MacBook Air uses LPDDR3 at 1.2V configured as 4x32-bit.
À la iPad 4?
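For rough context, here's the back-of-the-envelope bandwidth that layout implies, assuming LPDDR3-1600 parts (the transfer rate is an assumption, not something stated above):

```python
# Hedged arithmetic: peak bandwidth of a 4x32-bit LPDDR3 configuration.
# The 1600 MT/s transfer rate is an assumption, not stated in the post.

channels = 4
bits_per_channel = 32
transfers_per_sec = 1600e6                      # assumed LPDDR3-1600

bus_bytes = channels * bits_per_channel // 8    # 16-byte (128-bit) bus
peak_gb_s = bus_bytes * transfers_per_sec / 1e9
print(f"{peak_gb_s:.1f} GB/s")                  # 25.6 GB/s on these assumptions
```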
HD 5000's base clock (200 MHz) is half that of the HD 4000 series (400 MHz). With twice as many EUs, the GPU can run at half the clocks and still provide the same performance. As everyone here knows, doubling clocks consumes more than double the power (4x is closer in general), so Intel effectively traded die area for power savings. I think it was worth it, as the 2013 MBA has an outstanding 12-hour battery life (+5 hours compared to last year's model AND slightly better performance).
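A minimal sketch of the arithmetic behind that trade, assuming dynamic power scales as units × V² × f and that the required voltage scales roughly linearly with frequency (both are simplifications):

```python
# Minimal sketch of the area-for-power trade, assuming dynamic power
# P ~ units * V^2 * f and that the voltage needed scales roughly with
# frequency (f ~ V). Both assumptions are simplifications.

def relative_power(units: float, freq: float) -> float:
    voltage = freq            # f ~ V assumption, in normalized units
    return units * voltage**2 * freq

baseline = relative_power(units=1.0, freq=1.0)   # N EUs at full clock
wide_gpu = relative_power(units=2.0, freq=0.5)   # 2N EUs at half clock

# Same theoretical throughput (units * freq is equal), ~1/4 the power:
print(wide_gpu / baseline)   # 0.25
```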
> HD 5000 (GT3) can still turbo clock up to 1100-1300 MHz in cases where the (15W) TDP allows it. But that shouldn't occur as often as it did in the past (at 650 MHz it should already offer similar performance).

But this isn't really happening, is it? If the GPU has a certain TDP it can boost to, it'll boost to it, and the results should be seen in higher fps. If HD 5000 offers similar performance at 650 MHz compared to HD 4000 at ~1200 MHz, it should boost higher to increase performance.
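For illustration, a toy model of that kind of TDP-governed boosting; this is not Intel's actual turbo algorithm, and the cubic power model and all constants are made up:

```python
# Toy TDP governor, not Intel's actual turbo algorithm: raise the GPU
# clock while the estimated package power stays under the limit.

TDP_W = 15.0            # e.g. a 15W ULT part
BASE_MHZ = 200
MAX_TURBO_MHZ = 1300
STEP_MHZ = 50

def package_power(gpu_mhz: float, cpu_w: float) -> float:
    gpu_ghz = gpu_mhz / 1000.0
    return cpu_w + 6.0 * gpu_ghz**3      # illustrative cubic f->power model

def gpu_turbo(cpu_w: float) -> int:
    mhz = BASE_MHZ
    while (mhz + STEP_MHZ <= MAX_TURBO_MHZ
           and package_power(mhz + STEP_MHZ, cpu_w) <= TDP_W):
        mhz += STEP_MHZ
    return mhz

print(gpu_turbo(cpu_w=2.0))    # near-idle CPU: GPU turbos high (1250 MHz here)
print(gpu_turbo(cpu_w=10.0))   # busy CPU eats the budget: GPU held to 900 MHz
```

On this model the GPU only reaches its top bins when the CPU side is nearly idle, which is the shared-TDP behavior discussed below.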
> Also without Crystalwell the GPU will be severely bandwidth bound at maximum clocks, lowering the potential performance gains even further. Anand's tests with GT3e showed that GT3 is TDP bound even at 47W. He increased the TDP to 55W (using Intel Extreme Tuning Utility) and it brought noticeable gains in games (but not that much in pure GPU synthetic benchmarks, as the CPU half is pretty much idling in those and gives its TDP to the GPU).

I'm not sure... I have a hard time believing sub-20W parts are bandwidth bound by that much even with their consistent yearly improvements, and I think we'll see much better (~30%) gains for Haswell at 28W with the same bandwidth.
Right, if you're using a non-trivial amount of CPU (games tend to), it's even TDP bound at 55W, hence the 65W R-series version.
Yeah, as usual sebbbi nails it, and his comments are in line with my experience.
The most interesting thing about the 15W GT3 parts, in my opinion, is their ability to run more stuff v-sync'd without spinning up the fan (i.e. at ~600MHz GT frequency or similar). In cases where GT2 has to turbo up and generate a lot of heat, GT3 can do it while keeping the system cool. Definitely useful for more casual gaming or running old stuff (I still play a lot of Myth II).
> Right, if you're using a non-trivial amount of CPU (games tend to), it's even TDP bound at 55W, hence the 65W R-series version.

Yep, and the less you have to work with at the start, the less you can be expected to gain. I'm not surprised by the smallish gains here even with the doubled resources; tbh, as an overall package it's pretty good for the same node.
It really is a power/cooling game at this point. The exact same chip can perform vastly differently depending on the quality of the chassis/cooling it is paired with. With configurable TDP and such large turbo ranges, there are definitely hills and valleys in the "user experience" landscape.
> The oddity for benchmarking as well is that things like time-demos start to become a bad way to represent quality of user experience, since they just max out TDP and heat up the chip, often for no real gain over running a game simulation at "normal" rates v-synced. We're going to have to evolve benchmarking as well.

I'm glad to see you say that, because I've always felt like we're being had somewhat by some of these ULV benchmarks.
> I didn't know about this (v-sync) but it's similar to something I advocated that AMD might end up doing in order to save power with Llano on mobile, many moons ago on another forum. Kudos to you for making it happen first.

It's nothing specific to Haswell really, just the fact that burning extra power beyond the display refresh rate (or whatever target frame rate) is really harmful on these thermally constrained platforms. Instead, letting things go idle to wait for the next v-blank vastly improves the overall end user experience.
> I'm glad to see you say that, because I've always felt like we're being had somewhat by some of these ULV benchmarks.

Yeah, I've been a big proponent of the switch to more experience-based metrics like frame time variance, etc., but it goes even further for ULV-type stuff. Ultimately games need to be part of the solution here too, instead of their typical "run at max performance and use every last hardware resource I have", but a lot of that is driven outwards-in by reviews, so the change probably needs to start happening there first.
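As a concrete example of what such metrics look like, a small sketch with made-up frame times; the average frame rate looks fine while the worst case and the variance expose the stutter:

```python
# Made-up frame times: mostly a smooth 60 Hz with two hitches. Average
# FPS looks fine; worst-case and variance metrics expose the stutter.

import statistics

frame_times_ms = [16.7, 16.9, 16.5, 33.4, 16.8, 16.6, 17.0, 45.1, 16.7, 16.8]

avg_fps = 1000.0 / statistics.mean(frame_times_ms)
worst = max(frame_times_ms)
stdev = statistics.stdev(frame_times_ms)

print(f"avg fps:          {avg_fps:.1f}")   # ~47 fps, looks acceptable
print(f"worst frame:      {worst:.1f} ms")  # 45.1 ms -> a visible hitch
print(f"frame time stdev: {stdev:.1f} ms")  # large spread = uneven pacing
```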
A big offender currently is menus. Lots of games run the menu unthrottled at hundreds of FPS, so by the time you even get into the game you've already heated up the chip and have the fan spinning at maximum.
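A minimal sketch of the alternative: cap the loop at the refresh rate and sleep out the rest of each frame. A real engine would block on v-sync via its swap chain; time.sleep() just stands in for the idea:

```python
# Minimal frame cap: render, then sleep out the rest of the frame so
# the chip can drop into low-power states instead of spinning.

import time

FRAME_BUDGET_S = 1.0 / 60.0   # target 60 Hz

def run_capped(render_frame, duration_s: float) -> None:
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        render_frame()
        remaining = FRAME_BUDGET_S - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)   # idle instead of running at 100s of FPS

run_capped(lambda: None, duration_s=0.1)   # a trivially cheap "menu" frame
```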
> Yeah, I've been a big proponent of the switch to more experience-based metrics like frame time variance, etc., but it goes even further for ULV-type stuff. Ultimately games need to be part of the solution here too, instead of their typical "run at max performance and use every last hardware resource I have", but a lot of that is driven outwards-in by reviews, so the change probably needs to start happening there first.

Yeah, it's all about bigger numbers. To be frank, I don't even think people want to know the truth; they just want to see simple bars where x is better than y. The tech press is mostly giving them what they want.
> It's nothing specific to Haswell really, just the fact that burning extra power beyond the display refresh rate (or whatever target frame rate) is really harmful on these thermally constrained platforms. Instead, letting things go idle to wait for the next v-blank vastly improves the overall end user experience.

Yes, and properly waiting for v-sync will actually improve a game's (minimum) frame rate on new CPUs, since the CPU and GPU will not constantly try to run at the TDP limit. Intel's chips can momentarily run over TDP if needed, and thus prevent those occasional frame hiccups you often encounter in games (for example, an explosion near the camera).
A quad core laptop Haswell CPU should easily crunch through the frames of current generation console ports in less than 5 ms, allowing the CPU to save lots of power (that can be used to improve integrated GPU performance, as the TDP is shared).

Do quad cores even support S0ix? Though even C-states should give a lot of power savings. Or are C-states too slow?
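Back-of-the-envelope on that 5 ms figure (the per-frame CPU cost is the post's own assumption):

```python
# Race-to-idle arithmetic: 5 ms of CPU work per 16.7 ms frame (the
# post's assumption) leaves the CPU idle ~70% of the time, freeing
# most of the shared TDP for the GPU.

frame_ms = 1000.0 / 60.0   # ~16.7 ms per frame at 60 Hz
cpu_work_ms = 5.0          # assumed CPU cost per frame

idle_fraction = 1.0 - cpu_work_ms / frame_ms
print(f"CPU idle {idle_fraction:.0%} of each frame")   # 70%
```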
Has anyone tried running games in software mode with the DirectX SDK on Haswell? Does AVX2 get utilized at all?