Trinity vs Ivy Bridge

The Zenbook is nearly 2.5x faster, and people think it's because of the single-channel RAM on the X202E. While that is undoubtedly a factor, I just can't believe it's having such an effect at low settings at 720p.
Why can't you believe that? ~12GB/s (or 9 apparently...) shared between the CPU and GPU is not much, and realize that framebuffer bandwidth is not what dominates most modern workloads, especially stuff like BF3.
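For context, here's roughly where figures like "~12GB/s" come from (theoretical DDR3 bandwidth is 64 bits per channel per transfer; the exact memory speed in the X202E is an assumption here):

Code:
# Theoretical DDR3 bandwidth: channels x 64-bit bus x transfer rate.
# (Illustrative only - the X202E's actual memory speed is an assumption.)
def ddr3_bandwidth_gbs(mt_per_s, channels=1):
    return channels * 8 * mt_per_s / 1000  # 8 bytes per 64-bit transfer

print(ddr3_bandwidth_gbs(1333))              # ~10.7 GB/s, single channel
print(ddr3_bandwidth_gbs(1600))              # ~12.8 GB/s, single channel
print(ddr3_bandwidth_gbs(1600, channels=2))  # ~25.6 GB/s, dual channel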

Also of course the CPU speed differences are always relevant. GPUs don't run in isolation.

But seriously, do you have any intention of checking out the tools that I pointed you at? Because that's how you can actually find out real information instead of just cherry picking review graphs to support your theories.
 
Maybe you could point me to the real information I'm supposed to be looking at on the GPA?

BF3 is known to be shader heavy, so I don't see how bandwidth could be causing such a discrepancy. If it was down to bandwidth, why is the HD4000 also losing so heavily to the single-channel Kabini?

As for my "cherry picked" graphs, yes, I'm picking out the ones that don't make sense, as that's the whole point of my theory. Your theory so far seems to be that Tom's can't do benchmarks and that the TDP limit can only be exceeded for milliseconds (which I already showed you was false). Do you have any other theories? Because I'm not buying those.

We know that IVB can have configurable TDP if the OEM decides to use it, so why can't that be a factor? Why can't it be that these "17W" ultrabooks are running well above TDP during benchmarks - is it really a stretch to assume that on the better systems the TDP is raised and cooling is improved so that the HD4000 can exceed normal spec? That's sure what it looks like to me, and I can't find any GPU power benchmarks proving otherwise.
 
Wow, you can buy an HD4000 with single-channel memory? No wonder it's disastrous. You're asking why it isn't less disastrous than that, but maybe the memory hierarchy and GPU just aren't designed for such starvation.

1280x720 is not a low enough resolution; you would maybe need tests at 640x480 to test your theory that it can run better.
 
I'm really not sure what's going on, but I'm a big believer in what I see (yes, even on the internet :p).

Yes, the bandwidth will be a factor, but by that much? I don't know, I guess it's possible. Tom's measured the power draw of the i3 system though, and it's 35W over the duration - it's not like it was a peak or anything; you can see the whole time graph, and the system was clearly pulling 34-35W without the screen for the entirety of the benchmark.

I just cannot fathom how that is possible for a 17W chip, and this is the most sensible explanation for me. And why no gaming battery tests at the Tech Report when they've done them before? Is it because the HD4000 power numbers didn't make sense and the graph wasn't shown because of that?
 
Why can't you believe that? ~12GB/s (or 9 apparently...) shared between the CPU and GPU is not much, and realize that framebuffer bandwidth is not what dominates most modern workloads, especially stuff like BF3.
Come on, Andrew, surely you must agree that it's quite incredible to lose 60% of performance solely from going dual channel to single channel. Even if the dual-channel system was bandwidth limited 50% of the time in these games (which is very high), that would only explain a ~33% performance drop for going single channel. These framerates are also waaaaaay below the CPU limit, so the CPUs are going to be idle most of the time in these tests.
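Here's the back-of-the-envelope version of that, assuming the bandwidth-bound share of each frame simply gets twice as expensive when bandwidth is halved and everything else stays constant:

Code:
# Rough model: frame_time = (non-bandwidth-bound part) + (bandwidth-bound part),
# and only the bandwidth-bound part scales when bandwidth changes.
def fps_drop(bw_bound_fraction, bandwidth_ratio):
    new_frame_time = (1 - bw_bound_fraction) + bw_bound_fraction / bandwidth_ratio
    return 1 - 1 / new_frame_time

print(f"{fps_drop(0.5, 0.5):.0%}")  # 33% - half the frame bandwidth-bound, bandwidth halved
print(f"{fps_drop(1.0, 0.5):.0%}")  # 50% - even a fully bandwidth-bound frame only loses half its FPS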

Either there's something broken in Intel's memory controller when running single channel (unlikely, IMO) or the clocks are very different in practice, whether due to the i5 exceeding its 17W TDP or the X202E's i3 staying below it.

Tom's results suggest that sometimes Intel CPUs go beyond the TDP for sustained amounts of time if cooling is good enough. No way should the rest of the system consume 17W.
 
Maybe you could point me to the real information I'm supposed to be looking at on the GPA?
Run the System Analyzer and look at the "power" metrics for GPU/CPU/Socket. You can even monitor the frequencies of all the relevant parts... none of this is a secret.

We know that IVB can have configurable TDP if the OEM decides to use it, so why can't that be a factor?
It could be a factor; I don't think I said otherwise. I just don't buy that it's the majority of the explanation for the disparity you are seeing when there are other large factors at play.

Come on, Andrew, surely you must agree that it's quite incredible to lose 60% of performance solely from going dual channel to single channel.
But it's not *solely* from that; it's also a much slower CPU, and a variety of other factors. All I'm saying is that those are most likely responsible for a big chunk of the effect here, with TDP stuff being secondary.

These framerates are also waaaaaay below the CPU limit, so the CPUs are going to be idle most of the time in these tests.
What are you basing that on? Desktop CPUs + discrete GPUs? I don't think that's a fair comparison.

Either there's something broken in Intel's memory controller when running single channel (unlikely, IMO) or the clocks are very different in practice, whether due to the i5 exceeding its 17W TDP or the X202E's i3 staying below it.
Meh, both possibilities are really easy to test with GPA or other tools. There's really no need for further speculation here - I'm happy to be proven wrong by the actual data :)
 
To the dear mods out there, is it possible to create a new thread for Temash and Kabini and migrate these discussions over there?
 
To the dear mods out there, is it possible to create a new thread for Temash and Kabini and migrate these discussions over there?
There already is one in the processor section, though it started out being about Jaguar. I guess the distinction between "processor technology" and "3D Architectures" gets a bit difficult these days, at least when discussing the whole product and not just some aspects of it... I agree, though, that this stuff doesn't have much to do with either Ivy Bridge or Trinity.
 
http://wccftech.com/amd-a10-6800k-richland-apu-performance-unveiled/
Leaked A10-6800K benchmarks.

[Image: A10-6800K-3DMark11_P.png - 3DMark 11 Performance score]

More benches at the source: http://diybbs.zol.com.cn/11/11_106618.html
 
Come on, Andrew, surely you must agree that it's quite incredible to lose 60% of performance solely from going dual channel to single channel. Even if the dual-channel system was bandwidth limited 50% of the time in these games (which is very high), that would only explain a ~33% performance drop for going single channel. These framerates are also waaaaaay below the CPU limit, so the CPUs are going to be idle most of the time in these tests.

No, it's not. It's because it throttles.

It's noted here that the GPU runs at only 800MHz in BF3:

http://www.notebookcheck.pl/Recenzja-Asus-X202E.85853.0.html

The user from Tech Report's review also says the same: http://techreport.com/discussion/24150/asus-vivobook-x202e-notebook-reviewed

The reason the gaming is so bad on this laptop is that as soon as you fire up the GPU, the CPU clock speed drops to 800MHz and the GPU speed falls to 300-350MHz. I owned this piece of crap for 2 days until I returned it.

Even at 800MHz / 300MHz the CPU temps rise into the 90s C. The heatsink is too small, and the CPU is being software- and hardware-throttled so that it won't crash due to overheating.

Even the battery life numbers, normalized, are a lot behind the Core i5 as well.
 
Yes, without the laptop's own screen.

This is why we keep seeing weird results for the "17W" ULV i3s depending on the system. Because the maximum turbo clock speed is based on temperature, better cooling will allow for higher clocks.

The original Trinity reference platform had a 35W CPU using 62W on a platform level. Yes, it can get pretty high.

Interesting thing from Notebookcheck: while other ultrabooks show 33-40W maximum power use, the Asus VivoBook X202E in the Tech Report review uses only 21W. So there's a side benefit of a throttling system.

I wonder what's going on here. Ivy is rated at 17W, Kabini at 15W, so that's a 2W difference. Kabini also includes the southbridge, so let's say 4W.

The HM70/HM77 (depending on config) chipset in the HP Sleekbook 15 tested has a TDP of 4.1W. A faster chip will also ramp up other components harder as well.
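Adding up the rated numbers from above (rated TDPs only, so actual draw with turbo/throttling will differ):

Code:
# Rated platform-level budgets using the figures in this post.
ivb_cpu_tdp = 17.0   # W, ULV Ivy Bridge (CPU + GPU)
pch_tdp     = 4.1    # W, HM70/HM77 chipset
kabini_tdp  = 15.0   # W, Kabini SoC (southbridge already integrated)

intel_platform = ivb_cpu_tdp + pch_tdp
print(f"Intel: {intel_platform:.1f} W vs Kabini: {kabini_tdp:.1f} W "
      f"(gap: {intel_platform - kabini_tdp:.1f} W)")
# Intel: 21.1 W vs Kabini: 15.0 W (gap: 6.1 W)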
 
Intel datasheet:
"The TDP specification should be used to design the processor thermal solution. The TDP is not the maximum theoretical power the processor can generate."

AMD datasheet:
"TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software."

You see now why AMD batteries last longer: Intel's 17W TDP would likely be 25W+ under AMD's definition.

(NB: if my memory holds true, the "commercially useful" wording is there to exclude "GPU virus" workloads from the list)
 
(NB: if my memory holds true, the "commercially useful" wording is there to exclude "GPU virus" workloads from the list)
Which is a BS qualification (who is AMD to judge what's useful to me?), but let's not start that conversation up again.

In any case I don't think anyone is arguing that you shouldn't test both performance and power draw together. That obviously makes the most sense, especially if you want to try and draw any "efficiency" conclusions.

Still, efficiency only really becomes relevant above a certain performance bar. It might be the most efficient to render a frame of Battlefield 3 by running it on a super-low-power device over 2 days, but that's hardly useful. That said, once I've hit a playable capped rate of 60fps, how long my battery lasts is more relevant than whether it could be running 100 or 110fps.
 
I'd be more interested in pinning down when the wording changed. Using some unspecified set of "relevant" software has been the method of both manufacturers in the past.
It may have changed with the more aggressive turbo options for Intel, but if so it's also of interest for AMD, whose later chips can exceed TDP as well.
 
Intel datasheet:
"The TDP specification should be used to design the processor thermal solution. The TDP is not the maximum theoretical power the processor can generate."

AMD datasheet:
"TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software."

You see now why AMD batteries last longer: Intel's 17W TDP would likely be 25W+ under AMD's definition.

(NB: if my memory holds true, the "commercially useful" wording is there to exclude "GPU virus" workloads from the list)

Those definitions are essentially equivalent.
 
Those definitions are essentially equivalent.
No, they are not. This shows you why:
[Image: power-gaming-avg.png - average power draw while gaming]

Under a non-average load like a game, the Intel CPU draws way more power than the AMD one. The AMD one stays at its stated maximum (i.e. remove the NB and you're roughly there). So, comparing performance, you are comparing one that draws 17W to one that draws at least 28W - over 50% more power budget. Like comparing a sub-150W card to a 225W one.

The fact that the AMD definition adds an exception to exclude FurMark and similarly useless stuff does not mean it cheats that much - as the above sampling shows.

I agree that AMD should publish the apps used to evaluate it, but the Tom's benchmark clearly shows who is using a flawed testbed for calculating the TDP, and who is not.

That said, battery life directly depends on how many watts you draw. And if I buy a 17W processor, I would expect its peak to stay around 17W when I'm gaming on my box.
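To put rough numbers on that last point (the battery capacity here is a hypothetical figure for illustration; the 21W and 35W system draws are the ones mentioned earlier in the thread):

Code:
# Battery life ~ capacity / average system draw.
battery_wh = 38.0            # hypothetical ~38 Wh ultrabook battery
for system_w in (21.0, 35.0):
    print(f"{system_w:.0f} W draw -> ~{battery_wh / system_w:.1f} h of gaming")
# 21 W draw -> ~1.8 h of gaming
# 35 W draw -> ~1.1 h of gaming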
 
That said, once I've hit a playable capped rate of 60fps, how long my battery lasts is more relevant than whether it could be running 100 or 110fps.

This begets a need for dynamic vsync. I wonder what the state of support (or non-support) is on AMD and Intel, and whether there's a measurable, useful power-use difference if you run a game with vsync on, vsync off, or dynamic vsync.
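For what it's worth, the power win from capping the frame rate (which vsync on - or dynamic vsync when you're above the refresh rate - effectively gives you) comes from letting the hardware idle between frames. A minimal sketch of that idea, not how drivers actually implement it:

Code:
import time

TARGET_FRAME_TIME = 1.0 / 60.0  # cap at 60 fps

def render_frame():
    pass  # stand-in for the game's actual rendering work

while True:
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < TARGET_FRAME_TIME:
        # CPU/GPU idle here instead of rendering frames nobody sees
        time.sleep(TARGET_FRAME_TIME - elapsed)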
 
No, they are not. This shows you why:
[Image: power-gaming-avg.png - average power draw while gaming]

Under a non-average load like a game, the Intel CPU draws way more power than the AMD one. The AMD one stays at its stated maximum (i.e. remove the NB and you're roughly there). So, comparing performance, you are comparing one that draws 17W to one that draws at least 28W - over 50% more power budget. Like comparing a sub-150W card to a 225W one.

The fact that the AMD definition adds an exception to exclude FurMark and similarly useless stuff does not mean it cheats that much - as the above sampling shows.

I agree that AMD should publish the apps used to evaluate it, but the Tom's benchmark clearly shows who is using a flawed testbed for calculating the TDP, and who is not.

That said, battery life directly depends on how many watts you draw. And if I buy a 17W processor, I would expect its peak to stay around 17W when I'm gaming on my box.

That's the power consumption of the entire notebook, not just the CPU. Kabini lacks Turbo, and therefore probably draws less than its TDP in most situations, whereas Ivy Bridge boosts up to it or, for small periods of time (thermally insignificant ones, that is), slightly above. Hence the difference measured by Tom's.

Trinity sort of does the same thing, albeit less aggressively, and Richland probably behaves exactly like Ivy Bridge.

TDP means Thermal Design Power, and that's exactly what it is for both AMD and Intel: the power you have to consider when designing your thermal (i.e. cooling) solution. If Intel's CPUs did not respect it, they would burn.
 
Funnily enough, Intel plays the same overestimated-TDP game with their desktop CPUs. The Celeron G1610 and Pentium G2020 are rated at 55 watts and have no turbo, but it's safe to say they use a lot less power.
 
I think this discussion omits the fact that we have no meaningful statistical data (will we ever?) regarding power consumption for various chips. And for mobile systems this is even tougher.

Regarding the low-end lines, relaxed TDP evaluation may mean the manufacturer can harvest more. Just speculating here; I don't have evidence.
 