Intel Broadwell (Gen8)

Makes you wonder what Broadwell or Skylake would be like if the rumored Intel takeover of Nvidia had happened a few years back.
 
The U-series parts are good on power use, but they suck in absolute perf/watt. Again, Nvidia is better there.
How so? There's really no magic here... run the numbers and you end up getting very similar efficiency across modern GPU architectures if things are reasonably apples-to-apples (i.e. same precision, similar TDP, etc). There are only so many ways to make an ALU (and copy-paste it 100 times ;)). This even holds if you look just at theoreticals across the various power levels (Core M @ 4.5W, X1 @ 10W, BDW-U @ 15W, etc). There are fundamental physics at play here.
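For illustration, here is a minimal back-of-the-envelope sketch of what "run the numbers" means for theoretical FP32 throughput per watt. The ALU counts, clocks, and power figures below are rounded public numbers treated as assumptions, and the Intel budgets are shared with the CPU, so take the ratios as illustrative only:

```python
# Back-of-the-envelope peak FP32 throughput: GFLOPS = FP32 lanes * 2 (FMA) * clock (GHz).
# All figures below are rounded/assumed, and the Intel power budgets are shared
# with the CPU, so the perf/W ratios are purely illustrative.
parts = {
    # name:               (FP32 lanes, clock GHz, power budget W)
    "Core M (HD 5300)":   (24 * 8, 0.85, 4.5),   # 24 EUs x 8 FP32 lanes per EU
    "Tegra X1 (Maxwell)": (256,    1.00, 10.0),  # 256 CUDA cores
    "BDW-U (HD 5500)":    (24 * 8, 0.95, 15.0),  # GT2; TDP covers CPU + GPU
}

for name, (lanes, ghz, watts) in parts.items():
    gflops = lanes * 2 * ghz  # 2 FLOPS per lane per cycle via FMA
    print(f"{name:22s} ~{gflops:4.0f} GFLOPS peak, ~{gflops / watts:5.1f} GFLOPS/W of budget")
```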

I also think you are greatly over-estimating the difference that a process node makes to a GPU at this point. You should be able to tell from HSW->BDW and K1->X1 (although NVIDIA sorta munges some of the specifics in marketing) that it's not a huge deal in terms of power improvement.

Nvidia has a *MUCH* better perf/watt profile. No comparison.
You could insert any two modern GPU companies into that comparison and it would be wrong; no two modern GPUs are that different in efficiency.

It's obviously still interesting to compare efficiency, but you need to be fairly fine grained and careful to tease out the actual differences. There are certainly specific *areas* where one architecture differs greatly from another in performance/efficiency, but making grandiose claims does not inspire confidence in the test methodology.
 
It might be vastly OT, but if you overlook die area on the one hand and the higher frequencies on the other, you can actually break even between an ARM Mali T7x0 and a Series6 Rogue in the ULP SoC GPU IP space.

As for NV, I'd agree within limits that perf/W differences aren't as vast as some might think; however, when it comes to drivers, and especially to Android, Intel still has some work ahead of it, though nothing that can't be achieved fairly soon.
 
... and especially to Android, Intel still has some work ahead of it, though nothing that can't be achieved fairly soon.
Do note that baytrail (I assume that's what you're talking about wrt. Android) is a tiny GPU that isn't really even meant to compete against the bigger mobile chips (that's more Core M territory at the moment).

I do agree though that drivers are still a confounding issue for architecture comparisons in the mobile space. To be honest I avoid mobile benchmarks in architecture discussions entirely though because it's worse than the 90's era of shenanigans and "cheating" there with no signs of getting much better, so most results are utterly meaningless ("I can put in better clip planes than you!" etc.). But that's another thread I guess...
 
Do note that baytrail (I assume that's what you're talking about wrt. Android) is a tiny GPU that isn't really even meant to compete against the bigger mobile chips (that's more Core M territory at the moment).

I do agree though that drivers are still a confounding issue for architecture comparisons in the mobile space. To be honest I avoid mobile benchmarks in architecture discussions entirely though because it's worse than the 90's era of shenanigans and "cheating" there with no signs of getting much better, so most results are utterly meaningless ("I can put in better clip planes than you!" etc.). But that's another thread I guess...

Bay Trail competes fine from what I've seen so far in things like ultra-thin tablets, and yes, even in public benchmarks like GfxBench 3.0, which at least is highly GPU-bound as a GPU benchmark, unlike other rubbish that is available. In any case, the couple of recent Android tablets with an Intel SoC that I've laid hands on seem to need a wee bit more work, while there's obviously little to nothing to object to in the Windows cases.
 
Bay Trail competes fine from what I've seen so far in things like ultra-thin tablets, and yes, even in public benchmarks like GfxBench 3.0, which at least is highly GPU-bound as a GPU benchmark, unlike other rubbish that is available. In any case, the couple of recent Android tablets with an Intel SoC that I've laid hands on seem to need a wee bit more work, while there's obviously little to nothing to object to in the Windows cases.

Bay Trail was fairly competitive at launch; they've just not followed it up with much else.
 
Really, between deceit with Core M performance
What "deceit"? If you're talking about stuff like the Yoga 3 Pro not performing as well as the Intel demos that unfortunately is completely on the OEM. For better or for worse they have the controls to ratchet up and down power (and thus performance) and what they do for their SKUs is entirely up to them. You should not be surprised if an OEM dropping the TDP (by large fractions of the total TDP) results in significantly lower performance.

You and I personally might think it's stupid, but the OEMs have their own ideas about what consumers want and they might even be right, who knows. Hopefully more devices will broaden the options and include some higher performance/premium ones in the near future (IIRC Asus hinted at CES that their Core M transformer would perform much better than the Lenovo - here's hoping.)

The fact that Core M @ 3-6W runs at a significant fraction of HSW-U @ 15W is pretty good IMO, but for whatever reason the power story gets lost in OEM device decisions (i.e. folks replacing HSW-U lines with Core M rather than BDW-U). But don't confuse the two when talking about architecture.
 
You could insert any two modern GPU companies into that comparison and it would be wrong; no two modern GPUs are that different in efficiency.

It's obviously still interesting to compare efficiency, but you need to be fairly fine grained and careful to tease out the actual differences. There are certainly specific *areas* where one architecture differs greatly from another in performance/efficiency, but making grandiose claims does not inspire confidence in the test methodology.

I think it's fair to say that Maxwell is much more efficient than the current incarnation of GCN, or Kepler for that matter. Granted, that may be a transient phenomenon due to the fact that NVIDIA released its new generation of GPUs ahead of AMD, but then again, it might not. We won't know until AMD launches something.

Either way, it shows that large gaps can happen, even if they typically only last a few months. And, IHV and OEM schedules being what they are, a few months here or there can turn into about a year for laptop designs. I'm not saying this is necessarily the case for Broadwell, but these things happen, and the latter is quite late, so it wouldn't be shocking.

All that being said, I'd like to look at proper, thorough reviews before drawing any conclusions.
 
I think it's fair to say that Maxwell is much more efficient than the current incarnation of GCN, or Kepler for that matter.
You'd have to get more specific on a workload for me to agree. ex. there are certain workloads for which GCN is multiple factors more power efficient than Maxwell and vice versa. Certainly for a lot of games Maxwell holds its own vs. higher powered GCN chips pretty well, but it still does depend on the specifics. Thus I wouldn't personally make sweeping statements about power efficiency, particularly with words like "much" here, but I do get that there's a desire at least in the press to try and "simplify" stuff.

Beyond workloads, architectures are designed for specific power levels so relative efficiency varies across different SKUs and power levels. Ex. Haswell is fairly efficient for many workloads at 15W but when you start to get to the 50W+ SKUs it is heavily trading efficiency for a bit more performance. It's similar for discrete GPUs - each have design points and getting too far away from those typically hurts efficiency.
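A crude way to see why pushing an architecture well past its design point trades efficiency for performance is a toy voltage/frequency model (everything below is a normalized, illustrative assumption, not data from any actual part):

```python
# Toy model: performance ~ clock, dynamic power ~ V^2 * f, and voltage must rise
# roughly in step with clock once you push past the design point.
# Everything is normalized to the design point and purely illustrative.
for f in (0.8, 1.0, 1.2, 1.4):     # clock relative to the design point
    v = f                          # assumed voltage needed to sustain that clock
    power = v * v * f              # relative dynamic power
    perf_per_watt = f / power      # relative efficiency
    print(f"clock x{f:.1f}: power x{power:.2f}, perf/W x{perf_per_watt:.2f}")
```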
 
IDF Spring 2005 - Day 3: Justin Rattner's Keynote, Predicting the Future

[Image: platform2015.jpg]


http://www.anandtech.com/show/1635/4
 
Managed to get hold of an i5-5200U based laptop (Acer Aspire E5-571-5814), and thought it would be rude not to put it up against a Haswell-U system. :)


Excuse the awful explanation of the differences between HD 4400 and HD 5500 at the start - tried to keep it relatively short. Overall the performance comes in 18% faster than HD 4400. I'll be running through various other titles on it as well over the weekend.
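When the extra titles are in, a geometric mean of the per-title speedups is a reasonable way to quote one overall figure; a quick sketch (the frame rates below are made-up placeholders, not results from this laptop):

```python
from math import prod

# Placeholder per-title FPS pairs (HD 4400, HD 5500). These are NOT real results,
# just a sketch of how to aggregate an overall "X% faster" figure across titles.
results = {
    "Title A": (30.0, 35.4),
    "Title B": (22.0, 26.0),
    "Title C": (45.0, 53.0),
}

ratios = [new / old for old, new in results.values()]
geomean = prod(ratios) ** (1 / len(ratios))
print(f"Overall: {(geomean - 1) * 100:.1f}% faster (geometric mean of per-title speedups)")
```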
 
I'll be honest I didn't check the drivers. I'll try some GPU clock speed logging with XTU if it works.
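If the logging does work, something along these lines can summarize the run afterwards (the file name and "GPU Frequency" column header are assumptions about the export format and will likely need adjusting):

```python
import csv

# Summarize GPU clocks from an exported CSV log to spot throttling during a run.
# "xtu_log.csv" and the "GPU Frequency" column name are guesses; adjust them to
# whatever the logging tool actually writes out.
with open("xtu_log.csv", newline="") as f:
    clocks = [float(row["GPU Frequency"])
              for row in csv.DictReader(f)
              if row.get("GPU Frequency")]

print(f"samples: {len(clocks)}, min: {min(clocks):.0f} MHz, "
      f"avg: {sum(clocks) / len(clocks):.0f} MHz, max: {max(clocks):.0f} MHz")
```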
 
Do either of these systems have the eDRAM buffer chip?

No, they are 15W Ultrabook chips with GT2 graphics.
I do have an Iris Pro 5200 laptop though, but it is paired up with a GTX 870M so I don't use the integrated much.
 
I'll be honest I didn't check the drivers. I'll try some GPU clock speed logging with XTU if it works.


Make sure you have the newest Build 4080 installed. I know that most Intel iGPU testers don't care about the driver, but it is important for an accurate test.
 