> The Us are good in power use, but they suck in absolute perf/watt. Again, Nvidia is better there.

How so? There's really no magic here... run the numbers and you end up getting very similar efficiency across modern GPU architectures if things are reasonably apples-to-apples (i.e. same precision, similar TDP, etc). There are only so many ways to make an ALU (and copy-paste it 100 times). This even holds if you look just at theoreticals across the various power levels (Core M @ 4.5W, X1 @ 10W, BDW-U @ 15W, etc). There are fundamental physics at play here.
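To make the "run the numbers" point concrete, here is a minimal back-of-the-envelope sketch (an illustration added here, not from the original post). The lane counts, clocks, and TDPs are rough nominal assumptions; peak FP32 is taken as lanes x 2 FLOPs (FMA) x clock, and the TDPs are whole-SoC marketing numbers, so the GFLOPS/W figures are only ballpark theoreticals.

```python
# Back-of-the-envelope peak FP32 efficiency ("run the numbers").
# All specs below are rough, nominal assumptions (marketing TDPs, approximate
# peak clocks), not measured figures; real sustained numbers will differ.

parts = {
    # name: (fp32_lanes, peak_clock_ghz, tdp_watts)
    "Core M / HD 5300": (24 * 8, 0.85, 4.5),   # 24 EUs x 8 FP32 lanes, whole-SoC TDP
    "Tegra X1 (GPU)":   (256,    1.00, 10.0),  # 256 CUDA cores
    "BDW-U / HD 6000":  (48 * 8, 0.95, 15.0),  # 48 EUs, whole-SoC TDP
}

for name, (lanes, ghz, tdp) in parts.items():
    gflops = lanes * 2 * ghz  # 2 FLOPs per lane per clock (FMA)
    print(f"{name:>17}: {gflops:6.0f} GFLOPS peak, {gflops / tdp:5.1f} GFLOPS/W (theoretical)")
```

With these assumed figures the parts land within roughly 1.5x of each other rather than differing by large factors, which is the point being made above.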
> ... and especially to Android Intel still has some work ahead, and no, nothing really that cannot be achieved fairly soon.

Do note that Baytrail (I assume that's what you're talking about wrt. Android) is a tiny GPU that isn't really even meant to compete against the bigger mobile chips (that's more Core M territory at the moment).
I do agree though that drivers are still a confounding issue for architecture comparisons in the mobile space. To be honest, I avoid mobile benchmarks in architecture discussions entirely, because the situation there is worse than the 90s era of shenanigans and "cheating", with no signs of getting much better, so most results are utterly meaningless ("I can put in better clip planes than you!", etc.). But that's another thread I guess...
Baytrail competes fine from what I've seen so far in things like ultra-thin tablets, and yes, even in public benchmarks like GfxBench 3.0, which is at least highly GPU-bound as a GPU benchmark, unlike other rubbish that is available. In any case, the couple of recent Android tablets with an Intel SoC that I've laid hands on seem to need a wee bit more work, while there's obviously little to nothing to object to in the Windows cases.
> Really, between deceit with Core M performance...

What "deceit"? If you're talking about stuff like the Yoga 3 Pro not performing as well as the Intel demos, that unfortunately is completely on the OEM. For better or for worse, they have the controls to ratchet power (and thus performance) up and down, and what they do for their SKUs is entirely up to them. You should not be surprised if an OEM dropping the TDP (by large fractions of the total TDP) results in significantly lower performance.
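To illustrate the TDP-to-performance relationship, here is a rough sketch (added here, not from the original post) using the classic DVFS approximation that dynamic power scales roughly with the cube of frequency (P ~ C*f*V^2, with voltage scaling roughly with frequency). The baseline clock and TDP are invented for illustration; in practice the hit from a lower cap is often worse than this suggests, since the GPU shares the budget with the CPU and there are voltage/frequency floors.

```python
# Rough sketch: sustained GPU clock under an OEM power cap, assuming the
# classic DVFS approximation P ~ f^3 (P = C * f * V^2 with V roughly ~ f).
# Baseline figures are invented for illustration only; real parts also share
# the budget with the CPU and have voltage/frequency floors.

BASE_CLOCK_MHZ = 900.0   # hypothetical sustained GPU clock at full TDP
BASE_TDP_W = 4.5         # hypothetical full SoC TDP

def sustained_clock_mhz(power_budget_w: float) -> float:
    """Estimate the sustained clock when the OEM caps power below base TDP."""
    return BASE_CLOCK_MHZ * (power_budget_w / BASE_TDP_W) ** (1.0 / 3.0)

for budget_w in (4.5, 3.5, 2.5):
    mhz = sustained_clock_mhz(budget_w)
    print(f"{budget_w:.1f} W cap -> ~{mhz:.0f} MHz sustained "
          f"({mhz / BASE_CLOCK_MHZ:.0%} of the full-TDP clock)")
```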
> Nvidia has a *MUCH* better perf/watt profile. No comparison.

You could insert any two modern GPU companies into that comparison and it would be wrong; no two modern GPUs are that different in efficiency.

It's obviously still interesting to compare efficiency, but you need to be fairly fine-grained and careful to tease out the actual differences. There are certainly specific *areas* where one architecture differs greatly from another in performance/efficiency, but making grandiose claims does not inspire confidence in the test methodology.
> I think it's fair to say that Maxwell is much more efficient than the current incarnation of GCN, or Kepler for that matter.

You'd have to get more specific on a workload for me to agree. e.g. there are certain workloads for which GCN is multiple factors more power-efficient than Maxwell, and vice versa. Certainly for a lot of games Maxwell holds its own vs. higher-powered GCN chips pretty well, but it still does depend on the specifics. Thus I wouldn't personally make sweeping statements about power efficiency, particularly with words like "much" here, but I do get that there's a desire, at least in the press, to try and "simplify" stuff.
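As a toy illustration of that workload dependence (hypothetical numbers, not measurements of any real GPU), here is a sketch showing how a perf/W ranking can flip between two workloads:

```python
# Toy perf/W comparison. Every number below is invented purely to illustrate
# that the efficiency ranking can flip depending on the workload measured.

measurements = {
    # workload: {gpu: (benchmark_score, board_power_watts)}
    "workload_A": {"GPU X": (60.0, 150.0), "GPU Y": (55.0, 110.0)},
    "workload_B": {"GPU X": (90.0, 140.0), "GPU Y": (45.0, 120.0)},
}

for workload, results in measurements.items():
    winner = max(results, key=lambda gpu: results[gpu][0] / results[gpu][1])
    print(workload)
    for gpu, (score, watts) in results.items():
        print(f"  {gpu}: {score / watts:.3f} score/W")
    print(f"  most efficient: {winner}")
```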
> Both dual channel memory I assume? Not bad improvement; but looking forward to seeing other benches.

Yep, both DDR3-1600 in dual channel mode (2 x 4GB).
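For reference, the theoretical peak bandwidth of that memory configuration is easy to compute (DDR3-1600 = 1600 MT/s, 64-bit = 8 bytes per transfer per channel, two channels); a quick sketch:

```python
# Theoretical peak bandwidth of dual-channel DDR3-1600:
# 1600 MT/s * 8 bytes per transfer per channel * 2 channels.
transfers_per_sec = 1600e6
bytes_per_transfer = 8   # 64-bit channel width
channels = 2

peak_gb_per_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"Theoretical peak: {peak_gb_per_s:.1f} GB/s")  # 25.6 GB/s
```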
Do either of these systems have the eDRAM buffer chip?
I'll be honest, I didn't check the drivers. I'll try some GPU clock speed logging with XTU if it works.