Haswell vs Kaveri

Not according to the OEMs, who have given it a massive thumbs down, and that is a fact.

That's a "fact" is it? It was launched today and the only evidence you have to backup that claim is a rather vague statement from Anand that "many" (not "all", not even "a majority of") OEM's have chosen NV's discrete alternative.

That in no way equates to OEMs giving it a "massive thumbs down", or to it being in effect an obsolete product like you are attempting to suggest. We need far more time and far more information to make a judgement on that. I'm more than confident that Iris Pro will prove itself to be a viable solution to a very real segment of the market.
 
That's a "fact" is it? It was launched today and the only evidence you have to backup that claim is a rather vague statement from Anand that "many" (not "all", not even "a majority of") OEM's have chosen NV's discrete alternative.

That in no way equates to OEMs giving it a "massive thumbs down", or to it being in effect an obsolete product like you are attempting to suggest. We need far more time and far more information to make a judgement on that. I'm more than confident that Iris Pro will prove itself to be a viable solution to a very real segment of the market.

http://www.fudzilla.com/home/item/31551-nvidia-mobile-boss-talks-haswell
 

You continue to miss the point. I'm not arguing value: if the value of Iris Pro is considered poor by the market, resulting in poor sales, then it will be lowered in price. That doesn't change the fact that it is technically a superior product to the other options in the market segment it tries to address, i.e. mobile graphics performance in the 50W TDP range.

And the heading of that article is clearly misleading. 95% mobile design wins? I'm not even sure what that means but clearly there is not an NV GPU in 95% of laptops. And if it's meant to be 95% of a particular market segment then I'd like to know what segment it is.
 
3DMark results suggest that Iris Pro performance could be much improved by further graphics driver optimizations - IF Intel actually engages in such optimizations, that is.

In any case, while discrete graphics do have an edge - and with the additional power budget discrete brings, it would be amazing if they didn't - that is also discrete's Achilles' heel. Iris Pro performs quite strongly at a significantly lower power level than a CPU + dGPU combination.
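To put rough numbers on that power-budget point, here's a quick perf-per-watt sketch; all of the wattages and the relative-performance figure are assumptions for illustration, not measurements of any real parts.

# Back-of-envelope perf-per-watt comparison. The 47 W single-package figure,
# the 35 W + 45 W CPU+dGPU split, and the 1.4x relative performance number
# are all illustrative assumptions, not benchmark results.

configs = {
    "CPU + Iris Pro (single 47 W package)": {"power_w": 47, "rel_perf": 1.0},
    "CPU (35 W) + discrete GPU (45 W)":     {"power_w": 35 + 45, "rel_perf": 1.4},
}

for name, cfg in configs.items():
    perf_per_watt = cfg["rel_perf"] / cfg["power_w"]
    print(f"{name}: {cfg['power_w']} W total, "
          f"{perf_per_watt:.3f} relative performance per watt")

With those assumed numbers the single-package option comes out ahead on performance per watt even though it loses on absolute performance, which is the whole argument.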
 
Well, I wish Intel the best of luck in getting people to pay more for slower integrated graphics, considering AMD could barely get anyone to pay less for faster.
 
And the heading of that article is clearly misleading. 95% mobile design wins? I'm not even sure what that means but clearly there is not an NV GPU in 95% of laptops. And if it's meant to be 95% of a particular market segment then I'd like to know what segment it is.

It says "95% gaming notebook" design wins, which is believable.

3DMark results suggest that Iris Pro performance could be much improved by further graphics driver optimizations - IF Intel actually engages in such optimizations, that is.

They do, but it's probably at least a few months away, if anything. The latest 15.31 driver improves performance by 10% according to Intel (tests vary anywhere from 5% to 20-30%).
 
Hmm, I could see the 4770R being 10% or so faster than this.

and combined with the supposed CPU power improvements (new low power states, PSR, etc.)

I doubt you'll see more than incremental improvement for anything other than the U- and Y-series parts. Only the U- and Y-series-based platforms are getting the low power states, PSR, etc.
 
Ok, so for those 50W laptop gamers on the move for, say, 25 minutes. Check. Damn you AMD for meekly surrendering this market to Intel!
A 50W CPU+GPU can fit into smaller devices and cooling solutions than an 80+W combo or similar, and you're not giving up a lot of performance. Sure, it may cost more (although it's not totally clear), but then it's just a question of priorities. As usual... performance, power, form factor - pick two :)

Personally I think it's questionable that people game on laptops *at all*, so the idea of carrying a giant laptop with discrete graphics is laughable. All of your arguments also apply to why people should just buy a desktop for gaming; you're just drawing a different division between what an acceptable form factor is *for you*. For me, 13" is the absolute max, 11" is better, and much more than a 25W TDP is unworkable. If someone else draws the line at a 50W-ish TDP, then why is that wrong for them? Obviously some people want gigantic "laptops" that barely deserve the name, and that's fine, but everyone has different requirements.

Besides, you can hardly argue that these parts don't serve a purpose but then argue that AMD's desktop APUs do... 100+W desktop CPU+GPUs are what really don't fit anyone's needs.

The NVIDIA comments - if true - are also pretty hilarious, unless they pair them with ditching all of their GPUs that perform worse than a 5200, since "serious folks" have to play on something faster than "integrated"... which they won't, because as they know very well, lots of folks buy those. Frankly, the writing is on the wall as far as discrete in laptops goes; it's only a matter of time.

And as usual, you can pry my 680 and 7970 from my cold dead hands, but discrete is simply a bad compromise for laptops. The advantages of integration there are too high. Hopefully AMD comes out with some great APUs in the coming months.
 
You continue to miss the point. I'm not arguing value: if the value of Iris Pro is considered poor by the market, resulting in poor sales, then it will be lowered in price. That doesn't change the fact that it is technically a superior product to the other options in the market segment it tries to address, i.e. mobile graphics performance in the 50W TDP range.

Aren't you (and Anand) ignoring switchable graphics solutions such as Optimus that switch between iGPU and dGPU depending on the graphics workload? Wouldn't a Haswell i3/i5 desktop CPU + GT2 iGPU used with a switchable dGPU have lower average power consumption (and hence longer battery life) with most non-graphically intensive workloads compared to a Haswell i7 desktop CPU + GT3e iGPU?

Edit: Looking more carefully at Anand's review of Haswell, it appears that Intel is intentionally manipulating the TDP of their desktop CPU + iGPU processors to make dGPUs look less attractive from a power perspective (by manipulating operating frequencies on 4th Gen desktop CPU + iGPU processors). Don't you find it odd that Intel 4th Gen [Haswell] Core i5 desktop processors have equal or higher "specified" TDP in each and every case compared to Intel 4th Gen [Haswell] Core i7 desktop processors (http://www.anandtech.com/show/7003/the-haswell-review-intel-core-i74770k-i54560k-tested/4), even though integrated graphics are no better than the relatively weak [GT2] HD 4600 on the 4th Gen i5 processors? Don't you find it odd that the GPU base clock operating frequency is 2x higher for the [GT2] HD 4600 compared to the [GT3e] Iris Pro 5200 (http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/19), even though max GPU turbo speeds are identical?
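For what it's worth, the switchable-graphics question above really comes down to duty-cycle arithmetic. Here's a minimal sketch; every wattage and the usage split are invented purely for illustration, not figures for any actual part.

# Average platform power under a light/heavy usage mix (illustrative numbers only).
# Scenario A: i5 + GT2 iGPU with a switchable dGPU (dGPU powered down when idle).
# Scenario B: i7 + GT3e iGPU, no dGPU.

light_share = 0.9                 # assumed fraction of time on non-graphics work
heavy_share = 1.0 - light_share

a_light_w, a_heavy_w = 8.0, 55.0  # assumed draw: GT2 when light, CPU + dGPU when heavy
b_light_w, b_heavy_w = 9.0, 45.0  # assumed draw: GT3e package in both states

avg_a = light_share * a_light_w + heavy_share * a_heavy_w
avg_b = light_share * b_light_w + heavy_share * b_heavy_w

print(f"switchable iGPU + dGPU average: {avg_a:.1f} W")
print(f"GT3e-only average:              {avg_b:.1f} W")
# The answer flips depending on the assumed idle draws and the usage mix,
# which is exactly why spec-sheet TDPs alone can't settle it.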
 
Wouldn't a Haswell i3/i5 desktop CPU + GT2 iGPU used with a switchable dGPU have lower average power consumption (and hence longer battery life) with most non-graphically intensive workloads compared to a Haswell i7 desktop CPU + GT3e iGPU?
No, not really, for a couple of reasons. Wider GPUs can run at lower frequencies for the same workload, and the eDRAM can actually reduce power considerably by avoiding off-chip memory traffic. Power management will always be better on a single chip vs. CPU+dGPU as well, since it can adjust a single power budget between all of the components of the system effectively.
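To make the wide-and-slow point concrete, here is a minimal sketch using the usual dynamic-power approximation P ~ C * V^2 * f, assuming (purely for illustration) that doubling the EU count lets you halve the clock for the same throughput and drop the voltage a bit; none of the numbers are real HD 4600 or Iris Pro figures.

# Why a wider, lower-clocked GPU can burn less power for the same throughput,
# using the classic dynamic-power approximation P ~ C * V^2 * f.
# EU counts, voltages and clocks below are illustrative assumptions only.

def dynamic_power(switched_capacitance, voltage_v, freq_ghz):
    return switched_capacitance * voltage_v ** 2 * freq_ghz

# Narrow GPU: 20 EUs at 1.2 GHz, needing 1.00 V (assumed).
narrow = dynamic_power(switched_capacitance=20, voltage_v=1.00, freq_ghz=1.2)

# Wide GPU: 40 EUs doing the same work at 0.6 GHz, assumed to run at 0.85 V.
wide = dynamic_power(switched_capacitance=40, voltage_v=0.85, freq_ghz=0.6)

print(f"narrow and fast: {narrow:.1f} (arbitrary units)")
print(f"wide and slow:   {wide:.1f} (arbitrary units)")
# Throughput here scales with EUs * frequency (equal in both cases), while power
# scales with V^2 * f, so the lower-voltage, lower-clock configuration wins.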

Don't you find it odd that Intel 4th Gen [Haswell] Core i5 desktop processors have equal or higher "specified" TDP in each and every case compared to Intel 4th Gen [Haswell] Core i7 desktop processors (http://www.anandtech.com/show/7003/the-haswell-review-intel-core-i74770k-i54560k-tested/4), even though integrated graphics are no better than the [GT2] HD 4600 on these 4th Gen i5 processors?
Uhh what? This conspiracy theory is too complicated for me to even follow, let alone be convinced of...
 
Wider GPUs can run at lower frequencies for the same workload

True, but the average power consumption and peak power consumption can still be higher for a lower-clocked GPU, depending on how many more functional execution units are active. For instance, a GTX 780 will have higher avg. and peak power consumption than a GTX 770, even though it has a significantly lower GPU clock operating frequency. And for non-graphically intensive workloads (which is what I was referring to), there would be no reason to run GT2 at a higher operating frequency than GT3e.

This conspiracy theory is too complicated for me to even follow, let alone be convinced of...

It is clear that Intel manipulated CPU + iGPU clock operating frequencies so that, from a power perspective, Core i5 (and probably Core i3) 4th Gen processors cannot stand out from Core i7 4th Gen processors (see the first link in my post above), and so that HD 4600 cannot stand out from Iris Pro 5200 (see the second link in my post above). In fact, the Core i7-4800MQ (with HD 4600 integrated graphics) has higher CPU and iGPU clock operating frequencies compared to Core i7-4850HQ (with Iris Pro 5200 integrated graphics), simply because Intel didn't want that option to look more attractive from a specified TDP perspective.
 
For load power, the base clock doesn't matter. What matters is Intel's dynamic frequency and how far it can go up - and on both parts it goes up to 1300 MHz.
 
True, but the average power consumption and peak power consumption can still be higher for a lower-clocked GPU, depending on how many more functional execution units are active.
Sure, but that implies you're getting more work done. Efficiency is only relevant for a fixed set of work, and there are too many factors to spin theories that have any sort of general applicability (race to idle, balance of work, etc).
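As a toy example of the race-to-idle factor (all wattages and durations below are made up purely for illustration):

# Energy for a fixed task over a fixed window: burst-then-idle vs. slow-and-steady.
# All wattages and durations are invented for illustration.

def energy_joules(active_w, active_s, idle_w, idle_s):
    return active_w * active_s + idle_w * idle_s

window_s = 10.0   # total time window considered
idle_w = 1.0      # assumed platform idle power

# Race to idle: 30 W for 2 s, then idle for the remaining 8 s.
race = energy_joules(30.0, 2.0, idle_w, window_s - 2.0)      # 68 J

# Slow and steady: 8 W for the full 10 s to finish the same work.
steady = energy_joules(8.0, 10.0, idle_w, 0.0)               # 80 J

print(f"race to idle:    {race:.0f} J")
print(f"slow and steady: {steady:.0f} J")
# Tweak any of the assumed numbers and the winner changes, which is the point:
# efficiency claims only mean something for a fixed workload with measured data.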

In fact, the Core i7-4800MQ (with HD 4600 integrated graphics) has higher CPU and iGPU clock operating frequencies compared to Core i7-4850HQ (with Iris Pro 5200 integrated graphics), simply because Intel didn't want that option to look more attractive from a specified TDP perspective.
Uhh I don't think this all works how you think it does (with respect to power and base clock). The GT3e parts tend to have a wider turbo range because they have to load balance the power budget with more components (eDRAM, etc). I don't think your conclusion that the i5s are going to arbitrarily suck more power is supported; I'd wait for the actual data on that given a fixed workload.
 
http://techreport.com/review/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed/13

AMD has based its sales pitch for APUs on converged computing and OpenCL acceleration. Looks to me like Intel isn't willing to cede any ground to its competitor here. Using its eDRAM cache, the Iris Pro 5200 IGP nearly triples the performance of the A10's Radeon IGP.
 
What is the price difference between them? Also, AMD have just released the 6800K, so I think that should be in the comparison.

From what I've seen the 6800K is barely any faster. AMD's main problem here, I think, is that Trinity's VLIW4 GPU is really showing its age:

[chart: LuxMark 2.0 OpenCL results]
 
From what I've seen the 6800K is barely any faster. AMD's main problem here, I think, is that Trinity's VLIW4 GPU is really showing its age:

[chart: LuxMark 2.0 OpenCL results]

That was my thinking too, although I didn't realise the gap was so large. Kaveri should make vast improvements over Trinity/Richland by the looks of it. But by enough to overtake Iris Pro? That will certainly be an interesting battle.
 
That was my thinking too, although I didn't realise the gap was so large. Kaveri should make vast improvements over Trinity/Richland by the looks of it. But by enough to overtake Iris Pro? That will certainly be an interesting battle.

With quad-channel memory it should have enough bandwidth to accomplish that.
One place where Intel has an advantage in GPU OpenCL is that it allows the L3/L4 cache to be used by the GPU. Some workloads respond very well to that, and it shows in LuxMark. I do think this particular workload would fare brilliantly on the XBONE APU.
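Rough peak-bandwidth arithmetic for context: a DDR3 channel is 64 bits wide, so peak bandwidth is 8 bytes times the transfer rate; the memory speed and channel counts below are assumptions, and the eDRAM number is the commonly cited ~50 GB/s per direction for Crystalwell.

# Back-of-envelope peak memory bandwidth in GB/s.
# A DDR3 channel is 64 bits (8 bytes) wide; speeds/channel counts are assumptions.

def ddr3_peak_gbs(mt_per_s, channels):
    return 8 * mt_per_s * channels / 1000.0   # bytes/transfer * MT/s -> GB/s

dual_ddr3_2133 = ddr3_peak_gbs(2133, channels=2)   # ~34 GB/s
quad_ddr3_2133 = ddr3_peak_gbs(2133, channels=4)   # ~68 GB/s
edram_per_direction = 50.0                         # commonly cited Crystalwell figure

print(f"dual-channel DDR3-2133:  {dual_ddr3_2133:.1f} GB/s")
print(f"quad-channel DDR3-2133:  {quad_ddr3_2133:.1f} GB/s")
print(f"Crystalwell eDRAM:      ~{edram_per_direction:.0f} GB/s per direction")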
 
Kabini might be usable as an indicator of Kaveri performance (over Trinity/Richland) when it's out in volume and with release drivers. From the previews, an A4-5000 seems to be 2-3 times the speed of an E-350 (Brazos) in many CL benchmarks. However, in LuxMark 2.0 in particular, it seems to be slower at every site that included it. Some sites seem to have discounted this as a bug, while others consider it a genuine performance regression vs. VLIW for LuxMark-like workloads.
 