Intel Broadwell (Gen8)

No, they are 15W Ultrabook chips with GT2 graphics.
I do have an Iris Pro 5200 laptop though, but it is paired up with a GTX 870M so I don't use the integrated much.
Why did a notebook manufacturer build a machine with Iris Pro and then add a discrete GPU as well?
 
15" Macbook Pros are like that IIRC. Because Apple, I suppose. And, the retina display. If you're running on iGPU, you'd want decent performance regardless. :)
 
15" Macbook Pros are like that IIRC. Because Apple, I suppose. And, the retina display. If you're running on iGPU, you'd want decent performance regardless. :)

L4 cache does give some benefit in certain CPU workloads too, IIRC. The one I have is actually the Aorus X3 Plus, so an i7-4860HQ combined with the GTX 870M and a 13.9" 3200x1800 screen. Nice little machine!
 
@Thorburn The L4 makes a huge difference in certain scientific workloads especially, and probably in just about anything involving very large datasets or random accesses. General tasks are sped up too, of course, just not as much, since those are already served pretty well by the existing cache structure. Next CPU I buy will definitely have the L4 die, that's a certainty.
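The large-random-working-set point can be illustrated with a toy cache model (this is my own sketch, not anything from the thread - the cache sizes and strides are made-up round numbers standing in for an L3 and a big eDRAM L4):

```python
import random

def hit_rate(accesses, cache_lines, line_size=64):
    """Simulate a direct-mapped cache; return the fraction of hits."""
    tags = [None] * cache_lines
    hits = 0
    for addr in accesses:
        block = addr // line_size        # which 64-byte line this address is in
        idx = block % cache_lines        # direct-mapped: line index
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block            # miss: fill the line
    return hits / len(accesses)

random.seed(0)
working_set = 2 * 1024 * 1024            # 2 MB working set
seq = list(range(0, working_set, 8))     # sequential 8-byte strides
rnd = [random.randrange(working_set) for _ in range(len(seq))]

small = 1 * 1024 * 1024 // 64            # "L3"-sized cache, smaller than the set
big = 16 * 1024 * 1024 // 64             # "L4 eDRAM"-sized cache, holds it all

print("sequential, small cache:", round(hit_rate(seq, small), 3))
print("random,     small cache:", round(hit_rate(rnd, small), 3))
print("random,     big cache:  ", round(hit_rate(rnd, big), 3))
```

Sequential scans hit well regardless of cache size, while random accesses over a working set bigger than the cache miss constantly until the cache grows past the working set - which is exactly where a 128 MB L4 earns its keep.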
 
Next CPU I buy will definitely have the L4 die, that's a certainty.
Same here. I am eagerly waiting to see the full Broadwell lineup with the big L4 cache (and GT3). Would be interesting to see how much the L4 boosts my CPU-based voxel rendering prototype.
 
Same here. I am eagerly waiting to see the full Broadwell lineup with the big L4 cache (and GT3). Would be interesting to see how much the L4 boosts my CPU-based voxel rendering prototype.

I'd be happy to run a comparison of it vs. an i7-4710MQ if you'd like.
 
A couple of 3DMark videos - one uploaded (Ice Storm) and the other in progress (Cloud Gate). Both show 17% gains from the i5-4210U to the i5-5200U.

Both systems updated to the 4080 drivers - previously the HSW was on 35xx and the BDW was on 4062 or something like that. Retested Unigine on them and didn't see any noticeable difference, but good to be up to date.



Also have The Sims 4 and Titanfall videos queued to upload - damn 1Mbit upload sucks. :(
 
15-20% is a rather small increase. I wonder if this is because the turbo clock is lower (only a GPU clock log could prove this) or because the changes in Gen8 just weren't big enough to make a bigger difference. It could well be that the GPU is running 100-200 MHz lower than on Haswell. I've also read about throttling issues in some tests; in a Polish review they noticed low clock speeds:

Immediately after starting the game everything is in perfect order: the CPU is clocked at 2.9 GHz and the integrated graphics reaches about 950 MHz. But the situation soon changes. After a while the clocks drop to 1.6 GHz on the CPU and 750 MHz on the GPU, so the HD 5500, to put it mildly, never spreads its wings.
https://translate.googleusercontent...5.html&usg=ALkJrhgwMFBHOz3TcrDIIIik6o5vbTe6vA
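One way to capture the kind of GPU clock log mentioned above on Linux with the i915 driver is to poll the sysfs frequency files. A minimal sketch - the paths assume a stock single-GPU i915 setup, and `read_freq`/`log_clocks` are helper names I made up:

```python
import time

# Paths assume Linux with the i915 driver; adjust card0/cpu0 for your system.
GPU_FREQ = "/sys/class/drm/card0/gt_cur_freq_mhz"                    # in MHz
CPU_FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"   # in kHz

def read_freq(path):
    """Parse a sysfs frequency file containing a single integer."""
    with open(path) as f:
        return int(f.read().strip())

def log_clocks(samples=30, interval=1.0):
    """Poll and print GPU and CPU clocks once per interval."""
    for _ in range(samples):
        gpu_mhz = read_freq(GPU_FREQ)
        cpu_mhz = read_freq(CPU_FREQ) // 1000   # convert kHz to MHz
        print(f"GPU {gpu_mhz:4d} MHz   CPU {cpu_mhz:4d} MHz")
        time.sleep(interval)
```

Running `log_clocks()` in a second terminal while the game is up would show whether the clocks sag over time - a drop like 950 to 750 MHz under sustained load points at the TDP limit rather than at drivers.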
 
It's all about the TDP. 15-20% in the same power envelope and with equal memory bandwidth isn't so bad.
 
It is relatively bad considering this is a 22nm-to-14nm shrink using an updated GenX. Look at IVB -> HSW: same process, a Gen7-to-Gen7.5 sidegrade, reduced TDP, and a similar improvement.
 
It is relatively bad considering this is a 22nm-to-14nm shrink using an updated GenX. Look at IVB -> HSW: same process, a Gen7-to-Gen7.5 sidegrade, reduced TDP, and a similar improvement.

SNB to IVB is a better comparison - a die shrink plus a GPU generation change. In fact, in the 15W TDP parts the performance difference between HD 3000 and HD 4000 was basically zero.
 
I don't believe that, sorry.

Actually, yes, it looks like you were right to query that one - there were a couple of tests where that was the case, but the majority do show quite good performance scaling, so my apologies.
 
15" Macbook Pros are like that IIRC. Because Apple, I suppose. And, the retina display. If you're running on iGPU, you'd want decent performance regardless. :)
In OS X Apple actually uses a mix of iGPU and dGPU depending on the workload (e.g. a lot of OpenCL work runs on the Iris Pro). The two are in the same ballpark of performance, so the setup is still a bit weird, but I suppose it's only the one MBP config in any case.
 
A bit of a demonstration of how performance scales with TDP - Core M 5Y10c vs. i5-5200U in GRID Autosport. Same 2+2 die, similar GPU base and max frequencies, but near-linear performance scaling with the TDP increase.
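As a back-of-the-envelope check on the near-linear claim, the model is just the TDP ratio (the 30 fps figure below is a made-up placeholder, not a measured result; the 15W and 4.5W TDPs are the rated values for the two chips):

```python
def predicted_fps(fps_ref, tdp_ref, tdp_new):
    """Linear power-scaling model: fps scales with the TDP ratio."""
    return fps_ref * (tdp_new / tdp_ref)

# Hypothetical: if the 15W i5-5200U managed 30 fps, a perfectly linear
# model would put the 4.5W Core M 5Y10c at a third of that.
print(predicted_fps(30, 15, 4.5))   # 9.0
```

Anything measured above that linear prediction at 4.5W would mean the frequency/voltage curve is being kinder at the low end; anything below it, that the Core M is hitting other limits too.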

 
Nice, very visual demonstration :)

I could have emphasised it a bit more by using higher detail settings and turning the HD 5300 into a slideshow, really, but I already had a capture of it running at 1280x720 Ultra Low detail and didn't want to faff about reinstalling it. ;)
 
Absolutely, I wasn't being sarcastic at all. I think folks sometimes gloss over how power-limited these chips are and near-linear scaling like you showed there with similar amounts of hardware is a good reminder. And yeah, I agree that picking "playable" settings for all SKUs is the best way. I always shake my head at reviews that draw conclusions based on one part being 2fps vs. another at 1fps (TWICE AS FAST!!! OMG!) ;)
 
I always shake my head at reviews that draw conclusions based on one part being 2fps vs. another at 1fps (TWICE AS FAST!!! OMG!) ;)
As whacked-up as that may be, what's way worse is reviews making a big deal about game X on hardware Y "only" being 97% the performance of hardware Z, when framerates are already well past 200... Ugh, lol.
 
As whacked-up as that may be, what's way worse is reviews making a big deal about game X on hardware Y "only" being 97% the performance of hardware Z, when framerates are already well past 200... Ugh, lol.
Lol yeah no argument there... once you're past vsync, you're done guys :) Trying to draw architectural conclusions based on microsecond differences at high frame rates is total silliness.
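The arithmetic behind both complaints is clearer in frame times than in fps - a quick sketch (all numbers illustrative):

```python
def frame_time_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

# 1 fps vs 2 fps: "twice as fast", but both are a slideshow.
print(frame_time_ms(1) - frame_time_ms(2))     # 500.0 ms apart per frame

# 200 fps vs ~206 fps ("only 97%"): roughly 0.15 ms per frame -
# far below anything perceptible, and past vsync anyway.
print(frame_time_ms(200) - frame_time_ms(206))
```

The ratio is the same 2x in the first case, but only the frame-time view shows that one gap is half a second per frame and the other is noise.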
 