Intel Skylake Platform

I'd argue the reason most applications don't use more than 4 cores is that we don't have CPUs with more than 4 cores in the mainstream. Pretty much all games these days make good use of 4 cores, and most scale to some extent to 8. I've little doubt that if 8 cores were mainstream now (and had been for the last few years), we'd be seeing pretty good scaling on them in lots of applications.

We've had quad cores for about a decade now. How many consumer programs actually USE all 4 cores significantly? A very small fraction of them. Programs that could actually USE more than that are an even tinier fraction.

Making 8-core CPUs available at consumer-level prices isn't going to change anything when barely anything uses 4 cores. Yes, we're starting to see more games utilizing 4 cores. However, despite quad cores having been quite mainstream for a while now, the vast majority of programs (including games) run just as well on dual cores as they do on quad cores.

I realize that in the e-peen-waving gaming world, 8 cores are sexy, even if the majority of that niche consumer group would rarely, if ever, use more than 4 cores.

Going forward, we'll potentially see more games using 4 cores efficiently. When that becomes the norm rather than the niche, then maybe Intel will feel there's an actual need for a consumer-level 6-8 core CPU. Until then, people can feel free to buy more expensive 8-core CPUs and then use only 2-4 of those cores while gaming. I'll be happily gaming on a far cheaper 4-core CPU.

Regards,
SB
 

Just about every modern game out there runs faster on quad cores vs dual cores. Just because they aren't doubling in speed doesn't mean the cores aren't being used. All the arguments you've made above were made when the transition from 1 to 2 cores was made, and then again when 2 to 4 cores was made. And yep, we'll be having this same discussion again when we eventually move from 8 cores to 16 cores, and so on. But hey, 640k is enough for anybody, right?
 
All the arguments you've made above were made when the transition from 1 to 2
No. Just no.
Intel was explicitly saying that dual cores were the future, as they had run into the frequency wall, and were pricing them aggressively to encourage adoption and avoid the chicken-and-egg problem.
Reviewers also considered dual cores worth it.
"When it comes to dual core vs. single core with Hyper Threading, there's a huge difference. While both improve system response time, dual core improves it more while also guaranteeing better overall system performance. Hyper Threading lets you multitask, dual core lets you actually get work done while multitasking."
Read the article. It's clear that dual cores were considered the future. In 2005. Compare that to today's reviews of Haswell-E.

I bought my first computer then, and I chose to spend money on dual cores + IGP instead of single + discrete. That system is still serving me well as my parents' desktop.

And on another note, Haswell-E actually decreased the price of 8 core CPUs from $2000 to $1000 ;)
 
I appreciate your opinion, but please stop trying to ram it down my throat; we're not going to agree on this. I stand by my belief that if 8 cores were mainstream and quads low end today, we'd see much greater use of those 8 cores in modern software. I also remember the transition from single core to dual core very differently from you (and my first computer came many years before that transition), but I really don't have the inclination to argue about it, so how about we just agree to disagree?
 
I'd be curious to see how different applications scale with the number of cores; that could be an interesting article to write... (no time though, so anyone up for it?)
 
I did this once for Skyrim using my 3930k. It was arduous because you must reboot to reset the active core count / hyperthreading settings. I also did the testing across three different speeds.

It was interesting that, at low speed (1.5GHz), more cores helped until 4c/4t, after which there was a slight dip when the whole 6c/12t CPU was enabled. At high speed (4.5GHz), the 2c/4t performance was enough to hit the glass ceiling.

https://forum.beyond3d.com/posts/1641725/
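For anyone tempted to write that scaling article, here's a minimal, hypothetical C++ sketch of the simplest possible starting point: it times a fixed amount of synthetic work split across 1, 2, 4, ... threads and reports the speedup over one thread. It says nothing about real games, which you can't re-thread from the outside (you'd still need the BIOS core-count / hyperthreading toggles described above, or OS affinity masks), so treat the numbers as purely illustrative.

```cpp
// Hypothetical scaling harness (not from the thread): split a fixed amount of
// CPU-bound work across 1, 2, 4, ... threads and report the speedup vs. 1 thread.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Arbitrary integer recurrence, just to keep the ALUs busy.
static std::uint64_t burn(std::uint64_t iterations) {
    std::uint64_t x = 1;
    for (std::uint64_t i = 0; i < iterations; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

int main() {
    const std::uint64_t total_work = 400'000'000ULL;  // fixed total, split across threads
    const unsigned max_threads = std::max(1u, std::thread::hardware_concurrency());
    double baseline = 0.0;

    for (unsigned threads = 1; threads <= max_threads; threads *= 2) {
        std::vector<std::uint64_t> results(threads);  // one slot per worker, no sharing
        const auto start = std::chrono::steady_clock::now();

        std::vector<std::thread> pool;
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&, t] { results[t] = burn(total_work / threads); });
        for (auto& th : pool) th.join();

        const double secs = std::chrono::duration<double>(
                                std::chrono::steady_clock::now() - start).count();
        if (threads == 1) baseline = secs;

        std::uint64_t checksum = 0;                   // keep the work from being optimized away
        for (auto r : results) checksum ^= r;
        std::printf("%2u thread(s): %.3f s, speedup %.2fx (checksum %llx)\n",
                    threads, secs, baseline / secs,
                    static_cast<unsigned long long>(checksum));
    }
}
```

A perfectly parallel toy workload like this should scale close to linearly up to the physical core count and then flatten out; real games sit somewhere between that line and no scaling at all, which is more or less what the Skyrim numbers above show.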
 
Took me a moment to read those graphs as the colour scheme wasn't very obvious. Turns out you can click on the bars to highlight them.

The Haswell-E shows significant improvement going from 6 cores to 8 cores on a couple of their tests. Watchdogs going from 61 to 81 fps. Crysis 3 going from 41.5 to 56 fps. That's absolutely huge. Not at all what I was expecting. Didn't expect 6 cores to be a bottleneck.
 
Contrary to most of you here, I actually believe that Intel will try to consolidate around 2 big cores presenting 4 software cores (or more) as the basis of its "non-pro" offering.
It makes no sense for Intel to push further in a world that is more and more power constrained. What I see is:
Intel continues to widen its cores overall // Hyper-Threading sometimes has a negative impact on performance // extremely potent SIMD units are going underused // the cheaper the chip...
Instead of raising the number of cores, it would make sense to me for Intel to improve Hyper-Threading as a means to maximize the utilization of all the massive investments they make in their cores. I could see an increase in the L1 and L2 cache sizes to make further room for 2 threads (or more). Skylake is a new architecture; it would be interesting to see them rework the way SIMD work is handled: would it be possible for the CPU, on its own, to mix work coming from the 2 (or more) threads and send it to the SIMD units in a "VLIW" manner? (Think of the SIMD as 16 lanes: each thread could use the whole 16 lanes, or the CPU could "pack" instructions from 4 threads doing, say, SSE4.2 work, or from 2 threads doing AVX/AVX2 work.)

As I see it, their chips intended for the personal realm could look more and more like Core M: 2 great cores, a good IGP, a 64-bit memory controller supporting fast DDR4, and 64MB of CrystalWell. The chip itself will be tiny at 10nm, so easily mass produced, yet it is likely to kill anything below 35 Watts or so (most laptops won't go higher, and I guess in a lot of offices those towers will soon be a thing of the past).
 
The Haswell-E shows significant improvement going from 6 cores to 8 cores on a couple of their tests. Watchdogs going from 61 to 81 fps. Crysis 3 going from 41.5 to 56 fps. That's absolutely huge. Not at all what I was expecting. Didn't expect 6 cores to be a bottleneck.
Notice how those are also the two games that show large gains from hyperthreading. Which does imply that those cores experience stalls/bottlenecks, and may point to an optimization issue. Yes, this is a WAG.
 
Took me a moment to read those graphs as the colour scheme wasn't very obvious. Turns out you can click on the bars to highlight them.

The Haswell-E shows significant improvement going from 6 cores to 8 cores on a couple of their tests. Watchdogs going from 61 to 81 fps. Crysis 3 going from 41.5 to 56 fps. That's absolutely huge. Not at all what I was expecting. Didn't expect 6 cores to be a bottleneck.
In case you can't understand all the text: they did go out of their way to find a very CPU-hungry scene (esp. in those two games) and measured a short run through them.

Notice how those are also the two games that show large gains from hyperthreading. Which does imply that those cores experience stalls/bottlenecks, and may point to an optimization issue. Yes, this is a WAG.

Or the load is sufficiently math-heavy that there are strong gains. I don't quite understand your argument: when a load is multithreaded well enough that you get a +30% performance increase on an i3 or i7, it seems like a good thing.
 
In case you can't understand all the text: they did go out of their way to find a very CPU-hungry scene (esp. in those two games) and measured a short run through them.
Thank you for translating :)

Or the load is sufficiently math-heavy that there are strong gains. I don't quite understand your argument: when a load is multithreaded well enough that you get a +30% performance increase on an i3 or i7, it seems like a good thing.
Hyperthreading only shows gains if some of the resources in the cores are underutilized. It may be that utilizing them without HT is impossible, or it may be that code optimization would lead to large gains. Again, I do not really know anything about this and am just guessing here.
 
I don't think the performance jumps from 4->6/8 cores are what is important here; it's the jump from 2->4 cores, since that's the current low-end/mainstream market. Clearly, if we are getting good scaling with 4 cores over 1 and 2 cores, then multithreading is beneficial, but until 8 cores are mainstream, games aren't going to be actively targeting that number of cores, and so we're unlikely to see significant scaling, just as we didn't see significant scaling when quad cores were first released.

Incidentally, I'm not sure how good those benchmarks above are for judging CPU scaling, since they're all running at max graphics settings, moving a large chunk of the bottleneck off the CPU onto the GPU.
 
Hyperthreading only shows gains if some of the resources in the cores are underutilized. It may be that utilizing them without HT is impossible, or it may be that code optimization would lead to large gains. Again, I do not really know anything about this and am just guessing here.
HT gives big gains for code that includes lots of back-to-back dependent (random access) memory lookups. Pointer list and tree traversals are a classic example of this. But even single memory accesses (pointer dereference, hash lookup) that miss the data cache can stall the CPU if all the following instructions depend on this data. In this case OoO cannot hide the memory latency. HT will fill the ROB with independent instructions from the other thread.

Memory access pattern optimization will reduce the gains from HT, but not all random accesses can be optimized away if you want to be work-optimal.
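To make the pointer-chasing case concrete, here's a small hypothetical C++ sketch (names and sizes are my own, not from this thread): every load's address comes from the previous load, so OoO has nothing independent to run and each cache/DRAM miss is paid in full. To actually pin the two parallel chases to the sibling hyperthreads of one physical core on Linux you'd use something like taskset after checking /sys/devices/system/cpu/cpu0/topology/thread_siblings_list, since sibling numbering varies by system.

```cpp
// Hypothetical pointer-chasing microbenchmark: each load depends on the
// previous one, so out-of-order execution cannot hide the miss latency.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Follow the permutation for `steps` hops and report nanoseconds per hop.
static void chase(const std::vector<std::size_t>& next, std::size_t steps) {
    std::size_t i = 0;
    const auto start = std::chrono::steady_clock::now();
    for (std::size_t s = 0; s < steps; ++s)
        i = next[i];                                  // serially dependent load
    const double ns = std::chrono::duration<double, std::nano>(
                          std::chrono::steady_clock::now() - start).count();
    std::printf("%.1f ns/hop (end=%zu)\n", ns / steps, i);  // print i so it isn't optimized out
}

int main() {
    const std::size_t n = std::size_t(1) << 24;       // ~128 MiB table, far larger than L3
    std::vector<std::size_t> order(n), next(n);
    std::iota(order.begin(), order.end(), std::size_t(0));
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t k = 0; k < n; ++k)               // build one big random cycle
        next[order[k]] = order[(k + 1) % n];

    const std::size_t steps = 20'000'000;
    chase(next, steps);                               // one chase on its own

    std::thread a(chase, std::cref(next), steps);     // two chases in parallel; pin both
    std::thread b(chase, std::cref(next), steps);     // to one physical core to see the HT overlap
    a.join();
    b.join();
}
```

On an HT-enabled core, the combined wall time of the two parallel chases is usually well under 2x the single chase: while one thread's load is outstanding, the other thread's independent chain keeps the core fed, which is exactly the mechanism described in the post above.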
 
Incidentally, I'm not sure how good those benchmarks above are for judging CPU scaling, since they're all running at max graphics settings, moving a large chunk of the bottleneck off the CPU onto the GPU.
You could just knock down the screen resolution. A heavy-duty GPU running even the most advanced games at max settings is unlikely to become the bottleneck at, say, 720p.
 
Anandtech reviews the i7-6700K

article said:
... [the 6700K] isn't the only viable option right now: the six-core Haswell-E Core i7-5820K, which is priced very similarly to the 6700K, is a very strong performer indeed. With six physical cores in your system, multithreaded software applications will see a real boost to performance ... For gaming, though, the higher default and turbo clocks of the Core i7-6700K, plus the headroom for large overclocks, make it a better choice.
 