Intel Gen9 Skylake

I would actually prefer a low-clocked 8-core Xeon with a 72 EU GPU + EDRAM. It would both compile fast and be useful for integrated GPU shader optimization.

So what you actually want is the Xbox One's APU with better CPU cores ;)
AMD seems to be planning that, though it probably won't come before 2017...

Except it is not $200 more. It's $85 more.
i5-4570: $192
i5-5675C: $276

Actual retail prices for the 4570's faster follow-up, the 4590, and the 5675C are rather different, and for that $120 difference you can buy a discrete graphics card that will put Broadwell's Iris Pro to shame.
Not to mention those deals that one can get on GPU-less Xeon E3 models.

The price difference that Intel asks for their Iris models is hardly worth it. Plus, I've yet to see benchmarks that prefer the EDRAM models to the similarly priced LGA2011 CPUs capable of quad-channel DDR3/DDR4.
 
Actual retail prices for the 4570's faster follow-up, the 4590, and the 5675C are rather different, and for that $120 difference you can buy a discrete graphics card that will put Broadwell's Iris Pro to shame.
Good point. Did not think to check retail prices. And Broadwell is only available on newegg. Let's hope Skylake will have better availability and pricing...

The price difference that Intel asks for their Iris models is hardly worth it. Plus, I've yet to see benchmarks that prefer the EDRAM models to the similarly priced LGA2011 CPUs capable of quad-channel DDR3/DDR4.
Those CPUs require more expensive motherboards.
 
Really? That sounds sweet, except when I tried enabling both GPUs in my system, the Intel driver created a "virtual monitor" which, as far as I could tell, could not be disabled. Windows then put another, off-screen desktop on that monitor which I could not access (but could still lose my mouse pointer into, since it attached to an edge of the screen), and which randomly caused my monitor to display nothing but solid black when coming out of power save or being turned on.
Might be glitchy on certain setups - note that there's an interaction with the BIOS as well so make sure you're all up to date there.

This is really all on the OS, not the IHV drivers, so it's really a combination of OS+BIOS that enable it.

I see anybody remotely interested in playing games on PC either going for a high-end CPU and high-end GPU, or a hardly noticeably slower i5 or i7 and spending the 100~200 bucks they save on a GPU. The latter is undoubtedly going to give you much better performance for the same money.
If you're talking desktop then a dGPU makes sense, but an increasingly large number of gamers don't even have a desktop these days. And as soon as you start segmenting off "well they aren't *real* gamers" if they don't have X/Y/Z hardware and play A/B/C games, then it gets a little bit silly. Also, I'll win that competition you 4 core fake-gamer PEASANTS! :p (Although repi beats me heh.)
 
Actual retail prices for the 4570's faster follow-up, the 4590, and the 5675C are rather different, and for that $120 difference you can buy a discrete graphics card that will put Broadwell's Iris Pro to shame.
Sure, but you lose a bunch of the CPU side. It's all a question of priorities. If you are primarily interested in perf/$ in gaming, I completely agree that LGA Iris Pro is not a good trade-off. But a lot of people are not that price sensitive to start with, and the cool thing is that you're not really giving up any gaming potential by having a beefier iGPU.

The price difference that Intel asks for their Iris models is hardly worth it.
If you're going on ARK prices... don't go on ARK prices. The 5775c is slightly more expensive than a regular high end i7 (in MSRP - given apparent stock issues retail might be skewed for now), but not very much and you do get quite a bit of hardware for that cost.

Plus, I've yet to see benchmarks that prefer the EDRAM models to the similarly priced LGA2011 CPUs capable of quad-channel DDR3/DDR4.
http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/14
5775c comes out as the best CPU for gaming - over a 6700K and 5960x - despite a frequency and power deficit.

And yeah, they are not at all similarly priced once you get an LGA2011 motherboard + the quad-channel RAM. Don't get me wrong, I love the hell out of my 5960x, but if your primary purpose is gaming it is not really worth the money for current games vs. other options. New games and DX12 may change that of course, we'll have to see.
 
I want an Intel 5x5 motherboard and a 72 EU GT4e GPU with a huge cache. A discrete GPU means a bigger case. I think 5x5 is only 65W though.
 
I want an Intel 5x5 motherboard and a 72 EU GT4e GPU with a huge cache. A discrete GPU means a bigger case. I think 5x5 is only 65W though.
Yeah a quick glance at the SKUs should demonstrate that these big GPUs are primarily designed for mobile devices where form factor and power are relevant constraints :) Let's all remember that the 5775c and ilk exist because enthusiasts specifically asked for them, and if my experience at IDF is an indication, at least the hardware tech press is very happy with them.

Agreed the 5x5 looks interesting, although they fit a 65W Iris Pro in a 4x4 ~NUC form factor with the Gigabyte Brix Pro. I personally would take the tradeoff of a bit bigger size for bigger/slower fans though :) In any case all of these parts are 47/65W or less, so that shouldn't be an issue.
 
5x5 is great, because in theory you can upgrade your CPU/GPU as long as they don't switch sockets, whereas NUC and Gigabyte Brix are soldered.
 
So what you actually want is the Xbox One's APU with better CPU cores ;)
In my particular (niche) professional use case, I especially need a desktop chip with an Intel integrated GPU, since my workstation can already be configured with a pair of discrete (CrossFire) Radeons or (SLI) GeForces. Discrete Intel GPUs do not exist, meaning that I need to select my CPU based on the integrated GPU. Also, the currently available AMD CPU cores are not fast enough for compiling code, compared to the new Skylake cores.

Zen is interesting because AMD seems to be pushing 8 core (16 thread) configurations to (reasonably priced) consumer products. This will increase the number of wide consumer CPUs, meaning that game developers have more incentive to scale up to high core/thread counts. 16 threads with HT = game engine needs to scale up to 16 threads to reach best performance. We need wider consumer CPUs to push developers towards better multithreading models.
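To make the scaling point concrete, here's a rough C++ sketch (my own illustration, nothing engine-specific) of what "scaling up" means in practice: a job system that sizes its worker pool from the hardware instead of a hard-coded count automatically goes from 4 workers on a quad core to 16 on an 8-core/16-thread part.

#include <thread>
#include <vector>
#include <cstdio>

int main() {
    // Ask the OS how many hardware threads exist (4 on a plain i5, 16 on an 8C/16T part).
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4; // the value may be unreported; fall back to something sane

    // Spawn one worker per hardware thread; a real engine would pull jobs from a shared queue.
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back([] { /* run jobs until the queue is empty */ });
    for (auto& t : pool) t.join();

    std::printf("ran with %u workers\n", workers);
}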
 
IMO, all Intel K-series (unlocked) CPUs should have the eDRAM chip and Iris iGPU. It's the only thing that makes sense in a Win10/DX12 world from here on out... :)
 
Sure, but you lose a bunch of the CPU side. It's all a question of priorities. If you are primarily interested in perf/$ in gaming, I completely agree that LGA Iris Pro is not a good trade-off. But a lot of people are not that price sensitive to start with, and the cool thing is that you're not really giving up any gaming potential by having a beefier iGPU.


If you're going on ARK prices... don't go on ARK prices. The 5775c is slightly more expensive than a regular high end i7 (in MSRP - given apparent stock issues retail might be skewed for now), but not very much and you do get quite a bit of hardware for that cost.


http://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/14
5775c comes out as the best CPU for gaming - over a 6700K and 5960x - despite a frequency and power deficit.

And yeah, they are not at all similarly priced once you get an LGA2011 motherboard + the quad-channel RAM. Don't get me wrong, I love the hell out of my 5960x, but if your primary purpose is gaming it is not really worth the money for current games vs. other options. New games and DX12 may change that of course, we'll have to see.

Now that you've convinced us that a CPU with lots of fast L4 is better for gaming than a higher speed CPU without, all you need to do now is convince Intel to release one (on the latest IP and in the 4GHz+ range) so that we can buy the damn thing ;)

Would be nice if you can pass on sebbbi's wish for an 8 core + L4 + 72EU part too, I'd likes me one of those!
 
Now that you've convinced us that a CPU with lots of fast L4 is better for gaming than a higher speed CPU without, all you need to do now is convince Intel to release one (on the latest IP and in the 4GHz+ range) so that we can buy the damn thing ;)

Would be nice if you can pass on sebbbi's wish for an 8 core + L4 + 72EU part too, I'd likes me one of those!
Oh trust me I constantly advocate for these sorts of SKUs, but it's not an area that I have a huge amount of influence over. This stuff is mostly driven by the market and as such external folks have more influence than internal in most cases. We've made some progress with stuff like the 5775c existing at all - hopefully those sell well as that's a stronger story to the folks that decide these things than my whining :)
 
Oh trust me I constantly advocate for these sorts of SKUs, but it's not an area that I have a huge amount of influence over. This stuff is mostly driven by the market and as such external folks have more influence than internal in most cases. We've made some progress with stuff like the 5775c existing at all - hopefully those sell well as that's a stronger story to the folks that decide these things than my whining :)

Whining is no good, what you need is leverage. Often, it's just a matter of kidnapping the right children.
 
Define "support". I believe it will execute async tasks serially if that's what you're asking. I suppose I should note that gen is on the narrower side of architectures and would probably not see the same benefit that other architectures might get. It's unclear what performance they are leaving on the table.
 
I suppose I should note that gen is on the narrower side of architectures and would probably not see the same benefit that other architectures might get.

This is true. I wonder if anyone has ever studied Intel's DX11 GPUs on e.g. tessellation performance.
OTOH, until Skylake's 72 EU part comes out, there will be little to no incentive to even try enabling tessellation in a game using an Intel iGPU.


Hardware support. Maxwell doesn't support asynchronous shaders in hardware, unlike GCN. Does Gen9 support this in hardware like GCN, or is it more similar to Maxwell?

1 - willardjuice said it'll execute async tasks serially, so no, there's no capability to get graphics + compute queues running at the same time.

2 - we still don't know for sure that Maxwell 2 doesn't support asynchronous shaders.


Regardless, I think Intel's Gen9 GPUs in high-profile games will probably be used in tandem with a discrete GPU, using multi-adapter for post-processing effects through compute. For this scenario, async isn't very interesting.
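Just to illustrate what I mean (a bare-bones sketch, not anything shipping; picking the adapter by its description string is a simplification): with explicit multi-adapter you can create a second D3D12 device on the Intel iGPU and give it its own queue for the post-processing pass, while the dGPU keeps the main direct queue.

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cwchar>
using Microsoft::WRL::ComPtr;

// Find the Intel integrated adapter and create a D3D12 device on it.
ComPtr<ID3D12Device> CreateIntegratedDevice()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (wcsstr(desc.Description, L"Intel"))   // crude way to spot the iGPU
        {
            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                return device;                    // run the compute post-process on this device
        }
    }
    return nullptr;                               // no iGPU found; stay single-adapter
}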
 
Like I said, I suspect it executes those tasks serially, so closer to Kepler/Fermi/Maxwell 1 (I think it's too soon to give up on Maxwell 2 but the jury is still out). What I'm trying to stress though is a lack of "hardware support" doesn't necessarily mean it's "bad". Perhaps Intel already keeps its execution units mostly fed. An architecture like GCN is wider, so it has a greater likelihood of having idle units (and thus benefiting more from async compute). Hard to say what benefit async compute would bring to Gen (I'm sure some, but "how much" is difficult to answer).
 
What I'm trying to stress though is a lack of "hardware support" doesn't necessarily mean it's "bad". Perhaps Intel already keeps its execution units mostly fed. An architecture like GCN is wider, so it has a greater likelihood of having idle units (and thus benefiting more from async compute). Hard to say what benefit async compute would bring to Gen (I'm sure some, but "how much" is difficult to answer).
Right, this. I've commented before on this in other threads but the marketing/enthusiast understanding around this issue has gotten extremely confused. AMD obviously has reason to add confusion to the consumer marketing message and I get that, but it's annoying as even some developers are confused.

Let me try one more time here:

From an API point of view, async compute is a way to provide an implementation with more potential parallelism to exploit. It is pretty analogous to SMT/hyper-threading: the API (multiple threads) is obviously supported on all hardware, and depending on the workload and architecture it can increase performance in some cases where the different threads are using different hardware resources. However there is some inherent overhead to multithreading, and an architecture that can get high performance with fewer threads (i.e. high IPC) is always preferable from a performance perspective.

When someone says that an architecture does or doesn't support "async compute/shaders" it is already an ambiguous statement (particularly for the latter). All DX12 implementations must support the API (i.e. there is no caps bit for "async compute", because such a thing doesn't really even make sense), although how they implement it under the hood may differ. This is the same as with many other features in the API.
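To put the API point in concrete terms (just a sketch of the contract, nothing architecture-specific): creating a compute queue alongside the direct queue is legal on every DX12 device; whether work submitted to it actually overlaps with 3D is up to the hardware and driver.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// "Async compute" at the API level is just an extra queue of type COMPUTE.
// Every DX12 implementation must accept this; how it is scheduled underneath
// (truly concurrent with 3D, or serialized) is an architecture/driver detail.
ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // vs. ..._TYPE_DIRECT for the 3D queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;   // synchronize against the 3D queue with an ID3D12Fence
}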

From an architecture point of view, a more well-formed question is "can a given implementation ever be running 3D and compute workloads simultaneously, and at what granularity in hardware?" Gen9 cannot run 3D and compute simultaneously, as we've referenced in our slides. However what that means in practice is entirely workload dependent, and anyone asking the first question should also be asking questions about "how much execution unit idle time is there in workload X/Y/Z", "what is the granularity and overhead of preemption", etc. All of these things - most of all the workload - are relevant when determining how efficiently a given situation maps to a given architecture.

Without that context you're effectively in the space of making claims like "8 cores are always better than 4 cores" (regardless of architecture) because they can run 8 things simultaneously. Hopefully folks on this site understand why that's not particularly useful.

... and if anyone starts talking about numbers of hardware queues and ACEs and whatever else you can pretty safely ignore that as marketing/fanboy nonsense that is just adding more confusion rather than useful information.
 
This is true. I wonder if anyone has ever studied Intel's DX11 GPUs on e.g. tessellation performance.
OTOH, until Skylake's 72 EU part comes out, there will be little to no incentive to even try enabling tessellation in a game using an Intel iGPU.


Tessellation performance has been a strong point since Gen7.5; I doubt this is a bottleneck on Gen9.



[Attached chart: synthetic tessellation benchmark comparing the Radeon HD 8670D and Intel HD 4600]

http://www.anandtech.com/show/7032/...-gpu-on-the-desktop-radeon-hd-8670d-hd-4600/3
 