Intel "Coffee Lake" (2017/2018, 14 nm)

From PC Watch: "Intel to introduce the new 14nm CPU 'Coffee Lake' in 2018" (original).

PC Watch (translated) said:
According to information from Intel's OEM partners, Intel is planning a product codenamed "Coffee Lake" for 2018, with up to 6 cores and a built-in high-end GPU such as GT3e.
[…]
Importantly, Coffee Lake is to be manufactured on the 14nm process. There appear to be two reasons why Intel would build it on the previous-generation 14nm node rather than on 10nm, which will be its most advanced process at that point.

One is that it serves as a plan B in case the 10nm ramp-up doesn't go well, avoiding the die-size risk of producing a large chip on a new process node.
[…]
The other reason is to shorten the validation (functional verification) period.

According to the author of this report, this is what we're likely to see:
[roadmap image]
 
I know a guy who'd love the idea of a Coffee Lake... :LOL: He starts off each day with half a liter* of coffee. :oops: Then has several more cups throughout the day.

*I don't know how many flutonic pints or liquid inches that would be (or whatever strange measurements you imperials in the U.S. use...)
 
Only 2 cores for everything up to 28 W until 2019?
Basically, except for these 6-core models at 45 W, everything stays pretty much the same as what we've had since 2015's Skylake.

Wow, that's boring as hell. Let's hope AMD can spice things up on those TDP ranges with Zen.
 
I know a guy who'd love the idea of a Coffee Lake... :LOL: He starts off each day with half a liter* of coffee. :oops: Then has several more cups throughout the day.

*I don't know how many flutonic pints or liquid inches that would be (or whatever strange measurements you imperials in the U.S. use...)
We didn’t ‘ave these bleeding litres when I was a young man

A ‘alf litre ain’t enough. It don’t satisfy. And a ‘ole litre’s too much. It starts my bladder running. Let alone the price.’

America, holding strong against the tools of totalitarianism, like the metric system. America, f*** yeah!

Lol, I always remember that dialogue from 1984.
 
I know a guy who'd love the idea of a Coffee Lake... :LOL: He starts off each day with half a liter* of coffee. :oops: Then has several more cups throughout the day.

*I don't know how many flutonic pints or liquid inches that would be (or whatever strange measurements you imperials in the U.S. use...)

It's easy. 1 liter is roughly equivalent to 1 US quart (the UK Imperial quart holds quite a bit more). Half a quart is a Pint. :) And who doesn't like a Pint of beer?

It's pretty easy to use as well. Everything can be broken down into a binary (base 2) system. 2 cups = 1 pint (16 fl. ounces). 2 pints = 1 quart (~32 fl ounces). 2 quarts = half gallon (~64 fl. ounces). 2 half gallons = gallon (~128 fl. ounces). If robots ever take over the world, this is the unit of measurement they'd use. :yep2:
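
In code form the whole thing is just repeated doubling from the cup; a throwaway C++ sketch, purely for fun (all values hard-coded, nothing official about it):

Code:
#include <cstdio>

int main() {
    // US customary volume: each unit holds double the previous one,
    // starting from the cup (8 fl. oz.).
    const char* units[] = { "cup", "pint", "quart", "half gallon", "gallon" };
    double floz = 8.0;  // 1 cup = 8 fl. oz.
    for (const char* unit : units) {
        // 1 US fl. oz. is about 0.0295735 liters.
        std::printf("1 %-11s = %5.0f fl. oz. (~%.2f liters)\n",
                    unit, floz, floz * 0.0295735);
        floz *= 2.0;  // the next unit up holds twice as much
    }
    return 0;
}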

And who wouldn't like to have a unit of measurement called a Hogshead (63 gallons or ~238.5 liters)? :D

Regards,
SB
 
A poster on AnandTech forums has what appears to be a roadmap from Intel showing Coffee Lake in Q2 2018. It's different from the PC Watch one in that the 15/28 W Coffee Lake has 4 cores and the 45 W Coffee Lake has GT2.

[Intel roadmap image]


Also, Cannonlake-Y is 5.2 W. I didn't expect the TDP to go up from Kaby Lake, I thought it would go down if anything.
 
Also, Cannonlake-Y is 5.2 W. I didn't expect the TDP to go up from Kaby Lake, I thought it would go down if anything.

Perhaps they're counting on compensating elsewhere, like using Hybrid Memory Cube or WideIO2.
 
I have been demanding processors with the GT3e (fastest iGPU + EDRAM) and more than 4 cores for several years already. 6 cores is better than 4. If Intel decides to make an 8-core desktop Coffee Lake i7 with the fattest iGPU they have + largest EDRAM, I will buy it immediately. It would be the perfect processor for us graphics programmers: no need for an extra laptop to test on the Intel iGPU, and a perfect setup for DX12 explicit multiadapter development.
 
I have been demanding processors with the GT3e (fastest iGPU + EDRAM) and more than 4 cores for several years already. 6 cores is better than 4. If Intel decides to make an 8-core desktop Coffee Lake i7 with the fattest iGPU they have + largest EDRAM, I will buy it immediately. It would be the perfect processor for us graphics programmers: no need for an extra laptop to test on the Intel iGPU, and a perfect setup for DX12 explicit multiadapter development.

The roadmap seems to suggest that the 4C+GT4e Skylake will be replaced by a 6C+GT2 Coffee Lake... I know that roadmap only goes up to H1 2018, but if there were a 6C+GT4e Coffee Lake after that, I think they would just continue the Skylake strip towards the end...
Then again, replacing a GT4e with a GT2 in supposedly the same segment doesn't make a whole lot of sense either.


Intel's roadmaps and releases regarding larger-than-GT2 iGPUs have been really strange, though:

- In Haswell, the GT3e was reserved for laptops and embedded, but it only appeared in a couple of models (a Clevo and an MSI) plus the 15" MacBook Pro.

- Then with Broadwell they made a GT3e available for both desktop and laptop, but it appears not a single laptop maker picked it up. Apple updated their 13" MacBook Pro to the Broadwell GT3, but for the 15" model they kept the same Haswell.

- With Skylake they made a 15W GT3e variant that was only picked up by Microsoft's top-end Surface Pro and Dell's XPS 13, and the 28W variant got another two models, from VAIO and Acer: a total of 4 models using any version of Skylake's GT3e. Four models is better than the two they got with Haswell, but it's still not exactly a stellar number of design wins.
Skylake also brought the new 4C+GT4e variant with 50% more execution units and twice the amount of EDRAM, but apparently not a single laptop maker picked it up. That chip still resides only in Intel's own Skull Canyon NUC (which, BTW, isn't getting exactly stellar performance results).

- Now Kaby Lake isn't bringing any GT4e model to the table, and again there's no sign of GT3/GT4 ever coming back to the desktop.


I thought that, with the smaller price premium Intel has charged for GT3e/GT4e since Skylake, many laptops would start to appear with those chips, forgoing the need for an external GPU and either getting smaller or getting larger batteries.
But that simply never happened...


What I think happened is that the variants with GT3e and >15W TDPs simply didn't get any design wins from laptop makers (perhaps the "nvidia graphics" sticker gets them more sales, even if it refers to a crappy old GPU?), and desktop Broadwell probably didn't sell well enough to justify its development. Therefore, Intel has scrapped development of everything but the 15W GT3e models.


That said, it seems your only hope for high-performance iGPUs rests on AMD's Zen APUs sometime in 2017. And even that is a big if.
 
Skylake also brought the new 4C+GT4e variant with 50% more execution units and twice the amount of EDRAM, but apparently not a single laptop maker picked it up. That chip still resides only in Intel's own Skull Canyon NUC (which, BTW, isn't getting exactly stellar performance results).
72 EUs is a bit too much for that low TDP. Also, that processor only has a 2-channel memory controller. Apple's A9 and A10 have much higher bandwidth. EDRAM helps with bandwidth, but Apple (PowerVR) has tiling to compensate.

I would like to see GT4e in a desktop processor with a 100W+ TDP, and eight of the newest CPU cores. Intel has 22-core Xeons (with 6-channel memory controllers), so it shouldn't be any problem to fit a GT4e, 8 cores, and a 4-channel memory controller on the same die.
 
Like I said, it's not a matter of being possible or not. It's a matter of Intel's customers showing enough interest to justify the investment for its development.
And if we look at the number of design wins for GT3e models with a TDP above 15W, it looks like the interest just isn't there.

The "Iris Graphics" brand never really took off, so that might have played a significant part in how things turned out.

AMD should have a better shot in high-performance/high-power APUs thanks to the "Radeon Graphics" brand, though.
 
72 EUs is a bit too much for that low TDP. Also, that processor only has a 2-channel memory controller. Apple's A9 and A10 have much higher bandwidth. EDRAM helps with bandwidth, but Apple (PowerVR) has tiling to compensate.

I would like to see GT4e in a desktop processor with a 100W+ TDP, and eight of the newest CPU cores. Intel has 22-core Xeons (with 6-channel memory controllers), so it shouldn't be any problem to fit a GT4e, 8 cores, and a 4-channel memory controller on the same die.
I think a lot of enthusiasts would love to see that as well. The strategy from Intel executives seems half-hearted, implementing this only on a 65W Broadwell and not bothering with higher-TDP or extreme processors, especially desktop i5/i7 Skylake.
Cheers
 
I think a lot of enthusiasts would love to see that as well. The strategy from Intel executives seems half-hearted, implementing this only on a 65W Broadwell and not bothering with higher-TDP or extreme processors, especially desktop i5/i7 Skylake.
Cheers
Market segmentation.
Intel plays an extremely well balanced game of profit optimisation.
 
I have been demanding processors with the GT3e (fastest iGPU + EDRAM) and more than 4 cores for several years already. 6 cores is better than 4. If Intel decides to make an 8-core desktop Coffee Lake i7 with the fattest iGPU they have + largest EDRAM, I will buy it immediately. It would be the perfect processor for us graphics programmers: no need for an extra laptop to test on the Intel iGPU, and a perfect setup for DX12 explicit multiadapter development.
I recall trying to briefly persuade some Intel guy on this forum a couple of years back that they should add more cores, to which the reply was "no one uses more than 4 so it's pointless". I bet that was a common belief at Intel at the time.
AMD is going to make some gains due to that complacency. :yes:
 
Like I said, it's not a matter of being possible or not. It's a matter of Intel's customers showing enough interest to justify the investment for its development.
Intel has never released GT4e in a high-end desktop part. Broadwell 5775c (GT3e) came close, but it had a lower TDP and lower clocks than competing desktop products; it couldn't even beat the previous generation in CPU performance. And that desktop Broadwell only had a 48 EU GPU. Top-end GT4e 72 EU models are only available as mobile CPUs with lower TDPs. Socketed desktop chips with a 72 EU GPU are not available. Who wants to buy a flagship CPU that is not the fastest on either the CPU or the GPU side?

It's hard to say anything about interest when the product is not there. Show me a desktop i7 with max clocks (4 GHz+) and high TDP + EDRAM + 72 EU GT4e. Enthusiasts would buy that CPU for the EDRAM alone, as it would be the fastest quad-core i7 available. The 5775c trounced higher-clocked Haswells in some benchmarks because of the EDRAM. Enthusiasts don't want to compromise: if you get EDRAM and a faster iGPU with no reduction in CPU clocks, there will be a market for it. And I am not talking about people who intend to use the iGPU as their sole GPU. People buying CPUs like this already have a GeForce 1080 or Titan X.
 
I recall trying to briefly persuade some Intel guy on this forum a couple of years back that they should add more cores, to which the reply was "no one uses more than 4 so it's pointless". I bet that was a common belief at Intel at the time.
Apple is the same: still only dual core (+2 slow extra cores to save power). It's a chicken-and-egg problem. As long as there are 4 or fewer cores, software can be written mainly as single-threaded or with a specific thread per task (such as render, physics, game logic). When the core count scales higher, it is no longer viable to split specific tasks onto hardcoded threads; the programmer needs to start using either task-based models or parallel loops. Usually a combination of both gets the job done, and that's what console-centric game engines usually do. The problem, however, is that console CPUs are substantially lower powered than their PC counterparts. A quad-core Intel CPU can easily execute 7 cores' worth of console tasks within the 16.6 ms time limit, so games no longer show gains on faster PC CPUs. That's really the problem. Productivity software mostly shows gains when IPC and clocks are improved, so Intel improves these. But both are hard to improve, so the effort is spent on improving perf/watt. People like longer battery life in their laptops; desktop owners are a minority. As a side effect, these perf/watt improvements allow Intel to cram more cores into their server Xeons without hitting thermal limits (up to 22 cores now).
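
To make the contrast concrete, here's a minimal C++ sketch of the two models; every function and job name is an invented placeholder, and std::async just stands in for a real job system:

Code:
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

// Placeholder work; in a real engine these would be actual engine systems.
void render_work()  { std::puts("render");  }
void physics_work() { std::puts("physics"); }
void logic_work()   { std::puts("logic");   }

// Thread-per-task model: one hardcoded thread per system.
// Keeps 3-4 cores busy no matter how many cores the CPU actually has.
void frame_thread_per_system() {
    std::thread render(render_work), physics(physics_work), logic(logic_work);
    render.join();
    physics.join();
    logic.join();
}

// Task-based model: chop the frame into many small independent jobs and let
// the runtime spread them over however many cores exist (std::async here is
// a stand-in for a job system sized to std::thread::hardware_concurrency()).
void frame_task_based(int num_jobs) {
    std::vector<std::future<void>> jobs;
    for (int i = 0; i < num_jobs; ++i) {
        jobs.push_back(std::async(std::launch::async, [i] {
            std::printf("job %d\n", i);  // a slice of render/physics/logic work
        }));
    }
    for (auto& j : jobs) {
        j.wait();
    }
}

int main() {
    frame_thread_per_system();
    frame_task_based(16);
    return 0;
}

A parallel loop is essentially the second pattern with the jobs generated from loop iterations (for example, one job per batch of entities).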

We just need a killer app for 8+ core consumer CPUs. A game as popular as Minecraft that shows huge gains on fast Intel 8+ core CPUs (with default settings). Unfortunately it is not commercially viable to ship such a game as most gamers don't have more than 4 cores. Also console CPUs would not be able to handle it either, so the whole market would be a handful of PC enthusiasts.

This means that game developers must offload work to GPGPU if they want to scale up their systems. Fortunately all gaming PCs have fast enough GPUs for this job. Unfortunately on PC we have latency issues on some GPUs. DX12 high priority compute queues work well only on AMD GCN and Nvidia Pascal. Other GPUs just ignore high priority, and push the commands after all graphics work (causing up to 2 frames of delay).
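
For reference, asking for a high-priority compute queue in D3D12 is just a flag at queue creation time; whether the driver actually lets that work run ahead of pending graphics is what differs between the GPUs above. A bare-bones sketch (Windows/C++, error handling trimmed):

Code:
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Request a compute-only queue at high priority. On hardware/drivers that
    // honor it, work submitted here can run alongside or ahead of the graphics
    // queue; otherwise it simply gets scheduled after pending graphics work.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;

    ComPtr<ID3D12CommandQueue> computeQueue;
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue))))
        return 1;

    // ... record compute command lists and submit them with ExecuteCommandLists() ...
    return 0;
}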
 
Market segmentation.
Intel plays an extremely well balanced game of profit optimisation.
Yeah, agreed, but it did nothing to help resolve the continued sales stagnation of x86 processors, which also carried over from desktop to HPC (in the HPC context this is more about Broadwell than GT4e, and it seems many are waiting on the better-architected Skylake Xeons, which do have some notable Xeon-specific improvements).
Intel's approach actually started to slowly erode their sales; of course, part of this is that Broadwell should have been cancelled due to its delays and associated problems (it should have been treated as a lessons-learnt exercise with write-offs instead), and that is still affecting product strategy.
Which makes how they committed to the EDRAM and GT4e all the more unusual from a product-strategy standpoint, IMO anyway, and given what Sebbi mentions.

Cheers
 
Apple is the same: still only dual core (+2 slow extra cores to save power). It's a chicken-and-egg problem. As long as there are 4 or fewer cores, software can be written mainly as single-threaded or with a specific thread per task (such as render, physics, game logic). When the core count scales higher, it is no longer viable to split specific tasks onto hardcoded threads; the programmer needs to start using either task-based models or parallel loops. Usually a combination of both gets the job done, and that's what console-centric game engines usually do. The problem, however, is that console CPUs are substantially lower powered than their PC counterparts. A quad-core Intel CPU can easily execute 7 cores' worth of console tasks within the 16.6 ms time limit, so games no longer show gains on faster PC CPUs. That's really the problem. Productivity software mostly shows gains when IPC and clocks are improved, so Intel improves these. But both are hard to improve, so the effort is spent on improving perf/watt. People like longer battery life in their laptops; desktop owners are a minority. As a side effect, these perf/watt improvements allow Intel to cram more cores into their server Xeons without hitting thermal limits (up to 22 cores now).

We just need a killer app for 8+ core consumer CPUs. A game as popular as Minecraft that shows huge gains on fast Intel 8+ core CPUs (with default settings). Unfortunately it is not commercially viable to ship such a game as most gamers don't have more than 4 cores. Also console CPUs would not be able to handle it either, so the whole market would be a handful of PC enthusiasts.

This means that game developers must offload work to GPGPU if they want to scale up their systems. Fortunately all gaming PCs have fast enough GPUs for this job. Unfortunately on PC we have latency issues on some GPUs. DX12 high priority compute queues work well only on AMD GCN and Nvidia Pascal. Other GPUs just ignore high priority, and push the commands after all graphics work (causing up to 2 frames of delay).
I would like to add a bit to this picture.
Apple is in a great position both to see what people are actually using their phones/tablets for and to design their SoCs and software accordingly, including for projected upcoming use cases. The SoC is a heterogeneous multiprocessor. The CPU cores take care of the general housekeeping/general processing/traditional tasks and offer the most flexible programmability. The specific use cases that benefit from wider computing resources get specialised processing units. Graphics is what we are most concerned with here, but other tasks that previously engaged the CPU and/or GPU more also get their own processing units. Photo image processing, video capture/encoding/decoding, audio and whatnot (encryption, network processing?) have their own dedicated hardware, increasing performance/W and offloading the other units.
There are specific use cases that benefit from more CPU cores, but how widely used are they really in the greater scheme of things? I'm not sure that shifting the design budget of the classical CPU part of the SoC from single-thread performance to heavily multi-threaded performance makes sense. Especially when the option to shift some such applications to a more generally programmable GPU or to their own dedicated hardware block exists, and is utilised. Apple isn't as pushed to compete in the "bigger numbers" game as, for instance, the various Android vendors, which basically offer the same software functionality and are trying to be noticed and picked by consumers based on marketable hardware features. Even though they don't operate in a commercial vacuum, Apple can still choose solutions that are better in other respects than pure direct marketability. (And they do. They have some neat stuff that just isn't marketed or even published.)

This is miles away from the situation of, for instance, Intel. They may have an effective monopoly in their x86 niche, but as component suppliers they are still forced to create market strata based on hardware features. Intel does supply the people who need multiple (beyond four) CPU cores - but at higher-margin pricing. Regarding a killer app that benefits greatly from many cores, let's be honest: it's been a decade since Intel launched their first consumer quad core, and no such killer app has shown up yet. It's not bloody likely to come. And if it did show up, wouldn't it then merit its own hardware block, rather than trying to sell 32-core x86 legacy processors to consumers? (Even though Intel would be sure to try!)

The case for massively multicore general CPUs is just not very strong. An evolution of heterogeneous multiprocessing may make more sense for the foreseeable future.
 
It's easy. 1 liter is roughly equivalent to 1 US quart
Just to point out: pre-1971 in the UK there were 240 pence to the pound. And it wasn't just pounds and pence, it was pounds, shillings and pence (1 shilling = 12 pence).
Imagine how much fun it was to be an accountant back in the day...
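
(For the would-be accountant it's just mixed-radix arithmetic; a toy C++ example with an arbitrary amount, purely illustrative:)

Code:
#include <cstdio>

int main() {
    // Pre-1971 UK currency: 12 pence = 1 shilling, 20 shillings = 1 pound,
    // so 240 pence = 1 pound.
    int total_pence = 1000;                      // arbitrary example amount
    int pounds      = total_pence / 240;
    int shillings   = (total_pence % 240) / 12;
    int pence       = total_pence % 12;
    std::printf("%d old pence = %d pounds, %d shillings, %d pence\n",
                total_pence, pounds, shillings, pence);
    return 0;
}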
 
Just to point out: pre-1971 in the UK there were 240 pence to the pound. And it wasn't just pounds and pence, it was pounds, shillings and pence (1 shilling = 12 pence).
Imagine how much fun it was to be an accountant back in the day...

Yeah, a lot of the older measurement systems had some kind of logic behind them. The whole cup, pint, quart, gallon, for example, was to make it easy for the common man to measure out liquid quantities (binary system so everything is half or double the next or previous measurement). Feet were basically that, feet. And yards were the equivalent of a man's stride. But I never really understood the old English currency system. 20 shillings = 1 pound. 12 pence = 1 shilling. I'll have to do more research into that some day. :)

There's also the US system with the penny, nickel, dime, quarter, and dollar. But it gets more bizarre. In the old days there was also the concept of bits used in America: 2 bits = 1 quarter, a bit being 12.5 cents. That was due to the closeness between the US and Mexico and hence a mixing of Spanish/Mexican currency and US currency. A bit was a Spanish/Mexican coin worth 1/8th of a peso, or 12.5 US cents. Hence, when US and Mexican citizens traded, it was common to refer to a quarter as a 2-bit coin. That idiom spread through the rest of the US via cowboys and literature about cowboys.

Regards,
SB
 