Intel "Kaby Lake"

These things run pretty hot when overclocked, even with liquid cooling. Most serious overclockers replace the low-quality thermal paste between the chip and the heatspreader.
 
Getting 15-20% lower power consumption at the same performance level, without microarchitectural changes or a new process node, is pretty impressive, IMHO.
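To put a rough number on that, here's a quick back-of-the-envelope sketch (it just assumes the 15-20% figure holds at identical performance, nothing more):

```python
# Back-of-the-envelope: what 15-20% lower power at the same performance
# means in perf/W terms (benchmark score held constant, only power changes).
for power_reduction in (0.15, 0.20):
    perf_per_watt_gain = 1.0 / (1.0 - power_reduction) - 1.0
    print(f"{power_reduction:.0%} lower power -> ~{perf_per_watt_gain:.0%} better perf/W")
# Output: 15% lower power -> ~18% better perf/W
#         20% lower power -> ~25% better perf/W
```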

I understand Kaby Lake looks disappointing to most anyone on this forum; most here care about performance, CPU or GPU, more than anything else. It's the same as if Bugatti announced their new Veyron had 20% better mileage instead of 20% more hp: useful, but not sexy.

Cheers
While I see it being touted here and there, is there solid evidence for Kaby Lake being that much more efficient than Skylake? And by solid evidence I don't mean TDP classification, but real-world measurements with a sample size >1.

Our KBL efficiency testing on the desktop has been a mixed bag so far, with the 7700K nearly on par efficiency-wise with the 6700K, while the 7600K was more efficient than the 6600K - all, of course, with a sample size of 1.
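To be clear about what I mean by that, something along the lines of the sketch below - score per watt from wall measurements, with more than one sample per chip. The numbers in it are made-up placeholders, not results:

```python
# Hypothetical sketch of an efficiency comparison across multiple samples per SKU.
# The (benchmark_score, average_watts) pairs below are placeholders, not measurements.
from statistics import mean, stdev

def perf_per_watt(samples):
    """Return mean and spread of score/W over a list of (score, watts) samples."""
    ratios = [score / watts for score, watts in samples]
    return mean(ratios), (stdev(ratios) if len(ratios) > 1 else 0.0)

chips = {
    "i7-6700K": [(850, 78.0), (845, 80.5)],   # placeholder data, sample size 2
    "i7-7700K": [(915, 84.0), (905, 82.5)],   # placeholder data, sample size 2
}
for name, samples in chips.items():
    m, s = perf_per_watt(samples)
    print(f"{name}: {m:.2f} ± {s:.2f} points/W")
```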
 
It seems a bit of a strange one: Tom's Hardware shows the 7700K having the same or at times slightly higher power consumption than the 6700K (apples-to-apples, same clock frequency), but the 7700K also has a better (lower) Vcore, with up to an 8% improvement.
But they mention they have a very average 7700K sample compared to what their US office used. IMO some of this also comes down to the new motherboard firmware, which will need further refinement.
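If the Vcore numbers are right, the flat wall power is the odd part. Here's a rough sketch of the expectation, dynamic power only, assuming identical capacitance and clocks (obviously a simplification):

```python
# Dynamic CPU power scales roughly with C * V^2 * f, so an ~8% lower Vcore at the
# same clock should, by itself, shave roughly 15% off the dynamic power.
vcore_ratio = 0.92                      # 7700K at ~8% lower Vcore than the 6700K
dynamic_power_ratio = vcore_ratio ** 2  # C and f held constant, only V^2 changes
print(f"Expected dynamic power: ~{dynamic_power_ratio:.0%} of the 6700K's "
      f"(~{1 - dynamic_power_ratio:.0%} lower)")
# ~85% of the 6700K's (~15% lower) -- so flat measured power points at sample
# variance, leakage, or board/firmware differences eating the gain.
```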

Cheers
 
I think Haswell was still pretty good.

Haswell brought GT3 GPUs with eDRAM. That didn't reach most of Intel's models, but it was a pretty big step up for integrated GPUs, especially as AMD kept falling behind with a lackluster CPU architecture and outdated manufacturing processes.
 
Yeah, that i5 is probably a response to AMD. Maybe it's an indicator of where their best offering generally sits. It's going to be interesting.
 
https://www.cpchardware.com/intel-prepare-la-riposte-a-ryzen/

Some sources are mentioning an upcoming Core i5 7640K being released with Hyper-Threading turned on. The only differences from the i7 would be 6MB of L3 cache instead of 8MB, and perhaps some security or virtualization features turned off.

It'll be interesting to see what kind of reactionary measures Intel will (or will not) take around Ryzen's release.

The WCCFTech version of that article (http://wccftech.com/intel-core-i7-7740k-core-i5-7640k-amd-ryzen/) has some additional details (like a more specific 112W TDP), but it claims that the 7640K will only be a 4C/4T part (like the existing 7600K).

But it's a leak, either way, so the final configurations are probably still up in the air. A ~$200 4C/8T part would definitely be competitive from the blue corner.

[Image: leaked spec table (IbMxlGw.png)]


EDIT: I just noticed the memory section of the table. It just dawned on me that these are on LGA2066. That seems like a weird way to compete with Ryzen, because those systems tend to cost quite a bit (pricey motherboards, more RAM, etc.). I feel like it'd be REALLY weird to see an LGA2066 part that only has 4 threads.
 
Does anyone know why the base and boost clocks of the rumored 7640K are the same (assuming this report is true)? That seems strange to me, considering the "nearby" CPUs (7600K, 7700K, and 7740K*) all have boost clocks higher than their base clocks.

* The report states that the 7740K has a 4.5 GHz boost but it is not shown in the table for some reason.
The Intel Core i7-7740K will become the fastest Core i7 chip in the Kaby Lake lineup, replacing the Core i7-7700K with slightly better specs. The chip features a quad-core, Hyper-Threaded design and is built on the latest 14nm+ process node, which delivers improved efficiency and performance over the existing 14nm FinFET technology.

The clock speeds are rated at 4.3 GHz base and 4.5 GHz boost.
 
LGA1151 and LGA2066 - did they really need to do that?

I think that might count as Kaby Lake-X.

I agree that it's very confusing since there seems to be a Skylake-X (or is it Skylake-E?).

From a competitive perspective, I'm not certain why it makes sense to jump up to that platform. Maybe it was necessary for the TDP bump? I'm just grasping at straws at this point.
 

Just dawned on me that HEDT sockets historically use a "traditional" soldered TIM. Better OC without the need to delid!

Of course your first thought is, "Spartacus, can't they just use solder on LGA1151?" I remember that with Haswell Refresh, Intel improved the TIM, but they didn't use solder. I can't find a source, but I believe Intel justified the continued use of a polymer-based TIM by the cost of reconfiguring their equipment. But since the LGA2066-related infrastructure was probably set up to use a solder-based TIM, there's no problem there.

Also, on a different tangent, aren't the memory controllers on-die now throughout Intel's lineup? So wouldn't a hypothetical LGA2066-based 7740K still only support two memory channels? Just thinking back to that WCCFTech article.
 
From BenchLife: "More powerful GPU performance, Kaby Lake-G identified in the Intel program" (Google Translate title, original here).

BenchLife (Google Translate) said:
"-G" means that this series of Kaby Lake processor package will be through the PCIe x8 channel, directly connected to a separate GPU chip, and is equipped with HBM2 memory package GPU chip.
[Image: intel-kaby-lake-g.jpg]


I think these chips could be used in future 15" MacBook Pros (65 W only) and iMacs (both). Also, the HBM2 is interesting. Vega 11 is speculated to use one stack of HBM2, but I doubt that whatever GPU goes into Kaby Lake-G will be anywhere near as powerful. There's still a wide performance gap between current IGPs and the speculated Vega 11 specs, so I'm not sure where a Kaby Lake-G GPU would land. Any thoughts from this community would be appreciated.
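Rough bandwidth math on why the on-package HBM2 is the interesting bit - the per-pin rate and DDR4 speed below are just my assumptions for scale, not anything from the article:

```python
# One HBM2 stack (1024-bit, assumed 2.0 Gbps/pin) vs. the dual-channel DDR4-2400
# that a current IGP has to share with the CPU cores.
hbm2_one_stack = (1024 / 8) * 2.0   # GB/s: 128-byte-wide bus * 2.0 GT/s
ddr4_dual_channel = 2 * 8 * 2.4     # GB/s: 2 channels * 8 bytes * 2.4 GT/s
print(f"HBM2, 1 stack:          ~{hbm2_one_stack:.0f} GB/s")
print(f"Dual-channel DDR4-2400: ~{ddr4_dual_channel:.1f} GB/s")
# ~256 GB/s vs ~38.4 GB/s -- roughly 6-7x more memory bandwidth for the GPU die.
```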
 
Kyle from HardOCP mentioned this a while ago. It's true that the 100W power consumption doesn't really add up for a 4C+GT2 design.

What I don't get is why they supposedly have an MCM with the GPU so close to the CPU, and then connect it through a PCIe x8 bus.
 

TTM (time-to-market) reasons.

Hiroshige Goto from PCWatch explains it further: http://pc.watch.impress.co.jp/docs/column/kaigai/1054618.html

As we already know, the CPU die is 4 cores plus GT2. While they could have used a PCIe derivative like OPI or DMI, the latter doesn't have enough bandwidth, and the former only exists on GT3e and GT4e dies - which this is not.
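Quick sanity check on the bandwidth argument, using PCIe 3.0 link math (8 GT/s per lane, 128b/130b encoding) and treating DMI 3.0 as its x4 electrical equivalent:

```python
# Per-direction raw bandwidth of the candidate links.
per_lane = 8.0 * 128 / 130 / 8   # GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b, 8 bits/byte)
print(f"DMI 3.0 (x4 equivalent): ~{4 * per_lane:.2f} GB/s each way")
print(f"PCIe 3.0 x8:             ~{8 * per_lane:.2f} GB/s each way")
# ~3.94 GB/s vs ~7.88 GB/s -- DMI would halve the bandwidth to a discrete-class GPU,
# which presumably is why the x8 link was chosen.
```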

Further, they say the HBM memory controller is on the GPU die. Intel plans to integrate an HBM controller into its CPUs, but not in this generation. Also, they were originally thinking of 2018 CPU parts for HBM, but for some reason brought the introduction forward. From what he's saying, a 2018 CPU sounds more like 2019 realistically, because we're already a third of the way into 2017 and KBL-G will likely be end of 2017 at the earliest.

He also says that using an "other than Intel" GPU for the secondary die is being talked about not only by motherboard manufacturers but by software developers as well.
 
From Videocardz, "Intel preps dual-core i3-7360X for X299, but why?"

It's only 100 MHz faster than the 7350K. The turbo clock is 4.3 GHz. The TDP, though, skyrockets to 112 W.

According to the leaker, the i3-7360X is 1.25% faster than the 7350K. The price of the 7360X is expected to be around 1,699 yuan (about 220 USD), so it's not cheap.
I have the same question as Videocardz. But at least it has Turbo Boost, unlike the rest of the desktop Core i3 lineup…
 
For those times when you want to spend three times as much on your motherboard as on your CPU, and twice as much on RAM. Yeah, it's a logical path to me.
 