Intel Loses Steam Thread

I wonder if Intel could do what other companies have been doing: let the competition carry some of the R&D expenditure and then lure away the people with the knowledge.
 
Samsung already pulled that one with TSMC. If I were Intel, I'd poach Samsung.
 
Personally, I think at some point it will make more sense to go multi-layer on CPUs instead of pushing for finer processes, just like they did for flash.
How challenging (and how economical) this would be from a production point of view, I'm not quite sure.

The issue isn't economics; it's that the performance of the upper layer of transistors is utter crap, roughly at the level of what was common 30 years ago. The reason is that implanted silicon has terrible performance compared to the perfect crystal of the substrate. For flash this isn't much of a problem -- no one cares that much about transistor performance in their flash memory. For logic it's a complete non-starter.

In order for 3D-integrated features (as opposed to single features with 3D elements, or multiple planar dies stacked together) to become feasible, we need to move from silicon to some other material that can be implanted better.
 
My understanding is that TSMC, Samsung and GloFo are all embracing EUV on their respective 2nd gen 7nm processes?

Well, Samsung already has it at 1st gen, according to that article.
For 2nd gen, it's certainly plausible that everyone will try to go for it.

Edit:
https://semiengineering.com/quantum-effects-at-7-5nm/ could be viewed as confirmation that GF is attempting EUV for 7nm.
https://www.globalfoundries.com/technology-solutions/cmos/performance/7nm-finfet mentioning EUV "compatibility" means it likely isn't coming in the first iteration.
 
Personally, I think at some point it will make more sense to go multi-layer on CPUs instead of pushing for finer processes, just like they did for flash.
How challenging (and how economical) this would be from a production point of view, I'm not quite sure.

Not a good idea, considering that the power consumption (and thus heat) of CPU designs is orders of magnitude higher than that of flash or even DRAM, and the usage patterns are very different: NAND transistors are accessed orders of magnitude less often per second than CPU transistors.

How are you going to dissipate all of that heat when layers of the CPU are insulated by other layers of the CPU? Each layer not only traps the heat of the layers below it, it also adds its own heat to its neighbours.
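To put rough numbers on that, here's a quick back-of-envelope sketch (all figures are illustrative assumptions, not measurements of any real part):

```python
# Purely illustrative, assumed numbers -- not measurements of any real part.
# Compares the areal power density of a desktop CPU die with a NAND die,
# and shows how stacking logic layers multiplies the heat that the single
# top-side cooling interface has to carry.

cpu_power_w = 95.0     # assumed desktop CPU power
cpu_die_mm2 = 150.0    # assumed CPU die area
nand_power_w = 0.05    # assumed active power of one NAND die
nand_die_mm2 = 70.0    # assumed NAND die area

cpu_density = cpu_power_w / cpu_die_mm2      # ~0.63 W/mm^2
nand_density = nand_power_w / nand_die_mm2   # ~0.0007 W/mm^2
print(f"CPU:  {cpu_density:.3f} W/mm^2")
print(f"NAND: {nand_density:.4f} W/mm^2 ({cpu_density / nand_density:.0f}x lower)")

# If N logic layers each dissipate the same power, roughly N times the heat
# flux has to pass through the same top-side cooling interface.
for layers in (1, 2, 4):
    print(f"{layers} logic layer(s): ~{layers * cpu_density:.2f} W/mm^2 "
          "through one cooling interface")
```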

As tunafish mentions, we need some radical changes.

Regards,
SB
 
Well, Samsung already has it at 1st gen, according to that article.
For 2nd gen, it's certainly plausible that everyone will try to go for it.

Edit:
https://semiengineering.com/quantum-effects-at-7-5nm/ could be viewed as confirmation that GF is attempting EUV for 7nm.
https://www.globalfoundries.com/technology-solutions/cmos/performance/7nm-finfet mentioning EUV "compatibility" means it likely isn't coming in the first iteration.
Yeah, it's not coming for first gen at GF, that's for sure, but 2nd gen should have it, like your first link suggests.
AFAIK the same goes for TSMC and Samsung, and as you said, Samsung is already on their first gen (LPE and LPP are counted as one gen, though, I think?)
 
Not a good idea, considering that the power consumption (and thus heat) of CPU designs is orders of magnitude higher than that of flash or even DRAM, and the usage patterns are very different: NAND transistors are accessed orders of magnitude less often per second than CPU transistors.

How are you going to dissipate all of that heat when layers of the CPU are insulated by other layers of the CPU? Each layer not only traps the heat of the layers below it, it also adds its own heat to its neighbours.

As tunafish mentions, we need some radical changes.

Regards,
SB

What about 3D microfluidic channels for cooling?

Microfluidic cooling has existed for years; tiny microchannels etched into a metal block have been used to cool the SuperMUC supercomputer. Now, a new research paper on the topic describes a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.

Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.
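As a rough illustration of why those localized hot spots are such a problem, here's a small back-of-envelope calculation (every figure is an assumption, not a measurement of any specific chip):

```python
# Purely illustrative, assumed numbers -- not measurements of any specific CPU.
# Shows why a localized hot spot is much worse than the die-average figure
# suggests.

die_area_mm2 = 150.0      # assumed total die area
package_power_w = 95.0    # assumed package power under load

avx_block_area_mm2 = 5.0  # assumed area of the vector/FPU block
avx_block_power_w = 30.0  # assumed power concentrated there during AVX2 work

average_density = package_power_w / die_area_mm2
hotspot_density = avx_block_power_w / avx_block_area_mm2

print(f"Die-average power density: {average_density:.2f} W/mm^2")
print(f"FPU hot-spot power density: {hotspot_density:.2f} W/mm^2 "
      f"(~{hotspot_density / average_density:.0f}x the average)")
# Silicon only spreads heat laterally so fast, so the hot spot runs much
# hotter than the die average -- hence lower AVX2 turbo clocks, and the
# appeal of bringing coolant right into the silicon.
```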

 
Sure, but all of that increases cost. That's fine when we're talking about server implementations and even specialized professional applications but isn't nearly as applicable for consumer applications.

Not only would the cost of the CPU increase due to the added manufacturing complexity, but the cost of integrating it into a system would also go up considerably.

Regards,
SB
 
Meh, in 2017 everyone would've said Intel was still ahead in manufacturing.
To even be talking about them being slightly behind, that's quite the change.
 

Change in reality? Or the change in the other thing?

A hugely complex multi-dimensional optimisation problem gets reduced to a single number (mostly by the marketing dept). People judge reality based on that single number and come to conclusions. Well, maybe that hasn't changed.
 
Of course it's just a marketing term; it's been said enough times that Intel's 10nm = TSMC's 7nm. If Zen 2 is released on 7nm in 2019 and is clearly superior to Ryzen or Coffee Lake on 14nm, and Intel are still trying to sort out their 10nm process, which CPU are customers going to buy?
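For what it's worth, a rough comparison of the peak transistor densities the vendors have publicly quoted (approximate figures; actual products typically ship below these numbers):

```python
# Approximate peak transistor densities (MTr/mm^2) as quoted publicly by the
# vendors / press; shipping products generally come in below these numbers.

quoted_density = {
    "Intel 10nm": 100.8,
    "TSMC 7nm":    91.2,
    "Intel 14nm":  37.5,
    "TSMC 16nm":   28.9,
}

for node, mtr_per_mm2 in quoted_density.items():
    print(f"{node:>10}: ~{mtr_per_mm2:5.1f} MTr/mm^2")
# By these figures Intel's "10nm" and TSMC's "7nm" sit in the same density
# class, which is why the node names alone say little about who is ahead.
```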
 
Sorry for quoting Charlie, but he seems to be the initiator of this rumor: https://semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/

If true, we should expect further delays and even more process rebrandings.

My opinion is that it would fit much of the available evidence, given current Intel PR (the reorganization of the manufacturing unit, anaemic confidence in 10nm parts) and execution (14nm++ being less dense than their previous 14nm-class process, the current manufacturing shortages of 14nm parts).
 
Intel already denied this.

When someone else has higher-performing chips, then I will start to think, gee, there is a point to this narrative. Till then, meh... I actually am very happy AMD is challenging Intel a bit, but they still have nothing for the top end. They are focusing more on value, cores/$ and performance/$, instead of absolute performance. Still, it has already been a great boon to consumers now that higher core-count chips are out at a reasonable price.
 
Not sure why you tie this to chip performance. Performance is heavily influenced by architecture too.

I think we will know if we see more delays. Weren't we supposed to see consumer Cannonlake in significant volume by H1 2020?
If we do get that, then either Charlie was flat-out wrong or he grossly overestimated some incremental (planned or not) changes to the process.
 