Samsung already pulled that one with TSMC. If I were Intel, I'd poach Samsung.

I wonder if Intel could be doing what other companies have been doing and let them carry some of the R&D expenditure, then lure away the people with the knowledge.
Personally I think at some point it will make more sense to go multi-layer CPU instead of going for finer processes, just like they did for flash.
How challenging (or economical) this would be from a production point of view, I'm not quite sure.
My understanding is that TSMC, Samsung and GloFo are all embracing EUV on their respective 2nd gen 7nm processes?

Yes, but I rate TSMC's 7nm as less bold due to the lack of EUV.
Yeah, it's not coming for first gen at GF, that's for sure, but 2nd gen should be like your first link suggests.

Well, Samsung already has it at 1st gen, according to that article.
For 2nd gen, it's certainly plausible that everyone will try to go for it.
Edit:
https://semiengineering.com/quantum-effects-at-7-5nm/ could be viewed as confirmation that GF is attempting EUV for 7nm.
https://www.globalfoundries.com/technology-solutions/cmos/performance/7nm-finfet mentions EUV "compatibility", which likely means it's not coming in the first iteration.
Personally I think at some point it will make more sense to go multi-layer CPU instead of going for finer processes, just like they did for flash.

Not a good idea, considering that the power consumption (and thus heat) of CPU designs is orders of magnitude higher than that of flash or even DRAM, and the usage patterns make it worse (NAND transistors are accessed orders of magnitude less often per second than CPU transistors).
How are you going to dissipate all of that heat when layers of the CPU are insulated by other layers of CPU? Each layer not only insulates its neighbors but also contributes heat to them.
As tunafish mentions, we need some radical changes.
Regards,
SB
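To put rough numbers on the stacking problem, here is a minimal 1D thermal-resistance sketch in Python. All figures are assumptions for illustration (30 W per die layer, 0.2 K/W per bond interface, heat escaping only through the top layer held at 70C), not measurements of any real stack:

```python
# Back-of-envelope model of heat in a vertically stacked CPU.
# All constants are illustrative assumptions, not measured values.
LAYER_POWER_W = 30.0     # power dissipated by each die layer (assumed)
INTERFACE_R_KW = 0.2     # thermal resistance of each bond layer, K/W (assumed)
TOP_LAYER_TEMP_C = 70.0  # layer touching the heatsink (assumed)

def layer_temperatures(n_layers: int) -> list[float]:
    """Die temperatures from the top (cooled) layer down to the buried one.

    Heat can only escape upward, so the power crossing each interface
    is the combined power of every layer underneath it.
    """
    temps = [TOP_LAYER_TEMP_C]
    for i in range(1, n_layers):
        power_through_interface = LAYER_POWER_W * (n_layers - i)
        temps.append(temps[-1] + power_through_interface * INTERFACE_R_KW)
    return temps

for n in (1, 2, 4):
    print(n, "layers:", [f"{t:.0f}C" for t in layer_temperatures(n)])
# 1 layers: ['70C']
# 2 layers: ['70C', '76C']
# 4 layers: ['70C', '88C', '100C', '106C']
```

Even with these generous assumptions the buried layers heat up quickly, which is the core of the objection above.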
What about 3D microfluidic channels for cooling?:
Microfluidic cooling has existed for years; tiny microchannels etched into a metal block have been used to cool the SuperMUC supercomputer. Now, a new research paper on the topic has described a method of cooling modern FPGAs by etching cooling channels directly into the silicon itself. Previous systems, like Aquasar, still relied on a metal transfer plate between the coolant flow and the CPU itself.
Here’s why that’s so significant. Modern microprocessors generate tremendous amounts of heat, but they don’t generate it evenly across the entire die. If you’re performing floating-point calculations using AVX2, it’ll be the FPU that heats up. If you’re performing integer calculations, or thrashing the cache subsystems, it generates more heat in the ALUs and L2/L3 caches, respectively. This creates localized hot spots on the die, and CPUs aren’t very good at spreading that heat out across the entire surface area of the chip. This is why Intel specifies lower turbo clocks if you’re performing AVX2-heavy calculations.
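For a sense of why pushing coolant through the die itself is attractive, here is a quick sketch of how much heat a small water flow can absorb, using the standard steady-state relation Q = m_dot * c_p * dT. The flow rate and allowed temperature rise are illustrative assumptions, not figures from the paper:

```python
# How much heat can a microfluidic water loop carry away?
RHO_WATER = 997.0   # kg/m^3, density of water near room temperature
CP_WATER = 4186.0   # J/(kg*K), specific heat of water

def heat_removed_watts(flow_ml_per_min: float, delta_t_kelvin: float) -> float:
    """Heat absorbed by the coolant: Q = m_dot * c_p * dT."""
    m_dot = RHO_WATER * flow_ml_per_min * 1e-6 / 60.0  # mass flow in kg/s
    return m_dot * CP_WATER * delta_t_kelvin

# Assumed example: 100 mL/min through on-die channels, coolant warming by 30 K.
print(f"{heat_removed_watts(100, 30):.0f} W")  # ~209 W
```

Beyond the raw capacity, routing channels directly over the FPU or cache blocks attacks exactly the localized hot spots the article describes, which a metal transfer plate spreads out only poorly.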
What about 3D microfluidic channels for cooling?:
Microfluidic cooling has existed for years […]

Somehow, that's the second time the forums do not display the links to the articles you're quoting here. Do you maybe use a very niche browser or add-ons that inhibit correct linkage?
You need to remember that TSMC's 7nm is roughly equivalent to Intel's 10nm, so it's not as far ahead as it sounds.

Ouch. I mean just ouch.
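For anyone wanting numbers behind the "roughly equivalent" claim, the commonly cited peak-density estimates (WikiChip-style calculations from published CPP and MMP figures; treat them as approximations, since every foundry measures differently) line up like this:

```python
# Commonly cited peak transistor-density estimates, in millions of
# transistors per mm^2. Approximations only; marketing node names
# no longer correspond to any physical dimension.
density_mtr_mm2 = {
    "Intel 10nm":  100.8,
    "TSMC 7nm":     91.2,
    "Samsung 7nm":  95.3,
}

baseline = density_mtr_mm2["Intel 10nm"]
for process, density in density_mtr_mm2.items():
    ratio = density / baseline
    print(f"{process:12s} {density:6.1f} MTr/mm^2 ({ratio:.0%} of Intel 10nm)")
```

So on paper the densities are within roughly 10% of each other; the difference is that TSMC is actually shipping.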
Intel possibly needs to headhunt people to fix their broken 10nm. I mean, TSMC seems to already be in full production on 7nm.
Meh, in 2017 everyone would've said Intel was still ahead in manufacturing.
To even be talking about them being slightly behind, that's quite the change.