Intel Loses Steam Thread

There's a fair amount of scientific literature (which I haven't reviewed) that suggests sub-5nm transistors just aren't possible. Of course this doesn't mean we can't do other things, like 3D stacking, bigger wafers, or whatever power-efficiency improvements process people will be able to think of.

Intel can make chips with sub-5nm transistors.

Cost is a separate matter. Moore's law is about cost.
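
To put the cost point in concrete terms, here's a back-of-envelope sketch. All of the numbers (wafer cost, die size, yield, density) are made up purely for illustration, not real process data; the point is just that a density gain doesn't make transistors cheaper if wafer cost and yield move the wrong way.

Code:
import math

# Back-of-envelope cost-per-transistor model.
# All numbers below are hypothetical placeholders, not real process data.

def cost_per_transistor(wafer_cost_usd, wafer_diameter_mm, die_area_mm2,
                        yield_fraction, transistors_per_mm2):
    """Approximate cost per transistor for dies built on one wafer."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    dies_per_wafer = wafer_area / die_area_mm2            # ignores edge loss
    good_dies = dies_per_wafer * yield_fraction
    transistors_per_die = die_area_mm2 * transistors_per_mm2
    return wafer_cost_usd / (good_dies * transistors_per_die)

# Hypothetical mature node: cheaper wafer, lower density, good yield.
old = cost_per_transistor(5000, 300, 100, 0.85, 8e6)
# Hypothetical leading-edge node: double the density, pricier wafer, worse yield.
new = cost_per_transistor(9000, 300, 100, 0.60, 16e6)

print(f"mature node:       {old:.2e} $/transistor")
print(f"leading-edge node: {new:.2e} $/transistor")
# Density doubled, yet cost per transistor actually goes up once the pricier
# wafer and worse yield are factored in -- which is the sense in which
# Moore's law is really about cost, not just feature size.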
 
Did they demonstrate such a chip, and do you have a link? I failed to find one. Thanks!

See, there's still no link... :LOL: But hey, come on, the links probably aren't accessible to the general public; that could be internal company information, kept away from curious eyes :LOL:
 
The presentation only showed transistors, but I don't see why making a CPU would be a problem.
Cost vs. benefit? Shrinking the process node is not the only way to boost performance; by the time we get to 5nm, carbon nanotubes/graphene etc. should be ready or almost ready.
 
Wynix said:
By the time we get to 5nm, carbon nanotubes/graphene etc. should be ready or almost ready.

The scary part to me is that there don't appear to be any serious alternatives once we hit the silicon wall, other than graphene. And even if we get graphene to a viable state, it will still be more or less a one-off.

After that you're left with power consumption and structural heat management refinements for the rest of eternity....

That leaves us with an ~1000x performance cap compared to what we have today (for parallel applications).
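
For what it's worth, here is one way to sketch a ceiling of roughly that order. Every factor below (final node, stacking layers, architectural gains) is a guess chosen purely for illustration, not anyone's actual roadmap.

Code:
# Very rough ceiling estimate for parallel throughput vs. today.
# Every factor is a hypothetical assumption chosen purely for illustration.

current_node_nm = 28      # assume today's mainstream node
final_node_nm = 5         # assume scaling stops around here
density_gain = (current_node_nm / final_node_nm) ** 2   # ~31x more transistors/area

stacking_layers = 8       # assume 3D stacking eventually gives ~8 usable layers
arch_efficiency = 4       # assume ~4x from architecture / packaging / power tricks

ceiling = density_gain * stacking_layers * arch_efficiency
print(f"rough parallel-performance ceiling: ~{ceiling:.0f}x")   # on the order of 1000x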
 
The presentation only showed transistors, but I don't see why making a CPU would be a problem.
As Wynix wrote, a few transistors don't prove viability, and anyway some labs have been showing smaller transistors for years.

I'm just surprised I couldn't find any news or article saying Intel made such a demo (in 2012 they showed a slide mentioning 5nm in the future, but that's all). I'm wondering what their most recent public demo or technical article discussing their upcoming processes might be.
 
Cost vs. benefit? Shrinking the process node is not the only way to boost performance; by the time we get to 5nm, carbon nanotubes/graphene etc. should be ready or almost ready.

CNTs have proven to be a dead end for logic.

Graphene doesn't have a bandgap, so single-layer graphene transistors are impossible.
Multi-layer graphene has a bandgap, but it has alignment issues at nanoscale geometries.
 
As Wynix wrote, a few transistors don't prove viability, and anyway some labs have been showing smaller transistors for years.

I'm just surprised I couldn't find any news or article saying Intel made such a demo (in 2012 they showed a slide mentioning 5nm in the future, but that's all). I'm wondering what their most recent public demo or technical article discussing their upcoming processes might be.

I agree that a few small transistors don't prove the viability of a multi-billion-transistor SoC. But if a few transistors can be made with current tools, it's reasonable to assume that the rest of the problems are solvable, modulo cost, of course.
 
The scary part to me is that there don't appear to be any serious alternatives once we hit the silicon wall, other than graphene. And even if we get graphene to a viable state, it will still be more or less a one-off.

After that you're left with power consumption and structural heat management refinements for the rest of eternity....

That leaves us with an ~1000x performance cap compared to what we have today (for parallel applications).

There are tons of process refinements available at the same feature pitch, and we've already seen some, like doping with rare earths, SOI, and FinFETs, but I look forward to the more rapid gains in architecture once we hit those limits. We're already seeing big gains in performance per watt at the 28nm node with Maxwell from what seems like architecture alone.

Architecture will be more important even for brute replication because of the desire for coherency. We've already hit a sweet spot (4-8 cores) for the number of useful CPU cores in a consumer product; better caches and memory are the next big thing. The ability to add more useful on-die cache is starting to depend on a good bus architecture. Things are getting more unwieldy for AMD, whose die layouts are a mess compared to the nice and tight ring-bus-connected Intel designs, which scale elegantly to 15 cores with Ivytown EX. Even L3 cache sharing with the GPU is possible with the ring bus. AMD's one saving grace is the unified CPU/GPU memory space in Kaveri, which is showing huge gains on some applications, and I think we'll see it really take off when next-gen stacked memory becomes commonplace.
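
As a rough illustration of why the interconnect starts to dominate as you add agents (this is a generic bidirectional-ring model, not a description of Intel's actual ring bus), the average number of ring stops a request traverses grows linearly with the number of stops:

Code:
# Average hop distance on a bidirectional ring with n equally spaced stops.
# Generic model for illustration only, not Intel's actual ring-bus behavior.

def average_hops(n_stops):
    """Mean shortest-path distance from one stop to another (uniform traffic)."""
    total = 0
    for dst in range(1, n_stops):
        total += min(dst, n_stops - dst)   # go whichever way around is shorter
    return total / (n_stops - 1)

for n in (4, 8, 15, 32):
    print(f"{n:2d} stops -> {average_hops(n):.2f} average hops")
# Latency to the L3 slices grows roughly linearly with stop count, which is
# why bigger dies eventually push designs toward multiple rings or meshes.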
 
They finally finished the plant they initially planned on ramping up half a decade ago. They're also pivoting more and more to the data center due to slowing general consumer sales:

http://www.anandtech.com/show/11115...n-core-on-14nm-data-center-first-to-new-nodes

and extending the tock on the 14nm node to a third leg.

The moat they had with their tight grasp on x86 architecture licensing could realistically be challenged now; the main issue is that no devs really use something like ARM to develop for ARM yet, so the native infrastructure just isn't as built out and there may be more hiccups in general. They're down to one main source of profit in their server CPUs, and each diversification they're trying has a more established competitor not burdened w/ the massive capex and risk of fab improvement and execution.

They're guiding down profits and losing further steam; competitors could catch them flat-footed between now and the 7nm node, which is only coming in 5 to 6 years. I think they may need to spin off the fabs or find better ways to fill that capacity.
 
They finally finished the plant they initially planned on ramping up half a decade ago. They're also pivoting more and more to the data center due to slowing general consumer sales:

http://www.anandtech.com/show/11115...n-core-on-14nm-data-center-first-to-new-nodes

and extending the tock on the 14nm node to a third leg.

The moat they had with their tight grasp on x86 architecture licensing could realistically be challenged now; the main issue is that no devs really use something like ARM to develop for ARM yet, so the native infrastructure just isn't as built out and there may be more hiccups in general. They're down to one main source of profit in their server CPUs, and each diversification they're trying has a more established competitor not burdened w/ the massive capex and risk of fab improvement and execution.

They're guiding down profits and losing further steam; competitors could catch them flat-footed between now and the 7nm node, which is only coming in 5 to 6 years. I think they may need to spin off the fabs or find better ways to fill that capacity.

As far as I know, their consumer CPUs are still extremely profitable, and even with real competition from AMD (for the first time in eons), that's expected to remain true for quite some time, at least.
 
As far as I know, their consumer CPUs are still extremely profitable, and even with real competition from AMD (for the first time in eons), that's expected to remain true for quite some time, at least.
That may be true, but consumers are generally keeping their PCs longer, as there's less to be gained from an upgrade with each new generation. It may be a long shot, but Windows on ARM is a potential challenge to those artificially fat margins that come from restrictive x86 licensing.
 
That may be true, but consumers are generally keeping their PCs longer, as there's less to be gained from an upgrade with each new generation. It may be a long shot, but Windows on ARM is a potential challenge to those artificially fat margins that come from restrictive x86 licensing.

Perhaps, but that may be offset by stronger sales in emerging markets. More to the point, is revenue from Intel's consumer business actually decreasing?

If this is to be believed, it looks fairly flat year over year: https://seekingalpha.com/article/4139798-intel-earnings-preview
 