Semiconductor photolithography: how can they keep improving the process?

Hrm, I guess I wasn't thinking about just using superconductors in the chip, but rather about shoehorning superconductors into current silicon technology.

That would be interesting, and would bring us quite a bit closer to the absolute minimum power draw per processed bit (in time, at least... probably not at first), but it would definitely be quite expensive.

Anyway, you can't go that high in frequency before your circuit starts to lose energy to electromagnetic radiation. How high depends on the size of the circuit (a ~1 cm circuit starts to lose energy to EM radiation in the ~10 GHz range).
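As a rough sanity check on those numbers (my own back-of-envelope arithmetic, not from the post): at 10 GHz the free-space wavelength is about 3 cm, so a ~1 cm structure is already a sizeable fraction of a wavelength and starts to behave like an antenna.

```python
# Back-of-envelope check of the "~1 cm circuit radiates around ~10 GHz" claim.
# Assumption: a structure starts radiating noticeably once its size reaches a
# sizable fraction of the free-space wavelength (roughly lambda/10 and up is
# the usual antenna rule of thumb; the exact fraction is not in the post).

C = 3.0e8  # speed of light, m/s

def wavelength(freq_hz: float) -> float:
    """Free-space wavelength in metres at a given frequency."""
    return C / freq_hz

circuit_size = 0.01        # ~1 cm circuit, in metres
lam = wavelength(10e9)     # wavelength at 10 GHz -> 0.03 m (3 cm)

print(f"Wavelength at 10 GHz: {lam * 100:.1f} cm")
print(f"Circuit size as a fraction of wavelength: {circuit_size / lam:.2f}")
# A 1 cm trace or loop is about a third of a wavelength at 10 GHz, well past
# the ~lambda/10 point where it begins to act like an efficient radiator.
```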
 
MfA said:
You can pipeline everything, including cross chip communication.
I suppose. But every stop along the way adds to the power consumption, too, so you can't entirely escape the problem that way.
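To put a number on that (a minimal sketch with assumed figures, using the standard CMOS dynamic-power relation P = alpha * C * V^2 * f): every extra register or repeater stop between sender and receiver switches on every clock and draws its own slice of power.

```python
# Hedged sketch: why extra pipeline stages cost power. Each added register
# bank switches every cycle, so its dynamic power follows the standard CMOS
# switching formula P = alpha * C * V^2 * f. All numbers below are
# illustrative assumptions, not measurements of any real chip.

def dynamic_power(alpha: float, cap_farads: float, vdd: float, freq_hz: float) -> float:
    """Dynamic switching power of one register bank, in watts."""
    return alpha * cap_farads * vdd ** 2 * freq_hz

# Assumed figures for one 64-bit register bank on a hypothetical process:
ALPHA = 0.5        # activity factor (assumed)
CAP   = 200e-15    # total switched capacitance, 200 fF (assumed)
VDD   = 1.0        # supply voltage, volts (assumed)
FREQ  = 3e9        # 3 GHz clock (assumed)

per_bank = dynamic_power(ALPHA, CAP, VDD, FREQ)
extra_stops = 10   # extra pipeline stops across the chip (assumed)

print(f"Per register bank:  {per_bank * 1e3:.2f} mW")
print(f"Ten extra stops:    {per_bank * extra_stops * 1e3:.2f} mW of added switching power")
```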
 
Experimental new techniques that could dramatically improve the process:
Yongfa Fan, a doctoral student in RIT’s microsystems engineering Ph.D. program, accomplished imaging rendered to 26 nanometers, a size previously possible only via extreme ultraviolet wavelengths, Smith says. By capturing images that are beyond the limits of classical physics, the breakthrough has allowed resolution smaller than one-twentieth the wavelength of visible light, he adds.

The development comes at least five years sooner than anticipated, according to the International Technology Roadmap for Semiconductors.

“Immersion lithography has pushed the limits of optical imaging,” Smith says. “Evanescent wave lithography continues to extend this reach well into the future. The results are very exciting as images can be formed that are not supposed to exist.”
link
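A quick arithmetic check of that "one-twentieth" figure (my own check, not from the article):

```python
# Check whether 26 nm really is "smaller than one-twentieth the wavelength of
# visible light". Visible light spans roughly 400-700 nm.
feature_nm = 26.0
for wavelength_nm in (400.0, 520.0, 700.0):
    ratio = wavelength_nm / feature_nm
    print(f"{wavelength_nm:.0f} nm / {feature_nm:.0f} nm = {ratio:.1f}x")
# 26 nm is ~1/20 of a 520 nm (green) wavelength and ~1/27 of 700 nm (red);
# at the blue end (400 nm) it is closer to 1/15, so the "one-twentieth"
# figure holds for mid-to-long visible wavelengths.
```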

Using spintronic designs, I've heard transistors could be made below 10 nm, somewhere around 1-3 nanometers in size (I think I read that in Physics Today). I think it also helps with many other issues.

3D circuitry would be the long-term ideal that could allow us to keep going for many more decades, especially if we could somehow develop the ability to manipulate matter at the molecular scale with utmost precision. Then it'd be a matter of design, and we'd have virtually unlimited computing resources.
 
Chalnoth said:
Ah, but circuits aren't just wires. Especially integrated circuits, where most of the resistance comes from the current flowing through the semiconductor itself. Then you have power loss due to transistor switching. And then you have power loss due to current leakage between neighboring conductive lines. And then you have power loss due to current leakage through the oxide layer between the gate and the rest of the transistor.

In other words, you'd only be solving one small portion of the problem by moving to superconductors, even if it were feasible to do so (with currently-known superconductors it's not).

You're still talking about current, i.e. electron flow...
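For what it's worth, here is a rough illustration of Chalnoth's point that wire resistance is only one of several loss terms in a chip; every number below is invented purely to show relative scale, not taken from any real design.

```python
# Hedged illustration: a superconducting wire layer would only remove the
# interconnect-resistance term from the total power budget. The split below
# is assumed for illustration only; the real breakdown varies by process.

power_terms_watts = {
    "wire resistance (I^2*R in interconnect)": 10.0,
    "transistor switching (alpha*C*V^2*f)":    45.0,
    "subthreshold / junction leakage":         25.0,
    "gate-oxide tunnelling leakage":           10.0,
}

total = sum(power_terms_watts.values())
lossless_wires = total - power_terms_watts["wire resistance (I^2*R in interconnect)"]

for name, watts in power_terms_watts.items():
    print(f"{name:45s} {watts:5.1f} W ({100 * watts / total:4.1f}%)")
print(f"{'total':45s} {total:5.1f} W")
print(f"With lossless wires (everything else unchanged): {lossless_wires:.1f} W")
```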
 
I remember reading a very short article somewhere recently about a grid-based computing solution that allowed for redundancy as feature sizes shrink below the limit where quantum "errors" begin to creep in. IIRC, the circuit was composed of rows of conductors overlaying columns of conductors, with the ability to make a connection between any row and any column (a physical array of sorts).

I don't remember whether making a temporary connection drove the signal routing, or was more a result of it. I also didn't take the time to see how this grid-based circuit topology provided the desired redundancy against random errors.

Anyone more familiar with this approach want to fill us all in?
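In case it helps the discussion, here is a hedged guess at how such a grid could provide redundancy: if any row can be connected to any column, a defective cross-point can simply be skipped and the connection remapped onto a spare row/column pair. The defect rate, names, and greedy remapping below are all my own assumptions for illustration, not necessarily the scheme the article described.

```python
# Sketch of a row/column crossbar that tolerates defective cross-points by
# remapping logical connections onto spare physical lines. Purely illustrative.

import random

ROWS, COLS = 8, 8          # physical grid, including spare rows/columns (assumed)
DEFECT_RATE = 0.05         # assumed fraction of bad cross-points

# Randomly mark some cross-points as defective (stand-in for quantum/process errors).
random.seed(0)
defective = {(r, c) for r in range(ROWS) for c in range(COLS)
             if random.random() < DEFECT_RATE}

def map_connections(names, rows=ROWS, cols=COLS):
    """Greedily assign each logical connection a physical (row, col) whose
    cross-point is good; the spares give room to route around defects."""
    used_rows, used_cols, mapping = set(), set(), {}
    for name in names:
        for r in range(rows):
            if r in used_rows:
                continue
            for c in range(cols):
                if c in used_cols or (r, c) in defective:
                    continue
                mapping[name] = (r, c)
                used_rows.add(r)
                used_cols.add(c)
                break
            if name in mapping:
                break
    return mapping

print(map_connections(["a", "b", "c", "d", "e"]))
# Each logical connection lands on a good junction; defective cross-points are
# simply skipped, which is the kind of redundancy the grid layout buys you.
```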
 