Will 300 GHz processors make it to the desktop?

Hi guys,

Do you think that 300 GHz processors will make it to the desktop? Imagine owning a single-core processor at 300 GHz, or a quad-core at 150 GHz; something I never dreamt of. If yes, in what timeframe?
 
Well, we had that diagram which showed how the P4's power output would increase with clock speed. In that diagram it didn't take long before the chip had the same heat dissipation as the surface of the Sun.
 

What, like 7 GHz?
 
Well, did you hear about IBM's and Georgia Tech's collaboration, where they made a chip that can run at 500 GHz when cooled to near absolute zero, and at 350 GHz at room temperature?
 

I'm pretty sure what you read was that they made a transistor that can operate at 500 GHz.

If you could make a CPU out of just one transistor, props to you. But otherwise, expect that number to drop by a factor of 100 for a real CPU.
 
A single operation is a cascade of many transistors switching in sequence, and the longest chain determines the maximum clock speed.

But nowadays, increasing the length of those chains, doing more work per clock, is the better way to speed things up. And it generates MUCH less heat.
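
To put rough numbers on that, here's a back-of-the-envelope sketch in Python; the per-gate delay and the chain length are invented for illustration, not taken from any real chip.

```python
# Back-of-the-envelope: the longest chain of gates between two latches
# sets the minimum clock period. All numbers here are invented.
gate_delay_ps = 10.0   # assumed switching delay per gate, in picoseconds
chain_length = 20      # assumed number of gates in the longest chain

period_ps = gate_delay_ps * chain_length   # critical path: 200 ps
max_clock_ghz = 1000.0 / period_ps         # 1000 ps per ns -> GHz
print(f"critical path {period_ps:.0f} ps -> max clock ~{max_clock_ghz:.1f} GHz")

# Doubling the chain does twice the work per cycle at half the clock,
# which is the trade-off described above.
```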
 
Well, did you hear about IBM's and Georgia Tech's collaboration, where they made a chip that can run at 500 GHz when cooled to near absolute zero, and at 350 GHz at room temperature?

That was an analog chip, I believe, and those already weigh in at the 20 GHz range. I don't remember all the specifics, but it was not directly related to desktop technology, although they were hoping, as usual, that some of the R&D could spill over.
 
That was an analog chip, I believe, and those already weigh in at the 20 GHz range.
Yes.

Switching from on to off or the reverse is expensive. It's much more efficient to only amplify signals.

Then again, we have roughly two types of transistors: bipolar (NPN/PNP, analog) and FETs (CMOS, digital). The first is very fast but bad at switching; it's an amplifier. The second is a lot slower, but very good at switching.

It's nearly impossible to build a current CPU or GPU from bipolar transistors comparable to the FET ones. They would be much faster, but would require an absurd amount of cooling. And the integrity of the signal would be lost pretty quickly.
 
There's a physical reason why CPUs won't reach 300 GHz. At that clock, light wouldn't have enough time to cross an ALU, and light is the maximum speed of anything.

At higher speeds, it might not even have time to cross a single layer of logic, making the chip pointless.
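
For a sense of scale (my numbers, not the poster's): at 300 GHz one cycle lasts about 3.3 picoseconds, and even in a vacuum light covers only about a millimetre in that time. A small Python sketch:

```python
# How far light travels in one clock cycle. On-chip signals move slower
# than c, so real distances would be even shorter.
C_MM_PER_PS = 0.2998                     # speed of light: ~0.3 mm per ps

for clock_ghz in (3, 30, 300):
    period_ps = 1000.0 / clock_ghz       # cycle time in picoseconds
    reach_mm = C_MM_PER_PS * period_ps   # distance light covers per cycle
    print(f"{clock_ghz:4d} GHz: {period_ps:7.2f} ps/cycle, "
          f"light reaches {reach_mm:6.2f} mm")
```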
 
Yes.

Switching from on to off or the reverse is expensive. It's much more efficient to only amplify signals.

Then again, we have roughly two types of transistors: bipolar (NPN/PNP, analog) and FETs (CMOS, digital). The first is very fast but bad at switching; it's an amplifier. The second is a lot slower, but very good at switching.

Both can amplify, and both can switch. You can think of a bipolar transistor as a current-controlled current source and a FET as a voltage-controlled current source.

You can get fantastic switching speed with bipolar, either with pure silicon or with silicon-germanium transistors, using ECL circuit methodology. The trouble is that ECL circuits always burn power because of the bias current. CMOS (using MOSFETs) only burns power when switching (ignoring leakage).
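
Here's a toy comparison of those two power models, with invented per-gate numbers; the CMOS formula is the standard activity * C * V^2 * f dynamic-power estimate, while the ECL bias figure is just a placeholder:

```python
# ECL burns a constant bias current no matter the clock; CMOS dynamic
# power scales with frequency. All per-gate values below are assumed.
V_SUPPLY = 1.0        # assumed supply voltage (volts)
C_SWITCHED = 1e-15    # assumed switched capacitance per gate (1 fF)
ECL_BIAS_W = 5e-3     # assumed constant ECL bias power per gate (5 mW)

def cmos_dynamic_w(freq_hz, activity=0.5):
    """Classic activity * C * V^2 * f estimate, leakage ignored."""
    return activity * C_SWITCHED * V_SUPPLY ** 2 * freq_hz

for f_ghz in (1, 10, 100):
    cmos_w = cmos_dynamic_w(f_ghz * 1e9)
    print(f"{f_ghz:3d} GHz: CMOS ~{cmos_w * 1e6:5.1f} uW/gate, "
          f"ECL ~{ECL_BIAS_W * 1e3:.0f} mW/gate at any clock")
```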

It's nearly impossible to build a current CPU or GPU from bipolar transistors comparable to the FET ones. They would be much faster, but would require an absurd amount of cooling.
Yes

And the integrity of the signal would be lost pretty quickly.

No

Cheers
 
There's a physical reason why CPUs won't reach 300 GHz. At that clock, light wouldn't have enough time to cross an ALU, and light is the maximum speed of anything.

At higher speeds, it might not even have time to cross a single layer of logic, making the chip pointless.

Life would be interesting for compiler writers (and ASM hackers) if basic ALU operations had multi-hundred-cycle latencies :D

Cheers
 
It would be interesting for circuit designers as well.

They'd be told to design hardware that runs so fast that the clock signal moves too slowly to synchronize the circuitry.
 
It would be interesting for circuit designers as well.

They'd be told to design hardware that runs so fast that the clock signal moves too slowly to synchronize the circuitry.

Not my field, but I think you're right. It would be quite impossible to maintain a synchronous clock across a die at those frequencies. You'd probably want to make the thing self-timed all the way, which again would make verification really interesting.
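
To illustrate with made-up numbers (the die width and on-chip propagation speed are assumptions, roughly c/2 for the latter):

```python
# How many 300 GHz cycles would the clock edge itself need just to
# cross the die? Die size and propagation speed are assumed.
DIE_WIDTH_MM = 15.0          # assumed die width
SIGNAL_MM_PER_PS = 0.15      # assumed on-chip propagation, roughly c/2
PERIOD_PS = 1000.0 / 300.0   # one cycle at 300 GHz: ~3.33 ps

crossing_ps = DIE_WIDTH_MM / SIGNAL_MM_PER_PS   # ~100 ps flight time
print(f"clock edge needs ~{crossing_ps:.0f} ps to cross the die, "
      f"i.e. ~{crossing_ps / PERIOD_PS:.0f} cycles of skew")
```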

Cheers
 
Both can amplify, and both can switch. You can think of a bipolar transistor as a current-controlled current source and a FET as a voltage-controlled current source.

You can get fantastic switching speed with bipolar, either with pure silicon or with silicon-germanium transistors, using ECL circuit methodology. The trouble is that ECL circuits always burn power because of the bias current. CMOS (using MOSFETs) only burns power when switching (ignoring leakage).
Yes, but as you said, you could say that bipolar transistors amplify a current, while FETs amplify a voltage, more or less. And we want our logic states to be defined by voltage, not current.

And because of resistance (as you're amplifying a current) and the need to keep the current down, the maximum output voltage of bipolar transistors decreases after each stage. Applying as much current as needed to drive the bipolar transistors fully open would be almost like shorting the circuit. That's why you need to buffer the signal often, to pump the voltage back up to a high level.

It would be a different matter if we could easily store currents, but that isn't very practical. While you can use a flip-flop to store a current, it would use quite a bit of power. It's much easier to store a voltage, especially as its simple presence or absence.


Edit: the thing that makes FETs such good switches is that you can close them almost completely, or have them output the maximum voltage, while using only extremely tiny currents. And what makes them bad amplifiers is that they can switch currents that are orders of magnitude higher than the amount of power they could dissipate.
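
A rough numeric contrast of the two, with an assumed current gain and load current:

```python
# Why a bipolar switch keeps drawing current while a FET does not:
# holding a BJT on costs I_C / beta continuously, whereas a FET gate is
# insulated and draws ~nothing once charged. Values below are assumed.
BETA = 100            # assumed bipolar current gain
I_COLLECTOR = 1e-3    # assumed 1 mA through the switched load

i_base = I_COLLECTOR / BETA   # steady base current to hold the BJT on
print(f"BJT: holding it on costs {i_base * 1e6:.0f} uA continuously")
print("FET: gate is insulated; once charged, holding it on costs ~0 A")
```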
 
An interesting angle on this is asynchronous circuits: no clock signal to stagger and time things. But they are pretty hard to get right, although there are some very promising techniques and examples. There are even a few asynchronous processors on the market that you can buy, or program into a gate array.

I think we'll start to see more of those when we hit the quantum wall in lithography (32 nm), as that will offer a new way forward from there. But the design issues are huge, as there is little common knowledge in that field.

And after that, we'll probably get quantum devices. While they are triggered, they operate in such a completely different way (you set them up to execute a number of different calculations at the same time on the same 'transistors', and then spend some time filtering out the result(s)) that you can hardly speak of a clock speed as such.
 
Asynchronous circuits aren't a silver bullet. Individual stages still need some form of synchronization, such as a handshake, to coordinate passing signals between them. It can even make things worse in some cases, since it requires bouncing confirmations both ways.

All that go-between still takes time, so removing the central clock isn't the same as going to 300 GHz, and data won't magically travel faster to make it worthwhile.

What going asynchronous does do is let pipeline stages go at their own pace, and it avoids the problem of driving a clock reliably throughout the chip (though now there are thousands to millions of local synchronizers that must be designed). But independent clock domains can often approach the same effect without removing the clock signal entirely.
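
As a toy illustration of that go-between cost (the wire delay is invented), here is a four-phase request/acknowledge handshake: every transfer needs four one-way trips before the next datum can move.

```python
# Toy four-phase handshake between two asynchronous stages. Each of the
# four edges must cross the wire before the next transfer can start.
WIRE_DELAY_PS = 5.0   # assumed one-way wire delay between stages

def four_phase_transfer():
    """req up -> ack up -> req down -> ack down; returns total time."""
    t = 0.0
    for edge in ("req=1 ->", "<- ack=1", "req=0 ->", "<- ack=0"):
        t += WIRE_DELAY_PS
        print(f"  {edge}  t = {t:.0f} ps")
    return t

total_ps = four_phase_transfer()
print(f"one transfer costs ~{total_ps:.0f} ps of handshaking overhead")
```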
 