Core clock speeds. Please enlighten me?

My question is, why haven't GPU clock speeds ramped up as quickly as CPU speeds, or even remotely been able to keep pace?

If my memory serves me correctly, when the Voodoo core was at 50MHz, CPU speeds were around 133MHz. Now we have 2.5+ GHz CPUs and only 300MHz GPUs.

What are the key factors preventing GPU core speeds from ramping up quickly?
Will we see 1GHz cores by '04?

Anyone have any idea?

Fuz
 
Fuz said:
If my memory serves me correctly, when the Voodoo core was at 50MHz, CPU speeds were around 133MHz. Now we have 2.5+ GHz CPUs and only 300MHz GPUs.

MHz != computation power.

A Voodoo had about 1M transistors. An NV25 has 63M of them.
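
Just to put rough numbers on that, here's a back-of-the-envelope sketch in Python; the pipeline counts are from memory and only meant to be illustrative:

# Clock speed alone doesn't tell the story; what matters is work per clock.
# Pipeline counts are rough figures from memory, for illustration only.
voodoo_clock_mhz = 50
voodoo_pixel_pipes = 1          # single pixel pipeline
nv25_clock_mhz = 300
nv25_pixel_pipes = 4            # four parallel pixel pipelines

voodoo_fill = voodoo_clock_mhz * voodoo_pixel_pipes   # ~50 Mpixels/s
nv25_fill = nv25_clock_mhz * nv25_pixel_pipes         # ~1200 Mpixels/s

print("Clock ratio:     %.0fx" % (nv25_clock_mhz / float(voodoo_clock_mhz)))   # 6x
print("Fill rate ratio: %.0fx" % (nv25_fill / float(voodoo_fill)))             # 24x

And that's before counting the extra texture units per pipe, vertex shaders and so on.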
 
I see two reasons:
1 - CPUs run much hotter today, typically 50 to 60 watts; GPUs are limited by what the AGP spec allows.
2 - GPUs use massive parallelism today.
 
CPUs are built around a known type of architecture (von Neumann/Harvard) and are essentially general-purpose adding machines, whereas GPUs (though still adding machines) are very much a specialised piece of silicon.

The mass of transistors that makes up a GPU has to be pretty specific in what it accomplishes, and this requires that the clock speed be kept down due to heat and power requirements, mainly because GPUs have a huge number of transistors compared to CPUs, e.g.

Athlon Thunderbird ~37 million
Pentium 4 ~42 million

GF4 ~63 million
Parhelia ~80 million
Radeon DDR ~30 million

I believe that's the main reason they aren't Giga hurtin yet :D
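
To put a very rough number on the heat side of that, here's a crude first-order sketch in Python; it uses the scaling relationship P ~ N x V^2 x f and ignores activity factors, per-transistor capacitance, leakage and everything else that matters in real life:

# Crude first-order scaling: dynamic power ~ transistors * V^2 * frequency.
# Constant factors are dropped, so only the ratios mean anything.
def relative_power(transistors_m, voltage, clock_mhz):
    return transistors_m * voltage ** 2 * clock_mhz

p4_300   = relative_power(42, 1.5, 300)    # P4 transistor count, hypothetical 300MHz
gf4_300  = relative_power(63, 1.5, 300)    # GF4 at its real ~300MHz
gf4_2500 = relative_power(63, 1.5, 2500)   # GF4 pushed to a P4-like 2.5GHz

print("GF4 vs P4 at the same 300MHz: %.1fx the power" % (gf4_300 / p4_300))      # ~1.5x
print("GF4 at 2.5GHz vs GF4 at 300MHz: %.1fx the power" % (gf4_2500 / gf4_300))  # ~8.3x

So even by this toy model, clocking a GF4-sized chip like a P4 would need roughly eight times the power and cooling.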
 
Thanks ppl.

Pascal,
you say that GPU clock speeds are limited by the AGP spec; how so? Do you mean the power available through AGP isn't enough? If that's the case, why not just add an external power source, using one of the many fan headers on motherboards?
 
Why not escalate this into audio chips, disk controllers, etc.? Because they are task-specific integrated circuits, not CPUs. (This is where the Nvidia coinage "GPU" appears harmful; it isn't really comparable to a CPU, after all.) Heck, I'd like to see the 1GHz Northbridge.
 
A 1GHz GPU would be severely bottlenecked waiting for the CPU to tell it what to do next. Another limitation is that the AGP spec doesn't have enough bandwidth to push the information a 1GHz GPU would require (this ties in with the fact that the CPU would not be able to keep up with the requests from the GPU).
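
Some quick, hand-wavy arithmetic on that; AGP 4x is roughly 1GB/s, while the vertex size and the verts-per-clock figure below are just assumptions for illustration:

# AGP 4x: 66MHz strobe, 4 transfers/clock, 32-bit bus = roughly 1.06 GB/s.
agp4x_bytes_per_sec = 66.6e6 * 4 * 4
vertex_bytes = 32                 # pos + normal + one texcoord set (assumed)
bus_verts_per_sec = agp4x_bytes_per_sec / vertex_bytes     # ~33M verts/s

gpu_clock_hz = 1e9
verts_per_clock = 1               # assume the core could set up one vertex per clock
gpu_verts_per_sec = gpu_clock_hz * verts_per_clock         # 1000M verts/s

print("Bus can feed ~%.0fM verts/s; a 1GHz core could chew through ~%.0fM verts/s"
      % (bus_verts_per_sec / 1e6, gpu_verts_per_sec / 1e6))

On-board vertex buffers and caching would close some of that gap, but the core would still spend most of its time waiting.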

Another problem is cooling. I don't think anyone would like a 60mm Delta fan whining away on their gfx card as well as their processor.

In about two to three years' time, 1GHz GPUs will be coming out and we will have 5GHz+ processors, hopefully with enough memory and system bandwidth to cope.

:)
 
One reason is the type of design. CPUs go through a design process called "full custom". This means that the engineers who design them spend years tweaking the individual transistors and capacitors. A massive amount of time is spent simulating different performance issues with the processor. For example, Intel probably knew to within a couple of frames per second how the P4 would perform in Q3 before it was even sent to the fab.

GPU designers don't have enough time to tweak all the parts of the chip because of the market they are in. They just borrow cell libraries from the process designers and don't go into as much depth deciding how everything will work together. As a result, the chip design is not as efficient as a full custom design, but it allows a faster time to market. At least, this is how I understand it.
 
Doomtrooper said:
There has been a lot of effort put into low-power transistors by Intel and AMD.

A good read here...

http://chip-architect.com/news/2000_11_07_process_130_nm.html

IBM fairly recently announced the fastest ever transistor, with switching speeds of up to 210GHz, using a heterojunction bipolar transistor (HBT) built on silicon germanium instead of gallium arsenide, so keeping the cost down; the HBT also allows for electron flow not only horizontally but also vertically. When these technologies, along with the low power consumption, hit the mainstream then things will be very different indeed, methinks... I'll be dead for one thing, or worse still... too old to use a computer :cry:

I just want to see some of the wetware research hit the home market. Imagine, if you wanted some more memory in your PC you'd just release some juicy hormones and grow more.
 
Fuz said:
Thanks ppl.

Pascal,
you say that GPU clock speeds are limited by the AGP spec; how so? Do you mean the power available through AGP isn't enough? If that's the case, why not just add an external power source, using one of the many fan headers on motherboards?

I wouldn't say GPUs are limited by the AGP spec, but more power certainly wouldn't go amiss. R300 rev. 2 draws about 40-45W, of which ~20-25W is the GPU and the rest is mostly the memory subsystem; ~1W per module and then ~0.5W for each I/O branch. During development, Intel boards were universally used as they could be relied upon to provide the full complement of 48W across three rails. Other boards apparently aren't quite as dependable in that respect. The initial revision of R300 had to use an external power supply, and it looks like the AIW version of R300 may need one despite the GPU now being more frugal.

I guess ATi are really pushing the limits of 0.15u, but still, as chips grow in complexity I'm sure we'll see the 50W limit approached by future products, even on a smaller process. AGP 3.0, the specification with which R300 should comply, still states a total maximum power consumption figure of ~48W across the port.
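
Summing those figures up in a quick Python sketch (the module count and the few watts for regulators/fan are my assumptions; the per-part figures are the ones quoted above):

gpu_w       = 22.5                # midpoint of the ~20-25W quoted for the GPU
mem_modules = 8                   # assumed: 8 x 32-bit chips on a 256-bit board
mem_w       = mem_modules * 1.0   # ~1W per module
io_w        = mem_modules * 0.5   # ~0.5W per I/O branch, one assumed per module
misc_w      = 5.0                 # guess for regulators, fan, losses

board_w  = gpu_w + mem_w + io_w + misc_w
agp3_cap = 48.0                   # ~48W port limit quoted for AGP 3.0

print("Estimated board draw: ~%.0fW of a ~%.0fW port budget (%.0f%% used)"
      % (board_w, agp3_cap, 100 * board_w / agp3_cap))

That lands around 40W, which squares with the 40-45W figure and shows how little headroom is left.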

MuFu.

P.S. That's another thing I don't get: AGP 8x is part of AGP 3.0, but that's only in draft form at the moment. What do you think will happen with regard to R300?
 
Fuz said:
you say that GPU clock speeds are limited by the AGP spec; how so? Do you mean the power available through AGP isn't enough? If that's the case, why not just add an external power source, using one of the many fan headers on motherboards?

Yes, today's high-end GPUs are mainly power limited. NV25 or R200 boards already use all three voltages (5V, 3.3V and 12V) to power the board components, which makes the boards expensive (cheap boards with low-end chips normally only use the 3.3V lines), as you need a lot of voltage regulators and parts like capacitors. Enthusiast versions often use the maximum current specified in the AGP specification.

IMO, an 'external' power supply like the Voodoo 5 5500 had is a very nice idea for enthusiast cards. It is less error-prone than maxing out the AGP spec, as you never know how well a motherboard copes with running at the limit. It may even be cheaper to build a board which is powered only from the single voltage on the 4-pin plug instead of using three different AGP voltages.
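
Roughly how the three rails stack up; the per-rail current limits below are illustrative guesses, not the exact numbers from the AGP spec:

# Each rail contributes volts * amps; the amp limits here are placeholders.
rails = {
    "3.3V": (3.3, 6.0),
    "5V":   (5.0, 2.0),
    "12V":  (12.0, 1.0),
}

total_w = 0.0
for name, (volts, amps) in rails.items():
    watts = volts * amps
    total_w += watts
    print("%5s rail: up to ~%.1fW" % (name, watts))

print("Total from the slot: ~%.0fW" % total_w)     # lands in the 40-48W ballpark
print("A 12V 4-pin plug at, say, 5A adds another ~%.0fW on its own" % (12.0 * 5.0))

With a single plug you also only need one set of regulators on the board, which is presumably where the cost saving comes from.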
 
"In about 2 to 3 years time 1GHz GPU's will be coming out and we will have 5GHz+ processors probably with hopefully enough memory and system bandwidth to cope."

Isn't that why we innovate? ;)
 
elimc said:
One reason is the type of design. CPUs go through a design process called "full custom". This means that the engineers who design them spend years tweaking the individual transistors and capacitors. A massive amount of time is spent simulating different performance issues with the processor. For example, Intel probably knew to within a couple of frames per second how the P4 would perform in Q3 before it was even sent to the fab.

GPU designers don't have enough time to tweak all the parts of the chip because of the market they are in. They just borrow cell libraries from the process designers and don't go into as much depth deciding how everything will work together. As a result, the chip design is not as efficient as a full custom design, but it allows a faster time to market. At least, this is how I understand it.

I'm guessing that GPUs aren't 100% standard-cell designs. GPUs are datapath-intensive, with many identical functional units (adders, subtractors, multipliers, etc.). These circuit blocks are 'relatively standard' in that maybe a small number of unique components spans >70-80% of the used instances in the GPU's datapath. Even just a 'semi-custom' relayout of those highly repetitive blocks could result in huge area/power savings (which could instead be put toward upping the clock frequency).

Not saying this is what they do (because I don't know any engineers who work there), but I wouldn't be surprised.
 
IBM fairly recently announced the fastest ever transistor, with switching speeds of up to 210GHz, using a heterojunction bipolar transistor (HBT) built on silicon germanium instead of gallium arsenide, so keeping the cost down; the HBT also allows for electron flow not only horizontally but also vertically. When these technologies, along with the low power consumption, hit the mainstream then things will be very different indeed, methinks... I'll be dead for one thing, or worse still... too old to use a computer

I don't remember exactly, but a few months ago AMD announced a 15nm transistor with a switching speed of 3.33THz :eek:. And Intel also announced one at around 1.5THz. The 15nm technology should be in use around 2009-2010.

P.S. The AMD transistor was a CMOS unit. ;)
 
CPUs, being general in nature, need the highest clock speed. To achieve this, they aim for a minimum transistor count. This allows CPUs to run cool, hence fast clock speeds.

GPUs, being specific in nature, need the most transistors. They push the transistor count as high as possible and aim for the best clock speed after that. This gives support for many specific features, but a very dense and hot chip, and hence a relatively low clock speed.

In a nutshell, GPUs run slow because they tend to push the transistor limit of their fab process. I bet you could have a GeForce4 Ti running at 1GHz if it only had 630 transistors. :D
 