Have graphics cards really beaten Moore's Law?

jasonjlee

Newcomer
Just a quick question...

I keep reading statements about graphics cards 'beating Moore's Law', which I think everyone takes to mean doubling performance. (I seem to remember a cost element as well, but I'll leave that to one side for the moment.)

Now, is this actually true IF YOU DON'T UPGRADE THE WHOLE MACHINE?

Did the GF1, 2, 3 and 4 really double in speed each generation with the same processor and memory?

If not (as I suspect), then it seems a bit high and mighty to proclaim how much faster graphics cards are beating Moore's Law than processors are, when it's those processors that are giving them a big hand...

Don't get me wrong, I know graphics cards don't exist in some timeless space where they don't use CPUs, but it seems a little unfair.

J
 
huh

I'm most likely wrong, but doesn't that law pertain to clock speeds? Like if a chip was running at 50 MHz, a year later it will be running at 100 MHz. If it really means doubling performance, I doubt that has happened in a very long time. I mean, really, in the last year to 18 months have CPUs doubled in performance? How about GPUs, have they really doubled in performance? I doubt CPUs have, but maybe GPUs, and I guess it depends on how you test them.
 
Actually, Moore's Law predicts the number of transistors you can cram onto a certain area of silicon.

Speed is a byproduct as features get smaller.
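
For illustration, a minimal sketch of the transistor-count version of the law, assuming a roughly 24-month doubling period (18 months is the other figure people commonly quote); the starting count here is made up for the example:

Code:
# Rough Moore's Law projection: transistor count doubles every
# `doubling_months` months. Starting count and period are
# illustrative assumptions, not figures for any particular chip.

def projected_transistors(start_count, months_elapsed, doubling_months=24):
    """Project a transistor count forward in time."""
    return start_count * 2 ** (months_elapsed / doubling_months)

# Example: a hypothetical 25-million-transistor chip, four years on.
print(projected_transistors(25e6, 48))   # -> 100,000,000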

Cheers
Gubbi
 
GPUs now are limited by Moore's Law.

In the beginning (1996) graphics chips were very simple/small compared to CPUs; that is why they grew faster than Moore's Law for some time.
 
There's lots of incarnations of Moore's Law, and they can't necessarily all be attributed to him, of course... but meaning follows from use. :/
 
Still, GPUs have 2-3x the transistor count of today's consumer CPUs, and much of the CPU's transistor budget is dedicated to cache, not logic. The GPUs are scaling by adding more and deeper pipelines and more execution units, but the CPUs really aren't. The amount of FPU power in SSE2 or the Athlon's 3DNow! is pretty pathetic compared to the DX9 GPUs, or even the PS2's EE.

There is a difference between CPUs and GPUs, and it's the parallelism factor. Perhaps there needs to be a new law, "The GPU Law", that measures the transistor or FP vector performance gap between the two. Also, the amount of bandwidth attached to a GPU has scaled way faster.
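
As a rough way to picture that FP vector gap, here is a sketch of peak throughput as clock x units x ops per unit per clock; every number in it is a hypothetical stand-in, not the spec of any real CPU or GPU:

Code:
# Peak FP throughput = clock x units x ops-per-unit-per-clock.
# All figures below are illustrative assumptions, not real specs.

def peak_flops(clock_hz, units, ops_per_unit_per_clock):
    return clock_hz * units * ops_per_unit_per_clock

# Hypothetical CPU: 2 GHz, one SSE-style unit, 4 single-precision ops/clock.
cpu = peak_flops(2.0e9, 1, 4)        # 8.0e9  -> ~8 GFLOPS peak

# Hypothetical GPU: 300 MHz, 12 vector units, 4-wide MAD = 8 ops/clock each.
gpu = peak_flops(300e6, 12, 8)       # 28.8e9 -> ~29 GFLOPS peak

print(f"CPU ~{cpu / 1e9:.0f} GFLOPS, GPU ~{gpu / 1e9:.0f} GFLOPS")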
 
DemoCoder said:
Also, the amount of bandwidth attached to a GPU has scaled way faster.

Hmm. P4s have had 4.3 GB/s nominal for some months now, and the R9700 will bring GPUs to roughly 20 GB/s nominal.
Going back to the Voodoo 1 vs. the Pentium 166s of the time, the ratios haven't changed all that much. Quite surprising, actually, given the 8 parallel pipes of the R9700 vs. the single one on the Voodoo 1.
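
For what it's worth, a quick sketch of where those nominal figures come from (bandwidth = bus width x effective transfer rate); the bus configurations below are my own assumptions, chosen to roughly reproduce the numbers quoted above:

Code:
# Nominal bandwidth = bus width (bytes) x effective transfer rate.
# The bus configurations are illustrative assumptions that roughly
# reproduce the nominal figures quoted above.

def nominal_bw_gb_s(bus_bits, mega_transfers_per_s):
    return (bus_bits / 8) * mega_transfers_per_s / 1000.0

p4    = nominal_bw_gb_s(64, 533)     # 64-bit FSB, 533 MT/s   -> ~4.3 GB/s
r9700 = nominal_bw_gb_s(256, 620)    # 256-bit DDR, 620 MT/s  -> ~19.8 GB/s

print(f"P4 ~{p4:.1f} GB/s, R9700 ~{r9700:.1f} GB/s, ratio ~{r9700 / p4:.1f}x")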

All with you on the GPU=parallel vs CPU=sequential/dependent/branching, of course.

Entropy
 
Not to be a wet blanket, but why is this referred to as a "law"? This isn't a scientifically proven law or even a theory. At best it could be considered a hypothesis... but I'm not sure even about that, since it's obvious that it could be proven false at any given time.
 
From the article above:
The press called it "Moore's Law" and the name has stuck

Moore's Law is about transistors, not architecture.

The GPUs will generally be limited by what they can get within ~20 watts (the AGP limit). CPUs have more freedom, and desktop CPUs go as high as 70-90 watts. The product of transistors x frequency x process is already beyond that limit with the R300, so it needs extra power.
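
To make the power point concrete, a rough sketch using the usual dynamic power relation, P ~ C x V^2 x f; every value below is an illustrative assumption, not a measured figure for the R300 or any other chip:

Code:
# Dynamic power scales roughly as P = C * V^2 * f (switched capacitance
# x voltage squared x clock). All values are illustrative assumptions.

def dynamic_power_watts(switched_capacitance_f, voltage_v, freq_hz):
    return switched_capacitance_f * voltage_v ** 2 * freq_hz

# Hypothetical GPU: more transistors switching raises C, so staying
# inside a ~20 W slot budget means holding back clock and/or voltage.
print(dynamic_power_watts(40e-9, 1.5, 300e6))   # ~27 W -> needs extra power
print(dynamic_power_watts(40e-9, 1.5, 200e6))   # ~18 W -> fits the AGP budget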

I don't expect to see a 200 million transistor 0.13 micron GPU next year.
 
The reason it's "beating" Moore's Law is because graphics is highly parallel and you can throw more pipelines at it and get more speed. You can't do that with general purpose CPUs.
 
fresh said:
The reason it's "beating" Moore's Law is because graphics is highly parallel and you can throw more pipelines at it and get more speed. You can't do that with general purpose CPUs.

Perhaps in relation to CPU performance, but 3D chips themselves are still held back by Moore's Law. In fact, due to the concurrency of 3D architectures and the fact that they rely on parallel processing, aren't they a perfect example of the law? Over the past few years, 3D architectures have been limited in both programmability and performance by the number of transistors that can be packed into a die; unlike CPUs, they [3D micro-architectures] are constantly pushing lithography techniques to the bleeding edge to extract that last bit of performance through parallelisation.
 
Vince said:
fresh said:
The reason it's "beating" Moore's Law is because graphics is highly parallel and you can throw more pipelines at it and get more speed. You can't do that with general purpose CPUs.

Perhaps in relation to CPU performance, but 3D chips themselves are still held back by Moore's Law. In fact, due to the concurrency of 3D architectures and the fact that they rely on parallel processing, aren't they a perfect example of the law? Over the past few years, 3D architectures have been limited in both programmability and performance by the number of transistors that can be packed into a die; unlike CPUs, they [3D micro-architectures] are constantly pushing lithography techniques to the bleeding edge to extract that last bit of performance through parallelisation.

The thing is, it's very easy to predict the graphics pipeline and build a chip which is hardcoded to do certain things, because EVERY vertex and pixel goes through the same pipe (transformed, lit, gradients calculated, rasterized, Gouraud interpolated, etc.). It's all very predictable and you can build a deeeeeeep pipeline. That's why state changes are so deadly on the GPU: they force a flush of the pipeline. A general purpose CPU running a program can't make these assumptions. Every program is different and does different operations. Branches are evil, random memory accesses are evil, compares are evil, etc., but the CPU has to deal with all that. Think of the ultimate parallelisation: one GPU per pixel on the screen. You can do that with graphics; you can't do that with general purpose CPUs. A dual-CPU system does NOT give you double the performance in 99% of cases (trust me, I have one!), but a dual pixel pipeline GPU will certainly give you close to double the fillrate.
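
A quick Amdahl's Law sketch says the same thing; the parallel fractions here are illustrative guesses, not measurements of any real workload:

Code:
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that can run in parallel and n is the number of units.
# The parallel fractions below are illustrative assumptions.

def speedup(parallel_fraction, units):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / units)

# A typical program with, say, 60% parallelisable work on 2 CPUs:
print(speedup(0.60, 2))    # ~1.43x, nowhere near 2x

# Pixel fill, where essentially every pixel is independent, on 2 pipes:
print(speedup(0.99, 2))    # ~1.98x, close to double the fillrate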
 