I was a little bored tonight, and since I had been wondering for a while how much GPU performance increases from one generation to the next, this was the moment to dig up some numbers.
Here are the criteria that I used:
- use data on anandtech.com, since they have existed for a long time and results are fairly easy to search.
- find the introduction article for a particular GPU
- take a benchmark at a particular resolution that was typical for that time.
- take the benchmark number of the highest-performing newcomer and divide it by that of the highest-performing card of the previous generation (see the quick sketch after this list)
- Sometimes arbitrary choices had to be made. When there were results for both 16-bit mode (remember those?) and 32-bit mode, I chose 16-bit. For later models, I chose higher quality settings (e.g. 4x AA/AF).
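To make those last two criteria concrete, here is a minimal Python sketch; the card names and frame rates are made-up placeholders, not numbers from any review.

```python
# Hypothetical example: the per-generation speedup is just the ratio of the
# fastest newcomer's benchmark result to that of the fastest card of the
# previous generation.
new_gen_fps = {"NewCard": 95.0, "NewCard Ultra": 110.0}   # placeholder frame rates
old_gen_fps = {"OldCard": 60.0, "OldCard Pro": 70.0}      # placeholder frame rates

speedup = max(new_gen_fps.values()) / max(old_gen_fps.values())
print(f"Speedup over previous generation: {speedup:.2f}x")  # 1.57x
```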
I started with Nvidia since they have had the most consistent roadmap over the years. The benchmarks used are mostly based on an id Software engine.
Here are the numbers:
TNT / 128: 2.25 (Forsaken 800x600)
TNT2 / TNT: 1.53 (Quake 2 800x600)
Gf 256 / TNT2 Ultra: 1.52 (Quake 3 1024x768)
Gf 2 GTS / Gf 256 DDR: 1.75 (Quake 3 1280x1024)
Gf 3 / Gf2 Ultra: 1.25 (Quake 3 1600x1200)
Gf 4 Ti 4600 / Gf 3 Ti 500: 1.30 (Quake 3 1600x1200)
Gf FX 5800 / Gf 4 Ti 4600: 1.47 (Quake 3 1600x1200)
Gf 6800 Ultra Extreme / Gf FX 5950: 1.57 (Wolfenstein 1600x1200)
Gf 7800 GTX / Gf 6800 Ultra: 1.18 (Wolfenstein 1600x1200)
Gf 7900 GTX / Gf 7800 GTX512: 1.04 (Quake 4)
* The Gf FX 5800 was so late that its predecessor was completely obsolete. Anandtech didn't bother to compare them, so I used numbers from digit-life.com. I should have compared with the Ti 4800 because I believe that one was faster, but I couldn't find numbers.
* I added the 7900 because it was the last one released...
Taking the geometric mean of all the numbers, not including the 7900, the average speed increase between generations is x1.50.
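For anyone who wants to check that figure, here is a quick Python snippet that reproduces the geometric mean from the ratios listed above (7900 excluded):

```python
import math

# Generation-to-generation speedups from the list above, TNT through 7800 GTX;
# the 7900 GTX refresh is left out, as noted.
ratios = [2.25, 1.53, 1.52, 1.75, 1.25, 1.30, 1.47, 1.57, 1.18]

geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"Average speedup per generation: {geo_mean:.2f}x")  # ~1.51x, roughly the x1.50 quoted above
```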
It will be interesting to see what's going to happen with the G80.
Next up... ATI.