AMD GPU gets better with 'age'

Not sure if it is a good thing... I've been noticing that if I look at the latest benchmark graphs for new and older games, I see the 290X beating/matching the 780Ti, the 7970GE beating/matching the GTX680/770, the 7870 beating/matching the 660Ti... rinse and repeat...

The question of the day: why should owners of AMD GPUs have to 'wait' to enjoy class-leading performance?

Why does AMD not work harder to make their GPUs the performance kings at launch?

I have a feeling that, as awesome as the 390X will be, it will lag behind the 980Ti at launch... and the reports will cheer another Nvidia victory.
 

I think a lot of it comes down to designing GPUs more towards future workloads than towards the current competition.

At the GTX680's launch, everyone agreed that its design was geared more towards the games of the day than the R7970, which was a teraflop monster with good compute capabilities.
 
- The 7970 GHz Edition was already equal to or even faster than the 680 on average. (Remember that the 7970 had been released some months before the 680, and as it was one of the first 28nm chips, clocks were really conservative at 925MHz core; the 7970 GHz was released at 1050MHz to fix that.)

- The 290X was not really that far behind the 780Ti, especially at higher resolutions, and it really depended on the game.

- The games used in reviews are not necessarily the same as when the GPUs were released. Reviews were really mixed too: depending on the panel of games, you had one card in front of the other, and vice versa. Reviewers now use 2560x1440 as the baseline more often instead of 1080p.

- Driver optimizations over time count for it too, especially in some specific games.
 
I may be wearing red-tinted glasses (pun intended), but I recall AMD / ATI having this "challenge" for many generations. I specifically recall the x1800/x1900 series having a pretty significant shelf life, and I thought the 5800 series had the same history too.

Even though I tend to keep video cards longer now, I'm not sure how much I specifically care about the AMD part being "faster longer".
 
Yes, the X1900XT played pretty much all 7th-gen console ports with great performance through 2010 and beyond.
I remember seeing the X1900XT get almost twice the performance of the 7900GTX in later DX9 games like Bioshock 2.
 
Seems like another symptom of the AMD GPU driver having higher CPU overhead than NVidia's in general. Later benchmarks tend to use the latest, and thus faster, CPUs.

This also implies that benchmarks using only the fastest CPUs are arguably irrelevant for most gamers with mainstream CPUs.
 
Seems like another symptom of the AMD GPU driver having higher CPU overhead than NVidia's in general. Later benchmarks tend to use the latest, and thus faster, CPUs.

No, I'm pretty sure the performance disparity didn't have the symptoms of CPU-limited results.
We're talking about something like 20FPS on the 7900GTX vs. 40FPS on the X1900XT.
 
Sorry, but I posted in the context of the topic at hand, which is AMD GPUs, not ATI :) More specifically, GCN and later.
 
I've heard this blip about "AMD / ATI drivers have more CPU overhead" bandied about the forums before, but I've never seen conclusive proof of it. Examples of online reviews that looked in this general direction:

http://www.tomshardware.com/reviews/crossfire-sli-scaling-bottleneck,3471.html
http://www.guru3d.com/articles-pages/radeon-hd-7970-cpu-scaling-performance-review,1.html
http://lab501.ro/placi-video/gigabyte-radeon-hd-7970-oc-partea-iv-scalare-cu-procesorul <-- bring a translator

I could drag in about four dozen articles that cover later generations of AMD and NV cards, but the general idea is that CPU scaling certainly exists, though it is not necessarily causally linked to drivers. It is certain that a slower CPU delivers slower gaming benchmarks in certain contexts, but when AMD cards are compared to "equal" GeForce cards on equal CPUs, the scaling of each GPU is (within margins of error that effectively null out) equal across both vendors.
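
For what it's worth, the check I'm describing is simple enough to script. Here's a rough sketch in Python -- the card names and FPS numbers below are made up for illustration, not pulled from any of the reviews above: compute each card's speedup going from a slow CPU to a fast CPU in the same game, then compare the ratios across vendors.

Code:
# Hypothetical sketch: compare how two GPUs scale when moving from a slow CPU
# to a fast CPU in the same game. The FPS numbers are placeholders.
results = {
    "HD 7970": {"slow_cpu_fps": 52.0, "fast_cpu_fps": 61.0},
    "GTX 680": {"slow_cpu_fps": 54.0, "fast_cpu_fps": 63.0},
}

for gpu, fps in results.items():
    scaling = fps["fast_cpu_fps"] / fps["slow_cpu_fps"]
    print(f"{gpu}: {scaling:.2f}x faster on the fast CPU")

# If one vendor's driver had much higher CPU overhead, its card should gain
# noticeably more from the CPU upgrade than the other vendor's card does.

If the ratios land on top of each other, the driver-overhead story doesn't hold for that game; if one vendor consistently scales harder with the CPU, that would be the smoking gun.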

I admit there may be singular games that expose higher driver CPU usage, but I'm not convinced that's a global driver "issue". And it goes both ways -- NVIDIA provided a higher-threaded optimization to their driver for a recent game (was it Civ? Or was it Star Swarm?) to deliver better performance, but the caveat was more CPU usage. Does the tradeoff truly matter if you have CPU to spare? I'd prefer the additional frames, to be honest.

Back to AMD's being "faster over time"
I wondered if GCN continued this heritage of staying faster for longer. To start, we needed to find a point where NV was roughly at parity with GCN, and that's obviously when the 680s came out back in March 2012. Actually, that's a bit of a lie, as you'll see later: the 680 was manhandling the original 7970, and things didn't really even out until the 7970 GHz Edition. I'm not arsed enough to keep plodding through it, so we'll just compare the GTX 680 to the first-edition 7970 and be done.

To find more recent benchmarks, I went thumbing through GTX 980 reviews that kept older card scores in place. I looked for reviewers who had obviously re-tested the cards at a later date, which I verified by proxy by comparing their review of the 680 from March 2012 against their review of the 980 from September 2014.

Old review: http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/8
New review: http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/10

This isn't going to be an easy comparison, because those two reviews have only a single game in common: Crysis Warhead. For that single game, 7970 performance tracks quite closely with the GTX 680. Back in 2012, there was a 10% variance between the two -- three years later, it's still around a 10% variance (albeit both got faster during the interim).

To make a "general" case for each, I tallied up how many times each card came out ahead in a single game benchmark in each review -- I did not count compute or synthetic benches. Also I counted only the "1920" resolution scores (the earlier review is 1920x1200, the later review is 1920x1080.) And I only counted the average framerate benches, not the minimums. If both cards scored within ~3%, I tallied it as a Tie. This is psuedo-scientific at best, but what else is there to do?

Code:
Year    680 Wins    Tie     7970 Wins
2012    7           1        2
2014    3           3        3

Well, it depends on how you want to count it. If you eliminate the "tie" buffer, the 7970 looks worse in 2012 and better in 2014. Somehow, whether based on more CPU power, better driver optimization, or else just pure luck, the 7970 appears to be doing "better" against the GTX 680 as it finds newer games to chew through.

Again, it's pseudo-science, but I'm not sure how else to quantify it given the data at hand. Anyone else care to refute it using different data?
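
If anyone does want to redo the tally against a different pair of reviews, the counting rule is trivial to script. A rough sketch follows; the game names and FPS pairs are placeholders, not the actual AnandTech numbers, and only the ~3% tie margin and the "average FPS at 1920" rule come from the method described above.

Code:
# Tally per-game wins and ties using the ~3% tie rule described above.
# The game names and FPS pairs are placeholders; substitute real review data.
games = {
    "Game A": (57.0, 52.0),  # (GTX 680 avg FPS, HD 7970 avg FPS) at 1920
    "Game B": (48.0, 49.0),
    "Game C": (61.0, 70.0),
}

TIE_MARGIN = 0.03  # results within ~3% of each other count as a tie

wins_680 = wins_7970 = ties = 0
for fps_680, fps_7970 in games.values():
    if abs(fps_680 - fps_7970) <= TIE_MARGIN * max(fps_680, fps_7970):
        ties += 1
    elif fps_680 > fps_7970:
        wins_680 += 1
    else:
        wins_7970 += 1

print(f"680 wins: {wins_680}  Ties: {ties}  7970 wins: {wins_7970}")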
 
Could it not be argued in reverse: why does NV stop optimising old architectures so soon?

One of the things that most put me off NV is that they were horribly quick to discard old generations for driver/firmware updates when I had their stuff.
 
Isn't AMD the same? They dropped support for the desktop HD 4xxx series faster than for the notebook parts (those were dropped a few months later).
 
One of the things that most put me off NV is that they were horribly quick to discard old generations for driver/firmware updates when I had their stuff.
NVidia stopped supporting G80 only recently, while AMD dropped support for its cards of that era much longer ago.
 
The 7970 was AMD's first scalar architecture, which implied a significant amount of driver rewriting as well as a lot of new insight to be gained.

Cheers
 
[Attached image: GTA V benchmark chart]


GCN beasting it in GTAV.

Titan X though is pretty beasty itself! Impressive job on the 20nm.....
The price/perf of 980 is weeeeak, that part should have been the 960Ti...
 
GCN beasting it in GTAV.

Titan X though is pretty beasty itself! Impressive job on the 20nm.....
The price/perf of 980 is weeeeak, that part should have been the 960Ti...

You mean 28nm, I think...

That said, I see a lot of contradictory benchmarks for this title with the 290 (way lower in some other reviews), but I will surely trust Guru3D more than some obscure Russian gaming site.
 
Wonder how much the 15.4 beta driver helped? I haven't installed it yet myself, but with a 7970 OC part I'm probably getting roughly equivalent numbers to what is posted above.
 
Wonder how much the 15.4 beta driver helped? I haven't installed it yet myself, but with a 7970 OC part I'm probably getting roughly equivalent numbers to what is posted above.

Downloaded, but not installed yet; I have some renders to run, and the last driver is extremely stable and fast with the OpenCL path (Blender, LuxCore API). I will test it tomorrow.
 
I might wait for you to test, then :D Or at least find a benchmark site somewhere that thinks critically about it and tests. I'm very happy with the "Omega" release, and unless the performance change is significant, I may not be convinced to move.
 
GCN beasting it in GTAV.

Titan X though is pretty beasty itself! Impressive job on the 20nm.....
The price/perf of 980 is weeeeak, that part should have been the 960Ti...

I guess the engine really likes GCN. A couple of thoughts:

1. Basically, you need at least a GTX 980 or R9 290X to get a comfortable 60 FPS, and even then probably not a steady 60 FPS, so you'd either need Free/G-Sync or a faster card for smooth gameplay. And that's with only 2x MSAA, albeit at a pretty decent 2560×1440 resolution.

2. I wonder if there's any link between the presence of GCN in console hardware and its performance in this console port.
 