I've heard this claim that "AMD / ATI drivers have more CPU overhead" bandied about the forums before, but I've never seen conclusive proof of it. Examples of online reviews that have looked in this general direction:
http://www.tomshardware.com/reviews/crossfire-sli-scaling-bottleneck,3471.html
http://www.guru3d.com/articles-pages/radeon-hd-7970-cpu-scaling-performance-review,1.html
http://lab501.ro/placi-video/gigabyte-radeon-hd-7970-oc-partea-iv-scalare-cu-procesorul <-- bring a translator
I could drag in about four dozen articles that cover later generations of AMD and NV cards, but the general idea is that CPU scaling certainly exists, yet it is not necessarily causally linked to drivers. A slower CPU certainly delivers slower gaming benchmarks in certain contexts, but when AMD cards are compared to "equal" GeForce cards on equal CPUs, the scaling of each GPU is (within margins of error that effectively cancel out) equal across both vendors.
I admit there may be individual games that expose higher driver CPU usage, but I'm not convinced that's a global driver "issue". And it goes both ways -- NVIDIA provided a higher-threaded optimization in their driver for a recent game (was it Civ? Or was it Star Swarm?) to deliver better performance, but the caveat was more CPU usage. Does the tradeoff truly matter if you have CPU to spare? I'd prefer the additional frames, to be honest.
Back to AMD cards being "faster over time"
I wondered if GCN continued this heritage of staying faster, longer. To start, we need to find a point where NV was roughly at parity with GCN, and that's obviously when the 680s came out back in March 2012. Actually that's a bit of a lie, as you'll see later: the 680 was manhandling the original 7970, and things didn't really even out until the 7970 GHz Edition. I'm not arsed enough to keep plodding through it, so we'll just compare the GTX 680 to the first-edition 7970 and be done.
To find more recent benchmarks, I went thumbing through GTX 980 reviews that kept older card scores in place. I looked for reviewers who had obviously re-tested the older cards at a later date, which can be inferred by comparing their review of the 680 from March 2012 against their review of the 980 from September 2014.
Old review:
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/8
New review:
http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/10
This isn't going to be an easy comparison, because those two reviews only have a single game in common: Crysis Warhead. For that single game, 7970 performance tracks quite closely with the GTX 680's. Back in 2012 there was about a 10% variance between the two; in the 2014 review it's still around a 10% variance (albeit both cards got faster in the interim).
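Just to pin down what I mean by "variance" here, a quick sketch -- the fps numbers below are made up purely for illustration, not pulled from either review:
Code:
# Relative gap between the two cards' average fps in the same benchmark.
def relative_gap(fps_a, fps_b):
    return (fps_a - fps_b) / fps_b * 100.0

# Hypothetical Crysis Warhead averages (GTX 680 vs. 7970) per review era:
print(relative_gap(55.0, 50.0))  # ~10% gap with the 2012-era numbers
print(relative_gap(66.0, 60.0))  # still ~10% gap with the 2014-era numbers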
To make a "general" case for each, I tallied up how many times each card came out ahead in a single game benchmark in each review -- I did not count compute or synthetic benches. Also I counted only the "1920" resolution scores (the earlier review is 1920x1200, the later review is 1920x1080.) And I only counted the average framerate benches, not the minimums. If both cards scored within ~3%, I tallied it as a Tie. This is psuedo-scientific at best, but what else is there to do?
Code:
Year   680 Wins   Tie   7970 Wins
2012       7       1        2
2014       3       3        3
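For what it's worth, here's roughly how that tally works, written out as a sketch. The per-game fps pairs would come from the review charts; nothing below hard-codes the actual review data.
Code:
# Rough sketch of the tally logic: one (680 avg fps, 7970 avg fps) pair per
# game at 1920, with a ~3% window counting as a tie. The input pairs are
# placeholders to be filled from the review charts, not numbers I'm quoting.
def tally(results, tie_pct=3.0):
    wins_680 = ties = wins_7970 = 0
    for fps_680, fps_7970 in results:
        gap = abs(fps_680 - fps_7970) / min(fps_680, fps_7970) * 100.0
        if gap <= tie_pct:
            ties += 1
        elif fps_680 > fps_7970:
            wins_680 += 1
        else:
            wins_7970 += 1
    return wins_680, ties, wins_7970

# e.g. tally(games_2012) should give (7, 1, 2) with the 2012 review's games;
# tally(games_2012, tie_pct=0) is the "eliminate the tie buffer" case below.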
Well, it depends on how you want to count it. If you eliminate the "tie" buffer, the 7970 looks worse in 2012 and better in 2014. Somehow, whether due to more CPU power, better driver optimization, or just pure luck, the 7970 appears to be doing "better" against the GTX 680 as it finds newer games to chew through.
Again, it's pseudo-science, but I'm not sure how else to quantify it given the data at hand. Anyone else care to refute it with different data?