AMD GPU gets better with 'age'

I'm not sure that AMD can get GCN's perf-per-watt ratio in games close to Maxwell 2 levels, without sacrificing (even more) FP64 performance or general compute flexibility.
And I'm not sure AMD wants to sacrifice FP64 performance or flexibility at all.
Perhaps AMD should convince an AAA game developer to use lots of FP64 calculations for some kind of useless simulation, just to put the 980 Ti helplessly on par with a Pitcairn 270X.
Like that super-tessellated concrete slab and invisible super-tessellated ocean in Crysis 2, or the unavoidable PhysX computations in Project Cars (which, in its CPU-accelerated version, probably still uses dirt-old, inefficient x87 instructions), or maybe the super-uselessly-tessellated hair in The Witcher 3.

That said, I still think power consumption under load has been terribly overrated for desktop cards. The extra value just isn't there for 99% of the people who play games frequently.
The cost of the chips (area, process, yields), PCBs and other components may matter more than drawing 190W instead of 230W while playing Call of Duty 8 hours/week.
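As a rough sanity check on that, here's the arithmetic as a minimal Python sketch; the $0.15/kWh electricity price is my own assumption, not a figure from this thread:

# Extra running cost of a 230 W card vs a 190 W card at 8 hours of gaming per week
extra_watts = 230 - 190                               # 40 W difference under load
hours_per_year = 8 * 52                               # 416 hours of gaming per year
kwh_per_year = extra_watts * hours_per_year / 1000    # ~16.6 kWh per year
price_per_kwh = 0.15                                  # assumed electricity price, USD/kWh
print(f"Extra cost: ~${kwh_per_year * price_per_kwh:.2f} per year")   # ~$2.50 per year

A couple of dollars a year, which is why the 40W delta barely registers next to the cost differences in the silicon and the board.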
 
Titan X has something like 200 GFLOPS of DP; that's lower than the HD 4870 from 2008.
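For context, a rough sketch of where those numbers come from; the unit counts, clocks and FP64 rates below are my own assumed figures, not taken from this thread:

# Theoretical FP64 throughput = shader units x 2 FLOPs (FMA) x clock x FP64 rate
def fp64_gflops(units, clock_ghz, fp64_rate):
    return units * 2 * clock_ghz * fp64_rate

print(fp64_gflops(3072, 1.075, 1/32))   # Titan X (Maxwell), 1/32 DP rate: ~206 GFLOPS
print(fp64_gflops(800, 0.750, 1/5))     # Radeon HD 4870, 1/5 DP rate:     ~240 GFLOPS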

AMD would be helped by more games like Hitman Absolution and reviewers like TechSpot.

http://www.techspot.com/articles-info/977/bench/Hitman.png

Voila, efficiency per shader and per watt is within striking distance of Maxwell; take a game like Project Cars out of the mix and job well done, comrades!

Project Cars is really a special case; with The Witcher, it was somewhat easy to identify the problem.

In Project Cars, I don't see a real reason why the AMD cards are at this level; we are talking about half the performance of Nvidia GPUs. And I don't remember the performance being this bad when I tested the game a year ago. Maybe they have completely rewritten the engine; I don't know.

I would be pretty happy to see a site like TR do a bit more evaluation of this game and try to identify exactly what is pushing the performance down like that.
 
TechReport apparently dropped DiRT Showdown when AMD were ruling the roost, according to an AT forums poster.

Anyway, OT: do AMD GPUs get better with age, or do AMD drivers?

AMD's 15.5

API overhead test

DX11 multi 874 308
DX11 Single 931 772
Mantle 9 777 904

Modded 15.15

DX11 multi 1 120 471
DX11 single 1 165 430
Mantle 9 808 061

http://forums.overclockers.co.uk/showpost.php?p=28195575&postcount=6122
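A quick sketch of the relative gains implied by those numbers, using only the figures quoted above:

# Draw calls per second in the API overhead test, 15.5 vs modded 15.15
results = {
    "DX11 multi":  (874_308, 1_120_471),
    "DX11 single": (931_772, 1_165_430),
    "Mantle":      (9_777_904, 9_808_061),
}
for api, (old, new) in results.items():
    print(f"{api}: {new / old - 1:+.1%}")
# DX11 multi: +28.2%, DX11 single: +25.1%, Mantle: +0.3%

So the modded driver mostly lifts the DX11 paths; Mantle was already close to its ceiling.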

Another comparison,

http://www.3dmark.com/aot/31276

http://www.3dmark.com/aot/32328

If this continues, expect a Hawaii-Kepler repeat with Maxwell.
 
I think the Hawaii-Kepler change was born out of the following 2 factors:

1. Games started making better use of the GCN architecture, thanks to titles from this period being built around the (GCN-based) consoles.
2. Early benchmarks between Hawaii and the 780/Ti were generally done at 1080p, while more recent benchmarks tend to be done at 1440p and 4K.

The first of those factors likely has a little life left in it but will be at least partly tapped out by now, while the second was a one-time boost that has already been used up.
 
To add to that, early reference cards had problems staying at a 1GHz core clock, and throttling lowered performance; nowadays all R9 cards come with decent coolers.
 
I'm not so sure the console factor is nearly tapped out. DX12 (and potentially Vulkan, if it's actually used) should allow developers to implement some of the low-level optimizations that they use on consoles but that were impossible or impractical in DX11.
 
Yes, both good points. I think the console connection still has some way to go yet, although I think it's unlikely we'll see another Kepler-Hawaii situation. It's also worth considering whether the console connection will still benefit AMD with the next generation of GPUs (Arctic Islands and Fiji), with AMD moving ever further away from the GCN 1.1 design and, hopefully, putting more emphasis on PC-exclusive features like FL12_1 and up.
 
DX11 multi 874 308
DX11 Single 931 772
Mantle 9 777 904

Modded 15.15

DX11 multi 1 120 471
DX11 single 1 165 430
Mantle 9 808 061

Are you going to tell us what the 3 sets of numbers represent, and why the first 2 results for the 15.5 drivers only have 2 sets of numbers?
 
Because three "sets" of digits means the draw-call number is in the millions, while two sets means it is in the hundreds of thousands (the spaces are thousands separators).

That's the big hope for DirectX 12: that it will improve many times over DX11, and even improve over Mantle.
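To put "many times over" into numbers, a one-liner using the 15.5 figures quoted earlier; the hope that DX12 lands in the same ballpark is speculation, not a measurement:

# Mantle vs DX11 multi-threaded draw-call throughput, from the figures above
mantle, dx11_multi = 9_777_904, 874_308
print(f"Mantle is ~{mantle / dx11_multi:.0f}x DX11 multi-threaded")   # ~11x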
 
TechPowerUp have made a few changes to their benchmarking (Windows 10, Skylake, newer games) and the standings have changed as well, with big gains for GCN 1.0 cards.

370 gained 18%, putting it on par with the GTX 760 & 950.
270X gained 17%, putting it on par with the GTX 960.
285 gained 8%, putting it slightly ahead of the 770.
280X gained 18% over the 770.
290 gained 5%, putting it on par with the GTX 970.
Fury X gained 7%, putting it about 5% away from the 980 Ti.
295X2 gained 6% over the Titan X.

http://hardforum.com/showthread.php?t=1880254

The biggest wins for AMD are the 370 matching the 950, the 270X on par with the 960, and the 280X being 20% faster than the 770, all of this at 1080p.
 
That sounds like a pretty good summary and confirmation of what most of us have suspected for a while. Looks like that console connection is paying big dividends.
 
In my experience, AMD's software (drivers) has always lagged their hardware.

When the 7970 launched, it had lacklustre performance compared to the Nvidia competition; as a result, I got a good deal on a factory-overclocked one (effectively a GHz Edition with a silent cooler). Five months later AMD officially released the GHz Edition to glorious reviews, taking the performance crown. The GHz Edition was over 50% more expensive than my 7970.

What really happened was five months of driver improvements and a moderate bump in operating frequency. I saw an increase in Skyrim fps of more than 30% in that period, and the elimination of stutter. I'm guessing going from VLIW4/5 to a scalar architecture was more challenging for the driver team than expected.

I also expect gains for the Fury cards over time; the driver needs to be clever about managing the 4GB of RAM.

Cleverness will be added.

Cheers
 
When the 7970 launched it had lacklustre performance compared to the Nvidia competition
When the Radeon HD 7970 launched, it had no competition for about three months, because Nvidia launched the GeForce GTX 680 a quarter later.
 
I also expect gains for the Fury cards over time; The driver needs to be clever about managing the 4GB RAM.
I wouldn't expect too much in the way of that. They'll want to release 8GB cards ASAP so they can go back to not having to spend additional engineering resources on cleverness. After all, they are busy going bankrupt, and the Fury series is very niche and, from all the info we have, not selling very well. :p
 
And how many times have said predictions failed to come true?
Recent comparisons of Tahiti vs. GK104, Hawaii vs. GK110 and Hawaii vs. GM204 are pretty much self-explanatory right now.


How so? Can you please explain? Because I see memory limitations and shader limitations changing based on applications' needs as having more of an effect.
 
It seems to me you answered your own question.
Whether 2GB for GK104 or 3GB for GK110 ended up cutting the cards' lives short, or something else contributed to the end result, the premise stands regardless.

But there's a whole 5-page thread talking about that, and I'm hoping for this thread to just wither and die, so feel free to present your points there:

https://forum.beyond3d.com/threads/amd-gpu-gets-better-with-age.56336/
 