AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

This has only been proven true for the Kepler vs. early GCN generation. The Fiji generation aged badly for AMD, so much so that the Fury X behaves like an RX 580/590 in a great deal of titles from the past two years.

It's true for Paxwell too, just not quite as bad as Kepler. I wouldn't be surprised if it happens again with Turing. Isn't Fury X the only AMD GPU that hasn't aged significantly better than its competition since the introduction of GCN?
 
No evidence of that whatsoever; Maxwell aged considerably better than Fiji. It's too early to tell for Pascal or Turing, but Pascal holds up fine in any recent comparison.
https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/28.html
 
Pascal has been losing ground for some time now. Go look at its relative performance compared to various AMD GPUs at launch vs. today; they have all regressed in their standing. BFV, RDR2, FH4, COD MW, and plenty of others run rather poorly on Pascal compared to AMD. Typically, the more impressive, high-end titles don't fare that well on Nvidia.
 
More blanket statements with no evidence.
A GTX 1080 was only 3~4% faster than a Vega 64 at launch; it's now just 1% faster or equal, with a different game selection.
https://www.techpowerup.com/review/amd-radeon-rx-vega-64/31.html
https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/28.html
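For anyone who wants to sanity-check how a "3~4% faster" launch gap becomes a "~1% faster" gap in TPU-style relative-performance summaries, here is a minimal sketch. The chart indices below are illustrative placeholders chosen to match the figures above, not numbers copied from the linked reviews:

```python
# Rough sketch: turn TPU-style relative-performance indices into
# "card A is X% faster than card B" figures. The index values below are
# illustrative assumptions consistent with the post above, NOT numbers
# copied from the linked reviews.

def percent_faster(a: float, b: float) -> float:
    """Percentage by which the card with index `a` leads the card with index `b`."""
    return (a / b - 1.0) * 100.0

# Hypothetical summary indices, GTX 1080 normalised to 100%.
launch = {"GTX 1080": 100.0, "Vega 64": 96.5}   # Vega 64 launch review
recent = {"GTX 1080": 100.0, "Vega 64": 99.0}   # recent RX 5700 XT review

print(f"{percent_faster(launch['GTX 1080'], launch['Vega 64']):.1f}%")  # ~3.6% at launch
print(f"{percent_faster(recent['GTX 1080'], recent['Vega 64']):.1f}%")  # ~1.0% today
```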
 

There's plenty of evidence. Even with TPU focusing on too many old games on similar engines, the data still shows an undeniable regression. It would be even larger if you focused only on recent titles, especially those that generally garner high praise for their visuals.
 
But we don't focus only on the last four titles, because that is subject to change a lot. I could focus only on the numerous UE4/Unity titles (in which Pascal has a considerable advantage over GCN), and the picture would flip completely. A Vega 64 is often only slightly above a 1070 in most UE4 or Assassin's Creed titles, but I wouldn't call that GCN regression.
 

It's not just those four titles; those are just some examples. UE4 and Unity are not engines representative of high-end graphical performance at all. Both are mostly used in low-budget titles where visuals and performance are completely mediocre anyway.
 
Yep. The RDNA CUs are larger and have a definite performance advantage when running a wide mix of game shaders, but you don't get the same performance gain on typical compute workloads, where most of the shader instructions come from optimized math libraries rather than game code.

Do you, or anyone, think it would make sense to keep iterating on Vega for this space/compute efficiency?
 
Without knowing about other possible pitfalls: if you're designing a half-rate DP compute monster for data centers and HPC anyway, and you basically have the choice between two proven architectures, you can always use the one that makes more money for you.

That said, for HPC, energy efficiency might come into play as well, and right now I'm not really sure about the respective levels in pure compute.

 
In that sense, having a compute-centric card would be to their benefit, I'd think: less leakage from parts that go basically unused. But if AMD puts the majority of its investment into RDNA, how long before a Navi design is more power efficient regardless? I'd also imagine the texture-processor BVH solution that's rumoured for RDNA 2 would be of some use in HPC solutions, even if the new WG design doesn't add much to HPC. Unless it's "easy" to back-port the efficiency and BVH enhancements to Vega, and economically feasible to make new patterns for limited production.

I dunno, I just like the idea of Vega living on if it can, and I don't really know why. Underdog mentality? I'm like this with a lot of things that have potential that goes unfulfilled.
 
UE4 and Unity are not engines representative of high-end graphical performance at all. Both are mostly used in low-budget titles where visuals and performance are completely mediocre anyway.
I disagree. Gears 5, Outer Worlds, Jedi Fallen Order, Crackdown 3, Borderlands 3, and Mortal Kombat 11 are not games with mediocre graphics or performance at all.


It's not just those four titles; those are just some examples.
I don't see other examples, to be frank. I can play the same game and give you examples from the other side too: Tomb Raider, Hitman, Metro, Control, Assassin's Creed, Ghost Recon, etc.
 

Gears 5 I'll give you, but it's sufficiently customized that it doesn't represent typical slow/mediocre stock UE4, and as such Pascal is no longer faster than AMD in it. MK11 doesn't use UE4, and AMD outperforms heavily in that title regardless. The other titles you mentioned are the typical, mediocre UE4 fare.

Tomb Raider, Metro and Control all run better on AMD.

There are very few high-end titles in which AMD doesn't outperform Pascal these days; you will mostly be limited to Ubisoft games.
 
This has only been proven true for the Kepler vs. early GCN generation. The Fiji generation aged badly for AMD, so much so that the Fury X behaves like an RX 580/590 in a great deal of titles from the past two years.
At least for as long as TPU kept the Fury X in their results, its performance improved relative to the 980 Ti, which was a tad faster at the Fury X's launch but later fell behind by a slightly bigger margin.
Also, compared to Polaris, it seems to have gotten pretty similar increases: at the RX 480's launch the Fury X was 30% faster at 1440p, and at the RX 590's launch the difference between the Fury X and the 480 at 1440p was 31%.
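As a quick sanity check on the claim that the Fury X's standing against the RX 480 barely moved, here is a minimal sketch of the inverse calculation; only the 30%/31% figures come from the post above, the rest is plain arithmetic:

```python
# Minimal sketch: if the Fury X leads the RX 480 by ~30% in one TPU summary
# and ~31% in a later one, how far did the 480's relative-performance index
# (Fury X = 100%) actually move? Only the 30/31 figures come from the post
# above; the rest is arithmetic.

def index_of_slower_card(lead_percent: float) -> float:
    """Relative-performance index of the slower card when the faster card = 100%."""
    return 100.0 / (1.0 + lead_percent / 100.0)

print(round(index_of_slower_card(30.0), 1))  # 76.9 -> at the RX 480 launch review
print(round(index_of_slower_card(31.0), 1))  # 76.3 -> at the RX 590 launch review
# A shift of well under one percentage point, i.e. essentially no change in standing.
```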
 
At least for as long as TPU kept the Fury X in their results
TPU hasn't bothered updating its Fury X results in a long time. There is a large dump of performance benchmarks showing the 980 Ti holding a 30% FPS advantage over the Fury X, and in many instances even the regular 980 is faster than the Fury X.

https://hardforum.com/threads/furyx-aging-tremendously-bad-in-2017.1948025/
2019 / 2018 / 2017

Also, some publications:

[BabelTech] AMD’s “fine wine” theory has apparently not turned out so well for the Fury X.
https://babeltechreviews.com/amds-fine-wine-revisited-the-fury-x-vs-the-gtx-980-ti/3/

[BabelTech] The GTX 980 Ti's lead over the Fury X in the majority of our games is now even larger than when we benchmarked it about two years ago.
https://babeltechreviews.com/the-gtx-1070-versus-the-gtx-980-ti/3/

The 980 Ti is 30% faster than the Fury X at 1080p, 1440p and 4K.
http://gamegpu.com/test-video-cards/podvedenie-itogov-po-graficheskim-resheniyam-2017-goda
 
TPU hasn't bothered updating its Fury X results in a long time. There is a large dump of performance benchmarks showing the 980 Ti holding a 30% FPS advantage over the Fury X, and in many instances even the regular 980 is faster than the Fury X.
The Fury X is hitting its 4 GB memory limit hard in many cases.
 