No DX12 Software is Suitable for Benchmarking *spawn*

Well, that is an "independent, trustworthy source" in Germany, isn't it? It's the same as CanardPC: they are a magazine, not a website.
 
  • Nvidia GPU + Intel CPU = Good
  • AMD GPU + Intel CPU = Good
  • AMD GPU + AMD CPU = Good
  • Nvidia GPU + AMD CPU = Bad
Fixed that for you. CF scaling gives a definite boost on Ryzen compared to Intel (at least in Tomb Raider).

I fully expect a Fury X not to follow the same path: it will not suffer scaling issues between AMD and Intel, and it will have lower DX12 performance than a 980 Ti/1070 in this title. Thus it will reveal Ryzen DX12 performance for what it truly is. This is the original, correct premise without any unstable variables.
Here are the Fury X vs 1070 benches on Ryzen; all games are tested with DX12:


Nothing out of the ordinary: the 1070 easily beats the Fury X, especially in Ashes of the Singularity, and decimates it in Tomb Raider (23% faster).
 
Isn't he hitting a GPU bottleneck with the Fury X, though? 100% GPU util with the Fury X and not with the 1070. That doesn't really prove (or disprove) anything. One way to go about it is to keep every setting at Ultra but drop the resolution to the point where neither GPU is being fully utilized. Then we can see how AMD and Nvidia behave with Ryzen.
 
And what would be the point? The guy claimed NV had bad DX12 performance with Ryzen compared to the Intel 7700K, but Hardware Unboxed completely invalidated this claim when they tested both the Fury X and the 1070 with Ryzen and the 7700K. Both GPUs had the same relative performance on Ryzen and the 7700K.



Isn't he hitting a GPU bottleneck with the Fury X, though? 100% GPU util with the Fury X and not with the 1070.
Isn't that what Adored TV did when coming out with his rushed, unfounded theory? The 1070 was well overclocked in his video, so it was massively underutilized, in contrast to the 480 CF, which had massive scaling under Ryzen. Not the single card, though: the single 480 is close to the Fury X and the 1070 with Ryzen because they are all CPU-limited in the Geothermal Valley area (it's a CPU hog). The Titan XP was only 12% faster than the 1070 in this area, again because it's CPU-limited.

One way to go about it is to keep every setting at Ultra but drop the resolution to the point where neither GPU is being fully utilized. Then we can see how AMD and Nvidia behave with Ryzen.
If you did so, you would find all sorts of anomalies on both AMD and NV GPUs, as both drivers become CPU-limited. You would see Ryzen smashing hard against the CPU limitation wall, and Intel CPUs would pull far ahead, as they usually do in almost all game benchmarks. Remember, single-thread is not Ryzen's strong point. We also know AMD GPUs become CPU-limited sooner than NV in DX11. The Fury X loses at 1080p and 1440p but puts up a good fight at 4K.
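As an aside, the GPU-bound vs CPU-bound question being argued here is easy to sanity-check from a frame log. Below is a minimal sketch (my own, not from any post in this thread) that assumes a hypothetical per-frame CSV of the form `frame_time_ms,gpu_util_percent`; the 95% utilization cut-off is an arbitrary assumption, not an established rule.

```cpp
// Minimal sketch (not from any post in this thread): given a hypothetical
// per-frame CSV log in the form "frame_time_ms,gpu_util_percent", estimate
// whether a benchmark run was GPU-bound or CPU-bound.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: boundcheck <frames.csv>\n";
        return 1;
    }
    std::ifstream log(argv[1]);
    double total_ms = 0.0, total_util = 0.0;
    std::size_t frames = 0;
    std::string line;
    while (std::getline(log, line)) {
        std::istringstream row(line);
        double frame_ms = 0.0, gpu_util = 0.0;
        char sep = 0;
        // Skip header lines or anything that doesn't parse as "number,number".
        if (!(row >> frame_ms >> sep >> gpu_util)) continue;
        total_ms += frame_ms;
        total_util += gpu_util;
        ++frames;
    }
    if (frames == 0) {
        std::cerr << "no frames parsed\n";
        return 1;
    }
    const double avg_fps  = 1000.0 * frames / total_ms;
    const double avg_util = total_util / frames;
    std::cout << "avg fps:      " << avg_fps  << "\n"
              << "avg GPU util: " << avg_util << " %\n";
    // Heuristic: a GPU pegged near 100% suggests a GPU limit; lots of idle GPU
    // time while fps stops scaling points at a CPU (game and/or driver) limit.
    std::cout << (avg_util >= 95.0 ? "likely GPU-bound\n" : "likely CPU-bound\n");
    return 0;
}
```

Comparing the output at Ultra/1080p against Ultra/720p for each card would show whether dropping the resolution really moves a card from GPU-bound to CPU-bound, which is the comparison being suggested above.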
 
Fixed that for you. CF scaling gives a definite boost on Ryzen compared to Intel (at least in Tomb Raider).


Here are the Fury X vs 1070 benches on Ryzen; all games are tested with DX12:


Nothing out of the ordinary: the 1070 easily beats the Fury X, especially in Ashes of the Singularity, and decimates it in Tomb Raider (23% faster).


So pretty much it looks like AMD Xfire drivers are not working well with Intel CPUs? And NV drivers need improvement on Ryzen processors; their utilization looks bad, lol.

Interesting that a 295X2 is beating a Titan XP in that game...
 
In the end, it's not Nvidia's fault, as they haven't been programming for Ryzen for months. It's probably fair to say that reviewers who see gigantic differences are just lazily assuming that their test is valid.
Actually, NVIDIA was a Ryzen launch partner. AMD made a big deal of this late last year at Sonoma, to show that even though they also compete on GPUs, they weren't going to shoot themselves in the foot by excluding NVIDIA on Ryzen. This way you could have both the best video card (Titan XP) and the best CPU (Ryzen).

I don't know exactly how long they've had a Ryzen, but they were certainly one of the early access groups. However I would be surprised if that value isn't "months," going back at least as far as December 2016.
 
FWIW, my colleague compared 20 games with and without hyperthreading with a 1080 Ti and a Fury X at 720p (with AA/AF and AO turned off as much as possible):
In Rise of the Tomb Raider and Hitman (both in DX12 mode) the Fury X managed to beat the 1080 Ti.
In AotS and Sniper Elite the 1080 Ti was faster, as well as in all DX11 and DX9 titles (Overwatch, For Honor, BF1, Fallout 4, DEMD, Witcher 3, Goat Sim, FC Primal, Grid Autosport, GR Wildlands, GTA 5, WoW, WD2 and TW:WH). Curiously, in Outcast 1.1, which uses a software renderer, the Fury X was ever so slightly faster as well (24.1 vs. 23.7 fps with SMT and 22.8 vs. 22 without).

So yes, in some (rare? maybe evolving into a trend - who knows?) cases, the Fury X can be more forgiving on the CPU than the GTX 1080 Ti - or probably rather their respective drivers.
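For reference, the gaps in those Outcast numbers are tiny either way; here is a quick arithmetic sketch (my own, not from the post) using the fps values exactly as quoted above.

```cpp
// Quick arithmetic on the Outcast 1.1 figures quoted above (Fury X vs 1080 Ti,
// with and without SMT). The fps values are taken as printed in the post.
#include <cstdio>

// Percentage by which value `a` leads value `b`.
static double lead_percent(double a, double b) { return (a / b - 1.0) * 100.0; }

int main() {
    const double furyx_smt_on  = 24.1, ti_smt_on  = 23.7;  // fps with SMT
    const double furyx_smt_off = 22.8, ti_smt_off = 22.0;  // fps without SMT

    std::printf("Fury X lead with SMT:    %.1f %%\n", lead_percent(furyx_smt_on,  ti_smt_on));
    std::printf("Fury X lead without SMT: %.1f %%\n", lead_percent(furyx_smt_off, ti_smt_off));
    std::printf("SMT gain, Fury X:        %.1f %%\n", lead_percent(furyx_smt_on,  furyx_smt_off));
    std::printf("SMT gain, 1080 Ti:       %.1f %%\n", lead_percent(ti_smt_on,     ti_smt_off));
    return 0;
}
```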
 
Isn't DX12 all about giving full control to developers?
Maybe AMD should talk to Crystal Dynamics? They worked with Oxide and boosted AotS performance on Nvidia hardware (see footnote) -> http://www.amd.com/en-us/press-releases/Pages/stardock-and-oxide-2017mar30.aspx
The PC version of Tomb Raider is an Nvidia-sponsored game... so you can probably forget about that ;) They don't care about performance on competing GPUs at all (can't blame them; they are a business, not a charity).
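On the "full control" point: with D3D12 the application itself creates the queues, allocators and command lists and submits work explicitly, so much of the per-title optimization burden sits with the developer rather than the driver, which is why the sponsorship angle matters. A minimal, non-rendering setup sketch for illustration (my own, Windows-only; error handling omitted):

```cpp
// Minimal D3D12 setup sketch: under DX12 the application, not the driver,
// creates the queue, records command lists and submits them explicitly.
// Windows-only; error handling and actual rendering omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The application decides how many queues and recording threads it uses.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocator));

    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, allocator.Get(), nullptr,
                              IID_PPV_ARGS(&cmdList));

    // ... record state changes and draw calls here ...
    cmdList->Close();

    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);
    return 0;
}
```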
 
So yes, in some (rare? maybe evolving into a trend - who knows?) cases, the Fury X can be more forgiving on the CPU than the GTX 1080 Ti - or probably rather their respective drivers.
So when the CPU limitation reaches its breaking point, Ryzen will work better with the DX12 implementation/driver of AMD GPUs than NVIDIA's? I wonder about the implications of this for Ryzen as a choice for gamers.
 
So when the CPU limitation reaches its breaking point, Ryzen will work better with the DX12 implementation/driver of AMD GPUs than NVIDIA's? I wonder about the implications of this for Ryzen as a choice for gamers.

I'd wait a few months for Nvidia to sort their drivers out before saying that.
 
Actually, NVIDIA was a Ryzen launch partner. AMD made a big deal of this late last year at Sonoma, to show that even though they also compete on GPUs, they weren't going to shoot themselves in the foot by excluding NVIDIA on Ryzen. This way you could have both the best video card (Titan XP) and the best CPU (Ryzen).

I don't know exactly how long they've had a Ryzen, but they were certainly one of the early access groups. However I would be surprised if that value isn't "months," going back at least as far as December 2016.
Were there any early-access groups that did not encounter issues with Ryzen? TBH, coordination could have been better, judging by the initial "witch hunts" to identify performance issues. And once the performance issues cropped up, it seemed none of the early-access groups had any clue: hardware, operating systems, and software were all open as possible causes.
 
The PC version of Tomb Raider is an Nvidia-sponsored game... so you can probably forget about that ;) They don't care about performance on competing GPUs at all (can't blame them; they are a business, not a charity).

Ryzen is targeting the segment where there hasn't been much competition with regard to GPU. That said, I'm not expecting it to happen either. Better to spend resources on upcoming titles, and AMD should know that already. Devs should have gotten their hands on Ryzen by now.

I'd wait a few months for Nvidia to sort their drivers out before saying that.

The Ryzen user base is probably still too small for them to be relevant.
 
The Ryzen user base is probably still too small for them to be relevant.

Right now? Sure. I don't expect that to remain true for much longer, with 4 core/8 thread and 6 core/12 thread sub $300 offerings from AMD.
 
So when the CPU limitation reaches its breaking point, Ryzen will work better with the DX12 implementation/driver of AMD GPUs than NVIDIA's? I wonder about the implications of this for Ryzen as a choice for gamers.

Maybe. Maybe the Nvidia driver is just more heavily threaded, increasing contention for the CPU's resources, which in some cases is detrimental to performance? I don't know.
 
FWIW, my colleague compared 20 games with and without hyperthreading with a 1080 Ti and a Fury X at 720p (with AA/AF and AO turned off as much as possible):
In Rise of the Tomb Raider and Hitman (both in DX12 mode) the Fury X managed to beat the 1080 Ti.
In AotS and Sniper Elite the 1080 Ti was faster, as well as in all DX11 and DX9 titles (Overwatch, For Honor, BF1, Fallout 4, DEMD, Witcher 3, Goat Sim, FC Primal, Grid Autosport, GR Wildlands, GTA 5, WoW, WD2 and TW:WH). Curiously, in Outcast 1.1, which uses a software renderer, the Fury X was ever so slightly faster as well (24.1 vs. 23.7 fps with SMT and 22.8 vs. 22 without).

So yes, in some (rare? maybe evolving into a trend - who knows?) cases, the Fury X can be more forgiving on the CPU than the GTX 1080 Ti - or probably rather their respective drivers.

It might be worth replacing the 1080 Ti with a GTX 1080 in RoTR and Hitman while keeping the 720p resolution - I am assuming he used the more recent 'DX12 Ready' driver that improved Nvidia performance in Hitman (378.74).
The reason I mention this is that Hardware Unboxed saw some unusual behaviour using a GP102 Titan Pascal with RoTR in both DX12 and DX11 on the Ryzen platform.
Cheers
 
Maybe. Maybe the Nvidia driver is just more heavily threaded, increasing contention for the CPU's resources, which in some cases is detrimental to performance? I don't know.
It is still curious that it happens only in Tomb Raider and Hitman. Ashes, for example, uses a lot more CPU resources but doesn't show the same issue. And we know from the Hardware Unboxed testing that it only happens at 720p in Tomb Raider; the 1070 at 1080p didn't exhibit any problem.
 
Still, this is different from the whole CF thing. Testing at 720p is a good way to expose the CPU limitation further.

It is still curious that it happens only in Tomb Raider and Hitman. Ashes, for example, uses a lot more CPU resources but doesn't show the same issue. And we know from the Hardware Unboxed testing that it only happens at 720p in Tomb Raider; the 1070 at 1080p didn't exhibit any problem.

Don't the biggest CPU loads in Ashes come from AI calculations anyway? The driver wouldn't have that big an impact in that case.
 