No DX12 Software is Suitable for Benchmarking *spawn*

That's not a very constructive take on the video content, which is rather compelling. Or are you dismissing the findings entirely as the rantings of a fanboy, and if so, why?


His testing methodology is not good lol (settings). He is not isolating what he is looking for, and that is a big problem: why would you use a dual-card RX 480 system vs. a single 1070? We know the reasons behind the DX11 benchmark results for AMD cards, but really, dual GPUs in DX12 showing a huge increase? Come on, we know that is going to happen too! So the tests don't show us anything; the dual cards might be hitting a GPU bottleneck on these processors that we don't see. So all his conclusions might not be accurate. We can see there is something wrong with nV's DX12 drivers on Ryzen, that much is pretty obvious in that specific game and those specific levels, but the final conclusion he draws about the CPUs is one we don't know enough to make.

If you look at the Syria level, it seems like the GPU bottleneck isn't as great, so the 7700K pulls ahead. So there is nothing conclusive in what he showed.
 
I'm saying that the comparison might not be valid if the graphics settings are not equivalent between AMD and NVidia. A difference in settings might cause a difference in CPU workload. I don't have the game, so I have no idea if there are settings that cause a CPU bottleneck when the game should be GPU-bound.

Unfortunately, some people appear to be interpreting this video as a performance evaluation of D3D12. It isn't. It's an investigation into the CPU performance difference between the 1800X and 7700K on AMD versus NVidia graphics, e.g. in a gameplay test in Geothermal Valley:

  • 7700K with D3D12 on AMD graphics is 7% faster than 1800X
  • 7700K with D3D12 on NVidia graphics is 34% faster than 1800X
7% is a "normal" difference for 7700K versus 1800X. 34% with NVidia graphics is indicative of something else going on.
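To be explicit about the arithmetic: those percentages are just the relative fps difference between the two CPUs. A minimal sketch, using placeholder fps values (these are not the video's raw numbers, which I don't have in front of me):

```python
# Relative CPU difference computed from two average-fps runs.
# The fps values below are placeholders, NOT the video's raw numbers.
def cpu_gap_pct(fps_7700k: float, fps_1800x: float) -> float:
    """How much faster the 7700K is than the 1800X, in percent."""
    return (fps_7700k / fps_1800x - 1.0) * 100.0

print(f"{cpu_gap_pct(91.0, 85.0):.0f}%")  # ~7%:  an AMD-graphics-like gap
print(f"{cpu_gap_pct(91.0, 68.0):.0f}%")  # ~34%: an NVidia-graphics-like gap
```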

The problem we have is we can't tell if the NVidia card is running exactly the same graphics settings. If there are differences in graphics, are those differences having an effect on CPU workload? Though it seems unlikely to me that extra CPU work would increase the gap between 1800X and 7700K to as much as 30+%.

As far as I can tell the explanation that some people have is that the NVidia driver and Ryzen are a bad match in this game. It's not the game, but the game + driver combination.

EDIT: In theory NVidia has invested a lot of effort in this game. There might even be hand-tuned CPU code for some graphics effects (but only in the NVidia driver?). That code might be worst-case performance on Ryzen just because Ryzen is so different from Intel's processors :???:
 
Some found similar results with The Division, and the logic behind CF 480s vs. a 1070 is to get out of the GPU bottleneck and test how many FPS each CPU can handle. Yes, in an ideal world they could use a Vega GPU or SLI 1060s or 1070s, but remember these are not rich guys with unlimited hardware; they used what they had on hand.

What this means is that further testing is needed.
 
Yes, these videos are about Ryzen performance being handicapped by all this testing on Nvidia hardware, not specifically about D3D12 performance. When performance goes up moving from a Titan X to an RX 480, you know there's something really f'd up.

What this means is that further testing is needed.
Definitely agree there.
 
Some found similar results with The Division, and the logic behind CF 480s vs. a 1070 is to get out of the GPU bottleneck and test how many FPS each CPU can handle. Yes, in an ideal world they could use a Vega GPU or SLI 1060s or 1070s, but remember these are not rich guys with unlimited hardware; they used what they had on hand.

What this means is that further testing is needed.


All he has to do is lower the resolution; that's the easiest way, it doesn't need to be at 1080p ;) The game could look like crap at the lowest settings too; if it's a driver-related problem it will show up regardless.
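To put it concretely: if fps barely moves when you drop the resolution, the limiter is on the CPU/driver side; if it jumps, the run was GPU-bound. A throwaway sketch of that sanity check (the 10% threshold and the sample numbers are just illustrative assumptions):

```python
# Rough CPU-bound vs GPU-bound check from two measured average-fps values.
# The 10% threshold and the sample numbers are illustrative assumptions.
def likely_cpu_bound(fps_native: float, fps_low_res: float,
                     tolerance: float = 0.10) -> bool:
    """If dropping the resolution barely raises fps, the limiter is on
    the CPU/driver side rather than the GPU."""
    gain = fps_low_res / fps_native - 1.0
    return gain < tolerance

print(likely_cpu_bound(68.0, 71.0))   # True: ~4% gain, CPU/driver limited
print(likely_cpu_bound(68.0, 110.0))  # False: big gain, was GPU limited
```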
 
I'm saying that the comparison might not be valid if the graphics settings are not equivalent between AMD and NVidia. A difference in settings might cause a difference in CPU workload. I don't have the game, so I have no idea if there are settings that cause a CPU bottleneck when the game should be GPU-bound.

Unfortunately, some people appear to be interpreting this video as a performance evaluation of D3D12. It isn't. It's an investigation into the CPU performance difference between the 1800X and 7700K on AMD versus NVidia graphics, e.g. in a gameplay test in Geothermal Valley:

  • 7700K with D3D12 on AMD graphics is 7% faster than 1800X
  • 7700K with D3D12 on NVidia graphics is 34% faster than 1800X
7% is a "normal" difference for 7700K versus 1800X. 34% with NVidia graphics is indicative of something else going on.

The problem we have is we can't tell if the NVidia card is running exactly the same graphics settings. If there are differences in graphics, are those differences having an effect on CPU workload? Though it seems unlikely to me that extra CPU work would increase the gap between 1800X and 7700K to as much as 30+%.

As far as I can tell the explanation that some people have is that the NVidia driver and Ryzen are a bad match in this game. It's not the game, but the game + driver combination.

EDIT: In theory NVidia has invested a lot of effort in this game. There might even be hand-tuned CPU code for some graphics effects (but only in the NVidia driver?). That code might be worst-case performance on Ryzen just because Ryzen is so different from Intel's processors :???:

Just saw the video. He is comparing the Nvidia and Radeon DX12 drivers and showing that Nvidia's implementation is detrimental to the 1800X.

I wouldn't worry that he used different graphics settings between runs; that's benchmarking 101... He even mentioned that he changed the fullscreen mode to improve performance overall...
 
All he has to do is lower the resolution; that's the easiest way, it doesn't need to be at 1080p ;) The game could look like crap at the lowest settings too; if it's a driver-related problem it will show up regardless.
Wasn't his entire point that lower resolutions aren't representative of future performance?
 
As far as I can tell the explanation that some people have is that the NVidia driver and Ryzen are a bad match in this game. It's not the game, but the game + driver combination.

EDIT: In theory NVidia has invested a lot of effort in this game. There might even be hand-tuned CPU code for some graphics effects (but only in the NVidia driver?). That code might be worst-case performance on Ryzen just because Ryzen is so different from Intel's processors :???:
This has to be expected. Intel has been the sole gaming CPU for ages. All drivers (including AMD GPU drivers) are well optimized for Intel CPUs. Ryzen is brand new. As IHVs get their hands on Ryzen, they can analyze their graphics drivers' bottlenecks and optimize them. Obviously AMD has had access to Ryzen internally for a longer time, as it is their own CPU, so they have had time to optimize their drivers for Ryzen.

I am sure Nvidia will optimize their drivers for Ryzen ASAP. They don't have their own CPU, so they have nothing to lose. On the contrary, if Nvidia drivers keep having performance problems with Ryzen, it would make Nvidia GPUs look bad in benchmarks. Some GPU reviewers will use Ryzen CPUs in the future, and some gamers will buy Ryzen CPUs too. Those gamers are potential Nvidia GPU customers.
 
some people appear to be interpreting this video as a performance evaluation of D3D12
That's his interpretation: he is saying D3D12 on NV is bad, and that's why Ryzen gets worse fps in D3D12. He even states that Vega will lay waste to the 1080Ti in this title because DX12 is bad on NV! Only that is not the case at all; NV GPUs enjoy a considerable advantage in this title vs. their AMD counterparts.
As far as I can tell the explanation that some people have is that the NVidia driver and Ryzen are a bad match in this game. It's not the game, but the game + driver combination.
That's his theory alright. Only it is a bunch of crap. AMD themselves sent out Ryzen review guidelines with NV graphics cards. If they knew their own GPUs were that much more efficient when coupled with Ryzen, they would have gone with those instead.

And his theory is what? Ryzen is good because NV is bottlenecking it with bad DX12? Most sites ran game tests in DX11 (as most games are DX11), and Ryzen is still worse than Intel under those circumstances, in many cases considerably worse.

What he is really measuring is CF scaling across different APIs and different CPU vendors!

I am gonna use his numbers from the Geothermal Valley area:
[benchmark chart: f3bdc5.jpg]


GTX 1070 + Intel: DX12 91 fps | DX11 80 fps
So the 1070 with Intel gained 13% more frames in DX12 than in DX11; clearly NV is gaining from DX12 here.

GTX 1070 + Ryzen: DX12 68 fps | DX11 60 fps
Again NV gains about 8 fps (13%) from the switch to DX12 under the Ryzen CPU. Same as Intel! Nothing weird happening there!

CF 480 + Intel: DX12 101 fps | DX11 55 fps
Clearly AMD CF is broken under DX11 with Intel; it achieves only 55 fps. In DX12, CF achieves 83% more fps than in DX11!

CF 480 + Ryzen: DX12 94 fps | DX11 50 fps
CF is also broken under DX11 with the Ryzen CPU. Switch to DX12, and the CF 480s are 88% faster than in DX11.


I am gonna do the same thing with his Soviet Installation numbers:
[benchmark chart: hwgyv7.jpg]


GTX 1070 + Intel: DX12 98 fps | DX11 97 fps
NV is tied between DX11 and DX12 with the Intel CPU.

GTX 1070 + Ryzen: DX12 73 fps | DX11 75 fps
Again, a virtual tie between DX11 and DX12 under Ryzen.

CF 480 + Intel: DX12 104 fps | DX11 71 fps
CF scales well under Intel in DX12; it's 46% faster than in DX11.

CF 480 + Ryzen: DX12 98 fps | DX11 64 fps
CF scaling is even better under Ryzen in DX12; it gets 53% more frames than in DX11.

The takeaway? CF scales better with AMD CPUs than with Intel CPUs: 88% vs 83% in one area, and 53% vs 46% in the other!

The other takeaway? The NV driver is fine with Ryzen; NV gained exactly the same percentage from DX12, the same 13%, for both Intel and AMD.

What's obvious here is the generally lackluster performance of Ryzen compared to Intel; one could go as far as to claim Ryzen can't handle NV drivers well. It could be its L3 bottleneck and thread-migration problem causing the slowdown in DX12.

Couple that with the better CF scaling under Ryzen, which is probably due to driver optimizations, as well as the massive CF scaling in DX12, and you have yourself that weird picture above.

I fully expect a Fury X not to follow the same path: it will not suffer scaling issues between AMD and Intel, and it will have lower DX12 performance than a 980Ti/1070 in this title. Thus it will reveal Ryzen DX12 performance for what it truly is. That is the correct original premise, without any unstable variables.
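For anyone who wants to double-check those percentages, here's a quick script reproducing the arithmetic from the numbers quoted above (the fps values are his, the script is mine):

```python
# Recomputing the DX11 -> DX12 gains from the fps numbers quoted above.
runs = {
    # (area, gpu, cpu): (dx12_fps, dx11_fps)
    ("Geothermal Valley",  "GTX 1070", "Intel"): (91, 80),
    ("Geothermal Valley",  "GTX 1070", "Ryzen"): (68, 60),
    ("Geothermal Valley",  "CF 480",   "Intel"): (101, 55),
    ("Geothermal Valley",  "CF 480",   "Ryzen"): (94, 50),
    ("Soviet Installation", "GTX 1070", "Intel"): (98, 97),
    ("Soviet Installation", "GTX 1070", "Ryzen"): (73, 75),
    ("Soviet Installation", "CF 480",   "Intel"): (104, 71),
    ("Soviet Installation", "CF 480",   "Ryzen"): (98, 64),
}

for (area, gpu, cpu), (dx12, dx11) in runs.items():
    gain = (dx12 / dx11 - 1.0) * 100.0  # percent change DX11 -> DX12
    print(f"{area:18s} {gpu:8s} + {cpu}: DX12 {gain:+.1f}% vs DX11")
```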
 
Some found similar results with The Division
That Division video is flawed as well: first, he tests Ryzen only, no Intel CPUs, so there's no comparison. Second, he is testing the internal benchmark at god-awful Medium settings, and AMD GPUs gain more from DX12 at medium settings than NV does. In the case of Deus Ex, for example, AMD posted large gains running the game at the High preset (the game's medium) on the RX 480. Switch to Ultra, however, and those gains evaporate quickly, especially during gameplay.

An accurate test would stick to Ultra/Very High, and test both Intel and AMD CPUs.
 
What are you talking about? CF 480s should be on par with a 1070 or slightly better. That's the case with Intel but not Ryzen. It doesn't show anything about the CPU itself (as it can reach high framerates with an AMD config). What you should be looking at is this:
  • Nvidia GPU + Intel CPU = Good
  • AMD GPU + Intel CPU = Good
  • AMD GPU + AMD CPU = Good
  • Nvidia GPU + AMD CPU = Bad
The most logical explanation is that the Nvidia driver is not yet optimized for Ryzen.
 
I fully expect a Fury X not to follow the same path: it will not suffer scaling issues between AMD and Intel, and it will have lower DX12 performance than a 980Ti/1070 in this title. Thus it will reveal Ryzen DX12 performance for what it truly is. That is the correct original premise, without any unstable variables.
Like this, you mean?

http://i.imgur.com/OjSIcgM.jpg

Ryzen + Fury X is 20 or 24% faster at 720p (lowest or high settings) than Ryzen + Titan X Pascal?

Yep, NVidia's code is entirely to blame, apparently.
 
Yep, NVidia's code is entirely to blame, apparently.

Isn't the Titan X significantly faster than the Fury X in those tests with a 6900K, though? I think it's more plausible that Nvidia's drivers are the problem here, rather than magic secret-sauce AMD hardware that lay hidden in the Fury X for two years.
 
Yes, NVidia's driver, in some games, appears to be the sole cause of a gigantic slowdown on Ryzen. The problem is certainly not present in all games with the same card and processor, though. So there are some "special paths" lurking in the way the driver works. D3D11 or 12 doesn't appear to be a root cause. It's just another variable.

It seems similar to how Windows settings can make a difference in Ryzen performance: often a single settings change will help one game and hurt another.

In the end, it's not NVidia's fault, as they haven't had months to program for Ryzen. It's probably fair to say that reviewers who see gigantic differences are just lazily assuming their test is valid.
 
Yes, NVidia's driver, in some games, appears to be the sole cause of a gigantic slowdown on Ryzen. The problem is certainly not present in all games with the same card and processor, though. So there are some "special paths" lurking in the way the driver works. D3D11 or 12 doesn't appear to be a root cause. It's just another variable.

It seems similar to how Windows settings can make a difference in Ryzen performance: often a single settings change will help one game and hurt another.

In the end, it's not NVidia's fault, as they haven't had months to program for Ryzen. It's probably fair to say that reviewers who see gigantic differences are just lazily assuming their test is valid.


It could be nV's multi-threaded driver having its threads migrate across the CCX modules; I can see that easily hurting performance.
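If someone wanted to test that hypothesis, one crude experiment would be to pin the game process to a single CCX and see whether the gap closes. A sketch using psutil; the logical-CPU numbering is an assumption about how the OS enumerates an 1800X with SMT:

```python
# Crude test of the cross-CCX hypothesis: pin a process to one CCX and
# re-run the benchmark. Requires: pip install psutil
# NOTE: the logical-CPU numbering is an assumption; with SMT enabled an
# 1800X typically exposes the first CCX as logical CPUs 0-7 (4 cores + SMT).
import psutil

CCX0 = list(range(8))  # assumed logical CPUs of the first CCX

def pin_to_ccx0(pid: int) -> None:
    """Restrict the process (e.g. the game) to one CCX so driver worker
    threads cannot migrate across the CCX boundary."""
    psutil.Process(pid).cpu_affinity(CCX0)

# Usage: pin_to_ccx0(game_pid), then compare fps with and without pinning.
```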
 
It could be nV's multi-threaded driver having its threads migrate across the CCX modules; I can see that easily hurting performance.
If that were the case, it wouldn't be as much of a problem with faster RAM, so it has to be something else.
 