No DX12 Software is Suitable for Benchmarking *spawn*

Given this is a complete disaster, probably not. A hell of a lot of games do run better under low-level APIs on AMD GPUs though, both RDNA and the older GCN parts from the Paxwell era.

I think you need a few examples to prove that’s actually true. My recollection of the past few years is that whenever DX12 is slower it’s slower on all architectures.
 
I think you need a few examples to prove that’s actually true. My recollection of the past few years is that whenever DX12 is slower it’s slower on all architectures.
There were several games where D3D12 would end up being faster on AMD while D3D11 would be faster on Nvidia.
I was thinking recently that it would be cool if someone would revisit them now to see if anything has changed.
My personal example is the DXMD built-in benchmark, where I'm getting +28% performance in D3D11 these days compared to D3D12 on a 5900X+4090 -- it was more or less even back on Pascal.
 
I think you need a few examples to prove that’s actually true. My recollection of the past few years is that whenever DX12 is slower it’s slower on all architectures.
When it's bad, it's bad on all vendors, but worst on Nvidia. When it's good, it's still slower on Nvidia, just to a smaller degree, while AMD sees a performance uplift. Do people really not remember the regular practice of benchmarkers using DX11 for Nvidia GPUs and DX12 for AMD?
 
When it's bad, it's bad on all vendors, but worst on Nvidia. When it's good, it's still slower on Nvidia, just to a smaller degree, while AMD sees a performance uplift. Do people really not remember the regular practice of benchmarkers using DX11 for Nvidia GPUs and DX12 for AMD?

Any recent examples of DX12 being a net performance win on AMD while being a net loss on Nvidia?
 
Any recent examples of DX12 being a net performance win on AMD while being a net loss on Nvidia?
There are very few benchmarks of Pascal or Maxwell these days, so I couldn't say; the API comparisons done by outlets like Computerbase always use recent GPUs. Only GameGPU really has results for those older GPUs in newer DX12 titles, and there GCN consistently overperforms compared to Paxwell.

 
There are very few benchmarks of Pascal or Maxwell these days, so I couldn't say; the API comparisons done by outlets like Computerbase always use recent GPUs. Only GameGPU really has results for those older GPUs in newer DX12 titles, and there GCN consistently overperforms compared to Paxwell.

I remember Paxwell's struggles. The claim, though, was that poor DX12 performance is a hardware problem. If we can't find examples of any architecture that actually gains performance in DX12 mode, doesn't that imply the problem is actually elsewhere?
 
I remember Paxwell's struggles. The claim, though, was that poor DX12 performance is a hardware problem. If we can't find examples of any architecture that actually gains performance in DX12 mode, doesn't that imply the problem is actually elsewhere?
I can show you examples from that era of games with GCN gaining and Paxwell losing, just not in any recent titles, for the aforementioned reason. My claim was that it was a hardware problem for Nvidia, their older architectures in particular. It's harder to determine now since API comparisons are so rare, as few games of any relevance offer both. Also, just to clarify, I'm not absolving the software side of things, but when virtually no developer is able to offer a better experience on GPUs from only a specific vendor, can we place all the blame on them and none on the hardware?

They were certainly more compatible with the DX12 feature specs than AMD. They supported Conservative Rasterization and Rasterizer Ordered Views, while all GCN GPUs lacked such features completely.
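For reference, this is roughly how an application queries those optional features at runtime; a minimal, hypothetical sketch assuming a valid ID3D12Device was created elsewhere:

```cpp
// Hypothetical sketch: querying the optional D3D12 features mentioned above
// (Conservative Rasterization, Rasterizer Ordered Views) on an already-created
// ID3D12Device. Names and error handling are illustrative only.
#include <d3d12.h>
#include <cstdio>

void ReportOptionalFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(options.ConservativeRasterizationTier));
        std::printf("Rasterizer ordered views supported: %s\n",
                    options.ROVsSupported ? "yes" : "no");
    }
}
```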

Nope.
What use have these features brought consumers? The two titles that use VXAO or HFTS, where said GPUs are too slow for most gamers to realistically use them?
 
Only GameGPU really has results for those older GPUs in newer DX12 titles, and there GCN consistently overperforms compared to Paxwell.
This is incorrect. Don't over-generalize unless you have a full set of data points; you are selecting examples and drawing the wrong conclusions, and your selection has nothing to do with DX12.

Modern Warfare 2: the engine is notorious for its preference for AMD GPUs; all of them perform way above their class even if we compare Ada vs RDNA3, so this has nothing to do with DX12.
Warzone 2: same engine, same thing.
Uncharted: same thing, has nothing to do with DX12.
The Matrix demo: really?

The only valid points you have are Cyberpunk 2077, A Plague Tale and Miles Morales. But then I can point to Ampere being much superior to RDNA2 in these games. In A Plague Tale and Miles Morales the 3080 is faster than the 6900 XT, so does that mean Ampere is better in DX12 than RDNA2? Of course not; some games simply perform better on certain architectures. That's it.

At the end of 2021, GameGPU published an averaged review of 7 games (6 of them using DX12); the GTX 1080 was equal to the Vega 64 at both 1080p and 1440p.

Here is a more modern example for you: Dying Light 2 in DX11 vs DX12. Pascal actually gains fps in DX12, while Vega loses fps!

You are also cherry-picking examples. I can list many modern 2021/2022 games with DX12 where Pascal is equal to Vega or faster (I'm not even going to bother you with DX12 UE4 games where Pascal surpasses Vega):

GRID Legends
Need For Speed Unbound
Forza Horizon 5
Warhammer 40K Darktide
Halo Infinite
Guardians of the Galaxy
Deathloop
Elden Ring
Sackboy
Spider-Man Remastered
Shadow Warrior 3
 
There are very few benchmarks of Pascal or Maxwell these days, so I couldn't say; the API comparisons done by outlets like Computerbase always use recent GPUs. Only GameGPU really has results for those older GPUs in newer DX12 titles, and there GCN consistently overperforms compared to Paxwell.
There was (and is) no difference in D3D11 vs D3D12 performance behaviour between "Paxwell" and Turing+. The issue with lower D3D12 performance on "Paxwell" wasn't in h/w; it was in bad optimization of said renderers for Nvidia h/w, and this is still true even on Lovelace - even more so, I'd say, as the absolute performance numbers are considerably higher now and thus the absolute delta between 11 and 12 is also higher.
 
I assume it's that DX12on11 thing. Presumably you're getting DX11 performance as the baseline with the additional overhead of the conversion.
There is no "dx12on11". There is dx11on12, so the opposite, and they do seem to use it for something (my guess would be some 3rd party plugin like HairWorks? is there a DX12 version of the library available?) but probably not for the whole engine, as that wouldn't allow them to use RT.
 
There is no "dx12on11". There is dx11on12, so the opposite, and they do seem to use it for something (my guess would be some 3rd party plugin like HairWorks? is there a DX12 version of the library available?) but probably not for the whole engine, as that wouldn't allow them to use RT.

Yeah, wrong way round, but that's essentially what I meant, i.e. this is a DX11 game that they then run on a translation layer (DX11on12) to enable those DX12-only effects in the DX12 mode. But at its heart it's still a DX11 game, hence why it runs much faster in DX11 mode.


D3D11On12 is a mechanism by which developers can use D3D11 interfaces and objects to drive the D3D12 API. D3D11on12 enables components written using D3D11 (for example, D2D text and UI) to work together with components written targeting the D3D12 API. D3D11on12 also enables incremental porting of an application from D3D11 to D3D12, by enabling portions of the app to continue targeting D3D11 for simplicity while others target D3D12 for performance.
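As an illustration of that incremental-porting path, here is a minimal sketch of wrapping an existing D3D12 device so D3D11-era components can keep using it; it assumes a D3D12 device and command queue already exist, names are illustrative, and error handling is omitted:

```cpp
// Hypothetical sketch: wrapping an existing D3D12 device with D3D11On12 so
// legacy D3D11 components (UI, text, plugin libraries) can keep running while
// the rest of the engine targets D3D12 directly.
#include <d3d11on12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateWrappedDevice(ID3D12Device* d3d12Device, ID3D12CommandQueue* commandQueue)
{
    ComPtr<ID3D11Device> d3d11Device;
    ComPtr<ID3D11DeviceContext> d3d11Context;
    IUnknown* queues[] = { commandQueue };

    D3D11On12CreateDevice(
        d3d12Device,                       // the real D3D12 device
        D3D11_CREATE_DEVICE_BGRA_SUPPORT,  // flags (e.g. for D2D interop)
        nullptr, 0,                        // default feature levels
        queues, 1,                         // D3D12 queue(s) the layer submits to
        0,                                 // node mask
        &d3d11Device, &d3d11Context, nullptr);

    // D3D11 code paths can now use d3d11Device/d3d11Context, while D3D12-only
    // features (e.g. ray tracing) are recorded on commandQueue as usual.
}
```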
 
CD Projekt Red games have always been buggy messes and performance hogs that have to go through many cycles of patches and updates to fix and improve things. In this case they're taking an old game and gluing on a bunch of new stuff. Their games are always going to be the exception rather than the rule when it comes to performance. It'll never be a good game to extrapolate information from, because it's d3d11on12 and it's just another weird result from CDPR. Probably not worth that much effort drilling down into what's going on.
 
Do we have any evidence of the game actually being D3D11on12 as a whole? I find it hard to believe considering that you can't add RT to D3D11 in the first place.
 
Do we have any evidence of the game actually being D3D11on12 as a whole? I find it hard to believe considering that you can't add RT to D3D11 in the first place.

Control is D3D11on12 on PC and it has RT, so it works.

Edit: Back when I tried to profile Control, Nsight wouldn't work because it didn't support D3D11on12, at least not at the time.
 
Don't forget the development was probably a mess. Saber, who did the amazing Switch port, worked on this update for a long time. They're a good tech studio... Then CDPR took it back in house and we have this... I don't know what happened, but I have a hard time believing this was the "Saber way" to do the work...
 