No DX12 Software is Suitable for Benchmarking *spawn*

Really?
This is what Guru3d used according to the review:
I get the confusion; those were the old settings used for DX11 back when the game launched. Yesterday the site added an extra page for DX12 with its own settings and drivers. They used the latest drivers from both vendors:

AMD Radeon (Crimson 16.9.1 / High Quality):
NVIDIA GeForce (GeForce 372.70 / High Quality):

http://www.guru3d.com/articles_page..._graphics_performance_benchmark_review,8.html
 
And again, many have pointed out why internal benchmarks are not ideal and can be designed to skew results - and you want to go by AMD on this :)
Even specific maps can be skewed towards specific hardware, so they aren't a realistic alternative either.
 
Computerbase did a very good update to the article, testing with an AMD FX-8370 and a Core i7-6950X. The results were the same: all GPUs received big hits, and frame times were more erratic, with spikes in DX12 mode. They also tested the built-in benchmark, which at max settings gave them slightly better fps in DX12 (confirming Guru3D's results); that was the only place where DX12 was useful.
https://www.computerbase.de/2016-09/deus-ex-mankind-divided-dx12-benchmark/3/
 

Yeah, another example of why the internal benchmark cannot be taken at face value, and why its context, methodology, and scope should be very carefully explained by the developer. Can you remember where you saw AMD suggest that the internal benchmark should be used?
If the source can be validated, publications IMO should start looking into doing articles on said 'internal benchmark' and the risk of it being used for marketing/skewing data.

Many forums and other sites are using the Guru3d data as an example of how AMD is doing well with DX12 and Nvidia badly.
OK, relatively speaking Nvidia is still doing badly with DX12 in this game :)
But AMD's performance also shows something is very wrong, although ironically it seems the 1060 is faster than the 480 in DX12 and yet slower in DX11 on the 10-core 6950X :)

And the news situation will only become worse as more sites/publications just use the internal benchmark to test the API.
Thanks
 
Yeah, it's still quite unpolished. Radeons even exhibit some artifacts and missing effects in DX12:
If you look closely, the DX11 version has artifacts as well.

E.g. shadows are occasionally missing for entire people throughout the sequence from 1:57 onward. It's not the same people when comparing the DX11 and DX12 versions, but there are people without shadows in both.

Apart from that, there's different tone mapping in DX12, with what appears to be a slight jitter in the HDR calibration loop. Jitter aside, it makes better use of the available dynamic range, though.
 
Maybe an effect of the game's Shadow Quality and Contact Hardening Shadows options, whether intentional or bugged?
Cheers
 
Something a bit strange is the RAM usage, but I'm not sure if this is due to the game or the recordings. At the start there is 2 GB more RAM usage (system memory, not VRAM), then more than a 3 GB difference. Unless he had a lot of other software running and this is just due to DX12, that's a big, big difference.
 
ComputerBase updated their results to include an i7-6950X and an FX-8370.
[image: ComputerBase DX11 vs. DX12 benchmark chart]
 
A 1080 and a 1060 equally fast under DX12? That's a new dimension of atrocity, whether it's the fault of the game or the driver.
 
The game is work-submission limited, either because of synchronization/lock-step, bad setup, unremoved dependencies, something in that direction. The GPU is starving really badly there. Nothing to do with the driver per se.
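
To make the lock-step idea concrete, here's a minimal, hypothetical D3D12 sketch (the function names and structure are illustrative, not taken from the game): signalling and waiting on a fence after every single submission drains the GPU's queue, while batching submissions keeps it fed.

```cpp
#include <d3d12.h>
#include <windows.h>

// Lock-step pattern: the CPU blocks after every command list, so the GPU's
// queue drains to empty between submissions and the GPU starves.
void SubmitLockStep(ID3D12CommandQueue* queue, ID3D12CommandList* lists[],
                    int count, ID3D12Fence* fence, HANDLE fenceEvent,
                    UINT64& fenceValue)
{
    for (int i = 0; i < count; ++i)
    {
        queue->ExecuteCommandLists(1, &lists[i]);
        queue->Signal(fence, ++fenceValue);
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE); // CPU stalls, GPU then idles
    }
}

// Batched pattern: one submission and one fence signal; the CPU only waits
// much later, when it actually needs to reuse the resources.
void SubmitBatched(ID3D12CommandQueue* queue, ID3D12CommandList* lists[],
                   int count, ID3D12Fence* fence, UINT64& fenceValue)
{
    queue->ExecuteCommandLists(count, lists);
    queue->Signal(fence, ++fenceValue);
}
```

In a GPU profiler the first pattern would show up as long idle gaps between short bursts of work, which is consistent with the "starving" description.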
 
A 1080 and a 1060 equally fast under DX12? That's a new dimension of atrocity, whether it's the fault of the game or the driver.

At 720p...
Did you think DX12 would miraculously eliminate CPU bottlenecks in all cases? I thought you didn't believe everything you saw in slides...
 
A 1080 and a 1060 equally fast under DX12? That's a new dimension of atrocity, whether it's the fault of the game or the driver.
I find it more amusing to see the 480 below the 1060 with DX12 on both CPUs, while the 480 is better in DX11 with the 6950X at 1080p and higher :)
The game does seem a bit screwed for now from an optimised-API perspective, with DX12 hurting all cards pretty badly. Separately, it will be interesting to see whether Nvidia can gain some DX11 performance from driver optimisation or later game patches, and whether any other publication will look into how the internal 'benchmark' behaves differently from actual gameplay.
The difference is enough to be frown-worthy (and, if one is cynical, designed for marketing), and we need to see whether the results can be replicated by others using 3rd-party frame analysis.

Cheers
 
At 720p...
Did you think DX12 would miraculously eliminate CPU bottlenecks in all cases? I thought you didn't believe everything you saw in slides...
The CPU bottleneck seems to be far away with DX11 though. Even if DX12 did zero/nada/jack for CPU limitedness, it's interesting that it's so much slower. Without any effects added, if I'm not mistaken.

Oh - and yes, of course in 720p. Which is where the CPU bottleneck would show up most naturally of all tested resolutions. Strange that you find it worth posting such obvious things, only to be able to shove in that side remark about me and slides - especially when I did not even mention any slides here.
 
That's exactly why so many people are skeptical of the benefits of lower-level APIs up to this point: in most cases the addition of DX12 doesn't result in any tangible reduction of the CPU overhead problem, quite the contrary! It seems DX12 has been reduced to a single item, Async Compute, despite the myriad of features that were promised.

Better CPU utilization? Nope, CPU performance appears to be largely unaffected (see the sketch after this list for what was promised).
Better VRAM management? Nope, these APIs increase VRAM consumption for no good reason.
Better visual features? Nope, all the visual-enhancement tiers are left biting the dust.
Better performance? Nope! Only in selected cases, and on select hardware!
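
For context on the CPU-utilization point: the promise rested largely on recording command lists from many threads in parallel, something DX11's immediate context couldn't do. A minimal sketch of that idea, assuming one allocator and command list per worker thread (all names are hypothetical):

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>

// Each worker thread records its own command list; since every thread owns
// its allocator/list pair, no locking is needed during recording.
void RecordInParallel(std::vector<ID3D12GraphicsCommandList*>& lists,
                      std::vector<ID3D12CommandAllocator*>& allocators)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
    {
        workers.emplace_back([&, i] {
            allocators[i]->Reset();
            lists[i]->Reset(allocators[i], nullptr);
            // ... record draw calls for this thread's slice of the scene ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();
    // The closed lists are then submitted together on the command queue.
}
```

Whether a port actually scales this way in practice is exactly what the results above fail to show.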

The only real game to deliver on some of those promises is Ashes; the rest have their own problems.

  • Hitman 2016, widely considered to be a good showcase for DX12 given the large fps lead that AMD GPUs enjoy. But that lead has nothing to do with DX12; it is massive even in DX11 (and always has been, since the days of Hitman: Absolution). DX12 just adds a few percent of performance (typically less than 5%), even in the canned built-in benchmark. NV can even close the gap in some DX12 gameplay tests (1, 2, 3) depending on the area (also in the built-in benchmark), but mostly suffers massive fps drops for no good reason.

  • Rise of the Tomb Raider, the inverse of Hitman's situation: NV enjoys a large lead here, but it is independent of DX12, as it is present in DX11 as well. DX12 adds nothing for GPUs from either vendor; it just induces fps loss on all hardware and breaks VXAO. Also, AMD GPUs can close the gap in DX12 mode in certain non-stressful areas.

  • Total War: Warhammer, bad DX12, barely able to match NV's DX11 performance.

  • Battlefield 1: so far DX12 does nothing but reduce fps on all hardware.

  • Deus Ex: Mankind Divided: DX12 breaks fps consistency and reduces performance.

As for DX12-native games, I think we need further analysis of them (Gears Of War 1, Forza 6 Apex) to determine the behavior of GPUs in them, as results vary. Sometimes a game will perform better on one vendor's high-end GPUs compared to the competition, but worse on the same vendor's mid-range. Even Ashes exhibits this behavior, running better on the Fury X compared to even the 1070, but running worse on the RX 480 to the point of tying with the 1060. Gears and Forza also received several updates which have gone untested. These games are about to be followed by successors such as Gears Of War 4 and Forza Horizon 3; I hope many publications will get to analyse them in greater depth.

Quantum Break will provide a very good case study when the DX11 version is released and is compared to DX12.
 
Why exactly are we ignoring Doom, which performs better with the low level API? Or are we keeping these comparisons limited to only tacked on DX12 ports that aren't expected to have ideal performance?

I have no idea why you're so against the low level APIs. They've already won and all the engines are moving in that direction. Just because current engines are largely hamstrung by DX11 requirements doesn't mean they are flawed.

The CPU bottleneck seems to be far away with DX11 though. Even if DX12 did zero/nada/jack for CPU limitedness, it's interesting that it's so much slower. Without any effects added, if I'm not mistaken.
They definitely seem to have some pipeline issues. Too many fences or bad dependencies would seem logical.
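
As a hypothetical illustration of what "bad dependencies" can mean at the API level (not taken from the game's actual code): transitioning resources one barrier at a time creates many small sync points, whereas batching them into a single ResourceBarrier call lets the driver coalesce the flushes.

```cpp
#include <d3d12.h>
#include <vector>

// Helper: describe a render-target -> shader-resource transition.
static D3D12_RESOURCE_BARRIER Transition(ID3D12Resource* tex)
{
    D3D12_RESOURCE_BARRIER b = {};
    b.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    b.Transition.pResource   = tex;
    b.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    b.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    b.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    return b;
}

// Overly conservative: one ResourceBarrier call per texture, each one a
// potential pipeline flush.
void TransitionOneByOne(ID3D12GraphicsCommandList* cl,
                        const std::vector<ID3D12Resource*>& textures)
{
    for (ID3D12Resource* tex : textures)
    {
        D3D12_RESOURCE_BARRIER b = Transition(tex);
        cl->ResourceBarrier(1, &b);
    }
}

// Batched: a single call that the driver can coalesce.
void TransitionBatched(ID3D12GraphicsCommandList* cl,
                       const std::vector<ID3D12Resource*>& textures)
{
    std::vector<D3D12_RESOURCE_BARRIER> barriers;
    for (ID3D12Resource* tex : textures)
        barriers.push_back(Transition(tex));
    cl->ResourceBarrier((UINT)barriers.size(), barriers.data());
}
```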
 
I have no idea why you're so against the low level APIs. They've already won and all the engines are moving in that direction. Just because current engines are largely hamstrung by DX11 requirements doesn't mean they are flawed.
Without the collective effort and data tracking in this thread, we wouldn't have reached the awareness that most of these are bad ports. Some of us would still be spewing the same mindless PR shenanigans about wonderful, flawless DX12 execution.

The API foundations are solid, but without solid execution it is not going to gain any significant ground anytime soon. And what we have is a far cry from being called a win; rather, it's a limping walk backwards. A lower-level API also shifts onto developers the burden of supporting current, old, and future architectures, and developers have so far proven incapable of being trusted with this important task (due to their own limitations and the restrictions of the ecosystems around them), threatening the very existence of the core PC experience, which hinges on the concept of forward/backward compatibility.

Why exactly are we ignoring Doom, which performs better with the low level API? Or are we keeping these comparisons limited to only tacked on DX12 ports that aren't expected to have ideal performance?
And Doom is no exception to the rule: it's a game with a tacked-on lower-level API, yet it succeeded in achieving some of the promised features (not all, and it still had its own problems), and thus has traveled further than most of the poor DX12 titles. So the argument that games with tacked-on APIs are inherently bad and not representative of the true form is quite baseless; there you have an example that is just that, achieving better results.
 