No DX12 Software is Suitable for Benchmarking *spawn*

It's nice to see, but I wonder why it wasn't here from the start, since the first one had it on PC...
The 2016 Hitman's DX12 renderer wasn't really much faster than DX11, if at all. Clearly they wanted to redo the whole DX12 renderer and it wasn't ready for launch.
 

Ah yes indeed. Still "sad" for people who have already finished the game.
 
It's a pity Kepler DX12 isn't supported. I tried it anyway and it starts, but the game freezes at a black screen after loading the mission.

DX12 already worked wonders in Hitman 1 for this old Phenom II X4.


Actually, Hitman 2 DX12 does work on Kepler now. The latest Nvidia driver mentioned a fix for Hitman 2 crashing, so I tried the game again and now it works in DX12.
 
What would we be benchmarking against? Different Nvidia models?
It's about as useful as the PCIe 4.0 test.

But didn't Intel already show this test in their Ice Lake Gen11 GPU presentation?
 

I know where you're coming from, but these tests do show real improvements which, once game engines start tapping into them, will give us more eye candy. In the case of PCIe 4 we have been here before with PCI, AGP x1 x2 x4 x8, PCIe 1.0, 2.0 ...
If we were to go back now to AGP x4 with our modern game engines, I think you would spot the difference ;)

For now it is obviously a feature that gives something to professionals working with huge data sets, while for gamers it's an unimportant logo on the box. In the next 5 years it will benefit games too.
I personally won't upgrade my Threadripper motherboards because of PCIe 4.0, as I have enough lanes not to be bothered with it. But when I do decide to upgrade, it will be nice to be able to verify that everything works as intended, and tests like this are one way of doing it.

But ... PCIe 5 might arrive before I do that!
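
For a rough sense of scale, here is a back-of-the-envelope comparison of peak x16 (or whole-bus) bandwidth per generation. A sketch only: the numbers are raw link rates times encoding efficiency, and real-world throughput is somewhat lower.

// Peak theoretical bandwidth per bus generation (illustrative arithmetic only).
#include <cstdio>

int main()
{
    struct Link { const char* name; double perLaneGBps; int lanes; };
    const Link links[] = {
        // AGP 4x: one 32-bit bus at 266 MT/s -> ~1.06 GB/s total
        { "AGP 4x      ", 0.266 * 4.0,                   1  },
        // PCIe: GT/s per lane * encoding efficiency / 8 bits per byte
        { "PCIe 1.0 x16", 2.5  * (8.0 / 10.0)    / 8.0,  16 },
        { "PCIe 3.0 x16", 8.0  * (128.0 / 130.0) / 8.0,  16 },
        { "PCIe 4.0 x16", 16.0 * (128.0 / 130.0) / 8.0,  16 },
    };
    for (const Link& l : links)
        std::printf("%s  ~%5.2f GB/s\n", l.name, l.perLaneGBps * l.lanes);
    return 0;
}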
 



The thing is, this test doesn't seem to be made to simulate realistic scenarios. It's made to show the tech's potential in an ideal scenario. We already have real in-game implementations of variable rate shading, and they don't show nearly as much of a performance uplift as this test implies:
https://techreport.com/review/34269...ading-with-wolfenstein-ii-the-new-colossus/3/

Also:

I think in theory it only benefits high resolutions, because pixel shading cost supposedly scales linearly with the number of pixels.

The tech is actually more usable for VR, especially when paired with eye tracking. The general idea is to use higher detail in the zone where the eye is looking and significantly lower detail in the surrounding area. I doubt we'll have eye tracking on monitors, so its use there is rather limited.
Nvidia even puts the feature under their VRWorks libraries.
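
For reference, this is roughly what per-draw VRS looks like on the D3D12 side (just a sketch: the feature check and combiner setup are simplified, and where you'd actually call this depends on the renderer). Tier 2 hardware can additionally take a screen-space shading-rate image, which is what eye-tracked foveated rendering would feed.

#include <d3d12.h>

void DrawWithCoarseShading(ID3D12Device* device, ID3D12GraphicsCommandList5* cmdList)
{
    // Check that the GPU/driver expose variable rate shading at all.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &options, sizeof(options));
    if (options.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
        return;

    // Shade once per 2x2 pixel block for the following draws -- up to 4x fewer pixel
    // shader invocations, which is where the "ideal case" gains in the test come from.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,   // per-primitive rate
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH }; // screen-space image (tier 2)
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, combiners);

    // ... issue the draw calls that can tolerate coarser shading ...

    // Restore full-rate shading for detail-critical passes (UI, text, etc.).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}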
 
It's about as useful as the PCIe 4.0 test.

But didn't Intel already show this test in their Ice Lake Gen11 GPU presentation?

Actually, this test allowed me to see that my X99 + Vega combo has terrible PCIe performance, even though it's running at PCIe 3.0 x16 :neutral:
 
Stardock (the developer of the first ever DX12 game, Ashes of the Singularity) recently posted a blog post explaining why they abandoned DX12/Vulkan in favor of DX11 for their newest game, Star Control: Origins.

It basically boils down to the extra effort it takes to develop the DX12/Vulkan path: longer loading times and VRAM crashes are common in DX12 if you don't manage everything by hand. The performance uplift is also highly dependent on the type of game, and they only achieved a 20% uplift on the DX12 path of their latest games, which they say is not worth all the hassle of extra QA and bug testing.

In the end, they advise developers to pick DX12/Vulkan based on features, not performance: ray tracing, AI, utilizing 8 or more CPU cores, compute-shader-based physics, etc.

https://www.gamasutra.com/blogs/Bra...k1u3fwoIL_QUdwlVhagyopZhQre_YywJ1qxtRiH8Rn0Zo

Some observers have noted that Stardock's most recent two releases, Star Control: Origins and Siege of Centauri, were DirectX 11 only.

The biggest difference between the two new graphics stacks and DirectX 11 is that both Vulkan and DirectX 12 support multiple threads sending commands to the GPU simultaneously. GPU multitasking. Hooray. i.e. ID3D12CommandQueue::ExecuteCommandLists (send a bunch of commands and they get handled asynchronously). In DirectX 11, calls to the GPU are handled synchronously. You could end up with a lot of waiting after calling Present().
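
To make that concrete, here is a minimal D3D12 sketch (illustrative only, not code from the article): each worker thread records into its own command list with its own allocator, and the main thread then submits everything in a single ExecuteCommandLists call. Error handling, fencing and the actual draw recording are omitted, and the function and variable names are made up.

#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        // One allocator per thread: allocators are not thread-safe, lists record independently.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));

        // Each worker records its share of draw/dispatch calls in parallel.
        workers.emplace_back([cl = lists[i].Get()]
        {
            // ... record commands here ...
            cl->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission; the GPU consumes the command lists asynchronously.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}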

Not all is sunshine and lollipops in DX12 and Vulkan. You are given the power, but you also get handed a lot of responsibility. Both stacks give you a lot of rope to hang yourself with. DX11 is pretty idiot-proof. DX12 and Vulkan, not so much.
Some examples of power and responsibility (see the sketch after this list):
- You manage memory yourself
- You manage various GPU resources yourself
- All the dumb things people do with multiple threads today now apply to the GPU
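
A minimal sketch of the memory point (illustrative; the heap size, flags and the missing sub-allocator bookkeeping are the application's choice and are made up here): in D3D12 the app creates a heap itself and places resources into it at offsets it has to track on its own.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct PlacedBuffer
{
    ComPtr<ID3D12Heap> heap;       // the app owns the heap and must keep it alive
    ComPtr<ID3D12Resource> buffer; // ...as long as anything placed in it is in use
};

PlacedBuffer CreatePlacedBuffer(ID3D12Device* device, UINT64 bufferSize)
{
    PlacedBuffer out;

    // 1. The app decides how big the heap is and where it lives (here: VRAM).
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = 64ull * 1024 * 1024;          // 64 MB, an arbitrary choice
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;  // GPU-local memory
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&out.heap));

    // 2. The app sub-allocates: offsets, alignment and lifetime are its problem,
    //    not the runtime's (in D3D11 the runtime handled residency behind the scenes).
    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = bufferSize;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.Format = DXGI_FORMAT_UNKNOWN;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    device->CreatePlacedResource(out.heap.Get(), /*HeapOffset*/ 0, &bufDesc,
                                 D3D12_RESOURCE_STATE_COMMON, nullptr,
                                 IID_PPV_ARGS(&out.buffer));
    return out;
}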

Here's one: Long load times in DirectX 12 games by default. Is that DX12's fault? No. It's just that many developers will do shader compiling at run-time -- all of them.
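
A hypothetical illustration of that load-time difference (file paths and entry points are invented; the usual production fix is offline compilation plus pipeline caches): the slow path compiles HLSL during loading, the fast path just reads bytecode that was compiled at build time.

#include <d3d12.h>
#include <d3dcompiler.h>   // link d3dcompiler.lib for the run-time path
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Slow path: full HLSL -> bytecode compilation at load time, once per shader permutation.
ComPtr<ID3DBlob> CompileAtRuntime(const char* hlslSource, size_t sourceLen)
{
    ComPtr<ID3DBlob> bytecode, errors;
    D3DCompile(hlslSource, sourceLen, nullptr, nullptr, nullptr,
               "PSMain", "ps_5_1", 0, 0, &bytecode, &errors);
    return bytecode;
}

// Fast path: the bytecode was compiled offline (e.g. with fxc/dxc in the build),
// so loading is just reading a file.
ComPtr<ID3DBlob> LoadPrecompiled(const wchar_t* csoPath)  // e.g. L"shaders/ps_main.cso" (made up)
{
    ComPtr<ID3DBlob> bytecode;
    D3DReadFileToBlob(csoPath, &bytecode);
    return bytecode;
}

// Either way the bytecode ends up in D3D12_GRAPHICS_PIPELINE_STATE_DESC::PS before
// CreateGraphicsPipelineState(); the difference is whether the HLSL-to-bytecode step
// happens in your loading screen or at build time.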

In DirectX 11, if I overallocate memory for someone's 2GB GPU, it just spills the rest into main memory and you get a slow-down. On Vulkan and DX12, if you're not careful, your app crashes.
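
A sketch of the kind of bookkeeping that pushes onto the app (the threshold check and back-off policy are illustrative, not an API requirement): the application queries the DXGI memory budget itself and decides what to do before it overcommits.

#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

bool CanAffordAllocation(IDXGIAdapter3* adapter, UINT64 requestedBytes)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    // If the new allocation would push us over the OS-provided budget, back off:
    // drop mip levels, evict streaming resources, etc. Ignoring this is what turns
    // "runs a bit slower" on D3D11 into a crash or device removal on D3D12/Vulkan.
    return info.CurrentUsage + requestedBytes <= info.Budget;
}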

The power that you get with DirectX 12 and Vulkan translates into an almost effortless 15% performance gain over DirectX 11. When we put our GFX engineers onto it, we can increase that margin to anywhere from 20% to 120%, depending on the game.

Stardock has DirectX 12 and Vulkan versions of Star Control: Origins. The performance gain is about 20% over DirectX 11. The gain is relatively low because, well, it's Star Control. It's not a graphics-intensive game (except for certain particle effects on planets, which don't benefit much from the new stacks). So we have to weigh the cost of doubling or tripling our QA compatibility budget against a fairly nominal performance gain. And even now, we run into driver bugs on DirectX 12 and Vulkan that result in crashes or other problems that we just don't have the budget to investigate.

Other trends coming to games will essentially require Vulkan and DirectX 12 to be viable. Ray Tracing, OSL, Decoupled Shading, Compute-Shader based physics and AI only become practical on Vulkan and DirectX 12. And they're coming.
 
Looks like the bad DX12 performance story refuses to end, even in 2020 and even with Turing and Navi. After the horrendous DX12 path in Borderlands 3, here comes the DX12 path in Monster Hunter World to extend the tale.

For Turing, DX12 does nothing but hurt performance, by as much as 12%. For Navi it doesn't improve fps at all, but it does improve frame times significantly. Even that hardly matters, because frame times were atrociously bad on Navi under DX11; all DX12 does is bring Navi's frame times up to a level close to Turing's.

Overall DX12 doesn't lead to any fps gains on any GPU architecture.

https://www.computerbase.de/2020-01.../#diagramm-directx-11-vs-directx-12-1920-1080

I really, really want DX12 to be successful in the PC space, considering DXR relies heavily on it. Some of the DXR games do have relatively competent DX12 paths (Metro, Control), and Forza and Gears still do as well. But for many other games, the developers still can't grasp the know-how to make DX12 successful, which is sad.
 