No DX12 Software is Suitable for Benchmarking *spawn*

Computerbase posted a benchmark review. Gains at high fps / low resolution for both AMD and NV.

At high resolutions, where performance-critical code on the GPU matters more, DX11 takes back the lead for NV.
 
Thanks for the bench. What about VRAM usage? I notice that in Deus Ex MD, DX12 uses more VRAM (which is "bad" because the game already eats a lot of VRAM in DX11 ...)
DX12 increases VRAM usage in all the DX12 games I have: Tomb Raider, Deus Ex MD, The Division, etc. The increase ranges from 300 MB to a full 1 GB.

In the case of The Division, I registered a 500 MB increase in the same area under DX12.
Computerbase seems to be oblivious to the fact that visual settings are bugged under DX12.
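For reference, a minimal sketch of the DXGI query an application can call to log its own VRAM usage, which is the counter that would show this kind of DX11 vs DX12 difference from inside the game. Adapter index 0 is assumed and error handling is omitted.

Code:
#include <dxgi1_4.h>
#include <cstdio>

// Logs how much dedicated VRAM the calling process currently uses, plus the
// budget the OS grants it. Assumes the game runs on adapter 0; no error handling.
void LogVramUsage()
{
    IDXGIFactory4* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    IDXGIAdapter1* adapter = nullptr;
    factory->EnumAdapters1(0, &adapter);

    IDXGIAdapter3* adapter3 = nullptr;
    adapter->QueryInterface(IID_PPV_ARGS(&adapter3));

    // Local segment group = dedicated VRAM; CurrentUsage is this process' share.
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    std::printf("VRAM in use: %llu MB (budget %llu MB)\n",
                info.CurrentUsage / (1024ull * 1024ull),
                info.Budget / (1024ull * 1024ull));

    adapter3->Release();
    adapter->Release();
    factory->Release();
}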
 
Just tried The Division at the Ultra preset (a step down from maxed settings), which sacrifices Shadows (in the form of PCSS), Reflections, Object Detail, Post AA, and Ambient Occlusion (it runs at High instead of Ultra or HBAO+). DX12 was about 2 fps faster than DX11 in the internal benchmark and in an actual gameplay walkthrough.

It seems we have another Rise of the Tomb Raider situation on our hands, where DX12 offers worse image-quality options compared to DX11 (no VXAO in DX12). For now The Division's DX12 path has no HBAO+, no PCSS, and for NV users no HFTS! We'll see if that remains the case down the line, but I am not holding my breath.


EDIT: GTX 1080 DX11 vs DX12 at Ultra: DX12 is able to provide a 5 fps boost during gameplay!
 
Why don’t developers love DX12?

"But interestingly that doesn’t mean that it is needed by everyone. The extra level of control brings with it a certain amount of extra complexity – and that means that some developers might be reluctant to move to DirectX 12 – indeed, a game which is not really limited by GPU horsepower, and which isn’t bottlenecked by just one CPU thread isn’t usually going to gain much from moving to DirectX 12.

"In those cases DirectX 11 or DirectX 9 likely represents a perfectly acceptable way to design titles. But titles which are highly graphically ambitious almost invariably benefit from DirectX 12.“

In short, game developers have decided that DX12 isn't worth the extra time and effort on all but the most demanding titles at this point. This is despite the general consensus that DX12 is better than DX11 on many titles, especially for those with video cards that truly support DX12.

http://www.techradar.com/news/the-forgotten-api-just-what-is-going-on-with-dx12
 
I think the big problem is more that with a low-level API, it becomes very easy to shoot yourself in the foot. This is particularly true when you're dealing with manual memory management. This means that unless you have a lot of resources to test on many different platforms and fix problems (usually nasty, hard-to-diagnose ones involving drivers), you're probably going to end up actually losing performance compared to DX11, as well as ending up with severe problems on a couple of platforms. And that's not even mentioning API complexity issues.
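To make that concrete: under DX11, Map with WRITE_DISCARD lets the driver rename buffers and track their lifetimes, while under DX12 the application has to prove the GPU is finished with memory before reusing it. Below is a minimal sketch of the fence bookkeeping involved; all names (kFrameCount, uploadBuffers, etc.) are illustrative and error handling is omitted.

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstring>

// Hypothetical per-frame ring of upload buffers; names are illustrative only.
constexpr UINT kFrameCount = 2;

void UpdatePerFrameData(ID3D12CommandQueue* queue,
                        ID3D12Fence* fence, HANDLE fenceEvent,
                        ID3D12Resource* uploadBuffers[kFrameCount],
                        UINT64 frameFenceValues[kFrameCount],
                        UINT64& nextFenceValue, UINT frameIndex,
                        const void* cpuData, size_t dataSize)
{
    // The app must wait until the GPU has finished the frame that last used
    // this buffer slot; skipping this wait is the classic low-level bug that
    // only shows up as corruption or device removal on some driver/GPU combos.
    if (fence->GetCompletedValue() < frameFenceValues[frameIndex]) {
        fence->SetEventOnCompletion(frameFenceValues[frameIndex], fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }

    // Now it is safe to overwrite the upload buffer for this frame slot.
    void* mapped = nullptr;
    uploadBuffers[frameIndex]->Map(0, nullptr, &mapped);
    std::memcpy(mapped, cpuData, dataSize);
    uploadBuffers[frameIndex]->Unmap(0, nullptr);

    // ... record and execute command lists that read uploadBuffers[frameIndex] ...

    // Signal after submission so a later reuse of this slot knows what to wait for.
    queue->Signal(fence, ++nextFenceValue);
    frameFenceValues[frameIndex] = nextFenceValue;
}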
 
Everybody seems to be missing the obvious answer: you still need to support DX11 on PC. There are too many Windows 7 and Windows 8 customers out there. DX12 doesn't allow you to cut DX11 support. With DX11 + DX12, you have to support and maintain two versions of the game (including two sets of IHV driver bugs). Once DX11 becomes obsolete (= Windows 7 is no longer popular), DX12 or Vulkan will become the preferred API for AAA cross-platform development, since these APIs match the consoles much more closely in resource management and feature set. Having to support DX11 on PC is already causing suboptimal design choices on consoles (similar to how DX9 on consoles slowed down the compute shader transition on PC).

Right now it seems that Vulkan will have the upper hand in the near future, since it supports Windows 7 and 8. Most AAA games have already dropped support for Fermi (GeForce 480) and TeraScale 3 (Radeon 6970), so Vulkan's minimum requirements should no longer be a problem for AAA games. It could be the sole API on PC. I'd guess that the biggest problem is the shaders written in HLSL: big AAA games/engines have a huge amount of HLSL code. But tools to solve that issue are already in pretty good shape. It will be interesting to see how this pans out. Will we have another Vista & DX10 situation, where most developers just wait for the OS problem to solve itself, or will we see more developers jumping on the Vulkan bandwagon?
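On the HLSL point, the tooling in question is cross-compilation from HLSL to SPIR-V. A sketch of what that might look like with Microsoft's open-source DirectX Shader Compiler (dxc) and its SPIR-V back end; the file name, entry point and shader profile here are made-up examples, and error handling is omitted.

Code:
#include <dxcapi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Compiles the same HLSL pixel shader once to DXIL (for D3D12) and once to
// SPIR-V (for Vulkan). "lighting.hlsl" and "PSMain" are illustrative names.
void CompileForBothApis()
{
    ComPtr<IDxcUtils> utils;
    ComPtr<IDxcCompiler3> compiler;
    DxcCreateInstance(CLSID_DxcUtils, IID_PPV_ARGS(&utils));
    DxcCreateInstance(CLSID_DxcCompiler, IID_PPV_ARGS(&compiler));

    ComPtr<IDxcBlobEncoding> source;
    utils->LoadFile(L"lighting.hlsl", nullptr, &source);
    DxcBuffer buffer = { source->GetBufferPointer(), source->GetBufferSize(), DXC_CP_ACP };

    // DXIL for the D3D12 path.
    LPCWSTR dxilArgs[] = { L"-T", L"ps_6_0", L"-E", L"PSMain" };
    ComPtr<IDxcResult> dxil;
    compiler->Compile(&buffer, dxilArgs, _countof(dxilArgs), nullptr, IID_PPV_ARGS(&dxil));

    // SPIR-V for the Vulkan path, from exactly the same source.
    LPCWSTR spirvArgs[] = { L"-T", L"ps_6_0", L"-E", L"PSMain", L"-spirv" };
    ComPtr<IDxcResult> spirv;
    compiler->Compile(&buffer, spirvArgs, _countof(spirvArgs), nullptr, IID_PPV_ARGS(&spirv));
}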
 
I think the big problem is more that with a low-level API, it becomes very easy to shoot yourself in the foot. This is particularly true when you're dealing with manual memory management. This means that unless you have a lot of resources to test on many different platforms and fix problems (usually nasty, hard-to-diagnose ones involving drivers), you're probably going to end up actually losing performance compared to DX11, as well as ending up with severe problems on a couple of platforms. And that's not even mentioning API complexity issues.

In one of the Q&A sessions about Vega's memory configuration, an AMD employee said the same thing:
 
In addition, Huddy does not see DirectX 12 as useful in every case. The possibilities DirectX 12 offers a developer, namely more control over the hardware through low-level access, increase complexity considerably. That alone, according to him, is enough to explain developers' hesitant attitude. Besides, there is little point in dealing with DirectX 12 if it is simply unnecessary from a performance point of view.

Huddy therefore recommends that developers who are not looking to improve performance through DirectX 12 continue working with DirectX 9 or 11. According to Huddy, graphically demanding games always benefit, but this presupposes that the developers have mastered DirectX 12. Recent practice has looked quite different, with DirectX 12 causing more problems than it brought advantages.

http://www.pcgameshardware.de/DirectX-12-Software-255525/News/ist-nicht-immer-sinnvoll-1220210/
 
It seems to me that developers who lack the necessary expertise or incentive to use DX12 will tend to use off-the-shelf graphics engines anyway (CryEngine, Unity3D, Unreal Engine, Frostbite, etc.), which now come with Vulkan and/or DX12 support.


Their LLAPI support still requires expertise to use; these engines aren't just plug-and-play, here-you-go :), because not everyone uses the same shaders. Better to stay away from LLAPIs if the developers aren't comfortable with them.
 
The huge problem is new hardware. The development of an AAA title takes at least as long as it takes for the OEMs to bring out a new generation of hardware, which might bring new capabilities and might require different programming solutions. In DX11 it is mostly up to the OEM's driver team to handle this. In DX12 the game developer is forced to make those changes, which means post-launch support becomes much more of an effort. And if consoles also go to a 3-year refresh cycle, this means you will have to adjust already-released games to the new console hardware. This is a much smaller task with DX11 when you look at the workload of the game developer.

LLAPIs were great when the hardware refresh cycles were slower and when the development of a game took much less time and resources. If you start with GCN 1.1 and Maxwell, those are the first cards that need to work with your DX12 application; then you add anything sold today, anything coming out this year, plus anything coming out in 2019, plus consoles. That is a lot of work. With DX11 it is not so much your problem.
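To illustrate the kind of per-hardware work that shifts to the application: a minimal sketch of the feature queries a D3D12 renderer typically branches on itself, things a DX11 driver would largely hide. Error handling is omitted and the branching shown in comments is illustrative.

Code:
#include <d3d12.h>

// Under D3D12 the application, not the driver, detects and adapts to per-GPU
// capabilities (binding tiers, UMA vs discrete memory, and so on).
void QueryHardwareTraits(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));
    // e.g. options.ResourceBindingTier, options.TiledResourcesTier and
    // options.ConservativeRasterizationTier decide which code path to take.

    D3D12_FEATURE_DATA_ARCHITECTURE arch = {};
    device->CheckFeatureSupport(D3D12_FEATURE_ARCHITECTURE, &arch, sizeof(arch));
    if (arch.UMA) {
        // Integrated GPU: upload and default heaps share the same memory,
        // so a different memory-management strategy is appropriate.
    }
}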
 
And that is why the market has diverged instead of coming together. There is no easy way around it; the work just has to be done.
 
Sniper Elite 4 is perhaps the best DX12 implementation to date: all video cards receive a performance boost from DX12, even Kepler-based GPUs. Fury cards receive the largest boost, but that's because they have very bad DX11 performance to begin with. Also, all GPUs receive a moderate boost from Asynchronous Compute, except GCN1, Kepler and Maxwell GPUs.

http://www.pcgameshardware.de/Sniper-Elite-4-Spiel-56795/Tests/Direct-X-12-Benchmark-1220545/
Certainly bodes well for AMD, both for CPU and GPU. Performance at 6 cores / 6 threads was noticeably higher than 4 cores / 8 threads.
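For context on what the Asynchronous Compute mentioned above means at the API level: a minimal sketch of creating the separate compute queue D3D12 uses for it. The function name is illustrative, error handling is omitted, and the cross-queue fence synchronization is only noted in a comment.

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// A second command queue of type COMPUTE whose work can overlap with the
// graphics (DIRECT) queue on GPUs that support concurrent execution; drivers
// that don't simply serialize it.
ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // graphics work stays on a DIRECT queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
// Cross-queue ordering is the application's job: Signal a fence on one queue
// and Wait on it from the other before consuming the shared resources.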
 
@CarstenS any chance these tests could be re-run, at least for GCN1, with the new Radeon Software 17.2.1 drivers? Apparently those drivers re-enable async compute on GCN1: https://forum.beyond3d.com/posts/1965805/
 
Impressive showing of Fiji cards in Sniper Elite 4.
And once again the multi-GPU performance scaling in DirectX 12 is nothing short of impressive, just as I've been saying for months given my personal experience in ROTR, Deus Ex Mankind Divided and AotS.

AMD is claiming up to 100% mGPU scaling and just look at these results:

[Image: mGPU scaling benchmark results]



The R9 290 can be found for less than 150€ on eBay. Who would have thought a 300€ mGPU combo could ever match a recent 650€ single card at 4K?
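Worth noting for context: DX12 mGPU scaling like this is explicit multi-adapter, where the engine itself enumerates the GPUs and splits the frame rather than relying on driver SLI/Crossfire profiles. A minimal sketch of the enumeration step only, with error handling omitted and the frame-distribution strategy left to the engine.

Code:
#include <dxgi1_4.h>
#include <d3d12.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Creates one D3D12 device per hardware adapter; under explicit multi-adapter
// the engine then decides itself how to split work (e.g. AFR) across them.
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}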
 