No DX12 Software is Suitable for Benchmarking *spawn*

All I know is that it's the only "DX12 game" in which DX12 reduces performance without affecting IQ on both brands...
I will take higher minimal framerate over higher average framerate any day. The DX12 path is worth it no matter who made your GPU.
 
Yeah, the DX12 path in this game definitely looks "strange"... it's the only game that loses performance in DX12, it runs better on NV than on AMD (when equivalent cards are compared), and it only got async compute implemented once it suited NV... but then again it's an Nvidia GameWorks-branded PC port, so..
I think we can drop the "GameWorks is to blame" angle :)
Case in point: PureHair also impacts Nvidia, who get minimal benefit from it when it is enabled (this has already been discussed in other threads, and I provided detailed info showing it is not the same as what is visually shown on AMD). It has also been shown that, when done correctly, neither technology impacts the other vendor if disabled, and that is the key point: the technology should be able to be turned off without impacting performance.

Part of the performance benefit for Nvidia in RoTR may come down to AO/volumetric lighting/shadows/etc. post-processing effects being better optimised for their architecture, just as we see Nvidia cards tank in Quantum Break.

Anyway, can I suggest that any GameWorks/PureHair debates go in a different thread; they get blamed for too much, and that takes focus away from the actual underlying cause of the performance differences when they are disabled/turned off.
Cheers
 
Maybe they just figured other features had a higher priority.

I find it very hard to believe that async hasn't been used in the XBone version since November, or that it hasn't been in the PS4 port since the beginning.
How that AMD-friendly performance feature didn't make it into the original DX12 patch, yet appears now that the new Nvidia cards seem to support it, is a real mystery. Especially given all the Nvidia branding that came with the PC port.

Regardless, here's a Fury X vs. 980 Ti comparison with the new patch:

http://www.overclock3d.net/reviews/...e_tomb_raider_directx_12_performance_update/5



Looks like RoTR has just joined the list of titles where Maxwell loses and GCN gains performance when transitioning to DX12.
 
I find it very hard to believe that async hasn't been used in the XBone version since November, or that it hasn't been in the PS4 port since the beginning.
How that AMD-friendly performance feature didn't make it into the original DX12 patch, yet appears now that the new Nvidia cards seem to support it, is a real mystery. Especially given all the Nvidia branding that came with the PC port.

Regardless, here's a Fury X vs. 980 Ti comparison with the new patch:

http://www.overclock3d.net/reviews/...e_tomb_raider_directx_12_performance_update/5

Looks like RoTR has just joined the list of titles where Maxwell loses and GCN gains performance when transitioning to DX12.
I tried to find it in the review but found no information.
Any idea how they captured the DX12 performance?
They mention not using the preset benchmark, but I'm wondering if you know more about how they do this for DX12 games:
This performance data is taken from actual gameplay, from the same section of the game as we have used in our previous DirectX 12 performance overview. Our original DirectX 12 data predates that Rise of the Tomb Raider built in benchmarking tool.
They did use PresentMon for at least one game historically (Quantum Break, partly because it is UWP) but felt it was cumbersome to use, so I have no idea if they stuck with it for DX12 in general; I cannot find any recent mention in their review test setups, though maybe I'm missing it *shrug*.
Thanks
 
They mention not using the preset benchmark, but I'm wondering if you know more about how they do this for DX12 games:
Nope. I don't usually follow that site; I only got there through comments on news elsewhere, though I think it went through wccftech in the meantime.
Maybe they used FRAPS? Their DX11 results don't differ a lot from other results taken with the internal benchmark?

Hopefully, the GTX1060 and RX470 reviews will use the new patch and we'll be able to see results based on the internal benchmark.

Great analysis that shows how DX12 can be a disappointment, especially, again, regarding the internal benchmark compared to the actual real-world game.
Why is that a DX12-specific problem and not something generally related to internal benchmarks?
 
I have just read the performance review of Total War: Warhammer at pcgameshardware.
Great analysis that shows how DX12 can be a disappointment, especially, again, regarding the internal benchmark compared to the actual real-world game.
http://www.pcgameshardware.de/Total...Specials/Direct-X-12-Benchmarks-Test-1200551/
I used the Chrome translator, so the explanation isn't perfect, but it still makes sense.

Cheers
You're luckier than me; I couldn't make sense of the translation, even having insight into previous versions of the engine... (Napoleon/Shogun 2/Rome 2)
 
Probably a browser thing, but the Google-translated page does not give the drop-down graph options, so you still need to have the original page open in another tab to manipulate the graphs.
Other than that the translation is okay.
 
Nope. I don't usually follow that site; I only got there through comments on news elsewhere, though I think it went through wccftech in the meantime.
Maybe they used FRAPS? Their DX11 results don't differ a lot from other results taken with the internal benchmark?

Hopefully, the GTX1060 and RX470 reviews will use the new patch and we'll be able to see results based on the internal benchmark.


Why is that a DX12-specific problem and not something generally related to internal benchmarks?
FRAPS does not work correctly with DX12.
TBH I feel DX12, internal benchmarks, etc. are just compounding the measurement problem: with data points for different APIs such as DX11 and DX12, and indirectly for different hardware as well, the number of sites I feel confident using to evaluate benchmarks is shrinking.
The internal benchmarks are becoming more of a "cheat" IMO, given how they pan out against separate frame-rate analysis.
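For anyone wondering what that separate frame-rate analysis actually involves: once you have a per-frame log (e.g. the per-present millisecond deltas PresentMon records), the statistics are easy to compute yourself. Below is a minimal C++ sketch of my own, not any site's actual tooling; it assumes a plain text file with one frame time in milliseconds per line, which you would first extract from the PresentMon CSV.

    // frame_stats.cpp - average FPS and "1% low" FPS from a frame-time log.
    // Input format (an assumption for this sketch): one frame time in ms per line.
    #include <algorithm>
    #include <fstream>
    #include <functional>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: frame_stats <frametimes_ms.txt>\n"; return 1; }
        std::ifstream in(argv[1]);
        std::vector<double> ms;
        for (double v; in >> v;) ms.push_back(v);
        if (ms.empty()) return 1;

        // Average FPS = frames rendered / seconds elapsed.
        const double totalMs = std::accumulate(ms.begin(), ms.end(), 0.0);
        const double avgFps  = ms.size() / (totalMs / 1000.0);

        // "1% low": average FPS over the slowest 1% of frames (one common definition).
        std::vector<double> sorted = ms;
        std::sort(sorted.begin(), sorted.end(), std::greater<double>()); // slowest first
        const size_t n = std::max<size_t>(1, sorted.size() / 100);
        const double worstMs = std::accumulate(sorted.begin(), sorted.begin() + n, 0.0);
        const double lowFps  = n / (worstMs / 1000.0);

        std::cout << "frames: " << ms.size() << "  avg fps: " << avgFps
                  << "  1% low fps: " << lowFps << "\n";
        return 0;
    }

Whether a site reports something like the 1% lows from actual gameplay, or just quotes the built-in benchmark's average, is exactly the difference that makes some of these reviews hard to compare.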

DX12 can be a disappointment because most developers are not implementing it well, nor optimising it for each architecture, and I can see this causing more extreme competition between AMD and Nvidia in game sponsorship.
In a rush, but that Warhammer review sums up my feeling.
Cheers
 
DX12 can be a disappointment because most developers are not implementing it well, nor optimising it for each architecture, and I can see this causing more extreme competition between AMD and Nvidia in game sponsorship.

All new APIs have teething problems at first. And TBH, except for Tomb Raider's first patch, all DX12 titles have offered performance increases for one or both vendors.
nVidia's lesser performance in some titles seems to stem from the fact that neither Kepler nor Maxwell was made with DX12 in mind; they are DX11-oriented GPUs with "courtesy" DX12 forward compatibility. Pascal may change that.
 
What's "not made with DX12 in mind" supposed to even mean? There are two performance enhancing features in DX12:
1. Significantly reduced D3D API bottlenecks
2. "Async compute"
Unless D3D11 is somewhat "misused", 1. won't be a problem, and I'm pretty sure NV already runs concurrent graphics and compute even in D3D11. Ashes of the Singularity is the only case where 1. comes into play, and if the API is pushed hard enough then even Kepler will show an advantage in D3D12.
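Just to pin down what 2. means at the API level: "async compute" in D3D12 is nothing more than submitting work to a second, compute-only queue and synchronising the queues with fences; whether the GPU actually overlaps that work with graphics is entirely up to the hardware and driver. A minimal sketch, assuming a valid ID3D12Device, omitting all error handling, and linking against d3d12.lib:

    // Two queues: a direct (graphics) queue and a compute queue, plus a fence
    // for cross-queue synchronisation. Error handling omitted for brevity.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& graphicsQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue,
                      ComPtr<ID3D12Fence>& fence)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    }

    // Typical hand-off: the graphics queue signals once the data the compute work
    // depends on is ready, and the compute queue waits on that value (GPU-side,
    // the CPU is not blocked) before executing its command lists.
    void SubmitAsyncCompute(ID3D12CommandQueue* graphicsQueue,
                            ID3D12CommandQueue* computeQueue,
                            ID3D12Fence* fence, UINT64 fenceValue,
                            ID3D12CommandList* const* computeLists, UINT count)
    {
        graphicsQueue->Signal(fence, fenceValue);
        computeQueue->Wait(fence, fenceValue);
        computeQueue->ExecuteCommandLists(count, computeLists);
    }

Any DX12 GPU will accept this; the architectural differences only show up in how much of the compute work actually runs concurrently with graphics.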
 
It's not so easy. Resource management, pipeline configuration, threading (direct command lists, bundles), resource transitioning (barriers, where proper usage will improve GPU concurrency), over-committed memory situations, etc.: everything must be re-implemented from a different point of view. The only code you do not need to change is HLSL (except if you want to take advantage of the dynamic indexing of resources in SM 5.1, useful for example in GPU-bound situations). Some techniques, like bindless resources, need a proper new approach. A naive conversion of DX11 code will probably always perform worse.
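To make the resource-transitioning point concrete: in D3D11 the driver tracked these hazards for you, whereas in D3D12 the application has to record every state transition itself, and placing or batching barriers badly is an easy way to serialise work that could otherwise overlap. A minimal sketch with the raw API (no d3dx12 helper headers); the command list and texture are assumed to already exist:

    // A texture that was just rendered to is about to be sampled in a pixel shader.
    // In D3D11 the driver inferred this hazard; in D3D12 the app records the barrier.
    #include <d3d12.h>

    void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                    ID3D12Resource* texture)
    {
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
        barrier.Transition.pResource   = texture;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

        // Batching several transitions into a single ResourceBarrier call is
        // generally cheaper than issuing them one at a time.
        cmdList->ResourceBarrier(1, &barrier);
    }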
 
What's "not made with DX12 in mind" supposed to even mean? There are two performance enhancing features in DX12:
1. Significantly reduced D3D API bottlenecks

This isn't a single feature.
 
This isn't a single feature.
It's not. But neither is it really a GPU feature (setting aside hardware that can't run DX12 at all because of this).
As @Alessio1989 basically said above: it's easy to get it wrong with D3D12, and getting it right does not mean massive performance increases unless you're really pushing D3D11 above and beyond (that is, spending a significant amount of CPU time in D3D). So if you're not pushing it, matching D3D11 performance should basically be considered optimal.
 
Preliminary benchmarks of Doom's Vulkan mode are reminiscent of DX12.

For RX480,

As you can see there is a very healthy performance increase, in fact in Ultra HD running towards 20% just from switching OpenGL to Vulkan mode.

and,

For Nvidia that is not a similar result. I ran the test with the very latest 368.69 WHQL driver on the GeForce GTX 1070, Vulkan does not seem to kick in. Overall seen from the previous driver OpenGL 4.5 perf has increased overall a tiny bit, however with Vulkan activate in WQHD and UHD there even is a tiny bit of negative scaling.

http://www.guru3d.com/news_story/new_patch_brings_vulkan_support_to_doom.html
 
Which is really funny since the first Vulkan demonstration was on a 1080 and async isn't even enabled for Nvidia yet.
 