DX12 Performance Discussion And Analysis Thread

I'm counting on the multithreaded nature of DX12 to show better performance with more cores, however many you have, so I was planning on up(side?)grading when I'm done with the more demanding DX11 titles I want to play (right now I'm on MGS V).
I can't overclock because it's an LGA2011 Xeon E5, so not even the BCLK straps work the way they do on the Core i7 variants. Regardless, the CPU was really cheap on eBay (180€ IIRC), so I'll probably sell the current 4820K for less money.
I'm also attracted to the high core count because I'd like to use software video encoding when using Steam In-Home Streaming, which gives much better quality than AMD's hardware solution.

I'm not only changing the CPU for the games, though. I'll be doing lots of Solidworks FEM, renders and OpenSIM runs for my PhD, so I'm "joining the useful and the pleasant", as we say in my country :)



But back to DX12: yeah, I'm hoping for better results down the line with the low-clocked 10-core than with the higher-clocked 4-core.
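(For reference, the reason I'm hopeful is that DX12 lets the engine record command lists on several threads in parallel and submit them in one go, so draw call work can actually spread across cores. A rough sketch of that pattern, assuming a device, queue and pipeline state were created elsewhere and ignoring fences/error handling:)

```cpp
// Sketch only: assumes `device`, `queue` and `pso` were created elsewhere,
// and that frame synchronisation (fences) is handled by the caller.
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso, unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        // One allocator per thread: command allocators are not thread-safe.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso,
                                  IID_PPV_ARGS(&lists[i]));

        // Each thread records its own slice of the frame's draw calls.
        workers.emplace_back([&, i]
        {
            // ... SetGraphicsRootSignature / IASetVertexBuffers / DrawInstanced ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Submission itself is a single cheap call on one thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```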

I know what you mean, I use AutoCAD, Inventor and Solidworks too (for my job), so of course having a high core count CPU (or even a 2P multiprocessor system) remains a must, at least when renders and simulations are running on the CPUs. Having the possibility to go back home and work "as at the office" is always pleasant.

(Lol, I remember the early 90's, when we would "play" at work, because some games (like THX) were hard to run properly maxed out on a "home PC" compared to the workstation beast we had for AutoCAD at work. (AutoCAD DOS versions: 8, 10, 12))
 
Too bad that absolutely no one seems to be testing with FX CPUs anymore.
And too bad that DX12 didn't appear 3 years ago, either.


I actually have ordered a 10-core/20-thread Xeon, and while it clocks rather low at 3GHz, it's still set aside because I'd lose performance in DX11 games compared to my current 3.9GHz 4820K.
I definitely have to switch to it when I install AotS.

AMD itself uses Intel i7 5960X CPU for their official Benchmarks :rolleyes:
 
AMD itself uses Intel i7 5960X CPU for their official Benchmarks :rolleyes:

There are two simple reasons for that: first, since most reviewers use this type of CPU, it's easier to get verifiable numbers; and second, since Intel's high end CPUs (so far) outperform the FX in most games, I can understand that they don't want to present numbers held back by it.

That said, it could be interesting to test those CPUs under DX12 and Vulkan (only one game so far, so... well, you see what I mean) for the guys who own those CPUs.
 
This said, it could be interessant to test thoses CPU [...]
*lol* Haha. Getting your languages mixed up? Happens to me sometimes too :)

I just wanted to point out that even AMD knows that their CPUs are crap for gaming (even if the marketing wants to tell you another story). But yeah - it would be interesting to make a side-by-side comparison with Intel's i7 5000/6000 and AMD's FX 8000/9000.
 
There was some fun comparison with FX back in the Mantle days. The main benefit with Mantle was making AMD APUs and CPUs look less terrible.
 
And we all know the results: Mantle does not work, or does not work properly, on modern GCN GPUs. This is one disadvantage of low level APIs: you and I, as customers, are dependent on the goodwill of a (game) developer. Before, we were dependent on the goodwill of hardware vendors. I'm not sure which I like more, but I do know that I like independence the most. :yep2:
 
There was some fun comparison with FX back in the Mantle days. The main benefit with Mantle was making AMD APUs and CPUs look less terrible.
CPU cost reduction of draw calls is the easiest thing to achieve. But Mantle also supported (asynchronous) compute queues. I don't know if anyone used them.

Low level API + async compute certainly helps AMD's high end GPUs (Fury X):
CnOHHv8WcAAhTEr.jpg:large

2560x1440 + max details (= certainly GPU bound). 52% performance gain. Not bad. Vulkan is very close to Mantle (esp when extensions are used).
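(For what it's worth, at the API level "async compute" just means submitting work to a separate compute-capable queue family alongside the graphics queue. A rough sketch of how an engine might pick such a queue in Vulkan, assuming the physical device is already enumerated:)

```cpp
// Sketch: find a dedicated compute queue family (compute-capable but not
// graphics-capable), which is what "async compute" submissions typically use.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

// Returns the index of a compute-only family, or the first compute-capable
// family as a fallback; UINT32_MAX if none exists.
uint32_t FindAsyncComputeQueueFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    uint32_t fallback = UINT32_MAX;
    for (uint32_t i = 0; i < count; ++i)
    {
        const bool compute  = (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
        const bool graphics = (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        if (compute && !graphics)
            return i;              // dedicated compute family, ideal for async compute
        if (compute && fallback == UINT32_MAX)
            fallback = i;          // graphics+compute family, still usable
    }
    return fallback;
}
```

On GCN those compute-only families are served by the ACEs, which is presumably why AMD's high end parts gain so much from it.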
 
Seems they are working to get async compute support on Nvidia GPUs.

Does DOOM support asynchronous compute when running on the Vulkan API?
Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set. Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.
https://community.bethesda.net/thread/54585?start=0&tstart=0
 
CPU cost reduction of draw calls is the easiest thing to achieve. But Mantle also supported (asynchronous) compute queues. I don't know if anyone used them.

Low level API + async compute certainly helps AMD's high end GPUs (Fury X):
CnOHHv8WcAAhTEr.jpg:large

2560x1440 + max details (= certainly GPU bound). 52% performance gain. Not bad. Vulkan is very close to Mantle (esp when extensions are used).

That's an absolutely insane increase. I hope Nvidia can pull off something even half that good for Pascal.
 
CPU cost reduction of draw calls is the easiest thing to achieve. But Mantle also supported (asynchronous) compute queues. I don't know if anyone used them.

Low level API + async compute certainly helps AMD's high end GPUs (Fury X):
CnOHHv8WcAAhTEr.jpg:large

2560x1440 + max details (= certainly GPU bound). 52% performance gain. Not bad. Vulkan is very close to Mantle (esp when extensions are used).

Don't forget Shader Intrinsics as well. A previous id Software tweet mentioned that, in addition to async compute, it was one of the contributors to realizing those large rendering time savings on consoles.

And apparently AMD just confirmed that Doom is the first game on PC that uses it.

I wonder if that's enabled for Nvidia cards or if it is disabled as well?

Regards,
SB
 
We'll see when SM 6.0 arrives. Some of those intrinsics will be officially supported by the next shader model in HLSL (like barycentrics).
Also, I am still waiting for NVIDIA GPUs to allow bypassing the geometry shader for viewport and render-target array index. AFAIK some Maxwell GPUs and beyond should support it, since they expose a similar extension in OpenGL. Dummy geometry shaders must die.
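(In D3D12 there is at least already a cap bit for this, so an engine could skip the dummy geometry shader wherever the driver reports support. A small sketch of the check, device creation assumed:)

```cpp
// Sketch: query whether SV_ViewportArrayIndex / SV_RenderTargetArrayIndex can
// be written from vertex/domain shaders without a pass-through geometry shader.
#include <d3d12.h>

bool SupportsVpRtIndexWithoutGS(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;

    // TRUE means the dummy geometry shader can be dropped entirely.
    return options.VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation != FALSE;
}
```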
 
CPU cost reduction of draw calls is the easiest thing to achieve. But Mantle also supported (asynchronous) compute queues. I don't know if anyone used them.

Low level API + async compute certainly helps AMD's high end GPUs (Fury X):
CnOHHv8WcAAhTEr.jpg:large

2560x1440 + max details (= certainly GPU bound). 52% performance gain. Not bad. Vulkan is very close to Mantle (esp when extensions are used).

Interesting to see the video of Doom on an AMD 480 comparing OpenGL / Vulkan / Vulkan + async compute - it has all 3 side by side from around 50 seconds onwards.
It makes it pretty clear where AMD loses out with OpenGL (in the internal benchmark and CPU time, although the GPU time may be glitched for OpenGL, and I'm not sure how both parameters combine into FPS in their tool), and importantly it shows the improvements, as you say.
Cheers
 
Welcome to The Khronos Group Inc. extension hell.
Not just Khronos; these extensions (or hacks) are available in D3D11 and D3D12 as well. And it's not that I have a problem with NV or AMD introducing them.
It does get seriously weird to compare a Vulkan (or DX12) game on vendor A vs. vendor B where huge parts of the rendering pipeline are simply ifdef'ed based on vendor ID, and then to claim that architecture X is better suited for Vulkan/DX12 than architecture Y.
It's a great time for wccf tech and the like, though: one game, a bunch of news for each patch.
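(To make that concrete, the pattern I mean is roughly this kind of vendor-ID check steering whole render paths; a hypothetical sketch, not taken from any actual engine:)

```cpp
// Hypothetical sketch of the "ifdef by vendor id" pattern: whole code paths
// keyed off PCI vendor IDs rather than off queryable features/extensions.
#include <vulkan/vulkan.h>

enum class RenderPath { Generic, AmdIntrinsics, NvIntrinsics };

RenderPath PickRenderPath(VkPhysicalDevice gpu)
{
    VkPhysicalDeviceProperties props = {};
    vkGetPhysicalDeviceProperties(gpu, &props);

    switch (props.vendorID)
    {
    case 0x1002: return RenderPath::AmdIntrinsics; // AMD
    case 0x10DE: return RenderPath::NvIntrinsics;  // NVIDIA
    default:     return RenderPath::Generic;       // everyone else
    }
}
```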
 
Yes, some functionality is exposed via proprietary extensions on Direct3D 11/12 too; however Vulkan, like OpenGL, is designed to deal with extensions more than Direct3D is.
A little example: on Direct3D there is one version of ROVs, one version of geometry shader "bypass", one version of conservative rasterization (with 3 incremental tiers), etc...
Those intrinsics extensions have been released for D3D11/12 too, so it's not a big issue after all; they will be exposed under SM6.0, and some of them (most? I don't remember) should have been available for years under the XB1 and PS4 SDKs. So they will not cause any headache to developers.
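(As a small illustration of the "one version with tiers" point, both ROVs and the conservative rasterization tier are reported through a single core feature query on D3D12 rather than through per-vendor extensions; a sketch, device assumed to exist:)

```cpp
// Sketch: ROV support and the conservative rasterization tier are both part of
// one core feature struct in D3D12, instead of separate per-vendor extensions.
#include <d3d12.h>

bool HasRovAndConservativeRaster(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    // Tiers are incremental: TIER_1 <= TIER_2 <= TIER_3.
    return options.ROVsSupported &&
           options.ConservativeRasterizationTier >=
               D3D12_CONSERVATIVE_RASTERIZATION_TIER_1;
}
```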
 
Not just Khronos; these extensions (or hacks) are available in D3D11 and D3D12 as well. And it's not that I have a problem with NV or AMD introducing them.
It does get seriously weird to compare a Vulkan (or DX12) game on vendor A vs. vendor B where huge parts of the rendering pipeline are simply ifdef'ed based on vendor ID, and then to claim that architecture X is better suited for Vulkan/DX12 than architecture Y.
It's a great time for wccf tech and the like, though: one game, a bunch of news for each patch.

Yes, it's just that in the past the majority of extensions used were Nvidia-specific, with AMD/ATI being mostly left out. I wonder if it's the console effect here, where AMD is finally getting more extension support than Nvidia for once.

Regards,
SB
 