No DX12 Software is Suitable for Benchmarking *spawn*

Maybe future cards could potentially operate better under DX12? Seems odd. Maybe it's just one of those things where they're constantly working on it, so they put it in the game because they'd already done the work.

Then you get a game like Wolf 2, which is Vulkan-only because they said it would be impossible to write their renderer in DX11 or OpenGL.
 
Speaking of Wolfenstein, the newest patch is bringing unprecedented boosts to Vega cards:

https://www.computerbase.de/2017-11...-compute-3840-2160-anspruchsvolle-testsequenz

[Image: EMBxr4f.png (Wolfenstein II benchmark chart)]



EDIT:
At launch, the GTX 1080 Ti did 68.4 FPS in this test. If we apply the same proportional boost to the 1080 Ti that the 1080 received (6.68%), we get about 73 FPS.
That puts the Vega 64 within 9.6% of the 1080 Ti.


I hope Computerbase or someone else tests this with Polaris and Fiji cards to evaluate the gains.
This way we'd know if the optimizations are strictly related to async compute or if they're Vega based (e.g. more shaders using FP16).
 
This way we'd know if the optimizations are strictly related to async compute or if they're Vega based (e.g. more shaders using FP16).
Are the shader intrinsics exclusive to Vega? Also, I asked this somewhere else but no one answered: does Nvidia have any shader intrinsics for Vulkan yet?
 
Are the shader intrinsics exclusive to Vega?
At launch, the Vega 64 was only 50% faster than an RX 580 in this game, so I'd say definitely not.
Besides, why wouldn't they use the very same functions for pre-Vega cards they already used in Doom?

Also I asked this somewhere else but no one answered, does Nvidia have any shader intrinsics for Vulkan yet?
Nvidia cards are a lot closer to AMD cards in Wolfenstein II than they were in Vulkan Doom, so I'd say yes.
 
Though Vega 64 is still slower than GTX 1080 @1080p and 1440p in the taxing scene, and only gets faster @4K.

The performance decrease on Nvidia cards in the latest patch seems to be due to async compute being turned off for all Nvidia cards:
Things are different with an Nvidia GPU: Async Compute was active from the beginning, but with the latest patch it has been turned off on every GeForce because it reportedly caused problems. The developer says they are waiting for a new driver from Nvidia.
 
Are the shader intrinsics exclusive to Vega? Also I asked this somewhere else but no one answered, does Nvidia have any shader intrinsics for Vulkan yet?
Well NVIDIA supports VK_EXT_shader_subgroup_ballot and VK_EXT_shader_subgroup_vote. On the other hand AMD supports VK_AMD_shader_ballot, VK_EXT_shader_subgroup_vote and VK_AMD_shader_explicit_vertex_parameter (barycentric support). VK_EXT_shader_subgroup_ballot and VK_AMD_shader_ballot are not equal.
So unless there are two code paths...
 
Well NVIDIA supports VK_EXT_shader_subgroup_ballot and VK_EXT_shader_subgroup_vote. On the other hand AMD supports VK_AMD_shader_ballot, VK_EXT_shader_subgroup_vote and VK_AMD_shader_explicit_vertex_parameter (barycentric support). VK_EXT_shader_subgroup_ballot and VK_AMD_shader_ballot are not equal.
So unless there are two code paths...
I know it is recommended that if you go down the DX12 route you should maintain at least two code paths, one for AMD and one for Nvidia. Do they recommend the same thing for Vulkan?
 
How much of the narrowing performance gap could also be down to what Nvidia did in June 2017: the update to Vulkan 1.0.51.0, along with Vulkan-related bug fixes and, as they state, various Vulkan performance improvements (all part of driver 382.68)?
I take it that, from a specific-extension perspective, there would be a cut-off point after which the studio and the developers working on the game would not adopt any new extensions that open up further functionality on the card.
Possibly the gap in Doom was also down to not using those initial Vulkan extensions MDolenc highlights, which are used now?
Since then they have been doing more extension-specific work, along with the update to 1.0.54.0, but I guess we will not know what is used in these games even looking across the whole Nvidia Vulkan history.
The Vulkan driver page has the history of updates including extensions (scroll down): https://developer.nvidia.com/vulkan-driver
One of the bigger more recent updates: https://developer.nvidia.com/nvidia-vulkan-developer-driver-khronos-vulkan-spec-update-1054
 
The bad performance in Wolfenstein 2 is the result of an unoptimized game. And that was telegraphed months ahead, when id announced that they won't support nVidia anymore and that nVidia users should not expect any optimizations.

Look over to the F1 2017 Linux Vulkan benchmarks: https://www.phoronix.com/scan.php?page=article&item=nvidia-gtx1070-ti&num=4

Totally different result with the "same" API.

In the end it doesn't matter what nVidia supports. id won't use it.
 
The bad performance in Wolfenstein 2 is the result of an unoptimized game. And that was telegraphed months ahead, when id announced that they won't support nVidia anymore and that nVidia users should not expect any optimizations.

Look over to the F1 2017 Linux Vulkan benchmarks: https://www.phoronix.com/scan.php?page=article&item=nvidia-gtx1070-ti&num=4

Totally different result with the "same" API.

In the end it doesn't matter what nVidia supports. id won't use it.
But the performance is closer in Wolfenstein 2 than in Doom 2016 when comparing Nvidia to AMD GPUs under Vulkan.

Edit:
Never mind, actually both seem reasonably similar, with Doom performance for Nvidia closing the gap on AMD.
So maybe it does come back to one of the earlier Nvidia updates resolving bugs and generally improving Vulkan performance.
I'm basing this on the 1070 Ti review (with the cards around it also tested), which uses more recent drivers for both games (and would include the general Vulkan bug fixes and performance update I mentioned earlier).
 
I know it is recommended that if you go down the DX12 route you should maintain at least two code paths, one for AMD and one for Nvidia. Do they recommend the same thing for Vulkan?
It pretty much has to be the same recommendation, since best practices for certain resources differ between NV and AMD.

Regarding Wolfenstein II specifically, one can check the binary for specific extension strings. The following should pop out:
VK_AMD_gpu_shader_half_float
VK_EXT_shader_subgroup_ballot
VK_AMD_gcn_shader
VK_AMD_shader_trinary_minmax
VK_AMD_shader_ballot
VK_AMD_wave_limits
VK_NV_dedicated_allocation
VK_AMD_rasterization_order
;)
 
VK_AMD_gpu_shader_half_float
16-bit floats. Nvidia still doesn't have double-rate FP16 in consumer hardware.
VK_EXT_shader_subgroup_ballot
VK_AMD_shader_ballot
Cross-lane ops (wave ballot). Can be used for various optimizations (mostly with loops and branches).
VK_AMD_shader_trinary_minmax
GCN supports trinary min/max instructions. The compiler should in most cases emit them automatically, but intrinsics are nice to have for cases where the compiler doesn't do what you want.
VK_AMD_wave_limits
For tuning async compute :)
VK_AMD_rasterization_order
https://gpuopen.com/unlock-the-rasterizer-with-out-of-order-rasterization/
 