Direct3D 12 feature checker (July 2017) by DmitryKo
[URL]https://forum.beyond3d.com/posts/1840641/[/URL]
Windows 6.2 version 1703 (build 15063)
ADAPTER 0
"NVIDIA GeForce GTX 980 Ti"
VEN_10DE, DEV_17C8, SUBSYS_36B61458, REV_A1
Dedicated video memory : 2105212928 bytes
Total video memory : 2079547392 bytes
Video driver version : 22.21.13.8476
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_1 (1)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 45056
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
ADAPTER 1
"AMD Radeon (TM) R9 380 Series"
VEN_1002, DEV_6938, SUBSYS_22C81458, REV_F1
Dedicated video memory : 4270284800 bytes
Total video memory : 4244619264 bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_0 (0xc000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT (2)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_2 (2)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 64
WaveLaneCountMax : 64
TotalLaneCount : 2048
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
ADAPTER 2
"Microsoft Basic Render Driver"
VEN_1414, DEV_008C, SUBSYS_00000000, REV_00
Dedicated video memory : 0 bytes
Total video memory : 4269301760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
Direct3D 11.3 : D3D_FEATURE_LEVEL_11_1 (0xb100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_10_BIT | 16_BIT (3)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_3 (3)
StandardSwizzle64KBSupported : 1
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 1
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 32
MaxGPUVirtualAddressBitsPerProcess : 47
Adapter Node 0: TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1, IsolatedMMU: 0
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 4
WaveLaneCountMax : 4
TotalLaneCount : 4
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 0
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
FINISHED running on 2017-07-02 18:01:12
3 display adapters enumerated
[quote]Can't create a D3D12 device on all devices (with the SM6 option; works fine without it). I'll just compile on my own.[/quote]
Interesting. Are you compiling to the x86 Release or x64 Release configuration? Which Windows SDK version - 10.0.15063.0 or Insider Preview 10.0.16225.0?
[quote]IsWindows10OrGreater returning false... OK, throw it out. Then the build number is 9200 for some reason...[/quote]
You have to include a proper manifest file in the EXE, as described in the comments at the bottom of the CPP file, or the OS version check will report Windows 8.
[quote]Just saw your edit about the command prompt. Kepler fails the test?[/quote]
I put a wrong executable type (32-bit x86) in the archive - please re-download and run again.
[quote]Failed to create Direct3D 12 device
Error 887A0004: The specified device interface or feature level is not supported on this system.[/quote]
Did you enable Graphics Tools under Windows 10 Settings - Apps - Apps & features - Manage optional features?
[quote]Interesting. Are you compiling to the x86 Release or x64 Release configuration? Which Windows SDK version - 10.0.15063.0 or Insider Preview 10.0.16225.0?[/quote]
10.0.15063.137, which apparently is so old and messed up that MS doesn't even list it anymore. Installing SDK 10.0.15063.468 now. Damn MS and their minor minor minuscule versions.
[quote]You have to include a proper manifest file in the EXE, as described in the comments at the bottom of the CPP file, or the OS version check will report Windows 8.[/quote]
Yeah, but Visual Studio should take care of that automatically, no? The project is set to target Windows 10.
[quote]You built it with Insider Preview? Because I get the same error as @Svensk Viking if I run the prebuilt version (I don't have Insider Preview binaries).[/quote]
Later SDK versions should not really affect running on an earlier OS (unless you statically link new system functions, which I don't).
[quote]Yeah, but Visual Studio should take care of that automatically, no?[/quote]
Nope, the manifest file is not included in Win32 Console projects by default, and "Project/Retarget solution" only selects the SDK directory for header files.
[quote]Do you target x64 or x86? There seems to be an issue with Developer mode for 32-bit executables.[/quote]
No, that was with running your binary directly. Ah... and looking at the exe, you included the x86 version.
[quote]Nope, the manifest file is not included in Win32 Console projects by default, and "Project/Retarget solution" only selects the SDK directory for header files.[/quote]
Ah I see, no way around a custom manifest then. Now it works... P.S.: Do you know if this is for console apps only and is automatic for WinAPI apps?
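For reference, the compatibility manifest being discussed can be sketched roughly as follows - a minimal example using the standard supportedOS GUID for Windows 10 from the application-manifest documentation (the exact file shipped with the checker may differ):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Windows 10 -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
      <!-- Windows 8.1 -->
      <supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
    </application>
  </compatibility>
</assembly>
```

Without the Windows 10 supportedOS entry, GetVersionEx and IsWindows10OrGreater report build 9200 (Windows 8) to the process, which matches the behavior described above.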
ADAPTER 1
"NVIDIA GeForce GTX 680"
VEN_10DE, DEV_1180, SUBSYS_0969196E, REV_A1
Dedicated video memory : 2115829760 bytes
Total video memory : 2090164224 bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 16384
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
[quote]Looking at the exe, you included the x86 version.[/quote]
Oops... please re-download the archive.
[quote]Do you know if this is for console apps only and is automatic for WinAPI apps?[/quote]
It's in effect for every Win32 executable - no matter if console, GUI, or DLL.
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/
Windows 10 version 1703 (build 15063)
Checking for experimental shader models
ADAPTER 0
"NVIDIA GeForce GTX 1080"
VEN_10DE, DEV_1B80, SUBSYS_61803842, REV_A1
Dedicated video memory : 4209704960 bytes
Total video memory : 4146325504 bytes
Video driver version : 22.21.13.8205
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_2 (2)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 40960
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
ADAPTER 1
"Intel(R) Iris(TM) Pro Graphics 6200"
VEN_8086, DEV_1622, SUBSYS_16221849, REV_0A
Dedicated video memory : 134217728 bytes
Total video memory : 70838272 bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_1 (0xb100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_1 (1)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 31
MaxGPUVirtualAddressBitsPerProcess : 48
Adapter Node 0: TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)
WaveOps : 0
WaveLaneCountMin : 4
WaveLaneCountMax : 4
TotalLaneCount : 4
ExpandedComputeResourceStates : 1
Int64ShaderOps : 0
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 0
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO (1)
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/
Windows 10 version 1703 (build 15063)
Checking for experimental shader models
ADAPTER 0
"NVIDIA GeForce GTX 1080"
VEN_10DE, DEV_1B80, SUBSYS_61803842, REV_A1
Dedicated video memory : 4168089600 bytes
Total video memory : 4104710144 bytes
Video driver version : 22.21.13.8476
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_2 (2)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 40960
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
[quote]Do you think NVIDIA is bypassing the 64KB buffer dedicated to CBVs ("Constant memory size") via the driver?[/quote]
Well, back in 2014, Microsoft and Intel did implement a driver-based hack for Haswell graphics to work around its hardware limit of 64K heap descriptors, allowing all RB Tier 1 GPUs to have 2^20 (~1M) descriptors, not ~55K as originally intended.
[quote]Is your program correctly detecting the 10/16-bit precision support?[/quote]
Yes, it checks the D3D12_SHADER_MIN_PRECISION_SUPPORT cap bit for NONE (0), 10_BIT (1), 16_BIT (2), and a combination of both, i.e. "10_BIT | 16_BIT" (3).
[quote]Looks like the latest AMD drivers dropped min precision support too :\[/quote]
How did you come to this conclusion? Half-precision is only supported by GCN3 (R9 285/380, Fury/Nano) and GCN4 (RX 460/470/480, RX 500 series) cards. The post above shows the Radeon R9 380 reporting a minimum precision of 16_BIT (2), just as it should.
[quote]It could be an OS bug... And an OS bug would explain the craziness of Tier 3 on NV architectures (no, I do not believe in the driver emulation tale...)[/quote]
Only a real-world test would tell if Resource Binding Tier 3 is really working with these new NVIDIA drivers, but unfortunately I am not aware of any D3D12 test that specifically includes an RB Tier 3 payload.
[quote]How did you come to this conclusion? Half-precision is only supported by GCN3 (R9 285/380, Fury/Nano) and GCN4 (RX 460/470/480, RX 500 series) cards. The post above shows the Radeon R9 380 reporting a minimum precision of 16_BIT (2), just as it should. And it doesn't make any real-world performance difference on these cards anyway - though Vega will change this.[/quote]
My R9 380X reports only full precision support with the latest drivers and the latest OS update, on both your app and mine, and in Microsoft's DX Caps Viewer too. I also noted the same issue with CarstenS's report on his Broadwell GPU. Broadwell/Gen8 was the first GPU with native FP16 support.
[quote]Yes, it checks the D3D12_SHADER_MIN_PRECISION_SUPPORT cap bit for NONE (0), 10_BIT (1), 16_BIT (2), and a combination of both, i.e. "10_BIT | 16_BIT" (3). (Though it's not really clear in the MSDN documentation, the value of (3) is a perfectly valid response and is reported by WARP12, i.e. the Microsoft Basic Render Driver.)[/quote]
A value of '3' is valid, since the shader minimum precision support enumeration has flag operations enabled:
DEFINE_ENUM_FLAG_OPERATORS( D3D12_SHADER_MIN_PRECISION_SUPPORT );
[quote]I was aware that Tier 1 & 2 of RB removed the early limitation of only 5 descriptor tables with SRVs, simply by using internal root constants as an offset into a descriptor heap. Are they using the same trick again to bypass the limit on the number of CBVs and UAVs per shader stage?[/quote]
This is probably covered by the WDDM 2.1/2.2 and DXGI 1.6/2.0 documentation, which is not publicly available as of now.
What about non-populated root signature entries? How can the driver safely decide, without runtime support, whether a slot is just unpopulated (and should be treated as a null descriptor) or not?
[quote]This is probably covered by the WDDM 2.1/2.2 and DXGI 1.6/2.0 documentation, which is not publicly available as of now.[/quote]
What about the unpopulated root signature slots? I have some old NDA documentation which refers to 2014 hardware (damn, how cool are the current binding tiers xD), but nothing speaks clearly about an unpopulated-slots limitation; maybe it was just a Kepler architecture limit (which would explain why Kepler remains "Tier 2" on RB).
The post I referred to talked about going even further - avoiding the 16-bit addressing limit by skipping the affected hardware block altogether and micro-managing the descriptor heap with the CPU (i.e. the Direct3D runtime).
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/
Windows 10 version 1703 (build 15063)
ADAPTER 0
"Radeon Vega Frontier Edition"
VEN_1002, DEV_6863, SUBSYS_6B761002, REV_00
Dedicated video memory : 4211945472 bytes
Total video memory : 4193810432 bytes
Video driver version : 22.19.384.2
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT (2)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_3 (3)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 44
MaxGPUVirtualAddressBitsPerProcess : 44
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)
WaveOps : 1
WaveLaneCountMin : 64
WaveLaneCountMax : 64
TotalLaneCount : 4096
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
[quote]HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)[/quote]
Without developer tools enabled, the highest shader model currently available is 5.1.