Direct3D feature levels discussion

You need to enable Developer Mode:

Windows 10 Settings - Update & Security - For developers - Use developer features - Developer mode

Then run checkfeatures /sm6 from the command prompt.
 
First time I used your tool. Enabled Developer Mode.

Just saw your edit about the command prompt. Kepler fails the test?

Code:
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 10 version 1703 (build 15063)

ADAPTER 0
"NVIDIA GeForce GTX TITAN"
VEN_10DE, DEV_1005, SUBSYS_84511043, REV_A1
Dedicated video memory : 3221225472 bytes
Total video memory : 4294901760 bytes
Video driver version : 22.21.13.8476
Failed to create Direct3D 12 device
Error 80004005: Odefinierat fel (Undefined error)

ADAPTER 1
"Microsoft Basic Render Driver"
VEN_1414, DEV_008C, SUBSYS_00000000, REV_00
Dedicated video memory : 0 bytes
Total video memory : 4294901760 bytes
Failed to create Direct3D 12 device
Error 887A0004: Det angivna enhetsgränssnittet eller den angivna funktionsnivån stöds inte i systemet. (The specified device interface or feature level is not supported on the system.)

FINISHED running on 2017-07-02 18:09:03
2 display adapters enumerated
 
First off... Boy, Microsoft is getting incompetent...
Running the pre-built version: can't create a D3D12 device on any adapter (with the SM6 option; it works fine without it).
OK... I'll just compile it on my own. IsWindows10OrGreater is returning false... OK, throw it out. Then the build number is 9200 for some reason...
So hurray!!! DirectX 12 is now running on Windows 8... o_O For fuck's sake, Microsoft...
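For reference, a minimal repro of the version check I'm seeing (my own sketch; a plain, unmanifested console build):
Code:
// Without a Windows 10 compatibility manifest, the VerifyVersionInfo-based
// helpers in VersionHelpers.h are lied to by the OS: Windows 8.1 and later
// report version 6.2 (build 9200), i.e. Windows 8, to the process.
#include <windows.h>
#include <VersionHelpers.h>
#include <cstdio>

int main()
{
    std::printf("IsWindows10OrGreater: %d\n", IsWindows10OrGreater() ? 1 : 0);
    return 0;
}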

Anyway, bizarre Windows version aside... here are the dumps for the 980 Ti and R9 380.

Code:
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 6.2 version 1703 (build 15063)

ADAPTER 0
"NVIDIA GeForce GTX 980 Ti"
VEN_10DE, DEV_17C8, SUBSYS_36B61458, REV_A1
Dedicated video memory : 2105212928  bytes
Total video memory : 2079547392  bytes
Video driver version : 22.21.13.8476
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_1 (1)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0:         TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 45056
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)

ADAPTER 1
"AMD Radeon (TM) R9 380 Series"
VEN_1002, DEV_6938, SUBSYS_22C81458, REV_F1
Dedicated video memory : 4270284800  bytes
Total video memory : 4244619264  bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_0 (0xc000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT (2)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_2 (2)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0:         TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 64
WaveLaneCountMax : 64
TotalLaneCount : 2048
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)

ADAPTER 2
"Microsoft Basic Render Driver"
VEN_1414, DEV_008C, SUBSYS_00000000, REV_00
Dedicated video memory : 0  bytes
Total video memory : 4269301760  bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
        Direct3D 11.3 : D3D_FEATURE_LEVEL_11_1 (0xb100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_10_BIT | 16_BIT (3)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_3 (3)
StandardSwizzle64KBSupported : 1
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 1
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 32
MaxGPUVirtualAddressBitsPerProcess : 47
Adapter Node 0:         TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1, IsolatedMMU: 0
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 4
WaveLaneCountMax : 4
TotalLaneCount : 4
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 0
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)

FINISHED running on 2017-07-02 18:01:12
3 display adapters enumerated
 
Can't create a D3D12 device on any adapter (with the SM6 option; it works fine without it). I'll just compile it on my own.
Interesting. Are you compiling to x86 Release or x64 Release configuration? Which Windows SDK version - 10.0.15063.0 or Insider Preview 10.0.16225.0?

IsWindows10OrGreater is returning false... OK, throw it out. Then the build number is 9200 for some reason...
You have to include a proper manifest file in the EXE, as described in the comments at the bottom of the CPP file, or the OS version check will report Windows 8.
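For reference, this is the relevant fragment (a sketch of the standard compatibility manifest; the GUIDs are the documented supportedOS IDs):
Code:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- Windows 10 -->
      <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}"/>
      <!-- Windows 8.1 -->
      <supportedOS Id="{1f676c76-80e1-4239-95bb-83d0f6d0da78}"/>
    </application>
  </compatibility>
</assembly>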
 
Just saw your edit about the command prompt. Kepler fails the test?
I put the wrong executable type (32-bit x86) in the archive - please re-download and run it again.

Failed to create Direct3D 12 device
Error 887A0004: Det angivna enhetsgränssnittet eller den angivna funktionsnivån stöds inte i systemet. (The specified device interface or feature level is not supported on the system.)
Did you enable Graphics Tools under Windows 10 Settings - Apps - Apps & features - Manage optional features?
 
Interesting. Are you compiling to x86 Release or x64 Release configuration? Which Windows SDK version - 10.0.15063.0 or Insider Preview 10.0.16225.0?
10.0.15063.137, which apparently is so old and messed up that MS doesn't even list it anymore. Installing SDK 10.0.15063.468 now. Damn MS and their minor-minor-minuscule versions.
You built it with Insider Preview? Because I get the same error as @Svensk Viking if I run the prebuilt version (I don't have Insider Preview binaries).

You have to include a proper manifest file in the EXE, as described in the comments at the bottom of the CPP file, or the OS version check will report Windows 8.
Yeah, but Visual Studio should take care of that automatically, no? The project is set to target Windows 10.
 
You built it with Insider Preview? Because I get the same error as @Svensk Viking if I run the prebuilt version (I don't have Insider Preview binaries).
Later SDK versions should not really affect running on an earlier OS (unless you statically link new system functions, which I don't).

Which platform do you target, x64 or x86? There seems to be an issue with the Developer mode for 32-bit executables.

Yeah, but Visual Studio should take care of that automatically, no?
Nope, the manifest file is not included in Win32 Console projects by default, and "Project/Retarget solution" only selects the SDK directory for header files.
 
Do you target x64 or x86? There seems to be an issue with the Developer mode for 32-bit executables.
No, that was from running your binary directly. Ah... And looking at the exe, you included the x86 version. :)

Nope, the manifest file is not included in Win32 Console projects by default, and "Project/Retarget solution" only selects the SDK directory for header files.
Ah, I see - no way around a custom manifest then. Now it works... P.S.: Do you know if this applies to console apps only and is automatic for WinAPI apps?

Also Kepler:
Code:
ADAPTER 1
"NVIDIA GeForce GTX 680"
VEN_10DE, DEV_1180, SUBSYS_0969196E, REV_A1
Dedicated video memory : 2115829760  bytes
Total video memory : 2090164224  bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0:         TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 16384
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
 
Pascal & Iris Pro
Code:
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 10 version 1703 (build 15063)
Checking for experimental shader models

ADAPTER 0
"NVIDIA GeForce GTX 1080"
VEN_10DE, DEV_1B80, SUBSYS_61803842, REV_A1
Dedicated video memory : 4209704960  bytes
Total video memory : 4146325504  bytes
Video driver version : 22.21.13.8205
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_2 (2)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0:    TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 40960
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)

ADAPTER 1
"Intel(R) Iris(TM) Pro Graphics 6200"
VEN_8086, DEV_1622, SUBSYS_16221849, REV_0A
Dedicated video memory : 134217728  bytes
Total video memory : 70838272  bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_1 (0xb100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_1 (1)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 31
MaxGPUVirtualAddressBitsPerProcess : 48
Adapter Node 0:    TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)
WaveOps : 0
WaveLaneCountMin : 4
WaveLaneCountMax : 4
TotalLaneCount : 4
ExpandedComputeResourceStates : 1
Int64ShaderOps : 0
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 0
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO (1)
 
384.76

Code:
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 10 version 1703 (build 15063)
Checking for experimental shader models

ADAPTER 0
"NVIDIA GeForce GTX 1080"
VEN_10DE, DEV_1B80, SUBSYS_61803842, REV_A1
Dedicated video memory : 4168089600  bytes
Total video memory : 4104710144  bytes
Video driver version : 22.21.13.8476
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_2 (2)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
MaxGPUVirtualAddressBitsPerResource : 40
MaxGPUVirtualAddressBitsPerProcess : 40
Adapter Node 0:    TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_6_0 (0x0060)
WaveOps : 1
WaveLaneCountMin : 32
WaveLaneCountMax : 32
TotalLaneCount : 40960
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_2 (2)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
 
@DmitryKo: do you think NVIDIA is bypassing the 64KB buffer dedicated to CBVs ("Constant memory size") via the driver? http://docs.nvidia.com/cuda/cuda-c-...ex.html#features-and-technical-specifications

edit: is your program correctly detecting the 10/16-bit precision support?

edit2: never mind, it looks like the latest AMD drivers have also dropped min precision support :\ It could be an OS bug... and an OS bug would explain the craziness of Tier 3 on NV architectures (no, I don't believe the driver-emulation tale...)
 
do you think NVIDIA is bypassing the 64KB buffer dedicated to CBVs ("Constant memory size") via the driver?
Well, back in 2014, Microsoft and Intel did implement a driver-based hack for Haswell graphics to work around its hardware limit of 64K heap descriptors, allowing all RB Tier 1 GPUs to have 2^20 (~1M) descriptors, not the ~55K originally intended.

To conclude whether it is possible to implement a similar hack and work around RB Tier 2 limitations for Maxwell/Pascal using the main CPU, one would need to know the internal implementation details of Nvidia GPUs, which are not available to the general public.

Max McMullen from the DirectX team would be the best person to ask about this, unfortunately he's not visiting this forum anymore...

is your program correctly detecting the 10/16-bit precision support?
Yes, it checks the D3D12_SHADER_MIN_PRECISION_SUPPORT cap bit for NONE (0), 10_BIT (1), 16_BIT (2), and a combination of both, i.e. "10_BIT | 16_BIT" (3).
(Though it's not really clear in the MSDN documentation, the value of (3) is a perfectly valid response and is reported by WARP12, i.e. Microsoft Basic Render Driver).
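The query itself is just the standard cap check (a condensed sketch; the device pointer is assumed to be a valid ID3D12Device*):
Code:
#include <d3d12.h>

// Standard D3D12 cap query for the minimum precision support bits.
D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
if (SUCCEEDED(device->CheckFeatureSupport(
        D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
{
    // 0 = NONE, 1 = 10_BIT, 2 = 16_BIT, 3 = 10_BIT | 16_BIT
    D3D12_SHADER_MIN_PRECISION_SUPPORT mp = options.MinPrecisionSupport;
}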

looks like the latest AMD drivers have also dropped min precision support :\
How did you come to this conclusion? Half precision is only supported by GCN3 (R9 285/380, Fury/Nano) and GCN4 (RX 460/470/480, RX 500 series) cards. The post above shows the R9 380 reporting a minimum precision of 16_BIT (2), just as it should.

And it doesn't make any real-world performance difference on these cards anyway - though GCN5 (Vega) will change this.

It could be an OS bug... and an OS bug would explain the craziness of Tier 3 on NV architectures (no, I don't believe the driver-emulation tale...)
Only a real-world test would tell if Resource Binding Tier 3 is really working with these new NVidia drivers, but unfortunately I am not aware of any D3D12 test that specifically includes a RB Tier 3 payload.
 
I was aware that RB Tiers 1 & 2 removed the early limitation of only 5 descriptor tables with SRVs by simply using internal root constants as offsets into a descriptor heap. Are they using the same trick again to bypass the per-shader-stage limits on CBVs and UAVs?
What about the non-populated root signature entries - how can the driver safely decide, without runtime support, whether a slot is just unpopulated (and treat it as a null descriptor) or not? (See the sketch below for what I mean by a null descriptor.)
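For context, a sketch of a null descriptor (standard D3D12 API; the device pointer and the cpuHandle into the descriptor heap are assumed):
Code:
// A null SRV is created by passing nullptr as the resource; the view desc
// must still be fully specified so shader reads have defined (zero) results.
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
srvDesc.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(nullptr, &srvDesc, cpuHandle);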

How did you come to this conclusion? Half-precision is only supported by GCN3 (R9 285/380, Fury/Nano) and GCN4 (RX 460/470/480, RX 500 series) cards. The post above shows Radeon HD380 reporting minimum precision of 16_BIT (2) just like it should.

And it doesn't make any real-world performance difference on these cards anyway - though Vega will change this.
My R9 380X reports only full-precision support with the latest drivers and the latest OS update, on both your app and mine, and in Microsoft's DX caps viewer too. I also noted the same issue with CarstenS's report on his Broadwell GPU. Broadwell/Gen8 was the first GPU with native FP16 support.
 
Yes, it checks the D3D12_SHADER_MIN_PRECISION_SUPPORT cap bit for NONE (0), 10_BIT (1), 16_BIT (2), and a combination of both, i.e. "10_BIT | 16_BIT" (3).
(Though it's not really clear in the MSDN documentation, the value of (3) is a perfectly valid response and is reported by WARP12, i.e. Microsoft Basic Render Driver).
A value of (3) is valid since the shader minimum precision support enumeration has flag operations enabled:

Code:
DEFINE_ENUM_FLAG_OPERATORS( D3D12_SHADER_MIN_PRECISION_SUPPORT );

The values are also reported in hex notation; MS uses hex notation on flag enums to make bitwise evaluation easier for the programmer.
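For example (a quick sketch of what the flag operators enable):
Code:
// With DEFINE_ENUM_FLAG_OPERATORS in effect, combined values decompose
// with ordinary bitwise tests:
D3D12_SHADER_MIN_PRECISION_SUPPORT mp =
    D3D12_SHADER_MIN_PRECISION_SUPPORT_10_BIT |
    D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT;   // == 3
bool has16 = (mp & D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT) != 0;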

edit: why the hell did I make a new post? :\
 
I was aware that RB Tiers 1 & 2 removed the early limitation of only 5 descriptor tables with SRVs by simply using internal root constants as offsets into a descriptor heap. Are they using the same trick again to bypass the per-shader-stage limits on CBVs and UAVs?
What about the non-populated root signature entries - how can the driver safely decide, without runtime support, whether a slot is just unpopulated (and treat it as a null descriptor) or not?
This is probably covered by WDDM 2.1/2.2 and DXGI 1.6/2.0 documentation, which is not publicly available as of now.

The post I referred to talked about going even further - avoiding the 16-bit addressing limit by skipping the affected hardware block altogether and micro-managing the descriptor heap with the CPU (i.e. the Direct3D runtime).
 
This is probably covered by WDDM 2.1/2.2 and DXGI 1.6/2.0 documentation, which is not publicly available as of now.

The post I referred to talked about going even further - avoiding the 16-bit addressing limit by skipping the affected hardware block altogether and micro-managing the descriptor heap with the CPU (i.e. the Direct3D runtime).
What about the unpopulated root signature slots? I have some old NDA documentation that refers to 2014 hardware (damn, how cool are the current binding tiers xD), but nothing speaks clearly about an unpopulated-slot limitation; maybe it was just a Kepler architecture limit (which would explain why Kepler remains "Tier 2" on RB).
 
Vega Frontier Edition

Code:
Direct3D 12 feature checker (July 2017) by DmitryKo
https://forum.beyond3d.com/posts/1840641/

Windows 10 version 1703 (build 15063)

ADAPTER 0
"Radeon Vega Frontier Edition"
VEN_1002, DEV_6863, SUBSYS_6B761002, REV_00
Dedicated video memory : 4211945472  bytes
Total video memory : 4193810432  bytes
Video driver version : 22.19.384.2
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT (2)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_3 (3)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_3 (3)
PSSpecifiedStencilRefSupported : 1
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_3 (3)
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
MaxGPUVirtualAddressBitsPerResource : 44
MaxGPUVirtualAddressBitsPerProcess : 44
Adapter Node 0:    TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0, IsolatedMMU: 1
HighestShaderModel : D3D12_SHADER_MODEL_5_1 (0x0051)
WaveOps : 1
WaveLaneCountMin : 64
WaveLaneCountMax : 64
TotalLaneCount : 4096
ExpandedComputeResourceStates : 1
Int64ShaderOps : 1
RootSignature.HighestVersion : D3D_ROOT_SIGNATURE_VERSION_1_1 (2)
DepthBoundsTestSupported : 1
ProgrammableSamplePositionsTier : D3D12_PROGRAMMABLE_SAMPLE_POSITIONS_TIER_NOT_SUPPORTED (0)
ShaderCache.SupportFlags : D3D12_SHADER_CACHE_SUPPORT_SINGLE_PSO | LIBRARY (3)
 