Direct3D feature levels discussion

The hardware does support standard swizzle (and fp16 of course, Broadwell too). It may not be enabled yet in the driver though, not sure.

And even if we don't get all the way to "bashing", I think it's fair for us to be proud of having clear leadership in DX12 features for the time being. Indeed several of the main features are there because of us. Not something most would have expected out of Intel a few years ago I don't think :)
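Once the driver does expose it, the cap bit can be checked at runtime. A minimal sketch, assuming an already created ID3D12Device (the CheckStandardSwizzle helper name is illustrative; error handling omitted):

#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Illustrative helper: query the standard swizzle cap bit on an existing device.
void CheckStandardSwizzle(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        printf("StandardSwizzle64KBSupported : %s\n",
               options.StandardSwizzle64KBSupported ? "yes" : "no");
    }
}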



Do you know why dxdiag reports FL 11_1 and not FL 12_x on Skylake?
 
Does dxdiag report DX12 feature levels? I thought it reports only DX11 feature levels, so probably Skylake drivers are currently limited to FL 11_1 on DX11 (D3D 11.3 if it reports WDDM 2.0).
 


D3D12 Feature Checker instead reports:

ADAPTER 0
"Intel(R) HD Graphics 530"
VEN_8086, DEV_1912, SUBSYS_86941043, REV_06
Dedicated video memory : 134217728 bytes
Total video memory : 4294901760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
Direct3D 11.3 : D3D_FEATURE_LEVEL_11_1 (0xb100)
 
Yeah not sure about feature level support in DX11 driver - probably just some caps bit missing, I'll check when I get back from IDF.
 
Do you know why dxdiag reports FL 11_1 and not FL 12_x on Skylake?

DXDiag is an old piece of code; since not many people look at it, I can imagine MS doesn't care to really rewrite it. It mixes physical and shared memory and often doesn't report the right amount of available memory...

Windows 10 is still new, and even right now I think the DX12 SDK, caps bits and everything else are not at 100% (whether that's on the driver side or in Windows).
 
Do you know why dxdiag reports FL 11_1 and not FL 12_x on Skylake?
D3D12 Feature Checker instead reports:

Maximum feature level : D3D_FEATURE_LEVEL_12_1 (0xc100)
Direct3D 11.3 : D3D_FEATURE_LEVEL_11_1 (0xb100)

As I said above, DXDiag and DXCapsView are Direct3D 11 tools and they only report Direct3D 11.3 features under Windows 10.
You can use my text-mode console app if you need to check Direct3D 12 capabilities.

As of now, only Nvidia drivers report feature levels 12_x under Direct3D 11.3 - all other graphics drivers, including AMD Catalyst, Intel HD Graphics and Microsoft's own WARP12 from the beta Windows 10 SDK, only report maximum feature level 11_1 under Direct3D 11.3.
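For reference, the D3D12 side of such a check is tiny. A minimal sketch, assuming an already created ID3D12Device (PrintMaxFeatureLevel is just an illustrative helper; error handling omitted):

#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintMaxFeatureLevel(ID3D12Device* device)
{
    // Ask the runtime which of these levels the device/driver actually supports.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = UINT(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels))))
    {
        printf("Maximum feature level : 0x%x\n",
               unsigned(levels.MaxSupportedFeatureLevel));
    }
}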

DXDiag is an old piece of code; since not many people look at it, I can imagine MS doesn't care to really rewrite it. It mixes physical and shared memory and often doesn't report the right amount of available memory
DXDiag doesn't report physical memory simply because Direct3D 6-11 had a rather complex memory model that allowed the graphics driver to allocate system memory over the AGP/PCIe bus, which the DXG kernel counts in three "memory pools": dedicated video memory, dedicated system memory and shared system memory - but none of these seems to correspond to the actual physical memory on modern hardware.
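For illustration, those are the three pool sizes DXGI itself hands back per adapter; a minimal sketch, assuming an existing IDXGIAdapter (PrintMemoryPools is an illustrative helper; error handling omitted):

#include <windows.h>
#include <dxgi.h>
#include <cstdio>

void PrintMemoryPools(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (SUCCEEDED(adapter->GetDesc(&desc)))
    {
        // None of these three values is required to match the physical VRAM size on modern hardware.
        printf("Dedicated video memory  : %zu bytes\n", desc.DedicatedVideoMemory);
        printf("Dedicated system memory : %zu bytes\n", desc.DedicatedSystemMemory);
        printf("Shared system memory    : %zu bytes\n", desc.SharedSystemMemory);
    }
}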
 
It has a name, it's Tonga. Fiji has a name, it's Fiji.
Those are the chip names, and we do use them liberally. But they don't say anything about the architectural features, which is why we need something to call the architecture, be it GCN 1.2, Maxwell, or Evergreen.
Fiji/Tonga/etc has an official name. AMD nowadays calls the architecture:
Graphics Core Next Architecture, Generation 3

Short name: GCN3

Source: http://amd-dev.wpengine.netdna-cdn..../07/AMD_GCN3_Instruction_Set_Architecture.pdf
Yeah, and to AMD's credit, they did finally issue the architecture a proper name in public-facing documents. Though this was post-Tonga, which is why it's not something we use at this moment if only for consistency.

GCN 1.1 by comparison was always labeled "Sea Islands" in AMD's documents, which isn't very useful when AMD PR is telling you that Sea Islands is not the architecture name but something else entirely...
 
Geometry Shader bypass performance cap-bit finally correctly exposed with Catalyst 15.8 - 15.201.1151.0.
I am wondering when we will have this feature available in DirectX 11.3 drivers (https://msdn.microsoft.com/en-us/library/windows/desktop/dn933226(v=vs.85).aspx). And I am wondering what happens if I try to run a vertex shader that outputs a render target index on Windows 7. DirectX 11.3 is Windows 10 only, but this feature doesn't need any enabling from the CPU-side API.
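The cap bit for that feature is queryable through CheckFeatureSupport in D3D11.3; a minimal sketch, assuming a Windows 10 SDK and an existing ID3D11Device (the helper name is illustrative; error handling omitted):

#include <windows.h>
#include <d3d11_3.h>
#include <cstdio>

void CheckGsBypassCap(ID3D11Device* device)
{
    // Reports whether a VS/DS/HS can write SV_RenderTargetArrayIndex /
    // SV_ViewportArrayIndex directly, without a pass-through geometry shader.
    D3D11_FEATURE_DATA_D3D11_OPTIONS3 options3 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS3, &options3, sizeof(options3))))
    {
        printf("VPAndRTArrayIndexFromAnyShaderFeedingRasterizer : %s\n",
               options3.VPAndRTArrayIndexFromAnyShaderFeedingRasterizer ? "yes" : "no");
    }
}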
 
Just a question: has anyone tried to query the cross-node sharing tier on CrossFire or SLI configurations?
Note 1: I don't know if current drivers support cross-node sharing, probably not...
Note 2: CrossFire/SLI must be enabled in the GPU control panel...
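For what it's worth, the tier is exposed through the D3D12 options caps, so anyone with a CrossFire/SLI setup can check it; a minimal sketch, assuming an existing ID3D12Device (helper name illustrative; error handling omitted):

#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void PrintCrossNodeSharingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        // An enum value of 0 means cross-node sharing is not supported.
        printf("CrossNodeSharingTier : %d (nodes : %u)\n",
               int(options.CrossNodeSharingTier), device->GetNodeCount());
    }
}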
 
It's kind of awkward that DX12 VBVs (vertex buffer views) are not shader visible. This limits usability of vertex buffers compared to other buffer types. I know that most modern GPUs support fully bindless vertex buffers, so it is sad to see that this limitation applies to all GPUs.

Fortunately you can omit ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT to disable vertex buffer support completely and free some root signature space, as in the sketch below. Mantle didn't support vertex buffers either, so this is not a big loss really...
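To make the trade-off concrete, here's a minimal sketch of a root signature that leaves out the IA flag and instead exposes the vertex data as a root SRV, to be fetched manually in the vertex shader via SV_VertexID. The layout and helper name are illustrative, not anyone's actual engine code, and error handling is omitted:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateIALessRootSignature(ID3D12Device* device)
{
    // One root SRV (t0) holding raw/structured vertex data, read manually in the VS.
    D3D12_ROOT_PARAMETER param = {};
    param.ParameterType             = D3D12_ROOT_PARAMETER_TYPE_SRV;
    param.Descriptor.ShaderRegister = 0;
    param.ShaderVisibility          = D3D12_SHADER_VISIBILITY_VERTEX;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters   = &param;
    desc.Flags         = D3D12_ROOT_SIGNATURE_FLAG_NONE;  // ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT omitted

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}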
 
MSDN states:
D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT
The app is opting in to using the Input Assembler (requiring an input layout that defines a set of vertex buffer bindings). Omitting this flag can result in one root argument space being saved on some hardware. Omit this flag if the Input Assembler is not required, though the optimization is minor.
On which hardware?
I'm also still not clear on which hardware has a dedicated buffer for root argument versioning and how big it is. I guess different hardware has different "root buffer" sizes, but none of the current hardware has one big enough to cover the full root signature size allowed by the API (64 DWORDs).
 
I guess different hardware has different "root buffer" sizes, but none of the current hardware has one big enough to cover the full root signature size allowed by the API (64 DWORDs).
Uhh, that's not correct. Intel hardware can do even more than the 64 DWORDs without additional indirection.
 