Direct3D feature levels discussion

Maybe the references to Direct3D 11 or HLSL on the description page would ring a bell? :p

Yeah, but I was misled by the IntelExt_BeginPixelShaderOrdering shader function

Code:
void IntelExt_BeginPixelShaderOrdering( )
{
    // Make prior UAV writes visible before the ordered section begins.
    DeviceMemoryBarrier();
    // Append an opcode record to the magic g_IntelExt UAV; the Intel driver
    // recognizes this pattern and substitutes the actual hardware feature.
    uint opcode = g_IntelExt.IncrementCounter();
    g_IntelExt[opcode].opcode = INTEL_EXT_BEGIN_PIXEL_ORDERING;
    g_IntelExt[opcode].rid = 0xFFFF;
}

which is named pretty similarly to the OpenGL extension function call beginFragmentShaderOrderingINTEL.
 
Haswell with driver 10.18.15.4225 on build 10074.

"Intel(R) HD Graphics Family"
VEN_8086, DEV_0A16, SUBSYS_2246103C, REV_0B
Dedicated video memory : 134742016 bytes
Total video memory : 4294901760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_1 (0xb100)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_NOT_SUPPORTED (0)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_1 (1)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 1
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
MaxGPUVirtualAddressBitsPerResource : 31
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
Adapter Node 0: TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1
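
For reference, here is a minimal sketch of how a report like the one above can be produced, assuming an already created ID3D12Device* device (function name is just for illustration; only a few of the printed fields are shown):

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

// Hedged sketch: query the same capability bits the feature test tool prints.
void PrintSomeD3D12Caps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        printf("ResourceBindingTier : %d\n", options.ResourceBindingTier);
        printf("ROVsSupported : %d\n", options.ROVsSupported);
        printf("ConservativeRasterizationTier : %d\n", options.ConservativeRasterizationTier);
        printf("ResourceHeapTier : %d\n", options.ResourceHeapTier);
    }

    // Per-node architecture info: the "TileBasedRenderer / UMA / CacheCoherentUMA" line.
    D3D12_FEATURE_DATA_ARCHITECTURE arch = {};
    arch.NodeIndex = 0;
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_ARCHITECTURE, &arch, sizeof(arch))))
    {
        printf("Adapter Node 0: TileBasedRenderer: %d, UMA: %d, CacheCoherentUMA: %d\n",
               arch.TileBasedRenderer, arch.UMA, arch.CacheCoherentUMA);
    }
}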
 
Someone is trying to define DX12 feature levels and tiers (a sketch of querying the feature level at runtime follows the lists):

  • Tier 1: INTEL Haswell and Broadwell, NVIDIA Fermi
  • Tier 2: NVIDIA Kepler, Maxwell 1.0 and Maxwell 2.0
  • Tier 3: AMD GCN 1.0, GCN 1.1 and GCN 1.2

  • Feature level 11.0: NVIDIA Fermi, Kepler, Maxwell 1.0
  • Feature level 11.1: AMD GCN 1.0, INTEL Haswell and Broadwell
  • Feature level 12.0: AMD GCN 1.1 and GCN 1.2
  • Feature level 12.1: NVIDIA Maxwell 2.0
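
As referenced above, a minimal sketch (assuming an already created ID3D12Device* device; the function name is only for illustration) of how the "Maximum feature level" line in these reports is obtained, so a given GPU can be placed into one of the buckets listed:

Code:
#include <windows.h>
#include <d3d12.h>

// Hedged sketch: ask the runtime which of the feature levels listed above
// the adapter actually reaches.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (FAILED(device->CheckFeatureSupport(
            D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels))))
        return D3D_FEATURE_LEVEL_11_0; // conservative fallback

    return levels.MaxSupportedFeatureLevel; // e.g. 0xb100 == 11_1 on Haswell
}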
 
Someone is trying to define DX12 feature levels and tiers [...]

That's a good summary.

Note, the author is above you ;)
 
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 1
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_2 (2)
Adapter Node 0: TileBasedRenderer: 0, UMA: 1, CacheCoherentUMA: 1
Thanks! Anyone with Kepler/Maxwell 1, Maxwell 2, or Skylake willing to run the D3D12 feature test tool?

It might also be interesting to see what CacheCoherentUMA reports on Kaveri and Carrizo.
It would be more interesting to know what difference it makes for the graphics programmer...

They did not include Skylake, as I did in my tables... otherwise it seems to be correct.
 
Someone is trying to define DX12 feature levels and tiers.

Based on its content, the site seems to focus primarily on one GPU vendor; there is not much balance in the articles or in the personal conclusions drawn.
 
I am no longer directly involved in producing content for the site, but I can tell you it is a small site. When you are a small site and only one GPU vendor is interested in providing you with samples to test on a fairly regular basis, you cannot magically conjure up other vendors' hardware to write about.
Of course everyone has their own personal opinions, but no one on that site is a professional journalist, and I personally see far too many professional journalists every day trying to be as non-objective as possible.
 
Thanks! Anyone with Kepler/Maxwell 1, Maxwell 2, or Skylake willing to run the D3D12 feature test tool?
Kepler says:
ADAPTER 0
"NVIDIA GeForce GTX 680"
VEN_10DE, DEV_1180, SUBSYS_0969196E, REV_A1
Dedicated video memory : 2086338560 bytes
Total video memory : 4294901760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 0
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
MaxGPUVirtualAddressBitsPerResource : 31
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0
 
I dunno, I think the division is pretty good. Both 12_0 (bindless) and 12_1 (better rasterization/ROP control) are important. 12_1 enables stuff like efficient OIT, volumetric shadows (AVSM), voxelization, user-space tiling/binning, etc. Are games all of a sudden going to require all of these features? Of course not, but the sooner the majority of hardware has them the sooner we can push those more future-looking techniques forward. Of course consoles will hold things back a bit, but that's always the case.

Let's have a little gratitude for the consoles. If it weren't for their contribution to the R&D for that generation of GPUs, the current situation might have been Maxwell at 12_1 and everything else at 11_1, assuming there would even have been sufficient push for a 12_foo with two major IHVs below the bar.
 
Let's have a little gratitude for the consoles.
I have no major beef with the consoles - in some ways I'm surprised AMD hasn't been pushing the generality of GCN a lot harder than they have (in Mantle and otherwise). In their situation I think I would have long since entirely dispensed with the notions of driver-managed descriptors, samplers, etc. and just directly exposed the relevant memory encoding routines. Honestly I think the desire to remain compatible with current shading languages (HLSL) really puts a damper on what that hardware is otherwise capable of.

In any case I'm glad we can all get onto bindless soon, but it's sort of sad that ubiquitous ROV support is still a ways away, if only because there aren't really good alternatives other than picking inferior algorithms. At least with conservative rasterization you have software fallbacks (GS) on hardware that doesn't support it natively.
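
To make the fallback point concrete, here is a minimal sketch of choosing algorithms from the caps bits discussed above (not anyone's actual engine code; the path names and function are hypothetical, and an already created ID3D12Device* device is assumed):

Code:
#include <windows.h>
#include <d3d12.h>

enum class OitPath { RasterizerOrderedViews, PerPixelLinkedLists };
enum class VoxelPath { HardwareConservativeRaster, GeometryShaderExpansion };

void SelectRenderPaths(ID3D12Device* device, OitPath& oit, VoxelPath& voxel)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    // ROVs have no efficient emulation, so fall back to a different algorithm.
    oit = options.ROVsSupported ? OitPath::RasterizerOrderedViews
                                : OitPath::PerPixelLinkedLists;

    // Conservative rasterization can be approximated with a geometry shader.
    voxel = (options.ConservativeRasterizationTier !=
             D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED)
                ? VoxelPath::HardwareConservativeRaster
                : VoxelPath::GeometryShaderExpansion;
}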
 
I didn't know whether the Skylake iGPU features were still under NDA or not.
Skylake hasn't launched yet so all the information is obviously speculation.
Feature level 12_x for Skylake was announced in the IDF 2014 presentation GVCS005 - Microsoft* Direct3D* 12: New API Details and Intel Optimizations (PDF page 46, webcast 39:05) - if it's speculation, it comes directly from Intel.

VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
Thanks! Now we have a complete picture for Resource Heap tiers ... whatever these tiers mean.

Using functionality such as mapping default buffers should be guided by this at some point, just to give one example.
It's always good to have an explicit definition.
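
For what the Resource Heap tiers mean in practice, a minimal sketch (assuming an already created ID3D12Device* device; the helper is hypothetical): Tier 1 heaps may hold only one resource category, while Tier 2 heaps may mix buffers and textures, so heap creation flags can be chosen accordingly.

Code:
#include <windows.h>
#include <d3d12.h>

D3D12_HEAP_FLAGS BufferHeapFlags(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    // Tier 1 (the Kepler result above): restrict the heap to one category.
    // Tier 2 (Haswell above): one heap can mix buffers and textures.
    return (options.ResourceHeapTier == D3D12_RESOURCE_HEAP_TIER_1)
               ? D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS
               : D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES;
}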
 
Kepler says:

VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
Maxwell most likely implements this feature with GS pass-through (https://developer.nvidia.com/sites/...l/specs/GL_NV_geometry_shader_passthrough.txt). I am actually surprised that Kepler doesn't support this, since AMD has had this since OpenGL 4.2 (https://www.opengl.org/registry/specs/AMD/vertex_shader_layer.txt).

This presentation lists all the other relevant NVIDIA Maxwell extensions related to DirectX 12.1 features:
https://developer.nvidia.com/content/maxwell-gm204-opengl-extensions

Fragment Shader Interlock = Rasterizer ordered views (https://developer.nvidia.com/sites/...ngl/specs/GL_NV_fragment_shader_interlock.txt).
 
The current confusion seems to have been caused by comments from AMD’s Robert Hallock, who acknowledged that the various AMD GCN-class GPUs support different feature levels of DirectX 12. This has been spun into allegations that AMD doesn’t support “full” DirectX 12. In reality, Intel, Nvidia, and AMD all support DirectX 12 at various feature levels, and no GPU on the market today supports every single optional DirectX 12 capability.

[Image: DX12 feature levels comparison table]


It’s not clear why Microsoft lists Kepler as supporting DirectX 11_1 while Nvidia shows it as limited to DirectX 11_0 below, but either way, the point is made: DirectX 12 support is nuanced and varies between various card families from every manufacturer. AMD’s GCN 1.0 chips include Cape Verde, Pitcairn, Oland, and Tahiti and support feature level 11_1, whereas Bonaire, Hawaii, Tonga, and Fiji will all support feature level 12_0. Nvidia’s various 4xx, 5xx, 6xx, and 7xx families will all support DirectX 12 at the 11_0 or 11_1 feature level, with the GTX 750 Ti offering FL 12_0 support.

The issue has been further confused by claims that Maxwell is the only GPU on the market to support “full” DirectX 12. While it’s true that Maxwell is the only GPU that supports DirectX 12_1, AMD is the only company offering full Tier 3 resource binding and asynchronous shaders for simultaneous graphics and compute. That doesn’t mean AMD or Nvidia is lying — it means that certain features and capabilities of various cards are imperfectly captured by feature levels and that calling one GPU or another “full” DX12 misses this distinction. Intel, for example, offers ROV at the 11_1 feature level — something neither AMD nor Nvidia can match.

If you own a GCN 1.0, Fermi, or Kepler card, you’re going to get the DirectX 12 features that matter most. That’s why Microsoft created feature levels that older GPUs could use — if Fermi, Kepler, and older GCN 1.0 cards couldn’t benefit from the core advantages of DirectX 12, Microsoft wouldn’t have qualified them to use it in the first place. The API was purposefully designed to allow for backwards compatibility in order to ensure developers would be willing to target it.

http://www.extremetech.com/extreme/...what-amd-intel-and-nvidia-do-and-dont-deliver
 