Direct3D feature levels discussion

Feature level 12_x for Skylake was announced in the IDF2014 presentation GVCS005 - Microsoft* Direct3D* 12: New API Details and Intel Optimizations (PDF page 46, webcast 39:05) - if it's speculation, it comes directly from Intel.
Right, but that presentation came before the announcement/finalization of the feature levels (compare the dates). The only information I'd reliably extract from that bullet is that Broadwell is *not* feature level 12+ (which you have already determined).
 
Hmm... their description of what an API version is vs. a feature level is kind of confusing there. The API version is effectively a software interface thing, whereas feature level speaks to hardware capabilities. They kind of get at this with their feature level 9 example, but before that they characterize feature levels as if they were major versions and the API as a minor version... not really true at all.
 
the only information I'd reliably extract from that bullet is that Broadwell is *not* feature level 12+
The news sites interpreted it as level 12_0 for Skylake. Hopefully it's the right guess, because if Skylake is level 11_1, the backlash will be on the same scale as the recent AMD bashing (and if Skylake is level 12_1, then AMD will probably get even more bashing)...


In fact I'm starting to believe that Microsoft should have named the new levels 11_2 and 11_3, because that's what they in essence are... at least it would be consistent with Direct3D 11.3 and the 11on12 layer.
Or maybe they could just rename 11_0 and 11_1 to 12_0 and 12_1 and any new feature level would start with 12_2.
Maybe then general public would realize these features do not make any significant difference...
 
In fact I'm starting to believe that Microsoft should have named the new levels 11_2 and 11_3, because that's what they in essence are... at least it would be consistent with Direct3D 11.3 and the 11on12 layer.
Or maybe they could just rename 11_0 and 11_1 to 12_0 and 12_1 and any new feature level would start with 12_2.
Maybe then general public would realize these features do not make any significant difference...
Hmm, I pretty much disagree with all of that. As both I and sebbbi have outlined, both 12_0 and 12_1 are significant increases in functionality and performance over 11_x. And your comment about 11on12 and naming suggests you're not treating feature levels as separate from the API either. Remember: think of them more like "A, B, C" than related to the 9/10/11 of the API. They really are decoupled at this point...

I'll also remind you that these feature levels are not meant for consumption by the "general public" at all. They are developer API details. News sites that are choosing to publish information about them are doing that of their own accord; they are not meant to be marketed to or understood by consumers.
 
think of them more like "A, B, C" than related to the 9/10/11 of the API
That's the exact point I'm trying to make - name them 11_2/11_3 or 12_2/12_3, it wouldn't make any difference to developers, but the general public would believe the difference between 11_1 and 11_2 is not as dramatic as between 11_1 and 12_0...
 
No, that's not any less confusing. They should be named something that has *nothing to do* with the numbering of the API.
This. Something like D3D12_FEATURE_LEVEL_ALPHA, *_BETA, *_GAMMA, *_DELTA, and so on...
But I am afraid we are too late to see another dramatic refactoring, since they come from d3dcommon.h.
 
Yeah, it's a pretty ridiculous and universally confusing way to represent what the GPUs underneath are really capable of. In the 5 years I've spent trying to help customers understand what my GPUs are capable of, none of them have ever understood how to separate API from hardware capabilities using the feature level naming, and some of them still struggle to grok it after explanation, returning to a poorer understanding somewhere down the line.

If you must have hardware feature levels (and you must), make it very simple for everyone to understand.
 
I should note that Max did already say in another thread that he generally agrees that it's confusing, but that at this point it would be even more confusing to change it entirely (which is probably fair):
https://forum.beyond3d.com/threads/directx-12-api-preview.55653/page-5#post-1791900
And his note about WDDM being mixed up in this fun when new OSes are released is very true as well, and something that consumers are even more confused about (tends to spawn misguided indignation).
 
Why do you need to have feature levels? Wouldn't it be easier to just have the cap bits from years ago?
FEATURE_ROV etc?
Especially since some of the features that are combined in a single level don't seem to be very related anyway?
 
Why do you need to have feature levels? Wouldn't it be easier to just have the cap bits from years ago?
It makes it much easier for developers to be able to target a base set of features, and it encourages hardware vendors to support a given level of functionality completely. If it wasn't for feature levels, you'd likely still have each vendor decide not to bother implementing little details even in feature level 11 and similar. It basically allows Microsoft/developers to have a bit of leverage to get hardware vendors to converge on useful levels of functionality, which is something that is definitely needed.
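To make that concrete, here's a minimal sketch of my own (not something anyone posted above): a developer targets a base feature set simply by passing the minimum feature level to device creation, and creation fails outright on hardware that can't support it completely.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch: require feature level 11_0 as the baseline.
// If the default adapter can't support 11_0 completely, device creation
// fails and the app can fall back or refuse to run.
ComPtr<ID3D12Device> CreateBaselineDevice()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr,                 // default adapter
                      D3D_FEATURE_LEVEL_11_0,  // minimum feature level we target
                      IID_PPV_ARGS(&device));
    return device;                             // null on failure
}
```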

FEATURE_ROV etc?
These are present as well - this is where we've sort of settled for now. Basically you can check for feature level 12_1 and if present assume that ROVs, CR, etc. are supported. Alternatively, you can require 11_1 hardware and query if ROVs are optionally supported (ex. Haswell/Broadwell).
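In code the two paths look roughly like this (my own sketch, assuming you already have a valid ID3D12Device; error handling omitted):

```cpp
#include <d3d12.h>

// Sketch of the two query paths: feature level 12_1 implies ROVs, CR, etc.;
// otherwise the individual caps can be queried as optional features.
bool HasROVsAndCR(ID3D12Device* device)
{
    // Path 1: ask for the highest supported feature level.
    D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));
    if (levels.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_1)
        return true;

    // Path 2: on 11_x hardware, check the individual (optional) caps.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    return opts.ROVsSupported &&
           opts.ConservativeRasterizationTier != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}
```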

As far as 12_0 and 12_1 go, I'd actually argue the features are fairly related within a tier. CR + ROVs in particular are very powerful when used together.
 
I can see how it encourages HW makers to add a complete features set.
But when you see statements about how Nvidia supports certain 11_1 features but not all, and that they can be enabled on request, then it's clear that it also slows down adoption of some features.
There's probably no way to make everybody happy...
 

There are a couple of points in there that I'm not sure are correct.

1. "AMD is the only company offering ... asynchronous shaders for simultaneous graphics and compute." - I'm pretty sure Maxwell 2 support async compute
2. I didn't think Maxwell 1 supported Typed UAVs or Resource Binding Tier 2, and is thus 11_0 rather than 12_0. Ditto, didn't think Kepler supported tiled resources Tier 2.
 
1. "AMD is the only company offering ... asynchronous shaders for simultaneous graphics and compute." - I'm pretty sure Maxwell 2 support async compute
They are slightly over-stepping their bounds on statements like that. All DX12 implementations must provide async compute and graphics queues. The level to which these can be executed simultaneously on hardware is both invisible to developers and potentially a lot more subtle than "yes or no", as it depends on exactly which shared hardware resources are used by the kernels.
 
They are slightly over-stepping their bounds on statements like that. All DX12 implementations must provide async compute and graphics queues. The level to which these can be executed simultaneously on hardware is both invisible to developers and potentially a lot more subtle than "yes or no", as it depends on exactly which shared hardware resources are used by the kernels.

I hadn't realised that. So Kepler (even Fermi) and Haswell all support async compute? Just not necessarily executing simultaneously?
 
Async-copy and async-compute must be supported by all D3D12 GPUs and drivers. As far as I know, async-copy should be supported by all three vendors on all GPUs and iGPUs.
 
I hadn't realised that. So Kepler (even Fermi) and Haswell all support async compute? Just not necessarily executing simultaneously?
Yes, it's supported on all DX12 implementations. It just may or may not provide performance benefits on a given piece of hardware depending on the specifics of the graphics/compute/copy commands.
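For reference, this is roughly what "supported" means at the API level (a sketch of my own): every D3D12 device lets you create separate direct (graphics), compute, and copy queues; whether work on them actually overlaps on the hardware is an implementation detail.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: every D3D12 device exposes the three queue types; how much they
// actually run concurrently on the hardware is not promised by the API.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfx,
                  ComPtr<ID3D12CommandQueue>& compute,
                  ComPtr<ID3D12CommandQueue>& copy)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfx));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // "async compute" queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // "async copy" queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copy));
}
```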
 
As far as 12_0 and 12_1 go, I'd actually argue the features are fairly related within a tier. CR + ROVs in particular are very powerful when used together.
The first thing I designed after seeing the DX 11.3 / DX 12 new feature list was a light binning pipeline based on ROV and CR. These two allow many kinds of algorithms to be implemented efficiently. You can achieve even crazier stuff if you combine these with rendering to a sparse texture (tiled resource). I am eagerly waiting to see how well CR performs (I don't have access to a Maxwell 2.0 GPU at the moment). Intel's ROV implementation seems to be fast. Hopefully it is as fast on other GPUs.

ROV + CR + volume tiled resources also combine well (for voxel rendering obviously).
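For what it's worth, on the runtime side CR itself is just a rasterizer-state flag once the tier check passes (rough sketch of mine below); the ROV part lives on the HLSL side (RasterizerOrderedTexture2D and friends), so there's nothing extra to set in the PSO for it.

```cpp
#include <d3d12.h>

// Sketch: flip conservative rasterization on in an otherwise fully filled-out
// graphics PSO description (shaders, root signature, formats, etc. set up elsewhere).
// Only valid if the device reports a ConservativeRasterizationTier above NOT_SUPPORTED.
void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc)
{
    psoDesc.RasterizerState.ConservativeRaster = D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;
}
```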
 
Stupid question, but I'm a bit hazy on bindless in relation to hardware and resource binding tiers. Could someone please explain the relationship between the three? I thought bindless would be required for 12 to work, period, but since that's not the case I'm a little confused; some help would be appreciated.
 