DirectX 12 API Preview

Andrew said:
What Max's new slides show is that there are also new features coming beyond feature level 11, and the described features will be exposed through both the D3D 12 and 11.3 APIs.

But are there feature levels in D3D 12 (at launch) that go beyond the feature levels available in D3D 11.3 (aka feature level 11_3)? It sounds like from Max that D3D 11.3 and 12.0 will have feature parity.
 
But are there feature levels in D3D 12 (at launch) that go beyond the feature levels available in D3D 11.3 (aka feature level 11_3)? It sounds like from Max that D3D 11.3 and 12.0 will have feature parity.
I don't think Microsoft has really said either way on that (Max, feel free to jump in :)).

But you'll have to be a bit more specific on what you mean by "features" - arguably the low-overhead API stuff itself exposes both CPU and GPU stuff that you can't get access to on 11.3, so not to be coy, but how do you define a "feature" in that context?
 
so consumers really need to separate those concepts in their minds too.

Seems unlikely to happen after the years of marketing that used DirectX versions as a defining feature of new GPUs. Even some developers that I've spoken with will still get confused about API versions vs. feature levels in D3D11!
 
Microsoft hasn't confirmed the name/value of any new feature levels yet, nor have the final conformance tests been handed off to hardware vendors. All statements concerning a feature level 11.3 or 12.0 are speculation, as are any statements about hardware supporting such feature levels. What I did just confirm last week is the existence of four new rendering features that will be included in both the Direct3D 11.3 and Direct3D 12.0 APIs: ROVs, Typed UAVs, Volume Tiled Resources, and Conservative Rasterization. I also confirmed there are a couple more features beyond what was disclosed. At minimum this means capability bits, but one could reasonably assume there's a common set across multiple hardware vendors that I'll eventually announce as a new feature level, once the issues of conformance and support are settled across hardware vendors. Announcing new feature levels is something I prefer to do at the same time across all hardware vendors.

As far as parity of new hardware features between Direct3D 11.3 and 12.0 goes, generally my team is trying to bring hardware features to both; however, there may eventually be some features that only make sense on 12.0. Consider the 12.0 bind model that I announced at IDF. I'm sure several hardware vendors are already dreaming up ways to exploit that bind model for new rendering features. It's too dramatic a change to back-port that bind model to 11.3, so such rendering features would be 12.0-specific.

I'm trying to be more open during the development process of Direct3D, pulling in feedback earlier in the development process to make a better API and enable developers to create content earlier. It's natural for some amount of confusion to occur the first time this is attempted. There are a lot of formerly hidden steps in building Direct3D that are visible now and every partner in the industry, from game developer, to IHV, to Direct3D is figuring out how to work in this more open model.

Andrew Lauritzen and I are in complete agreement about the feature level and API numbering being confusing. The pattern was established before I was the lead for Direct3D so I've favored continuity over aggressively renaming things just to make a mark.

If the goal is to describe which API methods function fully, I could throw in another complicating factor by considering OS version and WDDM version. A classic example of this is the Direct3D 11.1 API Platform Update for Windows 7, where my team brought the 11.1 API from Windows 8 back to Windows 7. There was a lot of negative press about the 11.1 hardware features not being supported on Windows 7 in that platform update, along with a significant amount of speculation in the press that the hardware features were turned off to create a need to upgrade to Windows 8.

The actual truth is my team engineered the platform update to have full support for the hardware features in feature level 11.1. Exposing those hardware features requires the runtime to query a new WDDM version and function table from the user mode drivers. When my team went into testing for the platform update, a significant number of hybrid laptop drivers and unsupported wrapper drivers for things like USB displays behaved erratically when a new driver version was queried. After months of ordering more laptops and devices to test, I eventually pulled the plug on querying a new WDDM version to resolve the remaining driver compatibility issues. The cost was too great for the features being added.

Even if my team managed to keep the driver query intact, some APIs like EnqueueSetEvent on DXGI wouldn't work without an update to the kernel or a change in design for Windows 7. Such APIs were left disabled in the platform update based on "bang for buck" of dev effort.

In summary, support for a given piece of functionality is multidimensional: API version, OS version, Hardware Support, and WDDM Version.
 
Seems unlikely to happen after the years of marketing that used DirectX versions as a defining feature of new GPUs. Even some developers that I've spoken with will still get confused about API versions vs. feature levels in D3D11!

I honestly don't know what's so hard about it! Developers are used to working with APIs, period. Not a hard concept at all!
 
But you'll have to be a bit more specific on what you mean by "features" - arguably the low-overhead API stuff itself exposes both CPU and GPU stuff that you can't get access to on 11.3, so not to be coy, but how do you define a "feature" in that context?
I think he meant exposing new GPU features in addition to making the CPU side API more efficient. Max already gave us a list of four new GPU features in DirectX 11.3. We all know that the latest GPUs support many other features in addition to those four new DirectX 11.3 features and the old DirectX 11.2 feature set (just look at OpenGL 4.4/4.5 spec sheets).

My personal wish list is quite long, so I just cut it down to the single most important feature: https://www.opengl.org/registry/specs/ARB/indirect_parameters.txt. This feature allows the GPU itself to decide how many draw calls it's going to render (without any CPU intervention). This is the most important "enabler" feature for new kinds of GPU-driven (compute based) rendering pipelines.
 
It's natural for some amount of confusion to occur the first time this is attempted. There are a lot of formerly hidden steps in building Direct3D that are visible now and every partner in the industry, from game developer, to IHV, to Direct3D is figuring out how to work in this more open model.
Totally agreed, but as I mentioned above I think this even pre-dates the new, more open efforts. Game devs mostly understand API vs. feature level at this point (but didn't in the early ~DX11 timeframe) but consumers/tech press completely do not. For instance, do you know anywhere that you can even go and check the feature level of various current hardware, including caps like tiled resource tier? When API and feature level were separated, the consumers/press followed the API level for some reason, where what they really care more about is the feature level.

The pattern was established before I was the lead for Direct3D so I've favored continuity over aggressively renaming things just to make a mark.
I agree, changing at this point would just confuse people more. Maybe some sort of targeted effort (FAQ, press blast or something) could help, but as you note, it gets even more subtle than API vs. feature level once OS stuff is involved.

When my team went into testing for the platform update, a significant number of hybrid laptop drivers and unsupported wrapper drivers for things like USB displays behaved erratically when a new driver version was queried. After months of ordering more laptops and devices to test, I eventually pulled the plug on querying a new WDDM version to resolve the remaining driver compatibility issues. The cost was too great for the features being added. Even if my team managed to keep the driver query intact, some APIs like EnqueueSetEvent on DXGI wouldn't work without an update to the kernel or a change in design for Windows 7. Such APIs were left disabled in the platform update based on "bang for buck" of dev effort.
Very interesting, I had never heard that :) I'm not at all surprised that things like hybrid graphics drivers, USB displays (and multi-GPU SLI/CrossFire) make this way more complicated in practice. I do have to sigh at game devs who think that it's technically easy to back-port new API and hardware features and that Microsoft and IHVs are being intentionally obtuse about it... I get that consumers are not going to understand the subtleties, but game devs really should :)

Great post Max; it clarifies everything really well.
 
I think he meant exposing new GPU features in addition to making the CPU side API more efficient.
Right, but that's my point - what a "GPU feature" is isn't even fully well-defined. E.g., is the new binding model a CPU or GPU feature (or both)? As far as developers are concerned, it really doesn't matter either way - it's a capability that they didn't previously have.
 
Totally agreed, but as I mentioned above I think this even pre-dates the new, more open efforts. Game devs mostly understand API vs. feature level at this point (but didn't in the early ~DX11 timeframe) but consumers/tech press completely do not. For instance, do you know anywhere that you can even go and check the feature level of various current hardware, including caps like tiled resource tier? When API and feature level were separated, the consumers/press followed the API level for some reason, where what they really care more about is the feature level.
Cap bits are a huge part of the consumer problem, IMHO. It was easy enough to keep track of features with feature levels, but once you throw in optional features and features that aren't tied to a feature level (e.g. tiled resources), it gets maddening quickly.

Direct3D/DirectX is far closer to a household name, so that's certainly going to be part of the problem. But at the end of the day, if you want everything promoted by feature level, then features need to be all-or-nothing per level, well documented, and defined before the hardware arrives. Otherwise you have a Kepler situation, where the hardware is 11_0 with a bunch of extra features that now need to be explained, or a GTX 980 situation, where the hardware will almost certainly support FL 11_3, but no one can say for sure because 11_3 technically doesn't exist yet.
 
But at the end of the day, if you want everything promoted by feature level, then features need to be all-or-nothing per level, well documented, and defined before the hardware arrives. Otherwise you have a Kepler situation, where the hardware is 11_0 with a bunch of extra features that now need to be explained, or a GTX 980 situation, where the hardware will almost certainly support FL 11_3, but no one can say for sure because 11_3 technically doesn't exist yet.

I think the "all-or-nothing" approach is not a good thing in practice...

The DirectX team should adapt to feedback from the community (devs/users/studios etc) .. And if it makes sense they should bring features down feature levels (as optional)..

e.g. (Windows 8): shadow buffers were a Feature Level 10 feature, but were made optional for Feature Level 9

e.g. (Windows 8.1): instancing was 9_3 and higher, but is now optional for 9_1

And when Windows moves to a more agile release strategy, where system-level DLLs can be deployed within days and to specific groups (as per job descriptions found recently), we'll need a way to rapidly iterate on the DX/WinRT APIs as needed...
 
I think the "all-or-nothing" approach is not a good thing in practice...
Oh, from a development standpoint you're right, it would be absolutely terrible. I'm just saying that from a consumer standpoint, having to account for individual features doesn't work very well.
 
The actual truth is my team engineered the platform update to have full support for the hardware features in feature level 11.1. Exposing those hardware features requires the runtime to query a new WDDM version and function table from the user mode drivers. When my team went into testing for the platform update, a significant number of hybrid laptop drivers and unsupported wrapper drivers for things like USB displays behaved erratically when a new driver version was queried. After months of ordering more laptops and devices to test, I eventually pulled the plug on querying a new WDDM version to resolve the remaining driver compatibility issues. The cost was too great for the features being added. Even if my team managed to keep the driver query intact, some APIs like EnqueueSetEvent on DXGI wouldn't work without an update to the kernel or a change in design for Windows 7. Such APIs were left disabled in the platform update based on "bang for buck" of dev effort.

In summary, support for a given piece of functionality is multidimensional: API version, OS version, Hardware Support, and WDDM Version.

It is; however, OpenGL works across all of those OSes and hardware with a complete feature set.

Any ETA on D3D12 having a public preview or something like that ?
 
Right, but that's my point - what a "GPU feature" is isn't even fully well-defined. E.g., is the new binding model a CPU or GPU feature (or both)? As far as developers are concerned, it really doesn't matter either way - it's a capability that they didn't previously have.
The new binding model requires GPU hardware support (bindless resources). This makes it a new GPU feature. However since it's so tightly integrated to the new API (unlike the bindless resources in OpenGL), I would also say that it's a major API defining feature. Both the CPU side and the GPU side code are affected a lot by the new design. I really do like the new DX12 binding model and resource management. It's very clean and efficient.
As long as you don't test it.
This reminds me of the rant about vendor A, B, and C OpenGL driver quality: http://richg42.blogspot.fi/2014/05/the-truth-on-opengl-driver-quality.html :)

OpenGL has never been a truly cross-vendor API. You can never be sure that your code runs flawlessly on all the different GPUs. Most OpenGL developers don't even try to do that. Traditionally it was always better to use vendor-specific extensions than ARB stuff, even for storing vertex and index buffers. The situation has improved since, but we are still far away from DirectX (10+) levels of cross-vendor support. One of the biggest remaining flaws is that each vendor has their own GLSL parser and compiler. Sometimes your fully standards-compliant code doesn't even compile on all of them. OpenGL is nice if you only use the simple features and need cross-platform porting, but as soon as you start doing something advanced, your code likely doesn't even work properly on a single platform (across multiple different GPUs).

It's great that they are finally doing a full rewrite of the OpenGL API. Without all the legacy baggage and millions of ways of doing the same thing, the drivers should become much simpler: http://beyond3d.com/showthread.php?t=65484
 
The new binding model requires GPU hardware support (bindless resources). This makes it a new GPU feature.
Not sure I agree with this logic... the new binding model works on Haswell, which does not support "bindless resources" as they are defined in OpenGL, for instance. Also, what about cases where the new API/driver uses hardware in a different way than DX11 did? Is that "new hardware" or a "GPU feature"? Certainly not anything that was added for DX12 specifically given the time-frame, but as folks are probably aware, there is often unused hardware/interfaces for a given driver/OS/API.

I just don't think "GPU feature" is something that is a well-defined notion in this context, nor do I think it's particularly interesting to define it anyways.
 
I just don't think "GPU feature" is something that is a well-defined notion in this context, nor do I think it's particularly interesting to define it anyways.
Yes, I agree that it's not completely well defined. Blending, for example, is just compiled into the end of the pixel shader on some GPUs, and some hardware might implement MDI using a loop in the driver (the draw call count comes from the CPU) or a loop in the GPU command processor (the draw call count comes from the GPU). As long as the features allow developers to do new stuff on the GPUs, I don't care how the features are implemented.

On the other hand, the OpenGL 4.5 extended blending support (with lots of hardcoded Photoshop blending modes) feels like an awkward hack built on top of "pixelsync" and similar great hardware features. I would rather have seen the API expose the actual hardware feature so that we developers can program our own blending formulas instead of using hardcoded ones (even on the OpenGL extension page they are fighting over a formula, whether it should be exactly like it is in Photoshop or like it currently is in the extension).
 
On the other hand, the OpenGL 4.5 extended blending support (with lots of hardcoded Photoshop blending modes) feels like an awkward hack built on top of "pixelsync" and similar great hardware features. I would rather have seen the API expose the actual hardware feature so that we developers can program our own blending formulas instead of using hardcoded ones (even on the OpenGL extension page they are fighting over a formula, whether it should be exactly like it is in Photoshop or like it currently is in the extension).
Yeah, totally agreed on all points there. I question the thinking behind that entire extension (and path rendering but that's another discussion - albeit very relevant to the "what is a GPU feature" question :)).
 
Yeah, totally agreed on all points there. I question the thinking behind that entire extension (and path rendering but that's another discussion - albeit very relevant to the "what is a GPU feature" question :)).
I found it really funny when I read the first GTX 980 reviews and the reviewers were touting voxel rendering as a new GPU hardware feature. Some reviewers even seemed to believe that there was custom hardware inside Maxwell that made this "feature" much faster.

Nowadays all the new GPU/API features are highly technical. For the marketing team it is much sexier to say that their GPU supports voxel rendering than to say that it supports conservative rasterization and tiled resources for volume textures (sparse volume textures). Obviously, conservative rasterization can be used to speed up polygon soup -> voxel conversion algorithms, and tiled volume textures can be used to store sparse voxel octree data.

I personally like both rasterizer ordered views (the DX version of PixelSync) and conservative rasterization. I am very interested in finding out how this combo performs in data binning algorithms. This feature pair has high potential for light binning (tiled deferred lighting) and particle binning (tiled compute-based particle rendering), among other things.
 