Direct3D feature levels discussion

Could you please confirm whether there is any change in reported capabilities for Maxwell-1 with the latest Nvidia drivers?
Pulled from a GTX 750 (the first thing I could grab)

TypedUAVLoadAdditionalFormats was 0 last time. Now it's 1.

Code:
"NVIDIA GeForce GTX 750"
VEN_10DE, DEV_1381, SUBSYS_234619DA, REV_A2
Dedicated video memory : 1020985344  bytes
Total video memory : 4294901760  bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
MaxGPUVirtualAddressBitsPerResource : 31
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
Adapter Node 0:    TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0
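
For anyone who wants to reproduce a dump like this themselves, here's a minimal sketch (my own illustration, not the tool used above) that queries the same D3D12_FEATURE_DATA_D3D12_OPTIONS struct via CheckFeatureSupport:

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Create a device on the default adapter at the minimum required feature level.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;

    // Query the option caps shown in the dump above.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    printf("TypedUAVLoadAdditionalFormats : %d\n", opts.TypedUAVLoadAdditionalFormats);
    printf("ROVsSupported                 : %d\n", opts.ROVsSupported);
    printf("ConservativeRasterizationTier : %d\n", opts.ConservativeRasterizationTier);
    printf("ResourceBindingTier           : %d\n", opts.ResourceBindingTier);
    printf("TiledResourcesTier            : %d\n", opts.TiledResourcesTier);

    device->Release();
    return 0;
}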
 
I'm reaching out to everyone in order to put together a proper, well-researched article. But in the meantime, what exactly did you want?
As per the previous discussion, I get why you're doing this but you may run into the fact that IHVs are not supposed to be marketing hardware feature levels to consumers. This thread is a perfect example of how even with enthusiasts the information is neither useful nor well understood really.

Trust me, I'm all for being forthcoming on information and so on. However, recent experience does put me pretty firmly in the camp of the folks arguing that shining a spotlight on this stuff to consumers is probably doing more harm than good. Developers, absolutely this stuff is relevant. To consumers I feel like you can talk about big hardware features (conservative raster, ROVs, bindless, etc) without getting into the weeds of feature levels, caps bits, etc.
 
As per the previous discussion, I get why you're doing this but you may run into the fact that IHVs are not supposed to be marketing hardware feature levels to consumers. This thread is a perfect example of how even with enthusiasts the information is neither useful nor well understood really.

Trust me, I'm all for being forthcoming on information and so on. However, recent experience does put me pretty firmly in the camp of the folks arguing that shining a spotlight on this stuff to consumers is probably doing more harm than good. Developers, absolutely this stuff is relevant. To consumers I feel like you can talk about big hardware features (conservative raster, ROVs, bindless, etc) without getting into the weeds of feature levels, caps bits, etc.
I completely hear you on consumer versus dev, which is a big reason why I'm doing this article. The purpose of the article basically boils down to "Stop freaking out. There are differences, some matter more than others. Most are there for developers". If I can't find a way to clarify this and calm people down (i.e. bring enlightenment), then it's not something I'll publish, as I definitely don't want to contribute to the problem.
 
My fear, for both enthusiasts and consumers, is what developers will actually target. Yes, the features are there for developers, and they differ a lot. So does that mean many developers will be forced to target the lowest common denominator? Will any of these enhanced features be used at all if they require a completely different way to code shaders, etc.?
 
You can't get the genie back in the bottle.

Every vendor is marketing their new GPUs as compliant with whatever the latest "DirectX" is, no matter what, so versions of the API/runtime have lost any significance for consumers, even more so than stupid package-box infographics - this is why they are looking elsewhere to differentiate hardware capabilities and make the right choice to "future-proof" their purchase decisions. But I agree that for levels 12_0 and 12_1 the impact of individual mandatory features is much easier to explain to end users.
 
My fear, for both enthusiasts and consumers, is what developers will actually target. Yes, the features are there for developers, and they differ a lot. So does that mean many developers will be forced to target the lowest common denominator? Will any of these enhanced features be used at all if they require a completely different way to code shaders, etc.?
Developers usually target the features they need. If they need a hardware feature and some target hardware does not support it, they have three choices:
- Don't target that hardware at all; this happens only if that hardware no longer has significant market share for the game.
- Don't use that feature on that hardware (i.e. the end user cannot enable particular effects with certain hardware); the simplest option and almost always applicable.
- Use an alternative method, which could potentially be slower.
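
To make the second option concrete, here's a minimal sketch of what that gating typically looks like in D3D12 (the toggle name and fallback behaviour are just illustrative; the caps query itself is the standard one discussed in this thread):

Code:
// Assumes an existing ID3D12Device* device. If the hardware lacks ROVs, the
// ROV-based effect is simply left disabled (or greyed out in the options menu).
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
bool useROVTransparency = false;  // hypothetical toggle for an ROV-based effect
if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts))))
{
    useROVTransparency = (opts.ROVsSupported != FALSE);
}
// Otherwise keep the fallback path (the third option above: an alternative, possibly slower method).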
 
I completely hear you on consumer versus dev, which is a big reason why I'm doing this article. The purpose of the article basically boils down to "Stop freaking out. There are differences, some matter more than others. Most are there for developers". If I can't find a way to clarify this and calm people down (i.e. bring enlightenment), then it's not something I'll publish, as I definitely don't want to contribute to the problem.
Sounds reasonable. I get both sides of this issue (I'm both a dev and enthusiast :)) so I don't envy the line you guys have to walk here, but you clearly understand the situation so that's all one can ask.
 
neither useful nor well understood really

just be transparent about it and let the public decide... And as for the consumer vs. developer argument, through the years I've seen a lot of developer-only technical info become common language among consumers.

and if a feature is "neither useful nor well understood", then that itself is a problem with the HW/API that needs fixing.

It's ridiculous that you can't just get a matrix of what the HW supports... period... it should be that simple! (In my biased eyes it's only the ISVs that are afraid to list it out.)
 
just be transparent about it and let the public decide
There's nothing "secret" here per se; it's all queryable via the public API, as this thread has demonstrated :) It's a question of whether or not to direct consumer focus to it. There are a million details in graphics APIs that consumers rightfully have no idea about, because they don't have the context to understand what they mean, and that's the point: what consumers ultimately care about is how games look and play, and it's best to let the games speak for themselves on that front. Obviously marketing is marketing, but the further you stray from that, the more you just get fanboys justifying purchases, which does no one any service.

and if a feature is "neither useful nor well understood", then that itself is a problem with the HW/API that needs fixing.
Um, my statement was that the "information is neither useful nor well understood" to consumers, not the underlying features themselves.

It's ridiculous that you can't just get a matrix of what the HW supports... period... it should be that simple! (In my biased eyes it's only the ISVs that are afraid to list it out.)
You can! See the tool Dmitry posted, or presumably an updated DX caps tool. Getting this information has always been trivial; people just (rightly) didn't care much in the past. It's not about secrets, it's about whether consumers have the context to interpret the information in any useful way. For example, anyone can go ahead and capture a GPUView trace of a game, but how many people have the experience to interpret the resulting data?
 
but how many people have the experience to interpret the resulting data?

... agree ... with you it's trivial to get this info ...

... all I'm arguing against is your statement about "experience to interpret the data" ... that's true, BUT look at the food industry and their labels.. I'm sure most people out there have no clue what that data means..

anyway I'll leave this argument here.. agree to disagree

[Attached image: supplement facts label]
 
Feature "tiers" made DX "Feature level" way confusing (and it wasn't easy before..). I would argue for referring them atomically.
 
My personal opinion is that all the information should be easily available for the technically oriented consumer. Consumers can be considered the number one driver of technical innovation, as consumers choose the product. If consumers prefer DX 12.1 over DX 12.0, they will buy a DX 12.1 compatible GPU. That eventually means that DX 12.0 GPUs will not sell well enough, and that in turn means there will eventually be enough DX 12.1 GPUs around to create games that require the 12.1 feature set. Conservative rasterization, ROV and volume tiled resources are all major "enabler" features, allowing new styles of rendering pipelines and techniques (that are not easy to backport to older hardware). The same is true for other new DX 12 features at lower feature levels, such as ExecuteIndirect, bindless resources, tiled resources and fast render target array index (bypassing the geometry shader).

I do agree with Andrew that it is becoming harder to describe the new features to a consumer. Does the consumer actually understand why it is important to be able to change the index start offset and primitive count per instance (the main difference between geometry instancing and multidraw), or how a 0.5 vs. 1/512 pixel maximum "dilation" affects the performance and usability of conservative rasterization (tier 1 vs tier 3)? How important are tier 2 tiled resources (page-miss error code from sampling, min/max filtering)? Not even the developers know the answers to all of these questions yet.

A good example of a feature that was not understood by consumers is compute shaders. Most consumers still think that CS enables GPU physics and fluid simulation, while in reality it is mostly used to speed up lighting and post-processing in current games and to perform occlusion culling and other geometry processing.
 
My personal opinion is that all the information should be easily available for the technically oriented consumer. Consumers can be considered the number one driver of technical innovation, as consumers choose the product. If consumers prefer DX 12.1 over DX 12.0, they will buy a DX 12.1 compatible GPU. That eventually means that DX 12.0 GPUs will not sell well enough, and that in turn means there will eventually be enough DX 12.1 GPUs around to create games that require the 12.1 feature set. Conservative rasterization, ROV and volume tiled resources are all major "enabler" features, allowing new styles of rendering pipelines and techniques (that are not easy to backport to older hardware). The same is true for other new DX 12 features at lower feature levels, such as ExecuteIndirect, bindless resources, tiled resources and fast render target array index (bypassing the geometry shader).

I do agree with Andrew that it is becoming harder to describe the new features to a consumer. Does the consumer actually understand why it is important to be able to change the index start offset and primitive count per instance (the main difference between geometry instancing and multidraw), or how a 0.5 vs. 1/512 pixel maximum "dilation" affects the performance and usability of conservative rasterization (tier 1 vs tier 3)? How important are tier 2 tiled resources (page-miss error code from sampling, min/max filtering)? Not even the developers know the answers to all of these questions yet.

A good example of a feature that was not understood by consumers is compute shaders. Most consumers still think that CS enables GPU physics and fluid simulation, while in reality it is mostly used to speed up lighting and post-processing in current games and to perform occlusion culling and other geometry processing.


You just underestimated every internet expert out there on this subject. I am horrified you did that, as it will somehow come back around as karma and irony or something. Something about that bag seems funny.
 
What happens exactly when you "bind" a texture to a texture unit for a given shader program? Given texture units are spread across the chip it's a pretty strange concept.
 
June 9, 2015
A few days ago it was confirmed that none of AMD’s current GPUs are capable of supporting the new rendering features in DirectX 12 level 12_1, namely Conservative Rasterization and Rasterizer Ordered Views.
I already mentioned that back when Maxwell v2 and DX11.3 were introduced.
Back then a guy named ‘Tom’ tried to push the idea that these features are supported by Mantle/GCN already. When I asked for some specifics, it became somewhat clear that it would simply be some kind of software implementation rather than direct hardware support.

When I asked for some actual code to see how complex it would actually be to do this in Mantle, there was no reply.
Software approaches in general are no surprise. AMD demonstrated a software version of order-independent transparency at the introduction of the Radeon 5xxx series.
And a software implementation for conservative rasterization was published in GPU Gems 2 back in 2005.
So it’s no surprise that you can implement these techniques in software on modern GPUs. But the key to DX12_1 is efficient hardware support.

I think it is safe to assume that if they supported conservative rasterization and ROVs, we would have heard of it by now, and it would definitely be mentioned on the slides, since it is a far more interesting feature than resource binding tier 2 vs tier 3.

So I contacted Richard Huddy about this. His reply pretty much confirmed that conservative rasterization and ROVs are missing.
Some of the responses from AMD's Robert Hallock also point to downplaying the new DX12_1 features, pretending that supporting DX12 is supporting DX12, regardless of features.
Clearly that is not the case. But if AMD needs to spread that message, with a new chip being launched in just a few days, I think we know enough.
https://scalibq.wordpress.com/2015/06/09/no-dx12_1-for-upcoming-radeon-3xx-series/
 
What happens exactly when you "bind" a texture to a texture unit for a given shader program? Given texture units are spread across the chip it's a pretty strange concept.
Nothing really "happens" on GCN hardware.

Simplified: When you "bind" stuff on the CPU side, the driver puts a "pointer" (resource descriptor) into an array. Later, when a wave starts running in a CU, it will issue (scalar) instructions to load this array (of resource descriptors) from memory into scalar registers. A buffer load / texture sample instruction takes a resource descriptor (in scalar registers) and 64 offsets/UVs as input and returns 64 results (filtered texels or loaded values from a buffer). Texture sampling has higher latency than buffer loads, as the texture filtering hardware is further away from the execution units (buffer loads have low latency, as they get the data directly from the CU's L1 cache).

GCN hardware is fully bindless. Most graphics APIs however are based around a resource binding concept, because that's how some modern GPUs (and all older GPUs) work. On GCN you wouldn't need any CPU-side binding API. All memory accesses (including filtering with samplers) could be fully programmed by shaders.
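
To make the "pointer into an array" part a bit more concrete, here's a rough D3D12-flavoured sketch (my own illustration; it assumes a device, command list, texture and a root signature with a descriptor table at root parameter 0 already exist). On the CPU side, "binding" is just writing a descriptor into a heap and pointing the shader at it:

Code:
// Create a shader-visible descriptor heap: this is the "array" the descriptors go into.
D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 1;
heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
ID3D12DescriptorHeap* heap = nullptr;
device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

// "Bind" the texture: write its resource descriptor into the array. No texture
// data moves anywhere, and no texture unit gets "wired up".
device->CreateShaderResourceView(texture, nullptr, heap->GetCPUDescriptorHandleForHeapStart());

// Tell the shader where the descriptor lives; the wave later reads it (on GCN,
// via scalar loads into scalar registers) when it actually samples.
cmdList->SetDescriptorHeaps(1, &heap);
cmdList->SetGraphicsRootDescriptorTable(0, heap->GetGPUDescriptorHandleForHeapStart());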
 