DmitryKo
Veteran
> in the meantime what exactly did you want?

Could you please confirm if there is any change in reported capabilities for Maxwell-1 with the latest Nvidia drivers?
> Could you please confirm if there is any change in reported capabilities for Maxwell-1 with the latest Nvidia drivers?

Pulled from a GTX 750 (the first thing I could grab):
"NVIDIA GeForce GTX 750"
VEN_10DE, DEV_1381, SUBSYS_234619DA, REV_A2
Dedicated video memory : 1020985344 bytes
Total video memory : 4294901760 bytes
Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
DoublePrecisionFloatShaderOps : 1
OutputMergerLogicOp : 1
MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
PSSpecifiedStencilRefSupported : 0
TypedUAVLoadAdditionalFormats : 1
ROVsSupported : 0
ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
MaxGPUVirtualAddressBitsPerResource : 31
StandardSwizzle64KBSupported : 0
CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
CrossAdapterRowMajorTextureSupported : 0
VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
Adapter Node 0: TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0
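For reference, everything in the dump above is queryable with a few lines of code against the public API. A minimal sketch, untested and with error handling omitted, that pulls a subset of the same caps from D3D12_FEATURE_DATA_D3D12_OPTIONS (adapter selection hard-coded to the first adapter for brevity):

```
// Minimal sketch: query the D3D12 caps bits shown above.
// Untested; error handling omitted, first adapter hard-coded.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <cstdio>

int main()
{
    IDXGIFactory4* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    IDXGIAdapter1* adapter = nullptr;
    factory->EnumAdapters1(0, &adapter);

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    wprintf(L"\"%s\"\n", desc.Description);
    printf("Dedicated video memory : %zu bytes\n", desc.DedicatedVideoMemory);

    ID3D12Device* device = nullptr;
    D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // All of the caps bits listed above live in one options struct.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    printf("TiledResourcesTier            : %d\n", opts.TiledResourcesTier);
    printf("ResourceBindingTier           : %d\n", opts.ResourceBindingTier);
    printf("TypedUAVLoadAdditionalFormats : %d\n", opts.TypedUAVLoadAdditionalFormats);
    printf("ROVsSupported                 : %d\n", opts.ROVsSupported);
    printf("ConservativeRasterizationTier : %d\n", opts.ConservativeRasterizationTier);
    printf("ResourceHeapTier              : %d\n", opts.ResourceHeapTier);

    device->Release(); adapter->Release(); factory->Release();
    return 0;
}
```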
> TypedUAVLoadAdditionalFormats was 0 last time. Now it's 1.

Well, this apparently gives a little bit more credence to that table...
> I'm reaching out to everyone in order to put together a proper, well-researched article. But in the meantime what exactly did you want?

As per the previous discussion, I get why you're doing this, but you may run into the fact that IHVs are not supposed to be marketing hardware feature levels to consumers. This thread is a perfect example of how, even with enthusiasts, the information is neither useful nor well understood really.
> As per the previous discussion, I get why you're doing this but you may run into the fact that IHVs are not supposed to be marketing hardware feature levels to consumers. This thread is a perfect example of how even with enthusiasts the information is neither useful nor well understood really.

I completely hear you on consumer versus dev, which is a big reason why I'm doing this article. The purpose of the article basically boils down to "Stop freaking out. There are differences, some matter more than others. Most are there for developers." If I can't find a way to clarify this and calm people down (i.e. bring enlightenment), then it's not something I'll publish, as I definitely don't want to contribute to the problem.
Trust me, I'm all for being forthcoming with information and so on. However, recent experience puts me pretty firmly in the camp of the folks arguing that shining a spotlight on this stuff for consumers probably does more harm than good. For developers, absolutely, this stuff is relevant. For consumers, I feel like you can talk about big hardware features (conservative raster, ROVs, bindless, etc.) without getting into the weeds of feature levels, caps bits, etc.
> My fear for both enthusiasts and consumers is the actual target for developers. Yes, features are there for developers and they differ a lot. So does that mean many developers will be forced to target the lowest common denominator? Will any of these enhanced features be used at all if they require a completely different way to code shaders, etc.?

Developers usually target the features they need. If they need a hardware feature and some target hardware does not support it, they have three choices: not use the feature at all, use it only where supported and write a fallback path for the rest, or drop support for the hardware that lacks it.
> I completely hear you on consumer versus dev, which is a big reason why I'm doing this article. The purpose of the article basically boils down to "Stop freaking out. There are differences, some matter more than others. Most are there for developers". If I can't find a way to clarify this and calm people down (i.e. bring enlightenment), then it's not something I'll publish, as I definitely don't want to contribute to the problem.

Sounds reasonable. I get both sides of this issue (I'm both a dev and an enthusiast) so I don't envy the line you guys have to walk here, but you clearly understand the situation, so that's all one can ask.
> just be transparent about it and let the public decide

There's nothing "secret" here per se; it's all queryable via the public API, as this thread has demonstrated. It's a question of whether or not to direct consumer focus to it. There are a million details in graphics APIs that consumers rightfully have no idea about, because they don't have the context to understand what they mean, and that's the point: what consumers ultimately care about is how games look and play, and it's best to let the games speak for themselves on that front. Obviously marketing is marketing, but the further you stray from that, the more you just get fanboys justifying purchases, which does no one any service.
> ...and if a feature was "neither useful nor well understood" then that itself is a problem of the HW/API that needs fixing...

Um, my statement was that the "information is neither useful nor well understood" to consumers, not the underlying features themselves.
> It's ridiculous that you can't just get a matrix of what the HW supports... period... it should be that simple! (In my biased eyes it's only the ISVs that are afraid to list it out.)

You can! See the tool Dmitry posted, or presumably an updated DX caps tool. Getting this information has always been trivial; people just (rightly) didn't care much in the past. It's not about secrets, it's about whether consumers have the context to interpret the information in any useful way. For example, anyone can go ahead and capture a GPUView trace of a game, but how many people have the experience to interpret the resulting data?
> but how many people have the experience to interpret the resulting data?
My personal opinion is that all the information should be easily available to the technically oriented consumer. Consumers can be considered the number one driver of technical innovation, as consumers choose the product. If consumers prefer DX 12.1 over DX 12.0, then they will buy a DX 12.1 compatible GPU. This eventually means that DX 12.0 GPUs will not sell well enough, and that in turn means there will be enough DX 12.1 GPUs around to create games that require the 12.1 feature set. Conservative rasterization, ROV and volume tiled resources are all major "enabler" features, allowing new styles of rendering pipelines and techniques (that are not easy to backport to older hardware). The same is true for other new DX 12 features at lower feature levels, such as ExecuteIndirect, bindless resources, tiled resources and fast render target array index (bypassing the geometry shader).
I do agree with Andrew that it is becoming harder to describe the new features to a consumer. Does the consumer actually understand why it is important to be able to change the index start offset and primitive count per instance (the main difference between geometry instancing and multidraw; see the sketch below), or how a 0.5 versus 1/512 pixel maximum "dilatation" affects the performance and usability of conservative rasterization (tier 1 vs tier 3)? How important are tier 2 tiled resources (page miss error code from sampling, min/max filter)? Not even the developers know the answers to all of these questions yet.
A good example of a feature that was not understood by consumers is compute shaders. Most consumers still think that CS enables GPU physics and fluid simulation, while in reality it is mostly used to speed up lighting and post-processing in current games and to perform occlusion culling and other geometry processing.
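To make the multidraw point above concrete, here is a minimal sketch of a D3D12 ExecuteIndirect setup, where every draw record carries its own index count and start offset (which plain instancing cannot vary per instance). Untested; the device, command list and buffers are assumed to already exist, and all resource creation and barriers are elided.

```
// Minimal sketch: "multidraw" via ExecuteIndirect. Each GPU-written
// argument record is an independent indexed draw with its own index
// count and start offset. Untested; setup elided.
#include <d3d12.h>

void RecordMultiDraw(ID3D12Device* device,
                     ID3D12GraphicsCommandList* commandList,
                     ID3D12Resource* argumentBuffer, // D3D12_DRAW_INDEXED_ARGUMENTS[]
                     ID3D12Resource* countBuffer,    // GPU-written draw count
                     UINT maxDrawCount)
{
    // One argument slot per command: a plain indexed draw.
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW_INDEXED;

    D3D12_COMMAND_SIGNATURE_DESC sigDesc = {};
    sigDesc.ByteStride       = sizeof(D3D12_DRAW_INDEXED_ARGUMENTS);
    sigDesc.NumArgumentDescs = 1;
    sigDesc.pArgumentDescs   = &arg;

    ID3D12CommandSignature* cmdSig = nullptr;
    device->CreateCommandSignature(&sigDesc, nullptr, IID_PPV_ARGS(&cmdSig));

    // Each record in argumentBuffer is one draw; a GPU culling pass could
    // have written them: { IndexCountPerInstance, InstanceCount,
    // StartIndexLocation, BaseVertexLocation, StartInstanceLocation }.
    commandList->ExecuteIndirect(cmdSig, maxDrawCount,
                                 argumentBuffer, 0, countBuffer, 0);

    // NB: cmdSig must stay alive until the command list has executed;
    // real code would cache it rather than create it per frame.
}
```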
https://scalibq.wordpress.com/2015/06/09/no-dx12_1-for-upcoming-radeon-3xx-series/

A few days ago it was confirmed that none of AMD’s current GPUs are capable of supporting the new rendering features in DirectX 12 level 12_1, namely Conservative Rasterization and Rasterizer Ordered Views.
I already mentioned that back when Maxwell v2 and DX11.3 were introduced.
Back then a guy named ‘Tom’ tried to push the idea that these features are supported by Mantle/GCN already. When I asked for some specifics, it became somewhat clear that it would simply be some kind of software implementation rather than direct hardware support.
When I asked for some actual code to see how complex it would actually be to do this in Mantle, there was no reply.
Software approaches in general are no surprise. AMD demonstrated a software version of order-independent-transparency at the introduction of the Radeon 5xxx series.
And a software implementation for conservative rasterization was published in GPU Gems 2 back in 2005.
So it’s no surprise that you can implement these techniques in software on modern GPUs. But the key to DX12_1 is efficient hardware support.
I think it is safe to assume that if they supported conservative rasterization and ROVs, we would have heard of it by now, and it would definitely be mentioned on the slides, since it is a far more interesting feature than resource binding tier 2 vs tier 3.
So I contacted Richard Huddy about this. His reply pretty much confirmed that conservative rasterization and ROVs are missing.
Some of the responses of AMD’s Robert Hallock also point to downplaying the new DX12_1 features, and just pretending that supporting DX12 is supporting DX12, regardless of features.
Clearly that is not the case. But if AMD needs to spread that message, with a new chip being launched in just a few days, I think we know enough.
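For context on what such a software implementation looks like, below is a rough, illustrative CPU-side sketch of the GPU Gems 2 idea: dilate each triangle outward by the pixel semidiagonal so every partially covered pixel is rasterized. The original chapter does this in a vertex shader; this standalone version assumes y-up coordinates and counter-clockwise winding, and ignores degenerate triangles.

```
// Illustrative sketch of software conservative rasterization (GPU Gems 2,
// 2005): shift each edge line outward by the pixel semidiagonal, then
// intersect adjacent shifted edges to get the dilated vertices.
// Assumes y-up coordinates and counter-clockwise winding.
#include <cmath>

struct Vec2 { float x, y; };

static Vec2  sub(Vec2 a, Vec2 b)   { return { a.x - b.x, a.y - b.y }; }
static Vec2  add(Vec2 a, Vec2 b)   { return { a.x + b.x, a.y + b.y }; }
static Vec2  mul(Vec2 a, float s)  { return { a.x * s, a.y * s }; }
static float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

static Vec2 normalize(Vec2 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// semidiagonal = half the pixel diagonal, i.e. sqrt(2)/2 * pixel size.
void DilateTriangle(const Vec2 in[3], Vec2 out[3], float semidiagonal)
{
    for (int i = 0; i < 3; ++i) {
        Vec2 prev = in[(i + 2) % 3];
        Vec2 cur  = in[i];
        Vec2 next = in[(i + 1) % 3];

        Vec2 ePrev = sub(cur, prev);   // edge entering this vertex
        Vec2 eNext = sub(next, cur);   // edge leaving this vertex

        // Outward edge normals for a CCW triangle.
        Vec2 nPrev = normalize({ ePrev.y, -ePrev.x });
        Vec2 nNext = normalize({ eNext.y, -eNext.x });

        // Shift both edge lines outward by 'semidiagonal' and intersect
        // them to find the dilated vertex.
        Vec2 p1 = add(prev, mul(nPrev, semidiagonal));
        Vec2 p2 = add(cur,  mul(nNext, semidiagonal));
        float t = cross(sub(p2, p1), eNext) / cross(ePrev, eNext);
        out[i] = add(p1, mul(ePrev, t));
    }
}
```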
> What happens exactly when you "bind" a texture to a texture unit for a given shader program? Given texture units are spread across the chip it's a pretty strange concept.

Nothing really "happens" on GCN hardware.
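To put that in API terms: in D3D12, "binding" a texture boils down to writing a small descriptor record into a heap that the shader later reads like any other data. A minimal sketch, untested, with texture and heap creation elided; the helper name, format and slot layout here are made up for illustration.

```
// Minimal sketch: a texture "bind" in D3D12 is just a descriptor write.
// Untested; texture and heap creation elided.
#include <d3d12.h>

void WriteTextureDescriptor(ID3D12Device* device,
                            ID3D12Resource* texture,       // existing 2D texture
                            ID3D12DescriptorHeap* srvHeap, // CBV/SRV/UAV heap
                            UINT slot)                     // index in the heap
{
    D3D12_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format                  = DXGI_FORMAT_R8G8B8A8_UNORM;
    srv.ViewDimension           = D3D12_SRV_DIMENSION_TEXTURE2D;
    srv.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srv.Texture2D.MipLevels     = 1;

    UINT stride = device->GetDescriptorHandleIncrementSize(
        D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    D3D12_CPU_DESCRIPTOR_HANDLE dst =
        srvHeap->GetCPUDescriptorHandleForHeapStart();
    dst.ptr += SIZE_T(slot) * stride;

    // This is the whole "bind": a few dwords written into memory, which
    // the shader cores fetch when they sample from this slot.
    device->CreateShaderResourceView(texture, &srv, dst);
}
```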