Direct3D feature levels discussion

Discussion in 'Rendering Technology and APIs' started by DmitryKo, Feb 20, 2015.

  1. DmitryKo

    DmitryKo Regular

    Could you please confirm if there is any change in reported capabilities for Maxwell-1 with the latest Nvidia drivers?
     
    Last edited: Jun 12, 2015
  2. Ryan Smith

    Ryan Smith Regular

    Pulled from a GTX 750 (the first thing I could grab)

    TypedUAVLoadAdditionalFormats was 0 last time. Now it's 1.

    Code:
    "NVIDIA GeForce GTX 750"
    VEN_10DE, DEV_1381, SUBSYS_234619DA, REV_A2
    Dedicated video memory : 1020985344  bytes
    Total video memory : 4294901760  bytes
    Maximum feature level : D3D_FEATURE_LEVEL_11_0 (0xb000)
    DoublePrecisionFloatShaderOps : 1
    OutputMergerLogicOp : 1
    MinPrecisionSupport : D3D12_SHADER_MIN_PRECISION_SUPPORT_NONE (0)
    TiledResourcesTier : D3D12_TILED_RESOURCES_TIER_1 (1)
    ResourceBindingTier : D3D12_RESOURCE_BINDING_TIER_2 (2)
    PSSpecifiedStencilRefSupported : 0
    TypedUAVLoadAdditionalFormats : 1
    ROVsSupported : 0
    ConservativeRasterizationTier : D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED (0)
    MaxGPUVirtualAddressBitsPerResource : 31
    StandardSwizzle64KBSupported : 0
    CrossNodeSharingTier : D3D12_CROSS_NODE_SHARING_TIER_NOT_SUPPORTED (0)
    CrossAdapterRowMajorTextureSupported : 0
    VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation : 0
    ResourceHeapTier : D3D12_RESOURCE_HEAP_TIER_1 (1)
    Adapter Node 0:    TileBasedRenderer: 0, UMA: 0, CacheCoherentUMA: 0
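
    For reference, a minimal sketch of how a few of these caps relate to the reported feature level (illustrative only; the tier thresholds follow the documented 12_0/12_1 requirements, while 11_1 and several other prerequisites are omitted for brevity, so the function name and structure are not from any real tool):

```python
# Illustrative mapping from a handful of D3D12 caps to the highest feature
# level they permit. Roughly: 12_0 needs Resource Binding Tier 2+, Tiled
# Resources Tier 2+ and typed UAV loads for additional formats; 12_1 adds
# Conservative Rasterization Tier 1+ and ROVs. 11_1 and other checks omitted.
def max_feature_level(caps):
    fl = "11_0"
    if (caps.get("ResourceBindingTier", 0) >= 2
            and caps.get("TiledResourcesTier", 0) >= 2
            and caps.get("TypedUAVLoadAdditionalFormats", 0)):
        fl = "12_0"
        if (caps.get("ConservativeRasterizationTier", 0) >= 1
                and caps.get("ROVsSupported", 0)):
            fl = "12_1"
    return fl

# The GTX 750 dump above: typed UAV loads are now reported, but Tiled
# Resources Tier 1 still keeps the device at feature level 11_0.
gtx750 = {"ResourceBindingTier": 2, "TiledResourcesTier": 1,
          "TypedUAVLoadAdditionalFormats": 1,
          "ConservativeRasterizationTier": 0, "ROVsSupported": 0}
print(max_feature_level(gtx750))  # prints "11_0"
```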
     
  3. Alessio1989

    Alessio1989 Regular

    That's a nice improvement from a programmer's point of view.
     
  4. DmitryKo

    DmitryKo Regular

    Well, this apparently gives a little bit more credence to that table...
     
  5. Andrew Lauritzen

    Andrew Lauritzen Moderator Moderator Veteran

    As per the previous discussion, I get why you're doing this but you may run into the fact that IHVs are not supposed to be marketing hardware feature levels to consumers. This thread is a perfect example of how even with enthusiasts the information is neither useful nor well understood really.

    Trust me, I'm all for being forthcoming on information and so on. However, recent experience does put me pretty firmly in the camp of the folks arguing that shining a spotlight on this stuff to consumers is probably doing more harm than good. Developers, absolutely this stuff is relevant. To consumers I feel like you can talk about big hardware features (conservative raster, ROVs, bindless, etc) without getting into the weeds of feature levels, caps bits, etc.
     
    Last edited: Jun 12, 2015
  6. Ryan Smith

    Ryan Smith Regular

    I completely hear you on consumer versus dev, which is a big reason why I'm doing this article. The purpose of the article basically boils down to "Stop freaking out. There are differences, some matter more than others. Most are there for developers". If I can't find a way to clarify this and calm people down (i.e. bring enlightenment), then it's not something I'll publish, as I definitely don't want to contribute to the problem.
     
    Kej and pjbliverpool like this.
  7. Malo

    Malo Yak Mechanicum Legend Subscriber

    My fear, for both enthusiasts and consumers, is what developers will actually target. Yes, the features are there for developers, and they differ a lot. So does that mean many developers will be forced to target the lowest common denominator? Will any of these enhanced features be used at all if they require a completely different way to code shaders, etc.?
     
    pharma likes this.
  8. DmitryKo

    DmitryKo Regular

    You can't get the genie back in the bottle.

    Every vendor markets its new GPUs as compliant with whatever the latest "DirectX" is, no matter what, so API/runtime versions have lost any significance for consumers, even more so than stupid package-box infographics. This is why consumers look elsewhere to differentiate hardware capabilities and make the right choice to "future-proof" their purchase decisions. But I agree that for levels 12_0 and 12_1 the impact of individual mandatory features is much easier to explain to end users.
     
    pharma likes this.
  9. Alessio1989

    Alessio1989 Regular

    Developers usually target the features they need. If they need a hardware feature and some target hardware does not support it, they have three choices:
    - Don't target that hardware at all; this happens only if that hardware no longer has significant market share for the game.
    - Don't use that feature on that hardware (i.e. the end user cannot enable particular effects with certain hardware); the simplest option, and one that is always applicable.
    - Use an alternative method, which could potentially be slower.
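
    Those three choices boil down to a capability check at startup. A rough sketch of that decision, with hypothetical path names and a made-up "GeometryShader" cap standing in for whatever the fallback path needs:

```python
# Hypothetical startup logic choosing a render path per the three options
# above: use the native feature, fall back to a slower alternative method,
# or disable the effect on this hardware. Cap names mirror the D3D12
# options struct; "GeometryShader" and the path names are invented here.
def choose_voxelization_path(caps):
    if caps.get("ConservativeRasterizationTier", 0) >= 1:
        return "raster-based voxelization"   # native hardware feature
    if caps.get("GeometryShader", True):
        return "GS-expanded voxelization"    # slower alternative method
    return "effect disabled"                 # don't use the feature here

print(choose_voxelization_path({"ConservativeRasterizationTier": 1}))
# prints "raster-based voxelization"
```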
     
  10. Andrew Lauritzen

    Andrew Lauritzen Moderator Moderator Veteran

    Sounds reasonable. I get both sides of this issue (I'm both a dev and enthusiast :)) so I don't envy the line you guys have to walk here, but you clearly understand the situation so that's all one can ask.
     
    Last edited: Jun 12, 2015
  11. liquidboy

    liquidboy Regular

    Just be transparent about it and let the public decide ... And as for the consumer vs. developer argument, through the years I've seen a lot of developer-only technical info become common language with consumers..

    And if a feature is "neither useful nor well understood", then that itself is a problem with the HW/API that needs fixing ..

    It's ridiculous that you can't just get a matrix of what the HW supports ... period ... it should be that simple! (In my biased eyes, it's only the ISVs that are afraid to list it out.)
     
  12. Andrew Lauritzen

    Andrew Lauritzen Moderator Moderator Veteran

    There's nothing "secret" here per se; it's all queryable via the public API, as this thread has demonstrated :) It's a question of whether or not to direct consumer focus to it. There are a million details in graphics APIs that consumers rightfully have no idea about because they don't have the context to understand what they mean, and that's the point: what consumers ultimately care about is how games look and play, and it's best to let the games speak for themselves on that front. Obviously marketing is marketing, but the further you stray from that, the more you just get fanboys justifying purchases, which does no one any service.

    Um, my statement was that the "information is neither useful nor well understood" to consumers, not the underlying features themselves.

    You can! See the tool Dmitry posted, or presumably an updated DX caps tool. Getting this information has always been trivial; people just (rightly) didn't care much in the past. It's not about secrets, it's about whether consumers have the context to interpret the information in any useful way. For example, anyone can go ahead and capture a GPUView trace of a game, but how many people have the experience to interpret the resulting data?
     
    Alessio1989 likes this.
  13. liquidboy

    liquidboy Regular

    ... agreed ... with you that it's trivial to get this info ...

    ... all I'm arguing against is your statement about the "experience to interpret the data" ... that's true, BUT look at the food industry and its labels.. I'm sure most people out there have no clue what that data means..

    anyway, I'll leave this argument here .. agree to disagree

     
    Last edited: Jun 12, 2015
  14. firstminion

    firstminion Newcomer

    Feature "tiers" made DX "feature levels" even more confusing (and it wasn't easy before..). I would argue for referring to them atomically.
     
  15. Alessio1989

    Alessio1989 Regular

  16. sebbbi

    sebbbi Veteran

    My personal opinion is that all the information should be easily available for the technically oriented consumer. Consumers can be considered the number one driver of technical innovation, as consumers choose the product. If consumers prefer DX 12.1 over DX 12.0 then they will buy a DX 12.1 compatible GPU. This eventually means that DX 12.0 GPUs will not sell well enough, and that eventually means that there will be enough DX 12.1 GPUs around to create games that require the 12.1 feature set. Conservative rasterization, ROV and volume tiled resources are all major "enabler" features, allowing new styles of rendering pipelines and techniques (that are not easy to backport to older hardware). The same is true for other new DX 12 features at lower feature levels, such as ExecuteIndirect, bindless resources, tiled resources and fast render target array index (bypassing the geometry shader).

    I do agree with Andrew that it is becoming harder to describe the new features to a consumer. Does the consumer actually understand why it is important to be able to change the index start offset and primitive count per instance (the main difference between geometry instancing and multi-draw), or how a 0.5 versus 1/512 pixel maximum "dilation" affects the performance and usability of conservative rasterization (tier 1 vs tier 3)? How important are tier 2 tiled resources (page-miss error code from sampling, min/max filtering)? Not even developers know the answers to all of these questions yet.
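
    As a rough illustration of the tier 1 versus tier 3 difference, one can model overestimated conservative rasterization as covering every pixel whose center lies within the uncertainty distance of the triangle. Everything here (triangle coordinates, grid size) is an arbitrary illustration, not a hardware-accurate rasterizer:

```python
# Toy model of overestimated conservative rasterization: a pixel counts as
# covered if its center is inside the triangle or within "uncertainty"
# distance d of an edge. Tier 1 allows d as large as 1/2 pixel; tier 3
# guarantees at most 1/512 pixel, so it covers far fewer extra pixels.
def seg_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)))
    cx, cy = ax + t * vx, ay + t * vy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def inside(p, a, b, c):
    def side(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    s1, s2, s3 = side(a, b, p), side(b, c, p), side(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def covered(tri, d, size=32):
    a, b, c = tri
    count = 0
    for y in range(size):
        for x in range(size):
            p = (x + 0.5, y + 0.5)  # pixel center
            if inside(p, a, b, c) or min(seg_dist(p, a, b),
                                         seg_dist(p, b, c),
                                         seg_dist(p, c, a)) <= d:
                count += 1
    return count

tri = ((4.0, 4.0), (28.0, 6.0), (10.0, 26.0))
print(covered(tri, 0.5), covered(tri, 1.0 / 512))  # tier-1 worst case vs tier 3
```

    The gap between the two counts grows with triangle perimeter, which is one reason the maximum dilation matters for the usability of techniques like raster-based voxelization.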

    A good example of a feature that was not understood by consumers is compute shaders. Most consumers still think that CS enables GPU physics and fluid simulation, while in reality it is mostly used to speed up lighting and post-processing in current games, and to perform occlusion culling and other geometry processing.
     
    chris1515, Zane, firstminion and 5 others like this.
  17. flopper

    flopper Newcomer


    You just underestimated every internet expert out there on this subject. I am horrified you did that, as it will somehow come back around as karma and irony, or something; that bag seems funny.
     
  18. trinibwoy

    trinibwoy Meh Legend

    What happens exactly when you "bind" a texture to a texture unit for a given shader program? Given texture units are spread across the chip it's a pretty strange concept.
     
  19. pharma

    pharma Veteran

    June 9, 2015
    https://scalibq.wordpress.com/2015/06/09/no-dx12_1-for-upcoming-radeon-3xx-series/
     
  20. sebbbi

    sebbbi Veteran

    Nothing really "happens" on GCN hardware.

    Simplified: When you "bind" stuff on CPU side, the driver puts a "pointer" (resource descriptor) to an array. Later when a wave starts running in a CU, it will issue (scalar) instructions to load this array (of resource descriptors) from the memory to scalar registers. Buffer load / texture sample instruction takes a resource descriptor (scalar register) and 64 offsets/UVs as input and returns 64 results (filtered texels or loaded values from a buffer). Texture sampling has higher latency than buffer loads, as the texture filtering hardware is further away from the execution units (buffer loads have low latency as those get the data directly from the CU L1 cache).

    GCN hardware is fully bindless. Most graphics APIs however are based around a resource binding concept, because that's how some modern GPUs (and all older GPUs) work. On GCN you wouldn't need any CPU-side binding API. All memory accesses (including filtering with samplers) could be fully programmed by shaders.
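
    A toy model of that flow (pure illustration, not real driver or shader code): the "driver" appends a resource descriptor to an array on bind, the shader receives only an index, and later loads the descriptor and fetches data through it:

```python
# Toy model of the bindless flow described above. The Descriptor stands in
# for a GCN resource descriptor, the list for the descriptor array the
# driver fills, and shader_load for the scalar descriptor load followed by
# a buffer load. All names here are invented for illustration.
class Descriptor:
    def __init__(self, data):
        self.data = data            # stands in for a GPU memory pointer

descriptor_heap = []                # the array the driver fills on "bind"

def bind(resource):                 # CPU side: publish a descriptor
    descriptor_heap.append(Descriptor(resource))
    return len(descriptor_heap) - 1 # the shader only ever sees this index

def shader_load(index, offset):     # GPU side: load descriptor by index,
    desc = descriptor_heap[index]   # then load data through it
    return desc.data[offset]

idx = bind([10, 20, 30])
print(shader_load(idx, 1))  # prints 20
```

    On hardware like this, "binding" is just writing a small record into memory, which is why a fully bindless API maps onto it so directly.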
     
    Last edited: Jun 13, 2015