Digital Foundry Article Technical Discussion [2023]

Status
Not open for further replies.
This is a very simplistic and naive interpretation of where the actual problem lies. If we say that DX12 is bad, then all low-level PC APIs are bad. The subject has already been discussed here a few times. It's not a DX12 or Vulkan or whatever-API problem. It's the complexity of the new low-level APIs that is the problem.
Thoughts on graphics APIs and libraries (asawicki.info)

"In DX11 you could just specify intended resource usage (D3D11_USAGE_IMMUTABLE, D3D11_USAGE_DYNAMIC) and the driver chose preferred place for it. In Vulkan you have to query for memory heaps available on the current GPU and explicitly choose the one you decide best for your resource, based on low-level flags like VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT etc. AMD exposes 4 memory types and 3 memory heaps. Nvidia has 11 types and 2 heaps. Intel integrated graphics exposes just 1 heap and 2 types, showing the memory is really unified, while AMD APU, also integrated, has same memory model as the discrete card. If you try to match these to what you know about physically existing video RAM and system RAM, it doesn’t make any sense. You could just pick the first DEVICE_LOCAL memory for the fastest GPU access, but even then, you cannot be sure your resource will stay in video RAM. It may be silently migrated to system RAM without your knowledge and consent (e.g. if you go out of memory), which will degrade performance. What is more, there is no way to query for the amount of free GPU memory in Vulkan, unless you do hacks like using DXGI."

As you can see, everything comes at a cost.


One more interesting quote

"One could say the new APIs don’t deliver to their promise of being low level, explicit, and having predictable performance. It is impossible to deliver, unless the API is specific to one GPU, like there is on consoles. "
DX11 can max out a GPU better than DX12 can... the new API's features are propaganda
 
At the end of the day, PC is not consoles. So an API that acts the way a console's does doesn't make much sense in the PC space when it goes against one of the main tenets of the PC.
 
At the end of the day, PC is not consoles. So an API that acts the way a console's does doesn't make much sense in the PC space when it goes against one of the main tenets of the PC.

It makes sense, but only if game devs follow through on their promise. They asked for low-level access to hardware, and that means "all hardware". The underlying problem is that a few elite devs hyped low-level access without consideration for what it meant for everyone else. The saddest part is that even those elite devs didn't deliver.

The funny thing is it's still happening today. Developers are demanding even more low-level access: PSO management, BVH internals, etc. It's almost like they've learned nothing.

DX12 brought some much-needed multithreading improvements like ExecuteIndirect and CommandQueues. It would have been better off leaving state and memory management in the driver.
 
DX12 brought some much-needed multithreading improvements like ExecuteIndirect and CommandQueues. It would have been better off leaving state and memory management in the driver.
This is it, really. The last two years have pretty much exposed this fallacy about these lower-level APIs; some corrections are needed, and Vulkan has already started making some.
 
It makes sense, but only if game devs follow through on their promise. They asked for low-level access to hardware, and that means "all hardware". The underlying problem is that a few elite devs hyped low-level access without consideration for what it meant for everyone else. The saddest part is that even those elite devs didn't deliver.

The funny thing is it's still happening today. Developers are demanding even more low-level access: PSO management, BVH internals, etc. It's almost like they've learned nothing.

DX12 brought some much-needed multithreading improvements like ExecuteIndirect and CommandQueues. It would have been better off leaving state and memory management in the driver.
That's crazy. The devs on PC seem not to have known what they were asking for in regard to how every puzzle piece of game development fits together.
 
This is it, really. The last two years have pretty much exposed this fallacy about these lower-level APIs; some corrections are needed, and Vulkan has already started making some.
And corrections are happening; those APIs evolve. But this doesn't change the fact that the learning curve is very high. Here, for example, is some feedback from developers at this year's Vulkan conference:

"Among problems that developers have with using Vulkan and potential areas of development for the future, I noticed several common themes:

  • Shader/pipeline compilation versus dynamic states. All the pipeline states baked into a pipeline object obviously require lots of time to compile. But not only time - Valve said their pipeline cache can be several gigabytes of storage per game! Adding more dynamic states, on the other hand, could introduce unexpected slow paths in the driver. Someone said that "best practices" validation layer could help with this problem - it includes performance recommendations specific to a GPU vendor. There was also a voice that engine developers should start contributing to "best practices" too, to include things they observed to work well. There was a hand voting who wants everything baked into a pipeline state versus more dynamic states. Result was a tie.
  • Multiple presentations showed developers’ struggles with resource binding. They often mentioned going bindless, appreciated VK_EXT_descriptor_buffer.
  • Development of shading languages. General direction seems to be that developers want more high-level languages, like C++, with classes, templates etc., that would allow them to develop entire complex libraries, preferably also to interleave GPU code with C++ code in the same source file. Khronos/LunarG admitted HLSL is in better shape than GLSL already. GLSL is still to be maintained, but not going to get new big features like templates. There was an idea to use Circle as the shader language."

No it's not... it's the other features that are rarely used, like mesh shaders and sampler feedback streaming, that I'm talking about.

Stop with this nonsense already

Mesh Shaders Release the Intrinsic Power of a GPU - ACM SIGGRAPH Blog
 


Stop with this nonsense already

Mesh Shaders Release the Intrinsic Power of a GPU - ACM SIGGRAPH Blog

Nanite is not a mesh shader demo. It uses mesh/primitive shaders, but most of the work is done in compute shaders.
 
No it's not... it's the other features that are rarely used, like mesh shaders and sampler feedback streaming, that I'm talking about.

No, you don't get to make such a nonsense comment and then selectively change it.

PS5 isn't using its new hardware features across the board, so is PS5's API propaganda too?
 
No it's not... it's the other features that are rarely used, like mesh shaders and sampler feedback streaming, that I'm talking about.
Ray Tracing was added to vanilla DX12 in late 2018. Mesh shaders weren't added to DX12 until DX12 Ultimate in 2020. Sampler feedback was the same; it's a DX12U feature. Sampler feedback streaming, IIRC, is an Xbox-only feature and isn't a feature of DX12 on PC at all. Regardless, the reason we've seen games leverage RT in games but not mesh shaders is because of the time that the feature has been available. Honestly I don't think anyone could tell if a game is using sampler feedback unless the game or developer explicitly says that it's enabled. It's possible that's been available in games and nobody knows.

Regardless, DX12 Ultimate hasn't really been out long enough for any AAA games to really use all of its features, especially ones that are using off-the-shelf engines, since those are only now fully supporting these features as well. It isn't a coincidence that the earliest games to incorporate RT were games like Battlefield and Tomb Raider, which use custom in-house engines.
 
Here, for example, is some feedback from developers at this year's Vulkan conference
Yeah, I just stated that. Vulkan has already implemented corrections in its latest version; DX12 has yet to do any, though, and it's been almost 10 years. This is beyond ridiculous.

We are talking about a colossal failure across every aspect: CPU overhead is way worse, VRAM consumption is out of control, fps is lower, stutters are running rampant, scene complexity didn't increase, and none of the promises materialized.
 
Honestly I don't think anyone could tell if a game is using sampler feedback unless the game or developer explicitly says that it's enabled. It's possible that's been available in games and nobody knows.
It's very easy to find out, and no, there are no games that use SF currently.

There are architectures that do and do not support these features.

Just compare VRAM usage / texture streaming behavior on something like a 5700XT to a 2070 Super. If a game uses SFS, it will be obvious by how much better the 2070 Super will be.
 
Ray Tracing was added to vanilla DX12 in late 2018.
Yep, and the first DXR game was also available a few months later in early 2019, with more than a dozen available later in 2019.

the reason we've seen games leverage RT in games but not mesh shaders is because of the time that the feature has been available
Mesh shaders have been available in DX since late 2019, and on Turing GPUs since 2018.

4 years later, only one game uses them: a Chinese one called Justice.
 
Yep, and the first DXR game was also available a few months later in early 2019, with more than a dozen available later in 2019.


Mesh shaders have been available in DX since late 2019, and on Turing GPUs since 2018.

4 years later, only one game uses them: a Chinese one called Justice.
Damn... wonder why devs aren't using these "revolutionary features", as Microsoft calls them? 🤔
 
Yeah, I just stated that. Vulkan has already implemented corrections in its latest version; DX12 has yet to do any, though, and it's been almost 10 years. This is beyond ridiculous.

We are talking about a colossal failure across every aspect: CPU overhead is way worse, VRAM consumption is out of control, fps is lower, stutters are running rampant, scene complexity didn't increase, and none of the promises materialized.
What are you implying? That DX12 hasn't changed in 10 years? Because this is simply not true.


From the article I posted about APIs:

“Update 2019-04-16: Microsoft just announced they are adding background shader optimizations to D3D12, so the driver can recompile and optimize shaders in the background on its own threads. Congratulations! We are back at D3D11 :p”
 
I think one of the problems when we look at advancements in DirectX is that a lot of the new features can't be relied on, because games have to run on old hardware. There are still people playing on older GCN GPUs and the GeForce 900 series. Games are too expensive to focus on new features that aren't broadly supported. The common case is actually probably the GeForce 10 and 20 series. Writing multiple render paths for different generations, or relying on fallbacks for new tech, is probably not ideal.
 
Yep, and the first DXR game was also available a few months later in early 2019, with more than a dozen available later in 2019.


Mesh shaders have been available in DX since late 2019, and on Turing GPUs since 2018.

4 years later, only one game uses them: a Chinese one called Justice.
RT needed games to show off in order to sell those juicy RTX cards. No one cared about meshlets and mesh shaders until the new gen arrived, for sure. The old consoles did not support that, so there was no reason to use any of it.
Most of the new games are still cross-gen. Adoption is slow.
 