No DX12 Software is Suitable for Benchmarking *spawn*

You can also find developers that think dx12 is better because you have more explicit control. And I'm talking about devs working on major games.
Well, yeah, but then you get results where this "explicit control" is used in ways that are counterproductive to the actual point of getting more performance out of the h/w.
Frostbite was the leading force behind Mantle, and then it got its D3D12 implementation, which is still (?) considerably subpar compared to its own D3D11 renderer.
MS's own endeavors with D3D12 were (and often still are) laughably bad (Halo Infinite is a year-old game which still runs abysmally on PC h/w).
There are some examples where D3D12 and VK lead to measurable performance gains, of course, but funnily enough most of these were not from developers who touted "explicit control" in the first place. So maybe this isn't exactly a feature worth pursuing in PC space?

The API does not describe a singular correct way to do things. Many things can be implemented in many ways, some that will favour one architecture over another and some that will be terrible across the board.
The problem is that with these new APIs developers often should be implementing things "in many ways" (i.e. differently for different h/w), but they often don't, because they only have the resources to do it one way (barely), and that one way is usually derived directly from how things are done for console h/w.
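
As a rough sketch of what "differently for different h/w" can mean in practice: detect the adapter's vendor up front and key renderer decisions off it. The GpuVendorPath enum and SelectVendorPath function below are hypothetical illustrations, not anyone's actual engine code; the VendorId values are the standard PCI vendor IDs.

Code:
#include <dxgi.h>

// Hypothetical per-vendor tuning knob; a real engine would key far more
// than a single enum off of this (barrier strategies, heap sizes, etc.).
enum class GpuVendorPath { Nvidia, Amd, Intel, Generic };

GpuVendorPath SelectVendorPath(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    // Standard PCI vendor IDs.
    switch (desc.VendorId)
    {
    case 0x10DE: return GpuVendorPath::Nvidia;
    case 0x1002: return GpuVendorPath::Amd;
    case 0x8086: return GpuVendorPath::Intel;
    default:     return GpuVendorPath::Generic;
    }
}

Detecting the vendor is the trivial part, of course; the expensive part is writing and maintaining the divergent paths behind each branch, which is exactly the resourcing problem described above.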

I think developers will need to code DX12 games for RDNA and GCN, and DX12 separately for Nvidia GPUs. Just like they do with Xbox and PS.
Who will do that? Epic may do it for their renderer(s), because that's basically what they do for a living, but other developers likely won't. Hence this thread and what we get in actual shipped s/w.
 
@DegustatoR I'm not convinced that dx12 is generally better or worse; I haven't spent much time researching it. I do know I've played older games where dx11 was preferable to dx12, but I've also played games where performance was pretty much equal. And in the case of CPU performance, I've seen quite a few games where dx12 is much better in CPU-limited scenarios or on lower-end CPUs. Overall, I know devs are looking for something better than both dx12 and vulkan.

I know resource barriers were a big problem in dx12, and there's now a spec for enhanced barriers, which are closer to vulkan barriers. I'm not sure how long that's been out of preview or whether any games are really using it. But I understand there are differences in how barriers affect performance on AMD vs. Nvidia. That's above my pay grade, but it's one of those performance hang-up areas that differs across architectures, where there is no one singular correct way to do things.
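
To make the barrier point concrete, here's a minimal sketch of the same render-target-to-shader-resource transition written both ways: the legacy D3D12 transition barrier and the enhanced-barriers equivalent. The texture and command list arguments are assumed to already exist, and the exact sync/access/layout choices here are illustrative; picking them per workload (and per architecture) is the hard part.

Code:
#include <d3d12.h>  // enhanced barriers need a recent SDK / Agility SDK

// Legacy resource barrier: one coarse state transition. The runtime
// infers the required synchronization from the before/after states.
void LegacyTransition(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

// Enhanced barriers (ID3D12GraphicsCommandList7): sync, access, and layout
// are specified separately, much like Vulkan's pipeline barriers.
void EnhancedTransition(ID3D12GraphicsCommandList7* cmdList, ID3D12Resource* texture)
{
    D3D12_TEXTURE_BARRIER texBarrier = {};
    texBarrier.SyncBefore   = D3D12_BARRIER_SYNC_RENDER_TARGET;
    texBarrier.SyncAfter    = D3D12_BARRIER_SYNC_PIXEL_SHADING;
    texBarrier.AccessBefore = D3D12_BARRIER_ACCESS_RENDER_TARGET;
    texBarrier.AccessAfter  = D3D12_BARRIER_ACCESS_SHADER_RESOURCE;
    texBarrier.LayoutBefore = D3D12_BARRIER_LAYOUT_RENDER_TARGET;
    texBarrier.LayoutAfter  = D3D12_BARRIER_LAYOUT_SHADER_RESOURCE;
    texBarrier.pResource    = texture;
    texBarrier.Subresources.IndexOrFirstMipLevel = 0xFFFFFFFF; // all subresources

    D3D12_BARRIER_GROUP group = {};
    group.Type             = D3D12_BARRIER_TYPE_TEXTURE;
    group.NumBarriers      = 1;
    group.pTextureBarriers = &texBarrier;
    cmdList->Barrier(1, &group);
}

The enhanced version is more verbose, but it states precisely which pipeline stages must finish and which may start, and which sync/access combinations are cheap is exactly the kind of thing that maps differently onto AMD and Nvidia hardware.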

Hopefully d3d13 will be something less verbose than vulkan. I've seen a lot of devs praise Apple's Metal API, but Metal is lacking a few things they view as important.

Edit: Was able to find this series of posts by mjp that goes over barriers. I can't remember the details, but it's part of the complexity of dx12 vs dx11.
 
The only thing we know for sure is that DX12 has sucked for years on all PC architectures. Pointing out one or two games where it works decently doesn't change that fact. Soon it won't matter, as there won't be any DX11 fallback to compare against.
I've wondered about this in the past. When DX10 came around, it was often slower than DX9 in games that supported both; Crysis was an early example, and I don't recall later games being much different in this respect. Eventually everyone dropped DX9 for DX10/11 and we could no longer make comparisons. :unsure:
 
Hopefully d3d13 will be something less verbose than vulkan.
I don't think we're getting D3D13 any time soon. MS seems committed to updating D3D12, and there's a general lack of understanding (or agreement) about what the next major API revision should even be.
These D3D12 updates are adding new options which can help in some scenarios where the old ways weren't optimal and led to issues. But this again means that developers have to use the new options instead of relying on the old ones, which often means additional PC work for graphics programmers while no additional time is being allocated for it.
We're in 2023 now, and I fully expect most new games that offer a choice between D3D11 and D3D12 to still run worse in D3D12, on Nv h/w at least.

Eventually everyone dropped DX9 for DX10/11 and we could no longer make comparisons.
Yeah, it's been happening for some years already; most new releases use D3D12 (or Vulkan) exclusively. Mostly because supporting two renderers is very expensive, and D3D12 is the only API getting new features. So there are few ways to compare them even now, and there likely won't be any in the future.
 
You can also find developers that think dx12 is better because you have more explicit control.

There's always been a bit of developer hubris baked into that statement. Egos dictate that they could work miracles if only the shackles of DX11 were removed. The harsh reality, though, is that extremely few developers have the knowledge, skill, or budget to write optimal code for multiple GPU architectures.
 