No DX12 Software is Suitable for Benchmarking *spawn*

DX12 was only "bad" on Nvidia, because their DX11 drivers were "so good"...
That is one reason, but it's not the only one.
Many games are simply not optimized properly for Nv h/w in their D3D12 renderers, because the lion's share of rendering optimizations are done for console h/w and thus for AMD GPUs.
Nv h/w being "faster" for the last several years has also played a role here, with many developers aiming at hitting some framerate ("60") on any GPU and not really caring whether the same GPU could in fact output more with further optimization.
 
What architecture issues are we talking about here, then? It's not like any of these NVIDIA GPUs are lacking any DX12 feature-level features! We've also had several hints pointing towards incompetence from either the developers or, sometimes, from Microsoft itself explaining the discrepancies in such "broken" titles.
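For what it's worth, feature-level support is something anyone can query directly; below is a minimal sketch against the default adapter, using only standard D3D12 calls (nothing here is taken from any specific game or engine).

```cpp
// Minimal sketch: ask the default adapter for the highest D3D feature level
// it supports, which is the "feature level features" question raised above.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device at the minimum level first...
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // ...then ask which of these levels it can actually run at.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_2, D3D_FEATURE_LEVEL_12_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS query = {};
    query.NumFeatureLevels = static_cast<UINT>(sizeof(levels) / sizeof(levels[0]));
    query.pFeatureLevelsRequested = levels;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &query, sizeof(query))))
        std::printf("Max supported feature level: 0x%x\n",
                    static_cast<unsigned>(query.MaxSupportedFeatureLevel));
    return 0;
}
```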

Vulkan seems to be spared from such issues.
You just pasted a big bunch of posts which point the blame decidedly away from game devs' hands. You can blame MS for favoring AMD in some decisions when designing the API, but then they also favored NVIDIA in others; either way, both IHVs need to deal with the API as it is. A game using features which NVIDIA has a hard time with isn't the "game devs' fault".
 
You just pasted a big bunch of posts which point the blame decidedly away from game devs' hands.
Actually no, the tools necessary to circumvent these API pitfalls (undefined behavior) on NVIDIA GPUs are available to developers; they just don't use them, they don't care or don't know, or are too lazy to do it.
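The post doesn't say which tools it means; one that certainly exists for this class of problem is the D3D12 debug layer with GPU-based validation, which validates descriptor accesses at execution time. Here is a minimal sketch of enabling it before device creation (standard SDK calls only, nothing vendor-specific or engine-specific).

```cpp
// Sketch: enable the D3D12 debug layer plus GPU-based validation, which
// patches shaders to validate descriptor accesses at execution time and
// reports the kind of undefined-behavior descriptor usage discussed here.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
    {
        debug->EnableDebugLayer();

        // GPU-based validation is what catches descriptor problems.
        // It is slow, so it is meant for development builds only.
        ComPtr<ID3D12Debug1> debug1;
        if (SUCCEEDED(debug.As(&debug1)))
            debug1->SetEnableGPUBasedValidation(TRUE);
    }

    // The debug layer must be enabled before the device is created.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device));
    return device ? 0 : 1;
}
```

Whether studios actually schedule the time to chase what this reports is, of course, the budget question raised below.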
 
they just don't use them, they don't care or don't know, or are too lazy to do it.
None of these. They simply don't have a budget from their publisher to do that.

Which, BTW, was one of the main reasons why I was rather skeptical about the new "explicit" APIs from the start. I just can't imagine any gaming publisher spending more than absolutely necessary on their PC versions; unless there's an IHV to the rescue, they'll just make sure that it runs at an acceptable framerate and be done with it. We've seen a lot of examples of this approach already.
 
That's not proof.
Then we disagree on this; I say that, within the general trend of actual events, this is proof enough.

-Dozens of DX12 titles have broken implementations on both AMD and NVIDIA compared to DX11.

-Actual developers (their testimony is featured in this thread) saying that DX12 is much harder to do than DX11, that fps gains are harder to come by, and that DX12 should be considered only when you can leverage its advantages (ray tracing, lower CPU overhead, exotic rendering, etc.).

-Several testimonials stating that some developers have problems circumventing "undefined API behaviors" due to mismatched descriptors on NVIDIA GPUs (a sketch of what such a mismatch can look like follows below).

-Several games have a way slower DX12 implementation on NVIDIA hardware specifically, despite their DX11 version (or previous games in the series) working fine with DX11; sometimes even the next game in the series works well once it has had time to mature, as in the case of Borderlands 3 vs Tiny Tina's Wonderlands.

-On the other end of the spectrum, dozens of DX12 games work as intended on both AMD and NVIDIA GPUs, like Gears, Forza Horizon, Control, Cyberpunk, Metro Exodus, etc., from developers with a reputation for good software design.

These points are enough for me, and for anyone else for that matter, to establish a conclusion based on these trends: some developers are just clueless about how to do proper DX12 on non-AMD hardware, or they are simply incapable of doing so.
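As a purely illustrative sketch of the descriptor-mismatch point in the list above: the function and resource names below are hypothetical and not taken from any of the games mentioned. The shader is assumed to declare a Texture2D at t0, while the app writes a raw buffer SRV into that heap slot, which the D3D12 spec leaves as undefined behavior.

```cpp
// Purely illustrative sketch of a "mismatched descriptor": the shader is
// assumed to declare `Texture2D gTex : register(t0);`, but the app writes a
// raw *buffer* SRV into the heap slot that t0 is mapped to. D3D12 treats this
// as undefined behavior; GPU-based validation exists to catch exactly this.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Hypothetical helper; `device` and `someBuffer` are assumed to exist already.
void WriteMismatchedDescriptor(ID3D12Device* device, ID3D12Resource* someBuffer)
{
    // Shader-visible heap with a single CBV/SRV/UAV slot.
    D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
    heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    heapDesc.NumDescriptors = 1;
    heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ComPtr<ID3D12DescriptorHeap> heap;
    device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // A raw buffer view...
    D3D12_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format = DXGI_FORMAT_R32_TYPELESS;
    srv.ViewDimension = D3D12_SRV_DIMENSION_BUFFER;   // ...not TEXTURE2D
    srv.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srv.Buffer.NumElements = 256;
    srv.Buffer.Flags = D3D12_BUFFER_SRV_FLAG_RAW;

    // ...written into the slot the shader will read as a Texture2D.
    // The debug layer with GPU-based validation flags this at execution time;
    // without it, what happens next is up to the driver.
    device->CreateShaderResourceView(
        someBuffer, &srv, heap->GetCPUDescriptorHandleForHeapStart());
}
```

Pairing this with GPU-based validation (enabled as in the earlier sketch) is how such cases get caught during development.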

They simply don't have a budget from their publisher to do that.
Bingo.
 
Then we disagree on this; I say that, within the general trend of actual events, this is proof enough.

Look what you said again...
  • They just don't use them
  • They don't care
  • Or don't know or are too lazy to do it
At no point does anything you have posted above prove those three bullet points.

I would have especially loved to see how you evidenced and proved that developers are too lazy to implement DX12 properly.

But as I've told you previously in other threads, stop making generalised bullshit claims that you can't back up, and learn to stop making such claims just because your feelings are hurt.
 
Makes sense to unoptimize your superior hardware because some software is broken. Maybe Microsoft should fix it...

What superior hardware?

Outside of ray tracing and the uber top-end $1000+ GPUs, AMD currently offer the best performance for a given price point.

But funny how Nvidia fixed their 'superior hardware' in the 4000 series to enable DX12 to work better.
 

Attachments

  • The-Callisto-Protocol-DX11-vs-DX12-benchmarks-2 (1).png
Makes sense to unoptimize your superior hardware because some software is broken. Maybe Microsoft should fix it...
When you're building your "superior hardware" for said API, you should fix it, not whoever made the API.
And just because certain hardware has issues with the API doesn't mean the API is broken; it more likely means the hardware/software can't handle the API the way it's supposed to (the API has been around for a while, and other hardware handles the same cases just fine).
 
The people complaining about DX12 games being unoptimised for Nvidia hardware are the same people who were cheering the Nvidia TWIMTBP program during the DX9 and DX10 era.

The irony.
 
I would have especially loved to see how you evidenced and proved that developers are too lazy to implement DX12 properly.
I would have loved it if the mounting evidence of bad DX12 games on both sides were enough to actually make you realise the situation, but apparently that's not enough.

So we end this conversation here, instead of you bullshitting all over logic, the forum and the whole place.
 
When you're building your "superior hardware" for said API, you should fix it, not whoever made the API.
And just because certain hardware has issues with the API doesn't mean the API is broken; it more likely means the hardware/software can't handle the API the way it's supposed to (the API has been around for a while, and other hardware handles the same cases just fine).
nVidia doesn't build GPUs only for gaming anymore. Their architectures are used for compute, robotics, DL, ray tracing, etc. It's the outdated software stack which is the problem.

nVidia designed Fermi for massive geometry processing with tessellation. Developers didn't care. nVidia designed Turing+ for massive geometry processing (and compute with Ampere+), ray tracing, DL, etc., and developers do not care.

I can play Portal RTX just fine, but The Callisto Protocol is totally broken on nVidia hardware with DXR.
 