No DX12 Software is Suitable for Benchmarking *spawn*

I would have hoped that the mounting evidence of bad DX12 games on both sides would be enough to make you realise the situation, but apparently it isn't.

So we end this conversation here, instead of you bullshitting all over logic, the forum and everything else.

I'm not the one branding devs as 'lazy' with no tangible evidence to back that up. Stop acting like a child because your Nvidia feelings are getting hurt.

But if the games and 'lazy devs' are the problem, how have Nvidia managed as much as a 24% performance increase in DX12 games with the 522.25 driver?

To quote Nvidia themselves:

Our DirectX 12 optimizations apply to GeForce RTX graphics cards and laptops, though improvements will vary based on your specific system setup, and the game settings used. In our testing, performance increases were found in a wide variety of DirectX 12 games, across all resolutions:

  • Assassin’s Creed Valhalla: up to 24% (1080p)
  • Battlefield 2042: up to 7% (1080p)
  • Borderlands 3: up to 8% (1080p)
  • Call of Duty: Vanguard: up to 12% (4K)
  • Control: up to 6% (4K)
  • Cyberpunk 2077: up to 20% (1080p)
  • F1 22: up to 17% (4K)
  • Far Cry 6: up to 5% (1440p)
  • Forza Horizon 5: up to 8% (1080p)
  • Horizon Zero Dawn: Complete Edition: up to 8% (4K)
  • Red Dead Redemption 2: up to 7% (1080p)
  • Shadow of the Tomb Raider: up to 5% (1080p)
  • Tom Clancy’s The Division 2: up to 5% (1080p)
  • Watch Dogs: Legion: up to 9% (1440p)

You don't get those sorts of performance increases from a driver update unless there was a problem with your driver in the first place, and the 24% increase in AC: Valhalla, the 20% increase in CP2077 and the 17% in F1 22 show there was a monstrous problem with Nvidia's DX12 drivers.

So it was Nvidia and their software after all.

There goes your whole argument.
 
When you're building your "superior hardware" for said API, you should fix it, not whoever made the API.
Actually no, you don't build your hardware to handle corner cases in flawed API/game code; you build it according to your vision and your desired goals.

Again, a few dozen bad DX12 games shouldn't force you to nuke your planned designs when other DX12 games made by good developers are available and run fine.

Should AMD redesign their hardware to handle bad DX12 games that run badly on their hardware too? That's a childish thing to say, or even to think.

It's like people are only understanding the situation one piece at a time. We've had bad DX12 games all along: just recently The Witcher 3 and Fortnite ran horrendously in DX12 on all hardware, we also have some games that run especially badly on NVIDIA alone, and then some of these games get fixed in their next iteration once they are properly coded.

What we have here is a colossal failure on the part of the software's designers; hardware has nothing to do with it.
 
Should AMD redesign their hardware to handle bad DX12 games that run badly on their hardware too? That's a childish thing to say, or even to think.
hardware has nothing to do with it.
Well, that clearly seems to be the mindset among many here too. When AMD is underperforming it never seems to be about the API; it's always AMD, be it hardware or software, that needs fixing.

Actually no, you don't build your hardware to handle corner cases in flawed API/game code; you build it according to your vision and your desired goals.
Who said anything about flawed? "Not optimal for current NVIDIA hardware" doesn't equal flawed.
Of course you can build your hardware for whatever you want, but if you support API X, you should make sure it runs well too. If that's not possible due to hardware, tough luck; it's not the games' nor the API's fault.
It's like people are only understanding the situation one piece at a time. We've had bad DX12 games all along: just recently The Witcher 3 and Fortnite ran horrendously in DX12 on all hardware, we also have some games that run especially badly on NVIDIA alone, and then some of these games get fixed in their next iteration once they are properly coded.
Yes, we have badly coded games too. A game not running as well as expected on hardware X, however, doesn't mean it's a badly coded game. It could be, but if said game runs normally on other vendors' hardware, the issue likely isn't bad coding.
 
Actually no, you don't build your hardware to handle corner cases in flawed API/game code; you build it according to your vision and your desired goals.

Again, a few dozen bad DX12 games shouldn't force you to nuke your planned designs when other DX12 games made by good developers are available and run fine.

Should AMD redesign their hardware to handle bad DX12 games that run badly on their hardware too? That's a childish thing to say, or even to think.

It's like people are only understanding the situation one piece at a time. We've had bad DX12 games all along: just recently The Witcher 3 and Fortnite ran horrendously in DX12 on all hardware, we also have some games that run especially badly on NVIDIA alone, and then some of these games get fixed in their next iteration once they are properly coded.

What we have here is a colossal failure on the part of the software's designers; hardware has nothing to do with it.
Actually, I think in almost all cases AMD's performance issues are entirely their own fault. Explain how DX12 is a flawed API, at least compared to what we have come to expect from DirectX over the years.

nVidia doesn't build GPUs only for gaming anymore. Their architectures are used for compute, robotics, DL, raytracing etc. It's the outdated software stack which is the problem.

nVidia designed Fermi for massive geometry processing with Tessellation. Developers didn't care. nVidia designed Turing+ for massive geometry processing (compute with Ampere+), Raytracing, DL etc., and developers do not care.

I can play Portal RTX just fine, but Callisto Protocol is totally broken on nVidia hardware with DXR.
Tessellation as it's been implemented in the DX spec is not practical for many use cases.
 
When AMD is underperforming it never seems to be about the API
There is underperforming due to lack of hardware/acceleration, and there is underperforming due to lack of optimizations or bad coding.

but if you support API X, you should make sure it runs well too. If that's not possible due to hardware, tough luck; it's not the games' nor the API's fault.
You need to differentiate: we are not talking about a failure of support here, or a general trend of persistently bad DX12 performance. DX12 is fully supported on NVIDIA GPUs, features and all, and most DX12 games run well with optimal performance. It's the few bad apples that stand out, and they stand out in a special way due to their preceding and sometimes succeeding circumstances.

but if said game runs normally on other vendors' hardware, the issue likely isn't bad coding.
This is where we differ: said game used to work well in DX11 (preceding), switched to DX12 and now runs badly on NVIDIA; wait a few years, a sequel is released with DX12 only and voilà, now DX12 runs well on NVIDIA (succeeding). What do you make of that? The only conclusion you can draw is that the developers fucked up in their first game but managed to properly fix it in the second.

The other examples I mentioned (Valhalla, Call of Duty) are just stuck in the preceding phase of the situation.

We have examples of developers fucking up all the time, but somehow we gloss over that and accuse the hardware of being bad. Just recently The Callisto Protocol was working well only on PS5; on Series X it was missing visual features and ran badly, and on PC it was single-threaded and a stutter fest. Some moron claimed the PS5 runs the game well because of the power of its hardware; a few patches later he was proven wrong. Please don't repeat the same mistake here again.
 
Tessellation as it's been implemented in the DX spec is not practical for many use cases.
Which one? Here is a 12-year-old demo using Tessellation for character deformation:
And here is another 12-year-old demo using Tessellation:

You know why nVidia was so much better with Tessellation? Because the hardware was years ahead. We want easy-to-use APIs which do not hold GPUs back.
 
This is where we differ: said game used to work well in DX11 (preceding), switched to DX12 and now runs badly on NVIDIA; wait a few years, a sequel is released with DX12 only and voilà, now DX12 runs well on NVIDIA (succeeding). What do you make of that? The only conclusion you can draw is that the developers fucked up in their first game but managed to properly fix it in the second.
Bad logic.
Since the success came later, it is possible, for example, that extra effort was needed to succeed on Nvidia and that it could not be managed in time.
 
Which one? Here is a 12-year-old demo using Tessellation for character deformation:
And here is another 12-year-old demo using Tessellation:

You know why nVidia was so much better with Tessellation? Because the hardware was years ahead. We want easy-to-use APIs which do not hold GPUs back.
Demos don't necessarily equal actual game development and content creation realities.
 
That doesn't make any sense. These demos were produced with DX11. Tessellation was never a software problem. Only a certain company was unable to produce better hardware for nearly 10 years.

This is incorrect, as anyone who remembers Crysis 2's DX11 patch will attest.
 
This is incorrect, as anyone who remembers Crysis 2's DX11 patch will attest.
Crysis 2's tessellation issues weren't caused by the API but by either deliberate harming of the competition by the sponsor, or the most incompetent devs ever (the fact that the concrete slab, for example, was never fixed suggests the former rather than the latter).
 
Crysis 2's tessellation issues weren't caused by the API but by either deliberate harming of the competition by the sponsor, or the most incompetent devs ever (the fact that the concrete slab, for example, was never fixed suggests the former rather than the latter).
Like Raytracing in Callisto Protocol? Or DX12 in Modern Warfare 2?

And no, there wasn't a problem with Tessellation in Crysis 2. But I guess you would come to that conclusion when you cannot accept that AMD hardware was simply outdated 10 years ago... which is basically the exact same reality today.
 
Like Raytracing in Callisto Protocol? Or DX12 in Modern Warfare 2?

And no, there wasn't a problem with Tessellation in Crysis 2. But I guess you would come to that conclusion when you cannot accept that AMD hardware was simply outdated 10 years ago... which is basically the exact same reality today.
You first have to dig up something other than performance numbers to make such claims, like the concrete slab was in the case of Crysis.

Callisto apparently is a steaming pile of crap in many ways; I haven't dug into it at all myself so I can't comment on that. On CoD, however, the only thing anyone has come up with so far (that I'm aware of) is worse-than-expected performance on GeForces compared to Radeons, which by itself doesn't tell anything about anything.
 
You first have to dig up something other than performance numbers to make such claims, like the concrete slab was in the case of Crysis.
You mean like this "concrete slab" in Fortnite UE 5.1? Funny how the exact same result gets praised now...
There was never a problem with Tessellation. Culling happens in the hull and domain shaders. DX11 Tessellation was a pure hardware feature and performance was dictated by hardware, not by software (as it is with DX12). We are seeing the same behaviour with Raytracing, which performs much better on nVidia hardware because the hardware is so much better. You can only hold them down with an inefficient use of DX12, like in Gotham Knights and Callisto Protocol.
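
To make that concrete, here is a minimal C++ sketch of how an application drives DX11 tessellation; the function name, parameters and the assumption of already-compiled shaders are illustrative, not from any shipping game. The tessellator itself is a fixed-function hardware stage between the hull and domain shaders, and the one place an application can cull a patch, for example an ocean tile hidden under the city, is the hull shader, by emitting a tessellation factor of 0 for it.

```cpp
// Sketch: binding the DX11 tessellation pipeline. Assumes an existing device
// context and already-compiled shader objects (hypothetical names).
#include <d3d11.h>

void BindTessellationPipeline(ID3D11DeviceContext* ctx,
                              ID3D11VertexShader*  vs,
                              ID3D11HullShader*    hs,  // per-patch setup, incl. optional culling
                              ID3D11DomainShader*  ds,  // places the vertices the tessellator generates
                              ID3D11PixelShader*   ps,
                              UINT                 controlPointCount)
{
    // Patches are submitted as control-point lists instead of plain triangles.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

    ctx->VSSetShader(vs, nullptr, 0);
    ctx->HSSetShader(hs, nullptr, 0);  // hull shader outputs SV_TessFactor per edge;
                                       // a factor of 0 culls the patch entirely.
    ctx->DSSetShader(ds, nullptr, 0);  // domain shader evaluates the generated points.
    ctx->PSSetShader(ps, nullptr, 0);

    // How much geometry is actually generated is decided on the GPU by the
    // fixed-function tessellator, not by this CPU-side code.
    ctx->Draw(controlPointCount, 0);
}
```

Whether a game actually does that per-patch culling is entirely up to its developers, which is the whole Crysis 2 argument.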
 
It has been debunked over and over, yet the conspiracy lives on. This is the problem with the internet. Lies never die.

This screenshot showing the game's ocean being fully tessellated under the city for no reason has been debunked?

Where?
 

[Attachment: Screenshot 2023-01-04 124720.png]
Perhaps if we first lay out some hopefully non-contentious points it will help smooth the debate:

  1. We know DX12 requires more developer-side optimisation for specific vendor architectures than DX11, where the optimisation was done more by the vendors themselves in the driver and architecture specifics were more hidden from the developer by a thicker abstraction layer (a concrete sketch follows below).
  2. Therefore more developer effort will have to be spent to optimise for each of those architectures individually under DX12 than under DX11.
  3. If a game is also on consoles, and especially the Xbox Series X which also runs DX12, then if budget and resources are limited, optimisation time is likely to be spent there first.
  4. Console optimisations will more readily carry over to AMD GPUs than to Nvidia GPUs.
  5. Nvidia performs less consistently well under DX12 than it did under DX11.
I'm not going to write down my conclusions from the above, but to me at least, one conclusion seems to emerge naturally.
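
To make point 1 concrete, here is a small C++ sketch (function and variable names are purely illustrative) of one responsibility DX12 hands to the developer that DX11 kept inside the driver: tracking resource state transitions. How many barriers are issued and when is exactly the kind of per-architecture tuning that now lives in the game rather than in the vendor's driver.

```cpp
// Sketch: an explicit D3D12 resource state transition. In D3D11 there is no
// equivalent call; binding the texture as a shader resource was enough and the
// driver inserted whatever synchronisation the architecture needed.
#include <d3d12.h>

void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource*            renderTarget)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags                  = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = renderTarget;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

    // The application, not the driver, decides where this goes in the frame;
    // misplaced or redundant barriers are a common source of DX12 performance loss.
    cmdList->ResourceBarrier(1, &barrier);
}
```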
 