No DX12 Software is Suitable for Benchmarking *spawn*

Back in 2015, I remember DICE specifically asking for lower-level APIs. Now almost every Frostbite game that uses DX12 has managed to completely screw both AMD and NVIDIA GPUs compared to DX11; the latest such example was Battlefield V. Star Wars: Squadrons was even released without DX12 entirely.

Many of the chief engineers and architects at DICE who were asking for a lower-level API back then are no longer with DICE. They split off and formed their own development studio.

Hence why Battlefield has basically gone down the toilet in the past few releases, and why it's such a technical mess now.

Regards,
SB
 
I really don't see BF going anywhere over the past few releases, at least tech-wise, and as for the last entry it's not too hard to understand the issues it had to deal with during development.
 
AC Valhalla recently underwent a major restructuring that required players to redownload the game; the latest version supposedly offers better streaming performance and improved GPU performance for certain GPUs under DX12.

HUB tested the new version and found that NVIDIA GPUs gained significantly from it. The RTX 2070 Super, for example, gained about 20% more performance and went from being significantly behind the 5700 XT to slightly ahead.

Old version, 1080p:
RTX 2070 Super: 69 fps
RX 5700 XT: 82 fps

Old version, 1440p:
RTX 2070 Super: 55 fps
RX 5700 XT: 63 fps

New version, 1080p:
RTX 2070 Super: 83 fps
RX 5700 XT: 82 fps

New version, 1440p:
RTX 2070 Super: 65 fps
RX 5700 XT: 63 fps


It's interesting to note that this is a clear case of the direct developer input that is now mandatory to improve DX12 performance for certain GPUs. It took a very long time (more than a year), in contrast to the old days when IHVs could fix a game's performance through drivers in a matter of months.
 

Very interesting result. It's also an example of why older PC architectures perform relatively worse in newer games than the equivalent console hardware, as is being discussed here. It's clear that there are massive performance impacts from not optimising for a specific GPU architecture at either the game or driver level, and of course older architectures will often get neither.
 
It's interesting to note that this is a clear case of the direct developer input that is now mandatory to improve DX12 performance for certain GPUs. It took a very long time (more than a year), in contrast to the old days when IHVs could fix a game's performance through drivers in a matter of months.
The last thing we want is IHVs "fixing" game performance.

It should always be the responsibility of developers.
 
Very interesting result. It's also an example of why older PC architectures perform relatively worse in newer games than the equivalent console hardware, as is being discussed here. It's clear that there are massive performance impacts from not optimising for a specific GPU architecture at either the game or driver level, and of course older architectures will often get neither.

It's also kind of interesting when thinking about future hardware. Large departures in architecture could mean newer hardware showing worse efficiency, leading to smaller-than-expected gains or even regressions. It almost encourages the GPU makers not to change too much, so that when their new cards come out they won't look bad in reviews.
 
It's also kind of interesting when thinking about future hardware. Large departures in architecture could mean newer hardware showing worse efficiency, leading to smaller-than-expected gains or even regressions. It almost encourages the GPU makers not to change too much, so that when their new cards come out they won't look bad in reviews.
I think past a point new cards are almost always going to have more than enough brute-force power to easily provide a great experience in older titles anyway. After a point you have to just stop optimizing for older architectures, let the chips fall where they may, and focus on the future.

Like, who cares if an old game doesn't perform at 300fps on my brand new GPU. 250 will have to suffice... lol

There are so many other things we need developers to focus on instead of just performance numbers. The CPU and the software itself are IMO by far the bigger issue. I feel like if they don't start solving some of these issues, I'm not going to give a damn how powerful future architectures are.
 
Sounds good on paper, but try playing Saints Row 2 even on today's best hardware. :p
 
It's also kind of interesting when thinking about future hardware. Large departures in architecture could mean newer hardware showing worse efficiency, leading to smaller-than-expected gains or even regressions. It almost encourages the GPU makers not to change too much, so that when their new cards come out they won't look bad in reviews.
This was always the case. Drivers can only do so much, and generally there isn't a lot of driver-level optimization happening for old game code anyway. Ideally these optimizations should be put onto the original developers and not the IHVs, so D3D12/VK doesn't change much here.
 
AC Valhalla recently underwent a major restructuring that required players to redownload the game; the latest version supposedly offers better streaming performance and improved GPU performance for certain GPUs under DX12.

HUB tested the new version and found that NVIDIA GPUs gained significantly from it. The RTX 2070 Super, for example, gained about 20% more performance and went from being significantly behind the 5700 XT to slightly ahead.

So happy to see HUB report on this - this is quite the change of circumstances for ACV. It is also a bit concerning if it means each release requires so much lifting from the developer side. Hmmm. What a situation indeed.
 
Hasn't that been a thing for decades?
Yep, "lazy devs" has been a thing for decades.

IHVs will always be fixing bugs of their own making.

Adding into that mix bugs from random game devs, and having to keep fixing those dev bugs for every hardware iteration for decades afterwards, is a shocking waste.

So seeing devs take responsibility for their own code is encouraging. In the end, if the kitchen's too hot, get out of the fucking kitchen.
 
I'm not going to pretend to fully understand it, but I've seen a lot written about resource barriers, memory types etc that a lot of developers just didn't fully understand. Over time they're coming to grips with these things and learning best practices. I still don't really think any devs are lazy. It's just a question of time and resources. Getting a game working is a bigger priority than optimizing it when they have deadlines with publishers, understandably.
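To make the resource-barrier point concrete, here is a minimal D3D12 sketch (purely illustrative, not from any particular engine; the command list and texture names are placeholders) of the kind of state tracking that D3D11 drivers used to infer implicitly and that D3D12 hands to the application:

```cpp
// Sketch of explicit D3D12 state management: before a texture written as a
// render target can be sampled, the application itself must record a
// transition barrier. Under D3D11 the driver deduced this for you.
// "commandList" and "sceneColorTexture" are placeholder names.
#include <d3d12.h>

void TransitionToShaderResource(ID3D12GraphicsCommandList* commandList,
                                ID3D12Resource* sceneColorTexture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = sceneColorTexture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    commandList->ResourceBarrier(1, &barrier);
}
```

Get a barrier like this wrong (missing, redundant, or too coarse) and you either corrupt results or serialize the GPU, which is exactly the class of mistake that took developers time to learn to avoid.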
 
The blanket of console NDAs makes it hard to discern the degree of "close to the metal" in console implementations and how translatable to PC those close to the metal techniques might be.

At the same time, what proportion of games are primarily developed on PC before a version is made for console? Are the close-to-the-metal techniques on console actually easier to use than what DX12/Vulkan offers? Or is it merely that the number of consoles to optimise for is small?

We also don't know if the fix for AC Valhalla on NVidia is achieved specifically by detecting certain cards and running code dedicated to those cards. If that's being done, how different is the code? Did NVidia contribute specifically? If so, did NVidia see that as a more productive approach than kludging the driver some more?

There was a time when asynchronous compute was bad for NVidia and there were games optimised specifically by detecting the card/IHV to avoid the performance hit. So that's an example of devs "learning" to optimise on PC, having to put in extra effort.
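For what it's worth, the card/IHV detection described above is usually nothing exotic on PC. A hedged sketch, not taken from AC Valhalla or any shipped title, of how a DXGI-based game can branch per vendor:

```cpp
// Illustrative only: read the adapter description through DXGI and compare the
// PCI vendor ID. 0x10DE is NVIDIA, 0x1002 is AMD, 0x8086 is Intel. A game can
// then enable or disable a code path (e.g. async compute) per vendor.
#include <dxgi1_6.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

enum class GpuVendor { Nvidia, Amd, Intel, Other };

GpuVendor DetectPrimaryGpuVendor()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return GpuVendor::Other;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapterByGpuPreference(
            0, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, IID_PPV_ARGS(&adapter))))
        return GpuVendor::Other;

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    switch (desc.VendorId)
    {
        case 0x10DE: return GpuVendor::Nvidia;
        case 0x1002: return GpuVendor::Amd;
        case 0x8086: return GpuVendor::Intel;
        default:     return GpuVendor::Other;
    }
}
```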

The cycle repeats over and over. Arguably the costs go up more and more each cycle, with yet more "clever" DX12 functionality added in.

As a programmer, it's nice (motivating) to have a learning curve to keep climbing. But maybe not so nice when the curve keeps getting steeper.
 
Like, who cares if an old game doesn't perform at 300fps on my brand new GPU. 250 will have to suffice... lol

Hardware scaling isn't increasing at the same pace as in years past. Especially if you aren't buying the high end generation to generation, you aren't going to see enough of a performance jump to make the issue irrelevant.

Take something like Cyberpunk 2077 with max settings (including RT) at 1080p. Would next-gen (2022) entry-level GPUs be able to run it consistently even at 60 fps? What about 2024 entry-level GPUs if they have to work against possibly more "overhead"?
 
So happy to see HUB report on this - this is quite the change of circumstances for ACV. It is also a bit concerning if it means each release requires so much lifting from the developer side. Hmmm. What a situation indeed.
Most releases tend to get it right from launch these days. ACV was one of these D3D12 titles which showed weird performance results between AMD and Nv, and this always was a clear sign of a badly optimized renderer, if only for PC h/w (Radeons tend to do well on console optimized code for obvious reasons). The fact that it's "fixed" now just proves that.

There was a time when asynchronous compute was bad for NVidia
There never was such a time. Some GPUs don't get any speed-up from using async compute, and may even get slowed down due to cache fighting between different kernels. Such GPUs are not limited to Nvidia and can in fact be from AMD as well, since async compute gains are fairly specific to a particular compute/bandwidth ratio of a given product.
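As a rough illustration of what's being argued about (a generic sketch, not any particular game's code): at the API level, "async compute" in D3D12 is just a second command queue of type COMPUTE whose work the GPU may or may not overlap with the graphics queue. Whether that overlap helps or hurts is down to the hardware and the workload, which is why titles end up gating it per GPU:

```cpp
// Minimal sketch: create a COMPUTE command queue alongside the usual DIRECT
// (graphics) queue. Work submitted here *may* run concurrently with graphics;
// synchronization between the two queues is done with fences.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE; // separate from DIRECT
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    return computeQueue;
}
```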

I'm not going to pretend to fully understand it, but I've seen a lot written about resource barriers, memory types etc that a lot of developers just didn't fully understand. Over time they're coming to grips with these things and learning best practices. I still don't really think any devs are lazy. It's just a question of time and resources. Getting a game working is a bigger priority than optimizing it when they have deadlines with publishers, understandably.
Things are getting better as we move away from the need to support anything but Win10(11) + D3D12/VK. That transition period, when there was a business need to support more than one OS and more than one renderer, was hard for everyone, even designers.
 
Hardware scaling isn't increasing at the same pace as in years past. Especially if you aren't buying the high end generation to generation, you aren't going to see enough of a performance jump to make the issue irrelevant.

Take something like Cyberpunk 2077 with max settings (including RT) at 1080p. Would next-gen (2022) entry-level GPUs be able to run it consistently even at 60 fps? What about 2024 entry-level GPUs if they have to work against possibly more "overhead"?
Hardware isn't doing a 180 between generations architecturally... so it's very unlikely that something which performs extremely well on one generation is going to perform unacceptably badly on the next. These companies can't turn on a dime, and I think the arc in which they do turn is large enough that by the time the architecture has changed sufficiently, there's far more than enough power there to easily satisfy the demands of those past games.

Most releases tend to get it right from launch these days. ACV was one of these D3D12 titles which showed weird performance results between AMD and Nv, and this always was a clear sign of a badly optimized renderer, if only for PC h/w (Radeons tend to do well on console optimized code for obvious reasons). The fact that it's "fixed" now just proves that.
Yea, a lot of it seems to simply be a matter of time/budget and priority. How many titles are released "sponsored by X or Y" or designed around a specific architecture, shipping with limited features and support... only to receive that support later on through patches. You know, like Radeon-sponsored games suddenly receiving DLSS and other updates WAY later than expected... almost as if certain pubs/devs are catching on to the fact that supporting these features is great for sales and advertising. It's quite clear to me that if you add DLSS to your game, it instantly becomes more desirable. It has a knock-on effect of showing the market that you're putting in "effort" for the release and that you're pursuing the best performance possible. FSR falls in here as well, because despite not being the critical darling that DLSS is... it shows support for AMD and that nobody is being left out. Developers putting in the work to drastically improve performance for the other side, whichever it may be, goes a long way.

It's all a matter of time and priorities. Undoubtedly games could be better optimized for PC than they are... and this once again seems to show that a lot of it comes down to needing sufficient motivation to pursue it. The longer you have a game out there, and the more you add to it while expecting people to keep playing and buying it, the more you have to build and grow it, and that means fixing bugs and improving performance over time.
 
Another call that eventually turned out to be right for explicit APIs was making memory management explicit. Newer APIs make it easier for applications to let the driver allocate memory for resizable BAR. In the future, if we want dynamic dispatch, we'd likely have to introduce the concept of explicit masking, since many shader compilers aren't capable of automatic divergence handling with dynamic dispatch ...

If developers want more features, then being more explicit was the right move all along ...
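As a concrete example of that explicitness (a minimal Vulkan sketch under the assumption of a resizable-BAR-capable platform, not code from the post above): with explicit memory management the application itself can search the reported memory types for one that is both DEVICE_LOCAL and HOST_VISIBLE, i.e. the heap resizable BAR exposes, and fall back to ordinary host-visible memory when it isn't there.

```cpp
// Illustrative only: pick a memory type index for CPU-written, GPU-read data.
// Prefer the ReBAR-style heap (CPU-writable VRAM); otherwise fall back to
// plain host-visible system memory used for staging.
#include <vulkan/vulkan.h>
#include <cstdint>

uint32_t FindUploadMemoryType(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &props);

    const VkMemoryPropertyFlags rebarFlags =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    // Device-local *and* host-visible: the heap resizable BAR makes practical.
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i)
        if ((props.memoryTypes[i].propertyFlags & rebarFlags) == rebarFlags)
            return i;

    // Fallback: ordinary host-visible memory in system RAM.
    const VkMemoryPropertyFlags hostFlags =
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i)
        if ((props.memoryTypes[i].propertyFlags & hostFlags) == hostFlags)
            return i;

    return UINT32_MAX; // no suitable type found
}
```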
 
HUB tests the 6900 XT vs the 3080 12GB in 50 games.


I have some reservations about this comparison even if current pricing puts them in the same tier. Is the 3080 12GB the same MSRP as the 10GB model? Either way I just like seeing this many games tested as it gives you a larger sample.
 
I have some reservations about this comparison even if current pricing puts them in the same tier. Is the 3080 12GB the same MSRP as the 10GB model? Either way I just like seeing this many games tested as it gives you a larger sample.
Is there an MSRP for the 12GB? I thought it was just created as a profiteering GPU.
 