Why do some developers struggle with game performance? *spawn*

Like I've said here before, I just can't believe what a mess dx12 is in. In the same amount of time they managed to go from dx1 to 2 to 3 to 5 to 6 to 7 and now 8, but here we are coming up to 7 years since the release of dx12 and it's still broken. Maybe the people responsible for dx12 died in a room in MS headquarters years ago and no one's thought to go and see if they are alright?
 
Like I've said here before, I just can't believe what a mess dx12 is in. In the same amount of time they managed to go from dx1 to 2 to 3 to 5 to 6 to 7 and now 8, but here we are coming up to 7 years since the release of dx12 and it's still broken. Maybe the people responsible for dx12 died in a room in MS headquarters years ago and no one's thought to go and see if they are alright?
I don’t think it’s like that, and there is a huge misconception that people have when it comes to API management.
Learning to use DX12 properly is likely the culprit here with regard to Elden Ring. But as a whole, there could be some additional challenges because MS decided that PC and Xbox DX12 would not diverge. In general it is hard to make one piece of code support two consoles and a variety of PC GPUs without some hiccups; the differing architectures between GPU families, drivers, etc. make it far from easy.

The PS5's showing still exhibits the same issue. I'm just saying Elden Ring isn't a good title to glean information about hardware capabilities from.
 
I'm just saying Elden Ring isn't a good title to glean information about hardware capabilities from.
Is any game, though? There is what the development team is capable of (time/budget, experience and effort), then there are their available tools, engine and technology, then there are its APIs, and finally there is what the hardware is capable of. How can any game be a metric of what the hardware is capable of given so many variables?
 
Is any game, though? There is what the development team is capable of (time/budget, experience and effort), then there are their available tools, engine and technology, then there are its APIs, and finally there is what the hardware is capable of. How can any game be a metric of what the hardware is capable of given so many variables?
I would say titles like Forza Horizon, Gears of War, TLOU 2 etc. offer up a good idea of what the hardware is capable of.
 
Is any game, though? There is what the development team is capable of (time/budget, experience and effort), then there are their available tools, engine and technology, then there are its APIs, and finally there is what the hardware is capable of. How can any game be a metric of what the hardware is capable of given so many variables?
From our perspective we can do some basic investigation that generally starts at the top and works its way down.
A fair statement: with the current console architectures now all being PC-based, we can run some analysis on features and see how they impact performance, without fear that we are over- or under-prioritizing exotic customizations.

So among a bunch of features like CUs, clock speed, teraflops, bandwidth, cache sizes, triangles per clock, culling, ROPs, etc., and then resolution, you'd target a frame rate for instance and profile a bunch of GPUs. I think you'd consistently see teraflops and bandwidth as the features with the largest impact on performance in a correlation graph, which makes sense since that is where the bulk of the computation happens. Then you'd probably see ROPs show up near the top after bandwidth and CUs, etc.
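To make that concrete, here is a minimal sketch of the kind of correlation ranking I mean. The GPU names, spec numbers and FPS figures are invented purely to illustrate the method; this is not real benchmark data:

```cpp
// Hypothetical data: a handful of GPUs profiled at one resolution/settings
// preset in the same title. All numbers are invented for illustration.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct GpuSample {
    const char* name;
    double tflops;        // FP32 teraflops
    double bandwidthGBs;  // memory bandwidth, GB/s
    double rops;
    double cus;
    double avgFps;        // measured average frame rate in the target title
};

// Pearson correlation between one feature column and the FPS column.
double Pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

int main() {
    const std::vector<GpuSample> gpus = {
        {"GPU A", 10.3, 448.0,  64.0, 36.0,  58.0},
        {"GPU B", 13.1, 432.0,  80.0, 40.0,  71.0},
        {"GPU C", 20.3, 760.0,  96.0, 68.0,  97.0},
        {"GPU D", 29.8, 936.0, 112.0, 80.0, 121.0},
    };

    std::vector<double> fps, tf, bw, rops, cus;
    for (const auto& g : gpus) {
        fps.push_back(g.avgFps);
        tf.push_back(g.tflops);
        bw.push_back(g.bandwidthGBs);
        rops.push_back(g.rops);
        cus.push_back(g.cus);
    }

    // Rank which spec tracks measured performance most closely.
    std::printf("corr(teraflops, fps) = %.2f\n", Pearson(tf, fps));
    std::printf("corr(bandwidth, fps) = %.2f\n", Pearson(bw, fps));
    std::printf("corr(ROPs, fps)      = %.2f\n", Pearson(rops, fps));
    std::printf("corr(CUs, fps)       = %.2f\n", Pearson(cus, fps));
    return 0;
}
```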

So knowing which features should have the largest impact on performance should give us some guidance in figuring out whether a game is a good measure or not.

So if we see a title perform poorly regardless of how many teraflops and how much bandwidth you throw at it, then I would say that's not likely to be a useful title. We should see titles bottleneck around TF, bandwidth and ROP performance well before the other features, usually. If not, there's likely an optimization issue at the API level, or they've gone and done something to stall the GPU from doing its job.

IMO, when we see titles that show a progressive bottleneck at teraflops and bandwidth, and GPUs and consoles fit within that regression, we have a fairly usable title for benchmarking GPUs.
 
I would say titles like Forza Horizon, Gears of War, TLOU 2 etc. offer up a good idea of what the hardware is capable of.
Within the relatively short window in which people are focused on progressing techniques on this hardware. Software techniques continue to evolve, and these are often key to extracting performance from old hardware, which is why we keep seeing new games or ports released on very old hardware, delivering what would have been considered impossible during the hardware's original lifetime.

How much more could've been extracted from PS4 and Xbox One had the current-gen consoles not been released?

So among a bunch of features like CUs, clock speed, teraflops, bandwidth, cache sizes, triangles per clock, culling, ROPs, etc., and then resolution, you'd target a frame rate for instance and profile a bunch of GPUs. I think you'd consistently see teraflops and bandwidth as the features with the largest impact on performance in a correlation graph

Right, but what these numbers mean in real terms of what games can deliver changes over the course of a console's generation, because the equivalent of hardware drivers and the APIs improve over time. This improvement cycle is continuous and only stops when effort is diverted to the successor platform.

If gameplay and graphics of the level of The Last of Us Part II could have been delivered at the start of the PS4's generation, it would have been. But it wasn't. That was only deliverable over time, both with experience and the development of improved software techniques, and improvements to the APIs and development tools.
 
Within the relatively short window in which people are focused on progressing techniques on this hardware. Software techniques continue to evolve, and these are often key to extracting performance from old hardware, which is why we keep seeing new games or ports released on very old hardware, delivering what would have been considered impossible during the hardware's original lifetime.

How much more could've been extracted from PS4 and Xbox One had the current-gen consoles not been released?

Right, but what these numbers mean in real terms of what games can deliver changes over the course of a console's generation, because the equivalent of hardware drivers and the APIs improve over time. This improvement cycle is continuous and only stops when effort is diverted to the successor platform.

If gameplay and graphics of the level of The Last of Us Part II could have been delivered at the start of the PS4's generation, it would have been. But it wasn't. That was only deliverable over time, both with experience and the development of improved software techniques, and improvements to the APIs and development tools.
For the sake of benchmarking we are less interested in comparing title to title; that is a function of the benchmarking software, as any innovations there result in better outputs.

What we want to know is how the hardware is able to handle the software. A properly coded game should, in theory, exhibit a regression curve where more powerful hardware scales to more performance. If we don't see that happening then the title is not a worthwhile benchmark.
So in this case with Elden Ring, quadrupling the CPU and GPU still does not remove the stuttering and random slowdowns; thus it is not a good benchmark title, as through regression we would expect these problems to go away as we widen the bottlenecks.
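As a rough sketch of that regression check (again with numbers invented purely for illustration): fit frame rate against a compute metric like teraflops and look at the slope and R²; a title whose frame rate barely moves as the GPU gets several times faster is a poor benchmark candidate.

```cpp
// Hypothetical scaling check: fit fps = slope * tflops + intercept for two
// titles across the same set of GPUs and compare slope and R^2.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Fit { double slope, intercept, r2; };

// Ordinary least-squares fit of y = slope*x + intercept, plus R^2.
Fit LinearFit(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return { sxy / sxx, my - (sxy / sxx) * mx, (sxy * sxy) / (sxx * syy) };
}

int main() {
    // Same four hypothetical GPUs, ordered by FP32 teraflops.
    const std::vector<double> tflops      = {10.3, 13.1, 20.3, 29.8};
    // A title whose frame rate scales cleanly with GPU power...
    const std::vector<double> scalesWell  = {58.0, 71.0, 97.0, 121.0};
    // ...and one that sits near the same frame rate regardless of GPU.
    const std::vector<double> poorScaling = {52.0, 55.0, 57.0, 56.0};

    const Fit a = LinearFit(tflops, scalesWell);
    const Fit b = LinearFit(tflops, poorScaling);
    std::printf("scales well:  %.1f fps per TFLOP, R^2 = %.2f\n", a.slope, a.r2);
    std::printf("poor scaling: %.1f fps per TFLOP, R^2 = %.2f\n", b.slope, b.r2);
    return 0;
}
```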
 
For the sake of benchmarking we are less interested in comparing title to title; that is a function of the benchmarking software, as any innovations there result in better outputs.

Comparisons from title to title are a different conversation. I'm only talking about whether there can ever be an objective assessment of what the hardware is capable of, because fundamentally that's a decision made by the game developer based on the needs of the game.

This is why so many conversations on B3D end up in pointless arguments. Posters simply fail to recognise this.
 
Like I've said here before, I just can't believe what a mess dx12 is in. In the same amount of time they managed to go from dx1 to 2 to 3 to 5 to 6 to 7 and now 8, but here we are coming up to 7 years since the release of dx12 and it's still broken. Maybe the people responsible for dx12 died in a room in MS headquarters years ago and no one's thought to go and see if they are alright?

Don't blame the API for poor use by developers.
 
I'm not sure, but these posts are more about some other topic than the ER game discussion. Feel free to post new title suggestions in the non-game thread.

"Nonsense about why some games run badly(tm)."
 
Comparisons from title to title are a different conversation. I'm only talking about whether there can ever be an objective assessment of what the hardware is capable of, because fundamentally that's a decision made by the game developer based on the needs of the game.

This is why so many conversations on B3D end up in pointless arguments. Posters simply fail to recognise this.
I don't think there can be, no. If a developer saturated every aspect of a GPU to the fullest, nothing would run. Developers will definitely pick and choose where their bottlenecks will be when designing their games.
We can only benchmark the hardware with respect to the software; but as you say, there will never be a title that "shows what the hardware is truly capable of".

That's just a software discussion IMO; tangential to the hardware discussion.
 
I think the real thing here is: games get a vulkan backend when developers want to invest in and build a lower-level renderer. Games get a dx12 backend when it's required for platform/feature reasons, so they're more likely to bite it off without having the resources to do it right. This has resulted in a lot of games with bad dx12 modes and has built a reputation that it's somehow slower or worse than dx11.

There's a real phenomenon here; it's just not at all related to dx12 as a technology.
 
How much more could've been extracted from PS4 and Xbox One had the current-gen consoles not been released?

"Necessity is the mother of invention" kind of struggles to force advancements? While I would like to see what it would have brought for rendering, the load times were horrible.
 
Now, as for comparing between titles to know if one is using hardware efficiently, I think that's not strictly needed. It only serves as a quick guideline that maybe something isn't at its best. From those suspicions we can look at scaling on PC hardware to see if it behaves as one would expect. Even with all that, unless you load it through developer debug and analysis tools, you won't have specifics as to why it might be bad.
 
I thought the lazy dev excuse was frowned upon here?
Surely if the tools are badly designed that's the fault of the tool makers.

I didn't say they were lazy. They might not have accumulated the knowledge yet, or didn't have the time allocated from management required to make it good.

Let me point you to the hundreds of other games as the counterpoint to your baseless claim.
 
Exactly as others have said. This isn't an API issue at all... this is developers not doing what is required of them to make proper use of that API. Look at Capcom... all the RE Engine DX12 games run amazingly on PC. RE2R, RE3R, DMC5, RE7 and RE8. All DX12, and all with little to no stutter.

Developers can... (and must) precompile shaders in DX12. The mechanisms are there for them to do it. For some reason they just don't seem to put in the effort to make sure everything is properly precompiled, either at initial boot, during load screens, or through background compilation.
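For anyone wondering what precompiling concretely looks like in DX12: you create your pipeline state objects up front (at boot or during a load screen) instead of at first draw. Below is a rough, hypothetical sketch of that idea, not any particular engine's code; the MaterialShaders struct and PrecompilePsos function are invented for illustration, while the D3D12 calls and the d3dx12.h helpers are the standard public API.

```cpp
// Precompile graphics PSOs during loading so driver-side shader compilation
// never happens mid-frame. Assumes a valid device, root signature, and
// shader bytecode already compiled offline to DXIL.
#include <climits>
#include <vector>
#include <d3d12.h>
#include <wrl/client.h>
#include "d3dx12.h"  // public helper header (CD3DX12_* default-state wrappers)

using Microsoft::WRL::ComPtr;

// Hypothetical container for a material's shader bytecode loaded from disk.
struct MaterialShaders {
    ComPtr<ID3DBlob> vs;
    ComPtr<ID3DBlob> ps;
};

// Build every pipeline state the level will need up front, e.g. from a
// worker thread while the loading screen is shown.
std::vector<ComPtr<ID3D12PipelineState>> PrecompilePsos(
    ID3D12Device* device,
    ID3D12RootSignature* rootSignature,
    const std::vector<MaterialShaders>& materials)
{
    std::vector<ComPtr<ID3D12PipelineState>> psos;
    psos.reserve(materials.size());

    for (const auto& m : materials) {
        D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
        desc.pRootSignature = rootSignature;
        desc.VS = { m.vs->GetBufferPointer(), m.vs->GetBufferSize() };
        desc.PS = { m.ps->GetBufferPointer(), m.ps->GetBufferSize() };
        desc.BlendState        = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
        desc.RasterizerState   = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
        desc.DepthStencilState = CD3DX12_DEPTH_STENCIL_DESC(D3D12_DEFAULT);
        desc.InputLayout = { nullptr, 0 };  // assumes vertex pulling; real code supplies its layout
        desc.SampleMask = UINT_MAX;
        desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
        desc.NumRenderTargets = 1;
        desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.DSVFormat = DXGI_FORMAT_D32_FLOAT;
        desc.SampleDesc.Count = 1;

        // The expensive driver-side shader compilation happens here, during
        // loading, instead of the first time this material is drawn in gameplay.
        ComPtr<ID3D12PipelineState> pso;
        if (SUCCEEDED(device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso))))
            psos.push_back(pso);
    }
    return psos;
}
```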

The biggest issue... and I stress this... is that some developers seem content to ship games, knowing full well that there are stuttering/hitching/streaming/loading issues. There's absolutely NO WAY that FROMSOFT didn't know that the PC version has all these stuttering issues... and yet they released it anyway.

And that's the issue. They release the games anyway... and if people complain enough... then they put in some effort to fix it. The thing about that... is that PC gamers are notorious for accepting #%$^ quality. They blame their own or other people's PCs for issues. They blame drivers. They claim magic fixes work.

This is why developers feel they can get away with doing this. Well, it's time people start speaking out and holding these developers/publishers accountable. A game should function properly on the recommended hardware. Developers need to start holding up their end of the bargain and doing whatever it takes to make sure it does.
 
I do think the pc software stack has some fundamental problems. You can find developers complaining about it pretty easily.

@DSoup I do agree about performance testing. People will say there are "hardware benchmarks" in reviews of new gpus etc. They're actually software benchmarks, and from doing many we try to infer hardware capability. The problem is if the software stack is broken and doesn't leverage the hardware very well, then you're not really testing the limits of the hardware. You're testing the limits of the software, as in how well the software can scale to new hardware. There are a lot of difficult problems to solve in terms of making good use of hardware, and it's especially hard on the pc.
 
Surely if the tools are badly designed that's the fault of the tool makers.

In this case we objectively know the tools are good (they are publicly available), and also that properly developing for a vulkan or dx12 style API is difficult and expensive. This only looks like "lazy devs" or "bad tools" if you aren't informed about graphics APIs.
 