Digital Foundry Article Technical Discussion [2023]

No. There is no shortage of "talent" across the spectrum of developers (nor are the sets disjoint). There is often a shortage of time and differences in priorities.

...

Again, improvements can (and are) being made constantly to the whole PC stack. But pretending this is some sort of lack of effort or expertise from those involved, rather than a gigantic compatibility problem that has expanded in scope as games and hardware themselves have expanded in scope rubs me the wrong way.
Thanks for saying this!
 
At the end of the day, a product is being put out, and some of these games are absolutely releasing in unacceptable states. I think that is pretty much undeniable at this point. It's not on consumers to be accepting of the realities of PC development... nor of release schedules, nor should PC gamers have to wait for patches while other "more prioritized" versions release in much better states. What is acceptable is subjective, and some people have no problem dealing with issues that bother others. Regardless of the reason, more games are releasing with more issues, and people are taking notice and finally standing up to it. Blame will be thrown around, and consumers will ultimately vote with their wallets.

Absolutely under NO circumstances do any developers deserve to be singled out and harassed... and it's absolutely abhorrent what some developers are subjected to daily. However, companies as a whole need to start being held accountable for the products they release. We see time and time again that with just a little bit more time (or perhaps focus) the majority of these issues can be avoided, which is what really contributes to certain opinions being formed.

I get that it's probably not the best idea to go to forums or other places where developers and technically informed people post and constantly complain about stuff like this, especially if you're not willing to accept their responses, but some of us just want to get focus on this issue and make sure it doesn't just get swept under the rug like so many issues do. Some of us want to understand the situation better, but also push firmly for progress, as this is, in many of our opinions, the most pressing issue. It's something that needs to be dealt with now, because you can see first-hand that people are really getting frustrated with this PC stuttering stuff and are about to drop it altogether unless things start changing.
 
The way I understand it, this is a problem of gradually inheriting and stacking the problems of an inefficient backend that has existed on PC, until it became too big to ignore and harder to fix.
The particular issues are so common that it creates the suspicion that this is indeed a complicated problem for developers to fix across the board.
 
At the end of the day, a product is being put out, and some of these games are absolutely releasing in unacceptable states. I think that is pretty much undeniable at this point. It's not on consumers to be accepting of the realities of PC development...
As I've said before, there is little financial incentive for most publishers to not release PC games in a suboptimal state whilst consumers continue to buy them day 1 regardless - particularly if the publisher is trying to hit a quarterly release deadline and needs to show revenue for investors.

Consumers grumbling but still buying games day 1 before technical reviewers have had a look is sending the message that the market does tolerate this. You say companies should be held accountable, well the way consumers do that is to not buy something that isn't in an acceptable state to buy. ¯\_(ツ)_/¯.
 
No. There is no shortage of "talent" across the spectrum of developers (nor are the sets disjoint). There is often a shortage of time and differences in priorities.

Moreover, though, it cannot be emphasized enough how much more complicated the problem is on PC. Most of the problems that PC games suffer from don't exist at all on consoles, but *must* exist to some extent on PC to support a wide range of hardware. This also involves interactions between large code bases from a bunch of different companies, all with different goals themselves (which are sometimes actively opposed to getting a nice, consistent experience, e.g. across different hardware vendors). The fact that games and APIs on PC must be designed in a way that keeps them compatible with future GPUs that you can never test at the time the game ships/gets its last patch makes the problem an order of magnitude more complicated.

As someone who was heavily involved in the design of DX12 and Vulkan, I have no issues saying that both made mistakes, like every graphics API has. It's probably also fair to say the pursuit and hype around "low level console-like performance" went too far, to the point that it created problems that were even harder to solve/work around than the problems in DX11 and previous APIs. With things like Nanite it's pretty clear now that the window in which we cared about doing 100,000 draw calls was somewhere between narrow and non-existent. And while the goals of ensuring that shader compilation happened at predictable points (and thus had to abstract any API state that "might possibly be compiled into shaders on any current or future GPU") remain good, it's fair to say that we underestimated how harmful this "superset of least common denominator" approach would be to the experience on any one specific piece of hardware. It's much harder to fix PSO explosions after the fact with deduplication in the driver than it is to avoid them happening in the first place, but unfortunately the new APIs force us down that rougher path for the foreseeable future.
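
To make the PSO point concrete, here is a rough C++ sketch (purely my illustration, not engine code) of how much state gets baked into a single D3D12 pipeline state object. Every field that can differ between draws (blend, depth, formats, vertex layout) multiplies the number of PSOs an application may need ahead of time:
Code:
// Rough illustrative sketch only: building one D3D12 graphics PSO.
// Any field below that differs between draws requires another PSO,
// which is where permutation counts start to explode.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12PipelineState> CreateOpaquePso(
    ID3D12Device* device,
    ID3D12RootSignature* rootSig,
    D3D12_SHADER_BYTECODE vs,                 // pre-compiled vertex shader
    D3D12_SHADER_BYTECODE ps,                 // pre-compiled pixel shader
    const D3D12_INPUT_LAYOUT_DESC& inputLayout)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature        = rootSig;
    desc.VS                    = vs;
    desc.PS                    = ps;
    desc.InputLayout           = inputLayout;                    // baked in
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets      = 1;
    desc.RTVFormats[0]         = DXGI_FORMAT_R8G8B8A8_UNORM;     // baked in
    desc.DSVFormat             = DXGI_FORMAT_D32_FLOAT;          // baked in
    desc.SampleDesc.Count      = 1;                              // baked in
    desc.SampleMask            = 0xFFFFFFFFu;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable    = TRUE;
    desc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    desc.DepthStencilState.DepthFunc      = D3D12_COMPARISON_FUNC_LESS_EQUAL;

    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
Under DX11 most of this was separate, independently settable state objects; here it all gets compiled together up front.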

Again, improvements can (and are) being made constantly to the whole PC stack. But pretending this is some sort of lack of effort or expertise from those involved, rather than a gigantic compatibility problem that has expanded in scope as games and hardware themselves have expanded in scope rubs me the wrong way. A big part of the reason things were sometimes not as much of an issue in the past is because they were literally simpler problems then. It's still worth admitting that DX12/Vulkan did make some of these actually harder. Hindsight is 20/20, etc.
Congrats on becoming a moderator. Good times on the moderators' side. I kinda miss Geo, but Shifty is back, so things look good.

A guy I know in person, who is the CEO of a video game company and wanted to hire me some time ago, told me that DX12 wasn't easy to program for, that DX9 was a breeze by comparison, and mentioned that a game, They Are Billions, was made with DX9 and sold a lot of copies.

Regarding low-level programming on consoles, maybe it's just me, but I am seeing less and less of that every single day, and in fact I sometimes wonder whether machines like the PS4 (generic APU, generic everything) have any interesting tricks when you use low-level access features.
 
An interesting thought from Adam Sawicki, who was linked earlier in this thread. He proposes that one way to perhaps ease the complexity of Vulkan/DX12 is to have another, higher-level API layered on top of DX12/Vulkan that can take some of the management burden off of developers:

Thoughts on graphics APIs and libraries

Adam Sawicki said:
I know it will sound controversial, but sometimes I get a feeling they are at the exactly worst possible level – so low they are difficult to learn and use properly, while so high they still hide some implementation details important for getting a good performance.


Based on that, I am thinking if there is a room for a new graphics API on top of DX12 or Vulkan. I don’t mean whole game engine with physical simulation, handling sound, input controllers and all, like Unity or UE4. I mean an API just like DX11 or OGL, on a similar or higher abstraction level (if higher level, maybe the concept of persistent “frame graph” with explicit pass and resource dependencies is the way to go?). I also don’t think it’s enough to just reimplement any of those old APIs. The new one should take advantage of features of the explicit APIs (like parallel command buffer recording), while hiding the difficult parts (e.g. queues, memory types, descriptors, barriers), so it’s easier to use and harder to misuse. (An existing library similar to this concept is V-EZ from AMD.) I think it may still have good performance. The key thing needed for creation of such library is abandoning the assumption that developer must define everything up-front, with nothing allocated, created, or transferred on first use.
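
Just to picture the kind of layer he means, here is a tiny hypothetical sketch (none of these types exist in any real library): a frame-graph style API where passes declare their reads/writes up front, and the layer underneath would derive barriers, memory and descriptors on top of DX12/Vulkan:
Code:
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A pass declares which logical resources it reads/writes; a real layer
// would derive barriers and lifetimes from that (here we just print them).
struct Pass {
    std::string name;
    std::vector<std::string> reads;
    std::vector<std::string> writes;
    std::function<void()> execute;   // would record DX12/Vulkan commands
};

class FrameGraph {
public:
    void AddPass(Pass pass) { passes_.push_back(std::move(pass)); }

    void Execute() {
        for (const Pass& p : passes_) {
            // A real layer would insert the right transitions/barriers here,
            // using the declared reads/writes -- the bookkeeping the raw
            // explicit APIs make the application track by hand.
            for (const auto& r : p.reads)  std::printf("  transition %s -> read\n",  r.c_str());
            for (const auto& w : p.writes) std::printf("  transition %s -> write\n", w.c_str());
            std::printf("pass: %s\n", p.name.c_str());
            p.execute();
        }
    }
private:
    std::vector<Pass> passes_;
};

int main() {
    FrameGraph graph;
    graph.AddPass({"gbuffer",  {},                    {"gbuffer0", "depth"}, []{}});
    graph.AddPass({"lighting", {"gbuffer0", "depth"}, {"hdr"},               []{}});
    graph.AddPass({"post",     {"hdr"},               {"backbuffer"},        []{}});
    graph.Execute();
}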
 
An interesting thought from Adam Sawicki, who was linked earlier in this thread. He proposes that one way to perhaps ease the complexity of Vulkan/DX12 is to have another, higher-level API layered on top of DX12/Vulkan that can take some of the management burden off of developers:
Just to be clear on my previous reply - the issues I was referencing are not something that can be solved by another layer on top of these APIs. The article actually mentions this a bit in the third post-update. But for instance, many of the problems around shader compilation are made much worse by the design of DX12/Vulkan, and indeed they are unfortunately baked into the driver interfaces of these APIs. If it were "simply" an issue of implementing some stuff in Unreal or another game engine that would make things operate just like they did on DX11 drivers, that would have happened *long* ago. Part of the issue is that the way we shifted some of the burden to the application instead of the driver actually created a far more difficult problem than when it was solved before in the driver. For instance, the driver knows what states are going to be baked into compiled shaders on a given GPU/driver combo, while the application cannot and thus has to be incredibly conservative with shader permutations.

There may be some space for "simpler API that also exposes newer features like raytracing in DX11-like layer" for smaller engines and hobby projects of course, but it doesn't really do anything to help the AAA space. There are specific cases where additional layers make sense and are already present and used (see for instance the D3D residency helper libraries and the vulkan memory manager/allocator libraries), but these don't address many of the core issues people are currently having.

I don't think there is going to be a silver bullet here guys; if there was such a thing we would have had it long ago by this point. In reality we will continue to chip away at the problems from many angles (both content and code) and iterate towards better experiences.

Blame will be thrown around, and consumers will ultimately vote with their wallets.
[snip]
I get that it's probably not the best idea to go to forums or other places where developers and technically informed people post and constantly complain about stuff like this.
The thing is you already called out the proper response in your reply here: if a game is in an unacceptable state, don't buy it! That is by far the most direct feedback you can give to the appropriate channels.
 
... the way we shifted some of the burden to the application instead of the driver actually created a far more difficult problem than when it was solved before in the driver.
What was the rationale for doing this in the first place? Surely IHVs have heftier budgets -- and more importantly, motivation -- to solve this than devs for multiplatform games?
 
What was the rationale for doing this in the first place? Surely IHVs have heftier budgets -- and more importantly, motivation -- to solve this than devs for multiplatform games?
To eliminate random shader compilation stutters in DX11. It's not as if this is a new problem, even if it's getting more deserved attention lately for various reasons (many of them, as noted, because the scope of games themselves has increased). The thought was that by allowing the application to control when compilation happens, it could be scheduled to a more appropriate time (e.g. load time, or prefetched, or similar). To some extent that part has indeed borne out.
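
Roughly what that looks like in practice (a hypothetical sketch, not any particular engine's code): the application gathers every pipeline description it knows it will need and fans the compiles out to worker threads behind a loading screen, instead of paying for them on first draw:
Code:
#include <future>
#include <vector>

struct PipelineDesc {};  // stands in for a full PSO / VkGraphicsPipelineCreateInfo description
struct Pipeline     {};

Pipeline CompilePipeline(const PipelineDesc&) {
    // Would call CreateGraphicsPipelineState / vkCreateGraphicsPipelines here.
    return Pipeline{};
}

std::vector<Pipeline> PrecompileAtLoad(const std::vector<PipelineDesc>& descs) {
    std::vector<std::future<Pipeline>> jobs;
    for (const PipelineDesc& d : descs)
        jobs.push_back(std::async(std::launch::async, CompilePipeline, d));

    std::vector<Pipeline> pipelines;
    for (auto& j : jobs)
        pipelines.push_back(j.get());  // block behind the loading screen, not mid-gameplay
    return pipelines;
}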

The issue is that to make this happen in a way that works across all sorts of different GPUs and drivers requires effectively assuming even the tiniest change in state could potentially cause a shader recompile. Thus in practice while the application in many cases does have additional information about when the ideal time to compile is, it also has to deal with an order of magnitude more potential compilations. In practice many of these will be deduplicated by the driver and not actually trigger a backend compile, but this is of course unpredictable to the application.
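
To illustrate the conservatism (again a hypothetical sketch, not real engine code): the application ends up keying its pipeline cache on every state bit that might be folded into shader code on some GPU, even though many of those keys will deduplicate to the same backend compile inside a given driver:
Code:
#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

struct PipelineKey {
    uint32_t shaderPairId;   // which VS/PS permutation
    uint8_t  blendMode;      // opaque / alpha / additive ...
    uint8_t  depthFunc;
    uint8_t  rtvFormat;
    uint8_t  msaaSamples;    // any of these *might* trigger a recompile on some GPU
    bool operator==(const PipelineKey& o) const {
        return shaderPairId == o.shaderPairId && blendMode == o.blendMode &&
               depthFunc == o.depthFunc && rtvFormat == o.rtvFormat &&
               msaaSamples == o.msaaSamples;
    }
};

struct KeyHash {
    size_t operator()(const PipelineKey& k) const {
        uint64_t v = (uint64_t)k.shaderPairId << 32 |
                     (uint64_t)k.blendMode << 24 | (uint64_t)k.depthFunc << 16 |
                     (uint64_t)k.rtvFormat << 8  | (uint64_t)k.msaaSamples;
        return std::hash<uint64_t>{}(v);
    }
};

struct Pipeline {};  // stands in for ID3D12PipelineState / VkPipeline

// Many distinct keys may deduplicate to the same backend compile inside the
// driver, but the application cannot know which, so it must create them all.
std::unordered_map<PipelineKey, std::unique_ptr<Pipeline>, KeyHash> g_psoCache;

Pipeline* GetOrCreatePipeline(const PipelineKey& key) {
    auto it = g_psoCache.find(key);
    if (it == g_psoCache.end())
        it = g_psoCache.emplace(key, std::make_unique<Pipeline>()).first;  // compile happens here
    return it->second.get();
}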

So again, it's not so much a "simple" question of budgets, motivation or even time to some extent... the reality is that it's a hard problem that existed before too, but it has been made worse by a combination of factors, one of which is just the increasingly large scope of many AAA games. Note that many of the games that suffer less from these issues are cases where the scope is intentionally narrowed and the engine/art tools do not expose to artists the ability to create significant numbers of shader permutations. This of course makes the problem more tractable, but at the potential expense of material and effects variety.

The reality is that DX12/Vulkan made improvements in some areas but made things more difficult in others. I wouldn't even call the changes a mistake or claim they weren't necessary at some level, but I just want to point out that the problem space around shader permutations in particular is complex on PC, and the new APIs take a different attack vector with different pros/cons.

Also worth noting... one of the reasons this is still such a big issue is that GPUs are still rather primitive in some ways when it comes to running larger, general purpose code. No significant advances have really been made in terms of providing softer edges and more portable performance; we're still mostly in the "we'll inline everything into one kernel and your code runs as bad as the worst possible branch, even if it is never taken" world that we were in a decade ago. Obviously the hardware problems on this front are difficult as well, but ultimately if we are to continue to write more and more complex systems on the GPU, the status quo in the hardware needs to move here too. The fact that we need a bazillion permutations with minor differences to get reasonable performance, and that large cliffs abound, is fundamentally a problem created by GPU architectures.
 
Is it only a DX12 or PC API and hardware problem, or also an Xbox problem? Some games show stutters on Xbox similar to those on PC that are rarely found on PlayStation. And will PC APIs get better, or will things just get worse as games become more complex and bigger?
 
Also worth noting... one of the reasons this is still such a big issue is that GPUs are still rather primitive in some ways when it comes to running larger, general purpose code. No significant advances have really been made in terms of providing softer edges and more portable performance; we're still mostly in the "we'll inline everything into one kernel and your code runs as bad as the worst possible branch, even if it is never taken" world that we were in a decade ago. Obviously the hardware problems on this front are difficult as well, but ultimately if we are to continue to write more and more complex systems on the GPU, the status quo in the hardware needs to move here too. The fact that we need a bazillion permutations with minor differences to get reasonable performance, and that large cliffs abound, is fundamentally a problem created by GPU architectures.

Does PBR help? Presumably a unified lighting model reduces the amount of branching and thus possible permutations.
 
As I've said before, there is little financial incentive for most publishers to not release PC games in a suboptimal state whilst consumers continue to buy them day 1 regardless - particularly if the publisher is trying to hit a quarterly release deadline and needs to show revenue for investors.

Consumers grumbling but still buying games day 1 before technical reviewers have had a look is sending the message that the market does tolerate this. You say companies should be held accountable, well the way consumers do that is to not buy something that isn't in an acceptable state to buy. ¯\_(ツ)_/¯.

Agreed.

If games were commonly being released in a poor state with no ongoing efforts by devs to resolve issues that plagued their games, it would be one thing. But a lot of devs do continue to work on their titles to get them to an acceptable state or at least try.

If you don't want to encounter the problems that are typical of day one releases, the PC market is telling you, "Don't be a day one buyer, i.e. a gamma tester; be a day 30, 60 or 90 buyer".

We may not like it, but the simple act of exercising some self-control allows anyone to avoid it, with the potential opportunity of buying at cheaper prices later on.

PC gamers basically have two choices: change your purchasing practices, or convince companies like Epic or Steam to impose some type of quality control on titles sold through their digital stores. While MS's and Sony's QC/QA programs aren't perfect, they do force devs to do some level of quality control. I imagine that because the PC lacks a similar requirement, the situation is exacerbated, as there are far more hardware configs and more potential for issues that may affect gaming performance.

I may be a little insensitive to the issue because, if I am amped or hyped to jump on a game when it releases, I buy the console version. I save my PC purchases for games that don't scratch that "need to play ASAP" itch and for which there are indications that any significant problematic issues at release have been resolved.
 
And this makes me wonder, especially in connection with all the talk about the ABK acquisition and all the dirt that came to light. We have seen this behaviour in many titles before: the Xbox and PC versions seem incomplete in comparison to the PS5 version. The PS5 version of the product often offers better performance and full features, while the Xbox version is badly optimised, missing RT and other features, and takes weeks if not months of patches to get it to a state where you can consider it complete.
It's clear that one of the platforms was chosen as the lead platform; this platform received a good, complete product on release day with all features. We know that companies pay for exclusivity deals, exclusive content, timed exclusivity etc. IMO what we see in those examples is just one platform chosen as the lead platform, which was influenced by a special deal.

This is just my observation and you can grill me for it, but here is why I think this could be the case.
Time is money, so if time = money and one platform clearly had more time...

"Only reason XSX does not run as well is because it was not given the time needed."

so the PS5 version had more time = more money. Where does the money come from? ;)
Anyway, back to being serious again. It is a huge problem, as you mentioned, and I agree; it really sucks for PC and Xbox users.
You can add that devs may be dealing with something that's been historically true for the previous two gens of Xbox. It takes a year or two for MS to work out most of the kinks.

DX12 was released on the One in 2015, so over a 2-3 year period devs had to contend with three different drivers.


I remember similar issues with the 360. PlayStation hardware always seems to hit the ground running better at release than Xbox hardware.
 
Does PBR help? Presumably a unified lighting model reduces the amount of branching and thus possible permutations.
PBR arguably made things worse. A unified uber-shader style material/lighting model with a forward renderer was not the popular way to implement PBR. The most popular way in which PBR was implemented was with tons of specialized/separate material/lighting shaders and deferred renderers ...

The permutation problem is a self-inflicted disease in the industry, where you have a self-fulfilling prophecy: developers create all sorts of shaders so that the compilers will generate optimal code in terms of occupancy/register pressure or memory usage, and soon after the IHVs catch on and design their hardware around this common usage pattern. This dynamic propagates fierce competition between different developers and hardware vendors to get the highest performance. No developer/vendor is going to optimize their renderer/hardware respectively to minimize permutation/compilation times at the cost of real-time performance. The choice becomes obvious: either the developer/vendor gets left behind, potentially going out of business, or stays relevant and reaps the rewards ...
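
As a rough illustration (my own made-up example, not any shipping pipeline): each material feature becomes a compile-time switch, and every combination gets compiled as its own specialized shader so no runtime branch is left for the GPU to pay for. Multiply this again by lighting paths, vertex formats and render state and you get the explosion:
Code:
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Hypothetical per-material feature toggles; each becomes a #define.
    const std::vector<std::string> features = {
        "NORMAL_MAP", "EMISSIVE", "CLEAR_COAT", "ALPHA_TEST", "SUBSURFACE"
    };
    const size_t count = features.size();
    const size_t total = size_t(1) << count;   // 2^5 = 32 variants of ONE shader

    for (size_t mask = 0; mask < total; ++mask) {
        std::string defines;
        for (size_t i = 0; i < count; ++i)
            if (mask & (size_t(1) << i))
                defines += " -D" + features[i] + "=1";
        // In a real pipeline this line would invoke the shader compiler
        // (dxc/fxc/glslang) once per combination.
        std::printf("compile material.hlsl%s\n", defines.c_str());
    }
    std::printf("%zu permutations from %zu features\n", total, count);
}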
 
PBR arguably made things worse. A unified uber-shader style material/lighting model with a forward renderer was not the popular way to implement PBR. The most popular way in which PBR was implemented was with tons of specialized/separate material/lighting shaders and deferred renderers ...

That's unintuitive. Isn't the whole point of PBR to derive lighting based on material parameters in a uniform "physically correct" model? Having to code distinct models per material doesn't seem very "physically based" to me.

The permutation problem is a self-inflicted disease in the industry, where you have a self-fulfilling prophecy: developers create all sorts of shaders so that the compilers will generate optimal code in terms of occupancy/register pressure or memory usage, and soon after the IHVs catch on and design their hardware around this common usage pattern. This dynamic propagates fierce competition between different developers and hardware vendors to get the highest performance. No developer/vendor is going to optimize their renderer/hardware respectively to minimize permutation/compilation times at the cost of real-time performance. The choice becomes obvious: either the developer/vendor gets left behind, potentially going out of business, or stays relevant and reaps the rewards ...

What's preventing games from pre-compiling all of the permutations at install time? Are there runtime parameters that generate even more permutations?
 
Another game (Atomic Heart) that runs worse on XSX when compared to PS5.

XSX has been out for over 2 years now; how long are we going to blame the software side before we start to look at the hardware being the issue?
 