Digital Foundry Article Technical Discussion [2023]

On the other hand, console devs have a long track record of wrangling low-level APIs and exotic hardware. Maybe they're just better at it.
Or it's easier to do on a single-configuration closed box, and much harder to write low-level code that needs to perform equally well across hundreds of configurations.
 
Seems like the most talented devs all end up making console games, since that is where the money is. A few rare studios do achieve good low-level results on PC, so it's certainly doable. DX12 probably could be better, but it can never be as good as GNM appears to be because it requires more abstraction. Microsoft also has a history of poorly thought-out API design (geometry shaders, tessellation, the endless stages of the vertex pipeline, etc.).
 
We're missing a key part of the story. It would be helpful if the same devs who championed low-level APIs years ago gave their take on how close DX12 comes to achieving that vision. I don't think I've seen a single studio admit that DX12 is hard, yet the results haven't been great.

On the other hand, console devs have a long track record of wrangling low-level APIs and exotic hardware. Maybe they're just better at it.
These are ways PC D3D12 is still sub-optimal compared to console APIs ...

HLSL still doesn't have pointers (GNM/PSSL exposes the true capability of bindless, where everything is just plain memory)
Until recently, D3D12 didn't have many dynamic states, which led to duplicate pipelines being generated (GNM exposes more dynamic states directly, while Xbox had derivative PSOs instead)
The ExecuteIndirect API is more powerful on Xbox (you can specify in the command signature to change the entire PSO when issuing an indirect draw command, and even Nvidia's device-generated commands extension is more powerful than ExecuteIndirect, since you can change shader groups for every indirect draw command); see the sketch after this list
D3D12 still doesn't make any forward-progress guarantees, so starvation-free algorithms like Nanite's hierarchical culling aren't safe (consoles just rely on the scheduling behaviour of the hardware for this)
HLSL features a separate-source programming model (PSSL, like CUDA's kernel language, features a single-source programming model)
D3D12 doesn't have a way to do link-time optimizations (separating shader compilation from pipeline compilation reduces compilation times)
Copying resources is trivial on consoles with unified memory (no API commands required, thus lower overhead)
An explicit API to use SAM or pinned memory in D3D12 would simplify sharing data between the CPU and GPU (the capability for the CPU to directly access GPU memory, and the ability to import host memory, are more powerful in terms of memory management)
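
For illustration, here's roughly what the ExecuteIndirect point looks like on the PC side. A minimal sketch, not production code: the command signature below can vary a root constant and the vertex buffer per indirect draw, but D3D12 has no argument type for swapping the whole PSO, which is the extra capability described above for Xbox and Nvidia's extension. `device`, `rootSignature`, `cmdList`, `argBuffer`, `countBuffer` and `maxDraws` are assumed to already exist.

Code:
#include <d3d12.h>

// One indirect-argument record = { root constant, VB view, draw args }.
D3D12_INDIRECT_ARGUMENT_DESC args[3] = {};
args[0].Type = D3D12_INDIRECT_ARGUMENT_TYPE_CONSTANT;            // per-draw root constant
args[0].Constant.RootParameterIndex = 0;
args[0].Constant.DestOffsetIn32BitValues = 0;
args[0].Constant.Num32BitValuesToSet = 1;
args[1].Type = D3D12_INDIRECT_ARGUMENT_TYPE_VERTEX_BUFFER_VIEW;  // per-draw vertex buffer
args[1].VertexBuffer.Slot = 0;
args[2].Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;                // the draw itself
// Note: no D3D12_INDIRECT_ARGUMENT_TYPE_* exists for changing the PSO.

D3D12_COMMAND_SIGNATURE_DESC desc = {};
desc.ByteStride = sizeof(UINT) + sizeof(D3D12_VERTEX_BUFFER_VIEW)
                + sizeof(D3D12_DRAW_ARGUMENTS);                  // size of one record
desc.NumArgumentDescs = _countof(args);
desc.pArgumentDescs = args;

ID3D12CommandSignature* signature = nullptr;
device->CreateCommandSignature(&desc, rootSignature, IID_PPV_ARGS(&signature));

// GPU consumes up to maxDraws records from argBuffer; the actual count lives in countBuffer.
cmdList->ExecuteIndirect(signature, maxDraws, argBuffer, 0, countBuffer, 0);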
 
I can't speak for other devs, but it has been my understanding that the profiling tools are generally regarded as being (much) better on the Sony side of things.
I think the actual APIs are pretty similar in performance and capabilities. But code optimization is all about finding the problem spots, THEN fixing them. Good profiling tools are hugely important in this respect.

The MS tools are always improving, but I think they are still a fair way behind where the equivalent Sony tools are.
Of course, on the PC side of things you can always use third-party tools to profile your code, which is helpful, but not everyone does that, or has the time to do it, let alone do it well.

FYI, I understand AMD actually put extra hardware in the Zen CPU arch. to assist with the cost of context switching in the Xbox hardware.
Errr, slightly enlarged certain buffers, not entirely new parts of the CPU.
 
It would be helpful if the same devs who championed low-level APIs years ago gave their take on how close DX12 comes to achieving that vision. I don't think I've seen a single studio admit that DX12 is hard, yet the results haven't been great.
Quite a few have expressed discontent with DX12, especially in the beginning. Stardock (developer of the first ever DX12 game, Ashes of the Singularity, and one of those who championed lower-level APIs) posted a blog explaining why they abandoned DX12/Vulkan in favor of DX11 in one of their games, Star Control.

It basically boils down to the extra effort it takes to develop the DX12/Vulkan path: longer loading times and VRAM crashes are common in DX12 if you don't manage everything by hand. The performance uplift is also highly dependent on the type of game, and they only achieved a 20% uplift on the DX12 path of their latest games, which they say is not worth all the hassle of QA and bug testing.

In the end, they advise developers to pick DX12/Vulkan based on features, not performance: ray tracing, AI, utilizing 8 CPU cores or more, compute-shader-based physics, etc.
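
On the "utilizing 8 CPU cores or more" point: the standard D3D12 pattern is to record one command list per worker thread (command lists and allocators are not thread-safe, so one per thread) and submit them in a single batch, which DX11's immediate context couldn't really do. A minimal sketch, assuming the lists were created and reset up front and a hypothetical RecordDraws fills in each thread's slice of the frame:

Code:
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordDraws(ID3D12GraphicsCommandList* list, size_t slice); // assumed helper

void RecordFrame(ID3D12CommandQueue* queue,
                 std::vector<ID3D12GraphicsCommandList*>& lists)
{
    std::vector<std::thread> workers;
    for (size_t i = 0; i < lists.size(); ++i)
        workers.emplace_back([&lists, i] {
            RecordDraws(lists[i], i); // each thread records independently
            lists[i]->Close();
        });
    for (auto& w : workers)
        w.join();

    // One submission for all threads' work.
    queue->ExecuteCommandLists(
        (UINT)lists.size(),
        reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
}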


Another Ubisoft developer advised against going into DX12 expecting performance gains, as according to them even achieving performance parity with DX11 is hard.

"If you take the narrow view that you only care about raw performance, you probably won't be that satisfied with the amount of resources and effort it takes to even get to performance parity with DX11," explained Rodrigues. "I think you should look at it from a broader perspective and see it as a gateway to unlock access to new exposed features like async compute, multi-GPU, shader model 6, etc."



Another veteran developer (Nixxes) states that developing for DX12 is hard, and can be worth it or not depending on your point of view; the gains from DX12 can easily be masked on high-end CPUs running maxed-out settings, and you can end up with nothing. They also call async compute inconsistent and too hardware-specific, which makes it cumbersome to use. They also state that the DX12 driver is still very much relevant to the scene, and with some drivers complexity goes up! VRAM management is also a pain in the ass.
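
On the VRAM management point specifically: under D3D12 the app is expected to track its own memory budget, e.g. by polling DXGI, and back off before the OS starts demoting its heaps. A minimal sketch, assuming `adapter` is an IDXGIAdapter3; the hard part (deciding what to actually stream out) is left as a comment:

Code:
#include <dxgi1_4.h>

// How much local (on-GPU) memory the OS currently grants this process,
// and how much of it we already use.
DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

if (info.CurrentUsage > info.Budget)
{
    // Over budget: drop mip levels, Evict() cold resources, shrink pools.
    // Ignoring this is how you get the crashes and stutter described above.
}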

 
Devs don't like change.. news at 11 lol.

DX12 has come a long way from where it was in the beginning. God, I remember the Gears of War: Ultimate Edition release... DX12 was just completely busted... and you had the UWP and Windows Store push all tied together with it, which really hurt the perception of DX12... and to this day that game performs like pure trash.

It's an uphill battle, and devs have seemingly had to fight hard essentially to get back to where they were before... but gaining the knowledge and best practices along the way will undoubtedly help improve and push things further for the entire industry.

Unfortunately things move slowly in MS / PC land... but there have been numerous improvements to DX12 since 2017. Also, not having to support DX11 and being able to focus solely on DX12 wasn't something developers could afford back then, but now it's much more common, and DX12 is essentially THE API for new higher-end games.
 
First Look at New D3D12 Enhanced Barriers (asawicki.info)

Something more recent, from 2021.

tl;dr:

"This is it – probably the first description of the new D3D12 Enhanced Barriers API. Now it is time for my opinion. I must admit that during analysis of this whole thing I changed my mind. Overall I like DX12 slightly more than Vulkan because it’s a bit simpler API and so I always thought that barriers in DX12 are better because they are simpler. .........
But now, after realizing that the new DX12 barriers are pretty much a copy of Vulkan API, it is almost like Microsoft said “Vulkan got it right” and I agree with them. "
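
For anyone who hasn't read the article, here's roughly the shape of the change; a minimal sketch assuming Agility SDK headers, a `texture` resource, and `cmdList` / `cmdList7` (ID3D12GraphicsCommandList7) already in hand:

Code:
#include <d3d12.h>

// Legacy D3D12: one opaque "state" per (sub)resource; sync and layout
// are inferred from the before/after state pair.
D3D12_RESOURCE_BARRIER legacy = {};
legacy.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
legacy.Transition.pResource = texture;
legacy.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
legacy.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
legacy.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
cmdList->ResourceBarrier(1, &legacy);

// Enhanced barriers: explicit sync + access + layout, i.e. the Vulkan
// pipeline-barrier model the article is talking about.
D3D12_TEXTURE_BARRIER tex = {};
tex.SyncBefore = D3D12_BARRIER_SYNC_RENDER_TARGET;
tex.SyncAfter = D3D12_BARRIER_SYNC_PIXEL_SHADING;
tex.AccessBefore = D3D12_BARRIER_ACCESS_RENDER_TARGET;
tex.AccessAfter = D3D12_BARRIER_ACCESS_SHADER_RESOURCE;
tex.LayoutBefore = D3D12_BARRIER_LAYOUT_RENDER_TARGET;
tex.LayoutAfter = D3D12_BARRIER_LAYOUT_SHADER_RESOURCE;
tex.pResource = texture;
tex.Subresources.IndexOrFirstMipLevel = 0xffffffff; // all subresources

D3D12_BARRIER_GROUP group = {};
group.Type = D3D12_BARRIER_TYPE_TEXTURE;
group.NumBarriers = 1;
group.pTextureBarriers = &tex;
cmdList7->Barrier(1, &group);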


edit:

Actually, the whole blog is a good place for knowledge when it comes to graphics APIs; there are a lot of articles focusing on different aspects.

Adam Sawicki Home Page - programming, graphics, games, media, C++, Windows, Internet and more... (asawicki.info)
 
Devs don't like change.. news at 11 lol.

DX12 has come a long way from where it was in the beginning. God, I remember the Gears of War: Ultimate Edition release... DX12 was just completely busted... and you had the UWP and Windows Store push all tied together with it, which really hurt the perception of DX12... and to this day that game performs like pure trash.

It's an uphill battle, and devs have seemingly had to fight hard essentially to get back to where they were before... but gaining the knowledge and best practices along the way will undoubtedly help improve and push things further for the entire industry.

Unfortunately things move slowly in MS / PC land... but there have been numerous improvements to DX12 since 2017. Also, not having to support DX11 and being able to focus solely on DX12 wasn't something developers could afford back then, but now it's much more common, and DX12 is essentially THE API for new higher-end games.

+1. This is only the beginning of DX12. For some teams it is only their first DX12 title. It will improve, and teams will improve.
 
And yet Forspoken loads faster on PC with a similar SSD. CPU bottlenecks are absolutely a factor here. Just because the overall CPU doesn't show high utilisation does not mean a single thread isn't bottlenecking performance. We have very well-documented sources explaining just how much IO is limited by the CPU on both consoles and PC. And on PC there is often simply more to do at load time than there is on console, hence why load times can be slower. In Spider-Man's case, for example, the developers themselves specifically said shader compilation and (I think) BVH building were both an additional CPU burden on the PC side vs the PS5.
Sure, with partial loading of assets (like in Spider-Man, BTW). In Spider-Man on PC, some areas (depending on the CPU) need from around 5 to 15 seconds to fully load all the assets (which was not missed by forum users and NXGamer).
 
Sure, with partial loading of assets (like in Spider-Man, BTW). In Spider-Man on PC, some areas (depending on the CPU) need from around 5 to 15 seconds to fully load all the assets (which was not missed by forum users and NXGamer).

True, but that's offset by the PC having to load more assets compared to the PS5, due to the increased LOD range on PC.

So swings and roundabouts.
 
Sure, with partial loading of assets (like in Spider-Man, BTW). In Spider-Man on PC, some areas (depending on the CPU) need from around 5 to 15 seconds to fully load all the assets (which was not missed by forum users and NXGamer).
Do you have examples of things being partially loaded on PC in Forspoken where they are fully loaded on PS5?

Because I'm pretty sure I've seen PS5 have issues loading textures in that game, but never experienced anything on my PC.
 
Sure, with partial loading of assets (like in Spider-Man, BTW).

This is not correct. Forspoken has issues with low texture mips being used on GPUs with 8 GB of VRAM or less. It has no problems at all loading assets on GPUs with sufficient VRAM, and it still loads faster on those GPUs than on the PS5 with a similar SSD.

In Spider-Man on PC, some areas (depending on the CPU) need from around 5 to 15 seconds to fully load all the assets (which was not missed by forum users and NXGamer).

This is obviously a bug and should not be interpreted as some kind of IO bottleneck. It occurs on GPUs with 24 GB of VRAM, and in some cases the textures simply never appear. We've also seen similar behavior in other games, which was later resolved through patches. That said, it's disappointing this hasn't yet been resolved for Spider-Man - assuming it actually hasn't.
 
I've been playing Super-Man Remastered since last week; I haven't seen anything out of the ordinary regarding textures not loading. The game is fabulous, btw, highly recommend it.
A good Superman game, that's good to know.

I was looking for this DLSS/XeSS/FSR2 mod every single day ever since it was announced, and now it works on my favourite Resident Evil games (RE2, RE8, among others). Didn't know where to put this 'cos Architecture & Products is read-only for the time being, but I remember Alex's video on the RE2 Remake's new patch a few months ago.

They also mention @Dictator in the thread, for another reason though: negative LOD bias. The mod also improves DMC5 and MH Rise. As for RE2 Remake, this mod will finally allow for high fps at 4K without using Interlacing (it does not look that bad in RE2 Remake, but compared to "Normal" the difference is really huge).

 
Seems like the most talented devs all end up making console games, since that is where the money is. A few rare studios do achieve good low-level results on PC, so it's certainly doable. DX12 probably could be better, but it can never be as good as GNM appears to be because it requires more abstraction. Microsoft also has a history of poorly thought-out API design (geometry shaders, tessellation, the endless stages of the vertex pipeline, etc.).
Then you could also say that there are generally fewer talented people working in the games industry, because there is far more money in other IT professions. I know a developer from a small indie studio whose skills make me expect his game to achieve 100% scaling across 32 threads with its own custom engine. Passion plays a particularly important role in games. Whether games run badly often depends on priorities, time and resources.

Of the big games, the performance of Battlefield 2042 impresses me the most: sometimes 128 players on screen at the same time (or 63 AI opponents), a lot of moving vehicles, a lot going on, and it runs flawlessly. It's rather unimpressive when a single-player game manages a stable 60 frames per second with a few enemies on screen. You need particularly good scaling when there are hundreds or thousands of characters on screen. And that's exactly what I don't see in console games.
 
Then you could also say that there are generally fewer talented people working in the games industry, because there is far more money in other IT professions. I know a developer from a small indie studio whose skills make me expect his game to achieve 100% scaling across 32 threads with its own custom engine. Passion plays a particularly important role in games. Whether games run badly often depends on priorities, time and resources.

Of the big games, the performance of Battlefield 2042 impresses me the most: sometimes 128 players on screen at the same time (or 63 AI opponents), a lot of moving vehicles, a lot going on, and it runs flawlessly. It's rather unimpressive when a single-player game manages a stable 60 frames per second with a few enemies on screen. You need particularly good scaling when there are hundreds or thousands of characters on screen. And that's exactly what I don't see in console games.
The same small pool of developers are the ones who consistently deliver high-quality products. It's not a very variable situation. We can agree to disagree on BF2042; that title is a stuttering mess with low performance, across a bunch of very low-detail maps with almost no interactivity of any kind.
 
BF2042 runs fantastically today. It was a mess at launch... A shame that the Dead Space remake isn't as good as BF2042.
 
Seems like the most talented devs all end up making console games, since that is where the money is.
No. There is no shortage of "talent" across the spectrum of developers (nor are the sets disjoint). There is often a shortage of time and differences in priorities.

Moreover, it cannot be emphasized enough how much more complicated the problem is on PC. Most of the problems that PC games suffer from exist because those problems don't exist at all on consoles, but *must* exist to some extent on PC to support a wide range of hardware. This also involves interactions between large code bases from a bunch of different companies, each with different goals of their own (which are sometimes actively opposed to a nice, consistent experience, e.g. across different hardware vendors). The fact that games and APIs on PC must be designed to be future-compatible with GPUs that can never be tested at the time the game ships/gets its last patch makes the problem an order of magnitude more complicated.

As someone who was heavily involved in the design of DX12 and Vulkan I have no issues saying that both made mistakes like every graphics API has. It's probably also fair to say the pursuit and hype around "low level console-like performance" went too far to the point that it created problems that were even harder to solve/work around than the problems in DX11 and previous APIs. With things like Nanite it's pretty clear now that the window in which we cared about doing 100,000 draw calls was somewhere between narrow and non-existent. And while the goals of ensuring that shader compilation happened at predictable points (and thus had to abstract any API state that "might possibly be compiled into shaders on any current or future GPU") remain good, it's fair to say that we underestimated the reality of how harmful this "superset of least common denominator" approach would be to the experience on one specific piece of hardware. It's much harder to fix PSO explosions after the fact with deduplication in the driver than it is to avoid it happening in the first place, but unfortunately the new APIs force us down that rougher path for the foreseeable future.
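
To make the PSO explosion concrete for non-graphics folks: nearly every piece of raster state is baked into a D3D12 pipeline object, so one toggled field means another full PSO, and potentially another shader compile at a bad time. A minimal sketch, assuming `device` exists and `baseDesc` is an otherwise filled-in pipeline description:

Code:
#include <d3d12.h>

// The same shaders with blending toggled are two distinct PSOs. Multiply
// a few booleans like this across render-target formats, sample counts and
// vertex layouts and you get the combinatorial explosion described above.
D3D12_GRAPHICS_PIPELINE_STATE_DESC opaqueDesc = baseDesc;
opaqueDesc.BlendState.RenderTarget[0].BlendEnable = FALSE;

D3D12_GRAPHICS_PIPELINE_STATE_DESC blendedDesc = baseDesc;
blendedDesc.BlendState.RenderTarget[0].BlendEnable = TRUE;

ID3D12PipelineState* opaquePSO = nullptr;
ID3D12PipelineState* blendedPSO = nullptr;
device->CreateGraphicsPipelineState(&opaqueDesc, IID_PPV_ARGS(&opaquePSO));
device->CreateGraphicsPipelineState(&blendedDesc, IID_PPV_ARGS(&blendedPSO));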

Again, improvements can be (and are being) made constantly to the whole PC stack. But pretending this is some sort of lack of effort or expertise from those involved, rather than a gigantic compatibility problem that has expanded in scope as games and hardware themselves have expanded in scope, rubs me the wrong way. A big part of the reason things were sometimes less of an issue in the past is that they were literally simpler problems then. It's still worth admitting that DX12/Vulkan did make some of these actually harder. Hindsight is 20/20, etc.
 
It's probably also fair to say the pursuit and hype around "low level console-like performance" went too far to the point that it created problems that were even harder to solve/work around than the problems in DX11 and previous APIs

It's still worth admitting that DX12/Vulkan did make some of these actually harder. Hindsight is 20/20, etc.
Thanks for the wonderful insight, and congratulations on joining the moderator team; you'll make a fine one at that, too.
 