Digital Foundry Article Technical Discussion [2023]

You're almost always going to have better results scaling down to worse hardware than scaling up to better hardware. Make the better version first, then figure out what needs to be done to scale it down to worse hardware and make it run well.

Regards,
SB
Don't really agree. It's the exact opposite, and game development has historically shown so.

Monster Hunter Rise and most cross-gen games show it's way easier to make a base game and scale up via resolution and textures than to try to crush big games onto smaller hardware configurations.

The whole problem of LOD management and creating entirely unique assets to squeeze a game onto weaker hardware remains a much bigger challenge.

Hence why getting games running on Switch is not really a priority for devs. It took far greater effort to make The Witcher 3 barely run on Switch than it took Sony, for example, to make PS4 and PS5 versions of God of War.

This is why devs historically used consoles as the primary SKU, as well as simply finding it easier to code for a single target.
 
Nanite uses mesh shaders, for what it's worth... the reality is that for features that are disruptive to content especially, it takes a lot longer before they gain enough market penetration to rely on them without maintaining an infeasible number of paths. Hell, it was already hard enough to draw a line at DX12 for Nanite/VSM, let alone for smaller features like these!

Sampler feedback in particular has always had a pretty narrow window of utility, in my opinion. It's addressing a problem that mostly doesn't exist... we've been using virtual texturing just fine for a long time without sampler feedback. Maybe in the far future, when it's supported everywhere and there's only one path, it will make more sense, but even the theoretical benefits in the best case are pretty marginal, so it just doesn't really pass the cost/benefit test. I'm not sure why anyone is expecting anything important from this feature, nor why it was ever advertised to consumers...
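
For context, whether an engine can lean on either feature at all comes down to a capability check at device creation, with a fallback path for anything older. Here is a minimal sketch of that check using D3D12's CheckFeatureSupport; the struct and function names around it are illustrative, not from any particular engine:

```cpp
#include <d3d12.h>

// Illustrative helper: decide which render paths this GPU can support.
// Assumes 'device' is a valid ID3D12Device* created elsewhere.
struct GpuFeaturePaths
{
    bool meshShaders     = false;  // needed for a mesh-shader geometry path
    bool samplerFeedback = false;  // optional accelerator for virtual texturing
};

GpuFeaturePaths QueryFeaturePaths(ID3D12Device* device)
{
    GpuFeaturePaths paths;

    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS7, &options7, sizeof(options7))))
    {
        paths.meshShaders     = options7.MeshShaderTier     >= D3D12_MESH_SHADER_TIER_1;
        paths.samplerFeedback = options7.SamplerFeedbackTier >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;
    }

    // Anything that fails the check (e.g. pre-Turing / pre-RDNA 2 GPUs) falls
    // back to the traditional vertex-shader path, which is exactly the extra
    // maintenance burden described above.
    return paths;
}
```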
Can you give us some insight into how beneficial mesh shaders are in practice? Particularly in the absence of a virtualized geometry system like Nanite.
 
Rocksteady used UE3 for Arkham City and Arkham Knight, and outside of Knight's port problems those games run much better than the latest "open world" UE4 games.
Days Gone delivered an excellent open-world setting with modern visuals like SSGI, screen-space shadows, and many other techniques. All on DX11, with no problems whatsoever: no stuttering, no VRAM problems, no traversal stutter, no CPU limitations, no GPU utilization issues, nil. It was even an early, low-priority effort from Sony, like a small trial to ship some games on PC. So yes, UE4 can absolutely deliver an excellent open-world game.
 
Beyond that issue, though, I don't think there's as much commonality in the causes.
The commonality between developers is that DX12/Vulkan is too much work.

Stardock was one of the first developers (along with DICE) to champion DX12/Vulkan with their game Ashes of the Singularity. They pushed it so hard that they seemed to have staked the future of the entire company on DX12. Then after that? Nothing. None of their games after that shipped with DX12; they made four games after Ashes, and all of them relied primarily on DX11.

Some observers have noted that Stardock's most recent two releases, Star Control: Origins and Siege of Centauri, were DirectX 11 only. What gives?

They made an official statement about this. Guess what their reasons were?

DX12/Vulkan triples your QA testing: too many things can easily go wrong, causing instant artifacts and instant crashes, and that translated directly into a hugely inflated QA budget for DX12/Vulkan. They simply ran out of budget just investigating the crashes.

The performance gain is about 20% over DirectX 11. The gain is relatively low because, well, it's Star Control. So we have to weigh the cost of doubling or tripling our QA compatibility budget with a fairly nominal performance gain. And even now, we run into driver bugs on DirectX 12 and Vulkan that result in crashes or other problems that we just don't have the budget to investigate.


Nowadays, after DXR, developers have to code two separate paths: a raster one with no ray tracing and a ray-traced one, sometimes even a path-traced one on top. They also have to support several upscaling techniques: internal solutions (UE's TSR, checkerboarding, dynamic resolution, Ubisoft's temporal upscalers, Insomniac's IGTI, etc.) and external ones (FSR2, DLSS, XeSS, DLAA, DLSS 3, or all of them combined). That's simply too much work; supporting all of that with DX12 across two separate render paths leads to a ballooning budget. Again, developers on PC simply lack the time, budget, and knowledge to do all of that. That's the reality of the situation that was overlooked when these APIs were developed, which Khronos has admitted several times in their latest blog.
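
To put rough numbers on that combinatorial growth, here is a small, purely illustrative C++ sketch; the categories listed are assumptions for the sake of the example, not any studio's actual QA plan:

```cpp
#include <cstddef>
#include <cstdio>
#include <iterator>

// Illustrative axes a PC title might have to validate independently.
constexpr const char* renderPaths[] = { "Raster", "RayTraced", "PathTraced" };
constexpr const char* upscalers[]   = { "Native/TAA", "TSR", "FSR2", "DLSS", "XeSS" };
constexpr const char* gpuVendors[]  = { "NVIDIA", "AMD", "Intel" };

int main()
{
    const std::size_t configs =
        std::size(renderPaths) * std::size(upscalers) * std::size(gpuVendors);

    // 3 render paths * 5 upscalers * 3 vendors = 45 combinations, each of
    // which can hit its own driver, VRAM, or image-quality issue.
    std::printf("Configurations to QA: %zu\n", configs);
    return 0;
}
```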

In my field this is called the disconnect between pharmacologists and clinicians: pharmacologists develop and test a drug and, after the trials, deem it safe to be made publicly available worldwide. A few years later, clinicians in the field scream out loud that the drug has too many side effects to be useful, is cumbersome to use, and is expensive compared to older drugs. Yet instead of listening to them, pharmacologists ignore all of that and insist they developed a "better" drug, leading to suffering across the board, or to clinicians abandoning the drug (like Stardock abandoned DX12). I find this an extremely fitting analogy for the current API situation.

Another Ubisoft developer advised against going into DX12 expecting performance gains, as according to them achieving performance parity with DX11 is hard and resource intensive.

"If you take the narrow view that you only care about raw performance, you probably won't be that satisfied with the amount of resources and effort it takes to even get to performance parity with DX11," explained Rodrigues.


Another veteran developer (Nixxes) states that developing for DX12 is hard and can be worth it or not depending on your point of view; the gains from DX12 can easily be masked on high-end CPUs running maxed-out settings, and you end up with nothing. They also call async compute inconsistent and too hardware-specific, which makes it cumbersome to use. They also state that the DX12 driver is still very much relevant, and with some drivers complexity goes up! VRAM management is also a pain in the ass.
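
On the VRAM point specifically: under D3D11 the driver largely manages residency for you, whereas under D3D12 the application is expected to track its own budget and stream or evict accordingly. A minimal sketch of that budget query through DXGI, assuming a valid IDXGIAdapter3 and an arbitrary illustrative threshold:

```cpp
#include <dxgi1_4.h>

// Ask the OS how much local (on-GPU) memory it currently budgets for this app.
// Under D3D12 it is the application's job to react when usage nears the budget.
bool IsNearVramBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0 /*NodeIndex*/, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
    {
        return false;  // query unavailable; caller should behave conservatively
    }

    // Arbitrary illustrative threshold: start demoting texture mips or evicting
    // heaps once usage is within roughly 10% of the OS-provided budget.
    return info.CurrentUsage > (info.Budget / 10) * 9;
}
```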

 
Drop all the old tech, and you don't need to support so many different paths.

Viewpoints from 2017 are less likely to apply in 2023. Just saying: six years later, developers have significantly more experience with it, as do the IHVs in supporting their drivers and developers.

The constant comparison of a 20-year-old high-level API to a brand-new low-level API, without allowing for proper adjustment and growth time, is unrealistic from an execution standpoint; however, I understand that marketing may have provided a different view of this.
 
Maybe it's not worth overhauling your underlying geometry pipeline while you still need vertex shader fallbacks for older hardware. Lots of Pascal GPUs and last-gen consoles out there still.
Nah, 3 years of cross-gen, plus a solid install base of current-gen consoles and PC GPUs, makes this argument invalid. Games should push the envelope now. Old hardware was carried long enough.
 
Nah, 3 years of cross-gen, plus a solid install base of current-gen consoles and PC GPUs, makes this argument invalid. Games should push the envelope now. Old hardware was carried long enough.
You are right that we need to drop old hardware, but I think realistically we are only nearing a complete transition. I still expect one more year of this, i.e., any game that started production the same year the consoles launched.
 
Nah, 3 years of cross-gen, plus a solid install base of current-gen consoles and PC GPUs, makes this argument invalid. Games should push the envelope now. Old hardware was carried long enough.

When would those games have started development? You would only abandon old hardware after you're sure it no longer represents a significant revenue source. My bet is we won't see real overhauls for another two years.
 
Another veteran developer (Nixxes) states that developing for DX12 is hard and can be worth it or not depending on your point of view; the gains from DX12 can easily be masked on high-end CPUs running maxed-out settings, and you end up with nothing.
The Nixxes complaint comes from 2017; we are six years later...
 
I've seen this back around the Turing launch. I'd love to see insights from actual usage and not speculation.

Again, we are only beginning to see non-cross-gen games, and PC players without at least a Turing or RDNA 2 GPU don't have access to mesh shaders. And AAA games take between three and a half and six years to make.
 
Don't really agree. It's the exact opposite, and game development has historically shown so. Monster Hunter Rise and most cross-gen games show it's way easier to make a base game and scale up via resolution and textures than to try to crush big games onto smaller hardware configurations.

Scaling up your game is likely to run well regardless, as long as you spend even a little bit of attention on the porting process. However, it will never ... ever ... look as good as a game that targets better hardware and is then scaled down. You will always be limited to some extent by the assets you chose for the lower-performing platform. Monster Hunter: Rise is a great example of this. They increased the things that were feasible to increase: texture detail, effects, resolution, etc. But you can't easily create model asset detail or truly high-fidelity textures (although AI is getting pretty good at the latter), for instance, without recreating the model, textures, geometry, etc. OTOH, if you started out with a highly complex model, it's far easier to scale it down to less complexity.

Scaling up, you have to figure out what you can add to the game and its rendering pipeline without breaking that pipeline.

Scaling down, you have to figure out what you can remove or reduce in detail to run on worse hardware.

The second is far easier than the first for achieving a certain level of graphical fidelity combined with good performance on higher-end hardware. The first will never approach the graphical fidelity of the second method, while the second can still attain the same performance level on the hardware the first method's game was designed around, and potentially look better if the developer is particularly good at optimizing and scaling assets, look similar, or, if the developers aren't good at scaling down, even look worse.
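
To make the asymmetry concrete: when the source asset is authored at high detail, lower-end platforms can simply clamp which LODs they are allowed to use, because the cheaper levels were already derived from the detailed source; there is no equivalent operation for inventing detail that was never authored. A minimal sketch with made-up structures and thresholds:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical asset layout: LOD 0 is the full-detail source mesh, higher
// indices are progressively decimated versions generated offline from it.
struct MeshLod   { std::size_t triangleCount; };
struct MeshAsset { std::vector<MeshLod> lods; };  // assumed non-empty

// Lower-end platforms just raise the minimum allowed LOD; the cheaper data
// already exists because it was derived from the detailed source asset.
std::size_t SelectLod(const MeshAsset& asset,
                      float screenCoverage,            // 0..1 projected size
                      std::size_t minLodForPlatform)   // e.g. 0 on PC, 2 on Switch
{
    // Illustrative heuristic: the smaller the object appears, the less detail.
    std::size_t desired = 0;
    if (screenCoverage < 0.25f) desired = 1;
    if (screenCoverage < 0.10f) desired = 2;
    if (screenCoverage < 0.02f) desired = 3;

    const std::size_t lod = desired < minLodForPlatform ? minLodForPlatform : desired;
    return lod < asset.lods.size() ? lod : asset.lods.size() - 1;
}
```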

Regards,
SB
 

Scaling up can work very well. It depends on the minimum point you scale up from: if the minimum configuration is PS5, Xbox Series, and low-end Turing or low-end RDNA 2 PC GPUs, there won't be sacrifices to be made. A game engine like UE5 can push software GI, virtual shadow maps, and ray-traced cubemaps + planar reflections on consoles, and use RTX and DLSS technology on PC. And for asset quality, it can use Nanite and virtual texturing on every hardware tier.
 