Digital Foundry Article Technical Discussion [2023]

The glory days of DICE.
As I mentioned in a different thread, BF3 is the game my best friend in real life has played the most in his entire life. He was so addicted to it. I played BF4 more myself.

On a different note, since DF is about games' technicalities, one of my favourite episodes was one where Richard humbly played some games on an A770 and locked them to 30 or 60 fps depending on the game. Given the openness of PC hardware, that makes me think: what are the best games for low-resource PCs? Or the best games that run on almost any modern PC? Some of those games are AAA, but of a different kind.
 
However, where I still think we absolutely should have console equivalents is in the individual settings themselves (and even better if they're labelled as such). Taking Spider-Man again, the object detail setting goes all the way up to 10 despite being very expensive on PC, which is how it should be regardless of the cost of an effect IMO - it's unacceptable for the PC version of a game to be unable to at least match the visual fidelity of the console version, even if it's more expensive to achieve.

EDIT: Also, I think there is a benefit to this from the developers' point of view, as there are plenty of channels out there that like to compare PC and console performance in specific games. If you make it easy for them to do that, you're more likely to get screen time from them and more likely to get people talking about the game - free publicity.
I mostly agree, but I want to float two caveats/counterpoints to this --
1- Between custom hardware features, locked-down platforms where you can carefully schedule work between multiple components, reliable async compute, etc, there's no guarantee that the "console setting" even exists as such. Some settings, especially more "high level" concepts like crowd complexity (where you're not just turning up one expensive feature, but more like 5 or 6 independent but overlapping costs - see the sketch after this post), may not actually have an "equivalent" on another platform where the same general approach is being taken -- they could have an entirely different approach, which performs with entirely different tradeoffs. This seems increasingly common with RT settings in particular. It's never just a matter of "how fast is my pc" vs "how fast is the console"; console techniques may not even map onto the "low -> medium -> high" spectrum. Which leads me to...
2- More importantly, games aren't just software tech demos released in a vacuum, they're products, where users have certain expectations about performance, intelligibility and consistency of settings compared to other games. There's no benefit to "free publicity" from tech reviewers if your game exposes weird settings -- your game will get review bombed and universally decried as a bad port.
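
To illustrate the first point, here's a minimal, invented sketch (not from any real engine) of how a single "crowd complexity" level might fan out into several independent costs, none of which maps 1:1 onto a console value:

```cpp
// Hypothetical example only: one "crowd complexity" level bundles several
// independent costs, so a console's tuned values may have no single PC
// "equivalent" on a low/medium/high axis.
struct CrowdComplexity {
    int   maxVisibleAgents;     // simulation + skinning cost
    int   meshLodBias;          // geometry/material cost
    float animUpdateHz;         // CPU animation cost
    bool  gpuClothSim;          // assumes async compute is cheap enough
    int   shadowCastingAgents;  // extra shadow-pass cost
};

// All numbers below are made up for illustration.
CrowdComplexity FromPreset(int level) {
    switch (level) {
        case 0:  return {  50, 2, 15.0f, false,   0 };
        case 1:  return { 150, 1, 30.0f, false,  20 };
        default: return { 400, 0, 60.0f, true,  100 };
    }
}
```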
 
This is precisely what I'm talking about. In almost all cases games already provide quality settings that span both above and below the consoles. But coming up with "optimal" settings always means making a tradeoff about the best use of performance across *all* of the various quality knobs... they are not really independent in practice. Thus if something is far more expensive on one platform it is likely to be turned down/disabled in favor of something that is relatively cheaper but provides a bigger visual impact.

These sorts of situations are actually fairly common, and thus while it's intellectually interesting to ask for "console settings" where it makes sense, IMO it's more important to get presets that are optimized *for PCs* on PCs. Of course this is an imperfect process in and of itself because PC configurations vary widely, but there are definitely systematic cases where something will be worse or better on all PCs just due to the nature of the available code paths on each platform.
Agreed. I'd rather PC devs offer an automated feature that lets you set a preference like resolution or frame rate, and the app then provides a settings profile based on that preference that fits your hardware. From there you can make additional tweaks if you weight particular settings differently.

Basically a performance or quality mode that's dynamically customized to your hardware. That's a console-like settings feature fitted to accommodate the PC market.
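
A minimal sketch of that idea, assuming a hypothetical BuildProfile helper and invented thresholds (this is not any shipping engine's API): the user states a target, the game measures its own cost once, and derives a starting profile from the gap.

```cpp
#include <string>

// Hypothetical sketch: derive a starting settings profile from a user
// preference plus one rough GPU timing measurement. Names and numbers
// are illustrative assumptions, not a real engine API.
struct Preference { int targetFps; int outputHeight; };
struct Profile    { std::string preset; float renderScale; };

Profile BuildProfile(const Preference& pref, float measuredGpuMsAtNative) {
    float budgetMs = 1000.0f / pref.targetFps;          // e.g. 16.6ms for 60fps
    float headroom = budgetMs / measuredGpuMsAtNative;  // >1 means spare budget

    if (headroom > 1.5f) return { "High",   1.0f };
    if (headroom > 1.0f) return { "Medium", 1.0f };
    if (headroom > 0.6f) return { "Medium", 0.75f };    // lean on upscaling
    return                      { "Low",    0.6f };
}
```

From there the user can still tweak individual settings on top of that baseline, as described above.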
 
... talking about a clear labeling scheme where the samples per pixel, etc. that the art team deems reasonable in terms of performance to hit the game's artistic vision are labeled as "console" or "original" settings. It just so happens that these "original" or "console" settings generally happen to be the visual/performance sweet spots on nearly every GPU out there
...
Why is the "console" setting of lower quality not available on PC?
I think I already covered this with pjbliverpool now, but just to reiterate: I certainly agree that *individual* graphical settings on PC should cover the range of whatever is used on the consoles, and ideally both higher and lower where appropriate. In most cases it's fine to label one of these options as the "console" or "recommended" or similar name if it makes sense too.

The subtlety I'm pointing out is that due to a variety of factors it's common for entire systems to perform differently enough between console and PC to make the "overall best choice between all those settings" different. Ex. post-processing might be cheaper on console while shadow depth rendering might be cheaper on PC, so in terms of the best bang for the buck it would generally be better to bias towards more quality post-processing on the console side, but higher resolution shadows on the PC side for instance. While it's more convenient to evaluate these settings in isolation, in the end it is a zero sum game where the options do compete for the fixed performance budget.
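
To make the zero-sum point concrete, here's a toy sketch (all costs and impact scores invented) of how the same fixed frame budget can end up allocated differently on two platforms simply because the per-setting costs differ:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical sketch: with a fixed frame budget, pick the upgrades with
// the best visual impact per millisecond. Because the costs differ per
// platform, the winning combination differs even when the individual
// settings are identical.
struct Upgrade { std::string name; float costMs; float visualImpact; };

std::vector<std::string> PickUpgrades(std::vector<Upgrade> options, float budgetMs) {
    std::sort(options.begin(), options.end(), [](const Upgrade& a, const Upgrade& b) {
        return a.visualImpact / a.costMs > b.visualImpact / b.costMs;
    });
    std::vector<std::string> chosen;
    for (const auto& u : options) {
        if (u.costMs <= budgetMs) { budgetMs -= u.costMs; chosen.push_back(u.name); }
    }
    return chosen;
}
// e.g. on console a {"BetterPostFX", 0.8ms} option might win the slot that a
// {"HigherResShadows", 0.9ms} option wins on PC, purely because the costs differ.
```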

There are of course even times when this happens within one subsystem. For instance it *tends* to be that virtual shadow map depth rendering can be made relatively faster on console, but shadow projection is relatively faster on PC. (These tradeoffs change over time as implementations change, of course; this is just a current example.) We could of course separate out two scalability settings for these different aspects of shadow rendering, but in this case the difference isn't so stark that it's worth the mental overhead and confusion. Obviously there's always a balance to be struck between something like exposing the developer console and config files and all that entails (which some games do!) and grouping things into settings buckets.

Unfortunately, much of the time the settings buckets that make conceptual sense (ex. "shadows", "effects", "GI", etc.) are not particularly well aligned with the places that are actually expensive in the engine, which leads to the very common case of settings that make almost no performance difference on a given system. Further complicating this are settings that primarily impact VRAM usage: they make almost no difference to performance until you fall off a cliff entirely, and that cliff depends not on whatever setting you last changed but on the combined set of settings. Some games try to give an estimate of the VRAM use of various combinations of settings; this is certainly nicer than nothing but still quite approximate. And of course on PC, users may be running Chrome on a second monitor and eating half your VRAM anyways ;)
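
As a rough sketch of why VRAM-heavy settings behave this way (placeholder numbers and a hypothetical helper): nothing changes until the combined footprint exceeds what's actually free.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: individual settings contribute little on their own;
// performance only collapses once the *combined* footprint exceeds the
// memory that's actually available.
struct VramCost { const char* setting; uint64_t bytes; };

bool FitsInVram(const std::vector<VramCost>& enabled, uint64_t freeVramBytes) {
    uint64_t total = 0;
    for (const auto& c : enabled) total += c.bytes;
    // Leave headroom: the OS, other apps (that Chrome tab...), and transient
    // allocations also eat into the reported free amount.
    return total < freeVramBytes * 9 / 10;
}
```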

Now as enthusiasts we might say "just expose everything and we'll figure out the best settings" but at some level the argument about labeling console settings admits the reality that the majority of users set an (overall!) preset and move on, if they even change settings at all. Thus while it's fun for us power users to tweak things and certainly nice to provide some knobs so that folks like Alex and the IHVs can come up with their own optimized recommendations, those overall preset settings and how they trade off against one another are still of primary importance I think.
 
WRT the discussion of what constitutes high vs low, I wonder why more devs don't just treat console or console-similar settings as normal/medium, with low being scaled down and high scaled up. If a given setting doesn't scale above or below the consoles, don't list an option above or below normal/medium in the menu.
 
WRT the discussion of what constitutes high vs low, I wonder why more devs don't just treat console or console-similar settings as normal/medium, with low being scaled down and high scaled up. If a given setting doesn't scale above or below the consoles, don't list an option above or below normal/medium in the menu.
Because a lot of the time PC games aren't working from a console codebase, and are instead built around what the PC can push. So console isn't always the "standard" graphical preset to begin with, let alone what PC users want to see in their menus.

There have been plenty of cases where consoles are using low, or lower-than-low, settings relative to what PCs can accomplish, and it wouldn't make any sense to label those the "normal" or "medium" setting.

As much as "console settings" in menu by default would be nice, I don't think devs are gonna try and streamline PC relative to console anytime soon. Which is where digital foundry videos come in to tell people which settings they should be using to get a good balance of performance and visual settings
 
Agreed. I'd rather PC devs offer an automated feature that lets you set a preference like resolution or frame rate, and the app then provides a settings profile based on that preference that fits your hardware. From there you can make additional tweaks if you weight particular settings differently.

Basically a performance or quality mode that's dynamically customized to your hardware. That's a console-like settings feature fitted to accommodate the PC market.
If I recall correctly, the ForzaTech engine is capable of something like this. At least in Forza Motorsport 6: Apex, the game can dynamically change settings while you're playing in order to hit the target resolution and framerate.
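
A minimal sketch of that kind of dynamic adjustment, assuming a simple feedback loop on GPU frame time (this is not ForzaTech's actual implementation):

```cpp
#include <algorithm>

// Hypothetical sketch: nudge the resolution scale each frame based on how
// far the measured GPU time is from the target budget.
class DynamicResController {
public:
    explicit DynamicResController(float targetMs) : targetMs_(targetMs) {}

    float Update(float lastGpuMs) {
        // Over budget: drop the scale quickly; under budget: creep back up slowly.
        if (lastGpuMs > targetMs_)              scale_ -= 0.02f;
        else if (lastGpuMs < targetMs_ * 0.9f)  scale_ += 0.005f;
        scale_ = std::clamp(scale_, 0.5f, 1.0f);
        return scale_;   // multiply the native resolution by this each frame
    }

private:
    float targetMs_;
    float scale_ = 1.0f;
};
```

The same idea generalizes from resolution to other settings, which is what a "dynamically customized" performance or quality mode would amount to.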
 
Battlefield 3 was graphically good at the time, but Crysis 2 was much more impressive to me.

The campaign in Battlefield 3 was bad. If I were to buy a Battlefield game just for the single player, it would only be Bad Company 1. The multiplayer in Battlefield 3 was one of the best. I also thought Battlefield V was outstanding, but there are a lot of cheaters now.
 
The glory days of DICE. I disagree that the BF4 campaign was better. Half of the entire campaign was replaying the same exact level.
Why pick BF3 though? I think BF4 was even more impressive on PS360. On PS3 that game was a technical marvel with great IQ for that gen, with MLAA and a usually quite stable framerate (without screen tearing on the Sony machine).

The PS4 version was disappointing compared to the PS3 version, with a fluctuating uncapped framerate and blurry image quality (FXAA). Fortunately, other games later showed the PS4's true graphical power, but I've never seen such a small gap between generations as with that game. I was getting worried about the PS4 generation at the time (we won't even dare compare PS3 against XB1 with that game...)
 
Why pick BF3 though? I think BF4 was even more impressive on PS360. On PS3 that game was a technical marvel with great IQ for that gen, with MLAA and a usually quite stable framerate (without screen tearing on the Sony machine).

The PS4 version was disappointing compared to the PS3 version, with a fluctuating uncapped framerate and blurry image quality (FXAA). Fortunately, other games later showed the PS4's true graphical power, but I've never seen such a small gap between generations as with that game. I was getting worried about the PS4 generation at the time (we won't even dare compare PS3 against XB1 with that game...)

I think you need to factor in other differences than just static visual quality. I believe BF4 on PS4 supported 64-player matches versus a 24-player limit on PS3, with much larger maps and better dynamic effects (such as higher levels of destruction). The player count/map size difference alone showed the massive leap in memory between the generations.

As an aside, the biggest fork-in-the-road decision with the last-gen consoles may have been the choice to go with 8GB instead of 4GB. That doubling of memory capacity, over everything else, was likely critical from an aging and cross-platform development perspective.
 

As Alex mentions, the problem is more that 1) being able to do shader compilation outside of actual gameplay at all is the primary issue for games now, not how long you have to wait, and 2) shader caches are invalidated by driver updates, game patches, and potential OS upgrades.

However, what's more interesting to me is the part where @Dictator mentions someone in his Discord proposed to Nvidia that they could supply compiled shader caches to PC gamers. The short answer is that Nvidia felt it wasn't something they could do for various reasons, but they do in fact do it for games on GeForce Now. Now, that's a very tiny fraction of compile targets versus all PC GPU and driver combinations, so obviously the scope of the problem is much smaller, but what Alex is curious about is how Nvidia is doing this outside of gameplay at all for games that don't have the option of precompiling. So they're thinking of asking Nvidia directly about this: do they brute force it by having a group of GeForce Now beta testers who all play the game on planned driver updates for the service? And if they're using their massive datacenters to do this, how are they getting access to those shaders to compile without gameplay?

That is, after all, the issue - getting access to those shaders to be compiled in the first place. We can worry about optimizing the offline process later; we just want access to them so they can be compiled without gameplay being the first exposure. So, very curious to learn how Nvidia does this.
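
As a rough illustration of why both game patches and driver updates invalidate these caches (this is a sketch, not any vendor's real cache format): a compiled-shader cache entry is typically keyed on everything that affects the generated code, so changing any one input forces a recompile.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical sketch of a compiled-shader cache key. Any change to the
// shader bytecode (game patch), pipeline state, driver version, or GPU
// produces a different key, so the cached compile no longer applies.
uint64_t CacheKey(const std::string& shaderBytecodeHash,
                  const std::string& pipelineStateDesc,
                  const std::string& driverVersion,
                  const std::string& gpuDeviceId) {
    std::hash<std::string> h;
    uint64_t key = h(shaderBytecodeHash);
    key ^= h(pipelineStateDesc) * 31;
    key ^= h(driverVersion)     * 131;
    key ^= h(gpuDeviceId)       * 1031;
    return key;
}
```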
 
That is, after all, the issue - getting access to those shaders to be compiled in the first place. We can worry about optimizing the offline process later; we just want access to them so they can be compiled without gameplay being the first exposure. So, very curious to learn how Nvidia does this.
This is a really odd problem. Can't they automatically index them in a table when developing the game using some tool?
 
So they're thinking of asking Nvidia directly about this: do they brute force it by having a group of GeForce Now beta testers who all play the game on planned driver updates for the service? And if they're using their massive datacenters to do this, how are they getting access to those shaders to compile without gameplay?
Only *game* updates matter for the IHV shader caches, not *driver* updates, since the IHVs are the ones controlling driver-side invalidation in the first place. Game updates can invalidate shader hashes (if those shaders are touched), but that's not always the case.

I would be surprised if the IHVs didn't harvest the hashes of shaders encountered both in their own benchmarking/test runs and potentially in the wild (you agree to a lot of stuff with GFE in particular...), which is the majority of the data they need. Testing of titles/updates before they go live on GeForce Now likely covers most of the remaining cases.

Also worth noting: you don't necessarily need *perfect* coverage of every possible shader here to get the vast majority of the benefit. It's OK if a couple of rare cases slip through on a game update or similar.
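
A small sketch of that coverage argument, with purely illustrative names: seed the cache from whatever hashes were harvested ahead of time, and let the rare straggler compile at runtime.

```cpp
#include <string>
#include <unordered_set>

// Hypothetical sketch: precompiled coverage doesn't have to be perfect.
// Hits avoid the runtime hitch; the occasional miss after a game update
// just compiles on demand like it would have anyway.
class ShaderCache {
public:
    void Seed(const std::unordered_set<std::string>& harvestedHashes) {
        compiled_ = harvestedHashes;              // pretend these were prebuilt offline
    }
    bool EnsureCompiled(const std::string& hash) {
        if (compiled_.count(hash)) return true;   // hit: no stall
        compiled_.insert(hash);                   // miss: compile now (stubbed here)
        return false;                             // caller could log the stall
    }
private:
    std::unordered_set<std::string> compiled_;
};
```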
 
This is a really odd problem. Can't they automatically index them in a table when developing the game using some tool?
Depending on the engine, the full permutation space is infeasibly large. This is particularly true for engines that give artists some direct input into the permutation space (via material graphs or similar). Engines that only let artists tweak parameters and textures, and not the shader code itself, have a much simpler problem and generally a much smaller permutation space, but they are also more limited in terms of what artists can do with them.

Unfortunately it's not always easy to know which permutations are even possible to produce in gameplay, let alone which ones are common. Certainly in Unreal Engine that *is* the difficult part of the problem, and it still has to involve some amount of gathering dynamic hashes from real gameplay or traces, even if that's primarily on the developers' side.
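
A back-of-the-envelope sketch of why the full permutation space blows up (the per-feature counts below are invented): the total is the product of each independent switch's states.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical sketch: a handful of artist-facing feature switches already
// multiplies out to thousands of variants per base material.
int main() {
    std::vector<uint64_t> statesPerFeature = { 2, 2, 2, 3, 4, 2, 2, 3, 2, 2 };
    uint64_t permutations = 1;
    for (uint64_t s : statesPerFeature) permutations *= s;
    std::printf("%llu permutations per base material\n",
                static_cast<unsigned long long>(permutations));
    // Multiply again by material graphs, vertex factories, platforms and
    // quality levels, and only a small, hard-to-predict subset is ever used.
    return 0;
}
```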
 