Digital Foundry Article Technical Discussion [2023]

Then the PC port is developed from there and designed for the corresponding PC configuration to get the best you can from that. But for a dev making a simultaneous release, which configuration best represents the game you want to make? Console? A high-end PC? Which PC configuration?
XSX and PS5 would still be the default configuration.
The player can then tweak up and down from there, but on PC the general view would be that it starts at higher settings.

Regardless, the point in the video was more that the same settings should be available even if they aren't labelled original/default.
 
No one is actually advocating for console settings to the exclusion of settings that can go lower or higher, so I'm not sure what you're getting at with this. This is simply asking developers to make their console-equivalent settings easily identifiable and actually offered. For example, in Call of Duty: Black Ops, as DF pointed out, the PC version runs certain effects at a higher precision than the console versions regardless of the settings chosen, so that potentially better-performing option simply isn't available.

Why? Console settings are optimized for a particular set of hardware. They are irrelevant for the majority of PC setups on the market. Plus, devs are going to optimize for a console generation until the majority of the market moves on, and they aren't going to keep optimizing for today's equivalent PC hardware at the same level as the generation gets older. So current console settings may work for the 2060/2070, but there is a good chance that 2024/2025 console settings won't be optimal for those cards.

We and reviewers just want it mainly for academic and content reasons.
 
Why? Console settings are optimized for a particular set of hardware. They are irrelevant for the majority of PC setups on the market. Plus, devs are going to optimize for a console generation until the majority of the market moves on, and they aren't going to keep optimizing for today's equivalent PC hardware at the same level as the generation gets older. So current console settings may work for the 2060/2070, but there is a good chance that 2024/2025 console settings won't be optimal for those cards.

We and reviewers just want it mainly for academic and content reasons.
Loosely speaking, the consoles' particular hardware setup is likely to be the most common configuration if you include consoles and PCs as a whole in the Venn diagram. Having a console-specific setting makes a lot of sense to me. When a PC user can't be bothered to work out whether the game or their hardware is the issue, console-level settings will solve that.

Why? Because you should know whether you have more powerful hardware than the consoles, and if you do, then you should technically be covered until the generation is over.
 
Consoles provide a good and stable baseline for developers and the industry at large. Their place in the market, I think, can't be overstated or dismissed.

With this in mind, giving PC users the option of console-like settings makes perfect sense: PCs that are stronger than the consoles get a decent baseline experience without any settings fiddling.

The issue with this idea, of course, is that it's overly optimistic to expect anything like industry-wide adoption when, at present, PC ports can't even get the basics right from port to port and make the most of the hardware in the PC, let alone give users the ability to optimize for console configurations, which themselves may vary depending on the graphical settings of the console platform in question.

It's a great idea, but devs and publishers aren't in a position to deliver it, I'd say.
 
All my love to Alex for this video and the corresponding article. Alex, you are the greatest. Someone is singing clear! Sing clear, baby! Sing clear!

Some of the requests are surprising and super well thought out, others are expected and necessary, and the last one is something that should be a standard too. I didn't expect that.

I remember a guy with a 3060 Ti who used XeSS, and I asked him on his CoD MW2 YouTube video why he was using XeSS when DLSS is better overall. He told me that he preferred the look of XeSS; for whatever reason, he didn't suffer from migraines when using it. So if an option is open source, why not use it in your game?
 
One of the biggest issues for PC gaming that should be very easy to solve is the inclusion of all the vendor smart upscalers in every game. It must be so annoying to have to weigh vendor, make and model just to know which one-off feature may or may not be included.

I would think Nvidia, AMD and Intel all have an incentive to get their technologies into as many games as possible, and as companies they should have more than enough money to pay developers, or whoever, to include options for that stuff.
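
To make that concrete, here's a minimal sketch of what a common engine-side upscaler abstraction could look like, so a game can expose every backend the installed hardware supports from one menu. Everything here is hypothetical and illustrative; none of these types or functions are the actual DLSS/FSR2/XeSS SDK interfaces.

```cpp
// Illustrative sketch of a hypothetical engine-side upscaler abstraction
// (not the real DLSS/FSR2/XeSS SDK interfaces).
#include <iostream>
#include <string>
#include <vector>

enum class UpscalerBackend { Bilinear, FSR2, DLSS, XeSS };

struct UpscaleInputs {
    int renderWidth, renderHeight;   // internal resolution
    int outputWidth, outputHeight;   // display resolution
    // A real engine would also pass color, depth, motion vectors and jitter here.
};

class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual std::string name() const = 0;
    virtual void evaluate(const UpscaleInputs& in) = 0;
};

// Trivial fallback so every GPU, including future ones, always has an option.
class BilinearUpscaler : public IUpscaler {
public:
    std::string name() const override { return "Bilinear"; }
    void evaluate(const UpscaleInputs& in) override {
        std::cout << name() << ": " << in.renderWidth << "x" << in.renderHeight
                  << " -> " << in.outputWidth << "x" << in.outputHeight << "\n";
    }
};

// The point of the abstraction: probe which backends this machine supports and
// expose all of them in the menu, instead of compiling in one vendor's option.
std::vector<UpscalerBackend> enumerateSupportedBackends() {
    // Placeholder: a real implementation would query the driver / vendor SDKs.
    return { UpscalerBackend::Bilinear, UpscalerBackend::FSR2 };
}

int main() {
    std::cout << enumerateSupportedBackends().size()
              << " upscaler backend(s) available on this machine\n";
    BilinearUpscaler fallback;
    fallback.evaluate({ 2227, 1253, 3840, 2160 });  // ~58% of 4K -> 4K output
}
```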
 
Why? Console settings are optimized for a particular set of hardware. They are irrelevant for the majority of PC setups on the market. Plus, devs are going to optimize for a console generation until the majority of the market moves on, and they aren't going to keep optimizing for today's equivalent PC hardware at the same level as the generation gets older. So current console settings may work for the 2060/2070, but there is a good chance that 2024/2025 console settings won't be optimal for those cards.

We and reviewers just want it mainly for academic and content reasons.

I'd say because the console settings often provide the biggest bang for buck in terms of the actual core graphics, and with a more powerful PC you can easily scale up the resolution and frame rate to your preference from there.

Obviously having higher graphics settings is a must too, but for people with weaker hardware, or people that prefer resolution and/or framerate to maxed core graphics, the console settings can be a great compromise.
 
I just want console settings, that's about it: the best bang-for-buck settings that devs have specifically tailored based on how they want their games to look.

God of War on ultra simply changes how the game looks (in a different way, not in a better or worse way). I'd prefer the vanilla, dev-intended original settings, save for some settings here and there that don't change the artistic direction.
 
I just want console settings, that's about it: the best bang-for-buck settings that devs have specifically tailored based on how they want their games to look.

God of War on ultra simply changes how the game looks (in a different way, not in a better or worse way). I'd prefer the vanilla, dev-intended original settings, save for some settings here and there that don't change the artistic direction.

I don't agree that console settings are how the developer intended the game to look. They are a compromise for the developer just as much as they are for the user. What they do offer is the optimal position on the visual quality vs. performance curve based on the developer's vision. With more power, though, I think there's little doubt the devs would have chosen better LODs, higher draw distance, higher-res textures, etc.

Sometimes (very rarely, IMO) some higher settings can look a bit worse than lower settings, but the vast majority of the time they look better, even if not by enough to justify the performance hit on performance-constrained hardware.

For my part, I would like to see settings scale much higher than the consoles, both to serve current and next-generation top-end GPUs. Devs just have to be very careful to label these settings in such a way that they don't get people's backs up. "Psycho" was a good way of doing that in Cyberpunk.
 
Up front I must admit I've hesitated to comment here because of the potential quagmire, but if there's any forum I trust to navigate that carefully, I'll give it a shot here. Also please note the usual disclaimer that I'm speaking primarily as a lifelong PC gamer but providing a bit of perspective from a few sides of the equation.

I won't touch most of these points as they are good, fairly obvious and uncontentious stuff. Different priorities for different folks (ex. some of these matter a lot if you are a tech reviewer but aren't exactly a big deal if you are going to set the settings once or twice and then play through the whole game), but for the most part when these don't happen it's for a lack of time/resources. In the odd case there's some design reason why certain options aren't provided or are limited (ex. FOV/aspect ratio in competitive games). Some of them can be deceptively complicated or far more tricky in some games than others.

There are a few I do want to comment slightly further on specifically though if I may.

5) I'm gonna take a slightly contentious stance here, but one based on interactions with core Windows OS/WDDM stuff while I was at Intel. Games should not have "mode changes" at all. This is a job for the operating system, not a game. "Exclusive fullscreen" has been a lie since Windows 10 and built on a bunch of hackery that tends to produce finicky and unexpected behavior. "Display output" stuff belongs in the operating system and no user space application should be able to mess with it. If you want to change output resolution or display refresh rate when running a game, that should be a feature of the OS, not each and every game. That said, I would also argue that changing the actual display output should not really be necessary in modern games, and is mostly a legacy thing. Smart (or even dumb...) upscaling and variable refresh can handle all the same cases without requiring the monitor to do some heavy-weight mode switch or making anything incompatible with alt-tab, non-hacky overlays and so on.

12) I don't disagree with the ask and in some cases it makes sense, but I do want to say that's not universally true for two reasons. First, in some games the specific way that things are done just doesn't really apply in the same way on PCs. Decisions on the console settings aren't just in terms of things like tradeoffs for a given performance budget, but also things like "this path is cheaper/more expensive on console because of this detail that isn't possible on PC." Async compute is a common case where things can be scheduled pretty carefully to overlap specific passes on console, whereas that isn't really possible on PC. Raytracing, mesh shaders and other "newish" things also have different paths and quirks on the different platforms that can have a major impact on tradeoffs. Thus even in cases where it's possible to have "settings with similar output" on PC as console, it sometimes doesn't make sense. I know the argument here will be "sure, but more settings the better and we can just tweak it further ourselves!", but sometimes certain concepts or tradeoffs just don't apply in the same way on PC, and other ones don't apply on console. Thus I agree it's a nice thing where possible to have but I don't think we should ding games for not having some console setting *specifically*. We should evaluate the games based on how well they cover the scalability spectrum with the settings they do provide, as holding them to some arbitrary standard about "comparable console settings" starts to feel a bit more like we're primarily concerned about PC/console comparisons and fanboying than the experience on PC itself.

13) On one level, sure. If you're going to provide one it's great to provide all. Certainly there should be *a* modern AA/upsampling solution available on all GPUs, even future ones that don't yet exist. That said, the amount of resources the three companies throw at implementing these into games is vastly different as you might imagine, and some of them have been around longer than others, so it's not really surprising to see the results. Still good aspirational point of course given the constraints. THAT SAID, I have a related rant here so buckle up :D

These vendor specific libraries should be AT MOST considered to be a stop gap "solution" until standard, cross-vendor implementations are possible to whatever level specific games require. Don't be fooled by the IHVs here, this is absolutely no different than GameWorks, tessellation hackery, MSAA hackery and other vendor-specific lock-in in the past. These algorithms are not some "separate little post effect", they are key parts of the rendering engine with tendrils and constraints that heavily impact everything. It is neither sustainable nor desirable for large backend chunks of game renderers to delegate over to an IHV-specific graphics driver for reconstruction.

Obviously FSR2 gets a bit of a pass here because it is effectively example code, not driver magic. For that reason though, we shouldn't require game developers to implement "specifically FSR2". If they want to use that as a starting point where it makes sense, great. If they implement their own thing or use TSR or whatever, that's great too. Let's compare on quality and performance, not on buzzwords.

Whether we really ultimately need ML for this is - IMO - hugely up for debate. But even if you strongly believe that we do, then we need to get to a solution where you can implement something similar to DLSS/XeSS in standards-compliant ML APIs and it produces identical pixels across the different implementations. Everyone in the industry should be pushing the IHVs to do this and not just accepting their arguments about why they can't/don't want to. If they refuse to, we absolutely should refuse to use their closed-source implementations in the long run and developers should not be criticized for taking a hard line on this.

Since DF often makes good points about retro games and wanting to be able to preserve experiences into the future I will make one final point on that front: stuff like DLSS and XeSS is absolutely the sort of shit that will break and not work 10-20 years from now, potentially leaving us with games that are missing AA/upsampling entirely in the future. It's not reasonable or desirable for games to delegate such a core part of the renderer to external software.

Enough ranting, but had to get that out :)
 
I'd say because the console settings often provide the biggest bang for buck in terms of the actual core graphics, and with a more powerful PC you can easily scale up the resolution and frame rate to your preference from there.

Indeed, the very nature of the fixed platform means that the 'console settings' are often some of the most optimized settings: it's where the developer has put the most time into finding a balance between performance and visual impact. That is often not the case with Ultra settings on PC (as has been covered extensively by DF and other outlets), where so often it's just ratcheting up the precision of effects that, while they can certainly have a benefit, usually bring diminishing returns relative to the GPU resources they require.

There are some outliers of course that highlight the differences in architecture, such as anisotropic filtering having a much higher resource cost on consoles than on PCs, and naturally PC users also want support for more advanced reconstruction methods as standard.

But as a simple starting base for your settings, having them classified according to what the developer focused on for what is often the lead platform is helpful, especially in an era where the price/performance curve of PC GPUs is not advancing at anywhere near the pace it did in years past. It's different from previous generations, when even early on an 'affordable' GPU would be an absolutely monumental leap over what the consoles were offering; that's not really the case now. The console optimizations will still apply to me if I'm spending $600 CAD on a GPU.

This is a different argument, though, from wanting 'just' console settings, which is ridiculous IMO. I'm glad there are settings in games that I can't fully take advantage of until years from now; I replay games frequently, and that's one of the benefits of PC gaming: just load up your old game with new hardware, no hoping or paying for developer patches. This is just a convenience thing.
 
5) I'm gonna take a slightly contentious stance here, but one based on interactions with core Windows OS/WDDM stuff while I was at Intel. Games should not have "mode changes" at all. This is a job for the operating system, not a game. "Exclusive fullscreen" has been a lie since Windows 10 and built on a bunch of hackery that tends to produce finicky and unexpected behavior. "Display output" stuff belongs in the operating system and no user space application should be able to mess with it. If you want to change output resolution or display refresh rate when running a game, that should be a feature of the OS, not each and every game. That said, I would also argue that changing the actual display output should not really be necessary in modern games, and is mostly a legacy thing. Smart (or even dumb...) upscaling and variable refresh can handle all the same cases without requiring the monitor to do some heavy-weight mode switch or making anything incompatible with alt-tab, non-hacky overlays and so on.

This is an interesting one, and my initial reaction was "what the hell, that's crazy talk"... but thinking about it a little more, I largely agree. With modern panels you don't actually want to run outside of your native resolution anyway, so it's preferable for the game to output at that by default, provided of course that it can vary the internal rendering resolution. And that's the big condition. The game MUST have some way of scaling internal resolution, and frankly DLSS/FSR2/XeSS aren't enough, because they don't offer sufficient flexibility for all cases if you're taking the resolution option away. You need another scaler that goes up in smaller increments (a maximum of 10% IMO) and that also goes below 50%, where many of them stop. This, combined with the smart upscalers, should make forcing the game to output at the monitor's native resolution (and aspect ratio) the best solution. And of course you can always change resolution on the Windows desktop if you need to; it's a PITA, but few people would ever need or want to. Variable aspect ratio (at native output resolution) would be icing on the cake too.
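
For illustration, here's a minimal sketch of that kind of fine-grained internal-resolution scaler: output always stays at the display's native resolution, and only the render-target size changes, in small steps that go well below 50% and above 100%. The function name, step size and limits are my assumptions, not values from any shipping game.

```cpp
// Minimal sketch of an internal-resolution scaler: the swapchain stays at the
// display's native resolution and only the render-target size varies.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Resolution { int width; int height; };

// Quantize the user's scale slider into 5% steps so the menu exposes far more
// increments than the handful of fixed DLSS/FSR2/XeSS quality modes.
Resolution internalResolution(Resolution native, double scalePercent) {
    const double step = 5.0;                                  // 5% increments (assumption)
    double clamped = std::clamp(scalePercent, 25.0, 200.0);   // well below 50%, above 100%
    double snapped = step * std::round(clamped / step);
    double s = snapped / 100.0;
    // Round to even dimensions to keep half-res passes and video capture happy.
    auto even = [](double v) { return std::max(2, static_cast<int>(v / 2.0 + 0.5) * 2); };
    return { even(native.width * s), even(native.height * s) };
}

int main() {
    Resolution native{ 3840, 2160 };
    const double tests[] = { 40.0, 67.0, 100.0, 150.0 };
    for (double pct : tests) {
        Resolution r = internalResolution(native, pct);
        std::printf("%5.0f%% -> %dx%d (output stays %dx%d)\n",
                    pct, r.width, r.height, native.width, native.height);
    }
}
```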

The refresh rate one I'm not too sure about. Obviously not everyone has a VRR display, so we certainly need some way to lock the game's frame rate to something well below the monitor's maximum refresh, which is usually too high for most gamers to lock to (although I guess that's covered by point 8). Games absolutely should have a properly paced frame rate limiter though; it's a must-have for VRR, IMO.
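
As a rough illustration of what "properly paced" means, here's a minimal sketch of a hybrid sleep-plus-spin frame limiter. The target cap and sleep margin are assumptions, and a real limiter would also coordinate with the present call rather than just the CPU loop.

```cpp
// Minimal sketch of a paced frame limiter for VRR: sleep away most of the frame
// budget, then spin for the remainder to hit the target frame time precisely.
#include <chrono>
#include <cstdio>
#include <thread>

using clk = std::chrono::steady_clock;

void limitFrame(clk::time_point frameStart, std::chrono::nanoseconds target) {
    // Leave ~1 ms to the coarse sleep, since OS sleeps tend to overshoot.
    auto wake = frameStart + target - std::chrono::milliseconds(1);
    if (clk::now() < wake) std::this_thread::sleep_until(wake);
    while (clk::now() < frameStart + target) { /* spin for sub-millisecond precision */ }
}

int main() {
    // Example cap just under 120 Hz so a VRR display never hits its ceiling.
    const auto target = std::chrono::nanoseconds(1'000'000'000 / 117);
    for (int frame = 0; frame < 5; ++frame) {
        auto start = clk::now();
        // ... simulate / render the frame here ...
        limitFrame(start, target);
        auto ms = std::chrono::duration<double, std::milli>(clk::now() - start).count();
        std::printf("frame %d took %.2f ms\n", frame, ms);
    }
}
```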

12) I don't disagree with the ask and in some cases it makes sense, but I do want to say that's not universally true for two reasons. First, in some games the specific way that things are done just doesn't really apply in the same way on PCs. Decisions on the console settings aren't just in terms of things like tradeoffs for a given performance budget, but also things like "this path is cheaper/more expensive on console because of this detail that isn't possible on PC." Async compute is a common case where things can be scheduled pretty carefully to overlap specific passes on console, whereas that isn't really possible on PC. Raytracing, mesh shaders and other "newish" things also have different paths and quirks on the different platforms that can have a major impact on tradeoffs. Thus even in cases where it's possible to have "settings with similar output" on PC as console, it sometimes doesn't make sense. I know the argument here will be "sure, but more settings the better and we can just tweak it further ourselves!", but sometimes certain concepts or tradeoffs just don't apply in the same way on PC, and other ones don't apply on console. Thus I agree it's a nice thing where possible to have but I don't think we should ding games for not having some console setting *specifically*. We should evaluate the games based on how well they cover the scalability spectrum with the settings they do provide, as holding them to some arbitrary standard about "comparable console settings" starts to feel a bit more like we're primarily concerned about PC/console comparisons and fanboying than the experience on PC itself.

Do you have some real-world examples of this? As you say, where it's possible it's a very nice-to-have, but I'm struggling to think of scenarios where technical limitations would make a specific quality setting of a particular effect impractical. This is obviously in the context of lowering settings on the PC version to match the console settings, so the scenario would need to be something along the lines of lowering the quality of a setting resulting in actually lower performance on the PC, or at least no appreciable increase.
 
5) I'm gonna take a slightly contentious stance here, but one based on interactions with core Windows OS/WDDM stuff while I was at Intel. Games should not have "mode changes" at all. This is a job for the operating system, not a game. "Exclusive fullscreen" has been a lie since Windows 10 and built on a bunch of hackery that tends to produce finicky and unexpected behavior. "Display output" stuff belongs in the operating system and no user space application should be able to mess with it. If you want to change output resolution or display refresh rate when running a game, that should be a feature of the OS, not each and every game. That said, I would also argue that changing the actual display output should not really be necessary in modern games, and is mostly a legacy thing. Smart (or even dumb...) upscaling and variable refresh can handle all the same cases without requiring the monitor to do some heavy-weight mode switch or making anything incompatible with alt-tab, non-hacky overlays and so on.

As you said earlier, it's often a question of time/resource budget. So I guess the question is what is a more reasonable expectation from developers?

Of course if I had my druthers all games would not have any animation stuttering issues from variable frame rates, there would be a high quality up/downscaling built into the settings, HDR could be enabled without forcing it through the desktop first, etc.

But the fact of the matter is that the majority of these games are being developed on platforms where the attached display is still going to be a fixed-refresh display, and due to this (and the fact that they're just fixed platforms in general) they can have issues on PCs with varying refresh rates (Deathloop, Sackboy) or be unable to take advantage of vendor-specific features that depend on a 'proper' fullscreen (such as DLSSDSR).

So yes, all of that being handled by the game is the preferred way, sure. I like the optimizations the 'fullscreen' mode has gone through in Win10/11 to reduce a lot of the alt-tabbing friction there used to be, and I think all games should start up first and foremost in borderless window mode.

But what is the more realistic ask? Is it more work for a developer to just have a separate refresh rate option and not actually change resolutions with every settings slider shift (which is really just a basic UX issue), or expect a porting team under a tight deadline to add a high-quality downscaler and fix the rendering engine so that the end user never has to bother with changing refresh rates in the first place?
 
... or be unable to take advantage of vendor-specific features that depend on a 'proper' fullscreen (such as DLSSDSR).
So to note up front, there is nothing "proper", or "exclusive" about how any of this actually works under the hood. It's a pile of hacks that breaks often, same with Gsync. This falls under the same bucket as some of the other stuff I mentioned... as a temporary workaround it's fine (analogous to other driver control panel overrides for stuff like AA), but we as consumers should not be treating this as a good long term solution. Display management is firmly in the realm of the OS (not user-mode applications) and we should not be ok with IHVs doing "half" the work. These features need to be supported via the core OS display stack and while proof of concepts are useful we as consumers should really be pushing Microsoft for first-class support of things that we think will be useful in the long run.

We got past the previous bullshit with "Optimus" and other iGPU/dGPU hackery with Windows 10. We almost have usable HDR on Windows... We absolutely can do the same for some of these display features given sufficient pressure.

But what is the more realistic ask? Is it more work for a developer to just have a separate refresh rate option and not actually change resolutions with every settings slider shift (which is really just a basic UX issue), or expect a porting team under a tight deadline to add a high-quality downscaler and fix the rendering engine so that the end user never has to bother with changing refresh rates in the first place?
If we fix this *once* in the OS, we no longer have to rely on porting teams to do either!
 
Do you have some real-world examples of this? As you say, where it's possible it's a very nice-to-have, but I'm struggling to think of scenarios where technical limitations would make a specific quality setting of a particular effect impractical. This is obviously in the context of lowering settings on the PC version to match the console settings, so the scenario would need to be something along the lines of lowering the quality of a setting resulting in actually lower performance on the PC, or at least no appreciable increase.
It's not necessarily about "completely impractical", it's more that the optimal tradeoff will be different. If settings A and B in different subsystems have a similar cost on one platform but B costs 2x as much as A on another, that will often affect the choice of best settings for a given performance target. Async compute is the one example I gave where the performance overhead of certain things can be fairly different on PC (where they cannot be reliably overlapped across arbitrary SKUs since the tradeoffs are so different) vs console (where a simple static scheduling is often sufficient). There are other cases where shaders can be tuned using platform-specific features or assumptions that can affect the performance by a fairly significant amount. Even just tuning shader code gen to fit into various occupancy buckets is fairly common on console, but not really possible on PC. As a final example, being able to bake more things in advance on console (whether it be shaders or stuff like raytracing BVHs) can definitely affect the tradeoffs as well.

So just to reiterate the higher level point, in a lot of cases the console settings match up roughly with the PC settings, but in other cases even if a close visual match is possible it isn't necessarily the best tradeoff for a given PC performance level. I just think it's important that we focus on the latter as the target (good scalability settings that span the range of quality/performance well on PC) rather than arbitrarily making assumptions about how whatever settings various consoles use might or might not be a good tradeoff on PC. Of course in cases where it makes sense - like God of War where the game was explicitly designed with a fixed platform in mind and only later was ported - might as well include it, but if a game is developed simultaneously for multiple platforms from the start it doesn't necessarily apply in the same way.
 
Up front I must admit I've hesitated to comment here because of the potential quagmire, but if there's any forum I trust to navigate that carefully, I'll give it a shot here. Also please note the usual disclaimer that I'm speaking primarily as a lifelong PC gamer but providing a bit of perspective from a few sides of the equation.

I won't touch most of these points as they are good, fairly obvious and uncontentious stuff. Different priorities for different folks (ex. some of these matter a lot if you are a tech reviewer but aren't exactly a big deal if you are going to set the settings once or twice and then play through the whole game), but for the most part when these don't happen it's for a lack of time/resources. In the odd case there's some design reason why certain options aren't provided or are limited (ex. FOV/aspect ratio in competitive games). Some of them can be deceptively complicated or far more tricky in some games than others.

There are a few I do want to comment slightly further on specifically though if I may.

5) I'm gonna take a slightly contentious stance here, but one based on interactions with core Windows OS/WDDM stuff while I was at Intel. Games should not have "mode changes" at all. This is a job for the operating system, not a game. "Exclusive fullscreen" has been a lie since Windows 10 and built on a bunch of hackery that tends to produce finicky and unexpected behavior. "Display output" stuff belongs in the operating system and no user space application should be able to mess with it. If you want to change output resolution or display refresh rate when running a game, that should be a feature of the OS, not each and every game. That said, I would also argue that changing the actual display output should not really be necessary in modern games, and is mostly a legacy thing. Smart (or even dumb...) upscaling and variable refresh can handle all the same cases without requiring the monitor to do some heavy-weight mode switch or making anything incompatible with alt-tab, non-hacky overlays and so on.

12) I don't disagree with the ask and in some cases it makes sense, but I do want to say that's not universally true for two reasons. First, in some games the specific way that things are done just doesn't really apply in the same way on PCs. Decisions on the console settings aren't just in terms of things like tradeoffs for a given performance budget, but also things like "this path is cheaper/more expensive on console because of this detail that isn't possible on PC." Async compute is a common case where things can be scheduled pretty carefully to overlap specific passes on console, whereas that isn't really possible on PC. Raytracing, mesh shaders and other "newish" things also have different paths and quirks on the different platforms that can have a major impact on tradeoffs. Thus even in cases where it's possible to have "settings with similar output" on PC as console, it sometimes doesn't make sense. I know the argument here will be "sure, but more settings the better and we can just tweak it further ourselves!", but sometimes certain concepts or tradeoffs just don't apply in the same way on PC, and other ones don't apply on console. Thus I agree it's a nice thing where possible to have but I don't think we should ding games for not having some console setting *specifically*. We should evaluate the games based on how well they cover the scalability spectrum with the settings they do provide, as holding them to some arbitrary standard about "comparable console settings" starts to feel a bit more like we're primarily concerned about PC/console comparisons and fanboying than the experience on PC itself.

13) On one level, sure. If you're going to provide one it's great to provide all. Certainly there should be *a* modern AA/upsampling solution available on all GPUs, even future ones that don't yet exist. That said, the amount of resources the three companies throw at implementing these into games is vastly different as you might imagine, and some of them have been around longer than others, so it's not really surprising to see the results. Still good aspirational point of course given the constraints. THAT SAID, I have a related rant here so buckle up :D

These vendor specific libraries should be AT MOST considered to be a stop gap "solution" until standard, cross-vendor implementations are possible to whatever level specific games require. Don't be fooled by the IHVs here, this is absolutely no different than GameWorks, tessellation hackery, MSAA hackery and other vendor-specific lock-in in the past. These algorithms are not some "separate little post effect", they are key parts of the rendering engine with tendrils and constraints that heavily impact everything. It is neither sustainable nor desirable for large backend chunks of game renderers to delegate over to an IHV-specific graphics driver for reconstruction.

Obviously FSR2 gets a bit of a pass here because it is effectively example code, not driver magic. For that reason though, we shouldn't require game developers to implement "specifically FSR2". If they want to use that as a starting point where it makes sense, great. If they implement their own thing or use TSR or whatever, that's great too. Let's compare on quality and performance, not on buzzwords.

Whether we really ultimately need ML for this is - IMO - hugely up for debate. But even if you strongly believe that we do, then we need to get to a solution where you can implement something similar to DLSS/XeSS in standards-compliant ML APIs and it produces identical pixels across the different implementations. Everyone in the industry should be pushing the IHVs to do this and not just accepting their arguments about why they can't/don't want to. If they refuse to, we absolutely should refuse to use their closed-source implementations in the long run and developers should not be criticized for taking a hard line on this.

Since DF often makes good points about retro games and wanting to be able to preserve experiences into the future I will make one final point on that front: stuff like DLSS and XeSS is absolutely the sort of shit that will break and not work 10-20 years from now, potentially leaving us with games that are missing AA/upsampling entirely in the future. It's not reasonable or desirable for games to delegate such a core part of the renderer to external software.

Enough ranting, but had to get that out :)
Some very interesting points there. Recently, Raja Koduri commented that PC gaming might need a reset.

Microsoft could do more. Many, many people use Windows for gaming. They could just create a standard, a "Windows gaming seal of quality" of sorts, and require developers to include a graphics preset that plays the game according to that label, so people who don't like the hassle would just use that.

Say a "Certified Windows gaming device 2023" would be a Windows gaming machine which should comply with a certain number of CUs, VRAM, RoPS, etc, in the GPU side, a certain amount of cores and a given frequency in the CPU side, certain amount of RAM, full modern APIs compatibility, and run a series of tests to determine it is up to the task to run any game with that graphics setting enabled.
 
"Display output" stuff belongs in the operating system and no user space application should be able to mess with it. If you want to change output resolution or display refresh rate when running a game, that should be a feature of the OS, not each and every game.

How would that work in practice? As an end user I may want different games to render at different resolutions, e.g. in older games I may want 4x DSR. Also, how would it correlate with render resolution, given that more powerful hardware can aim higher? It seems this would require universal dynamic resolution and universal dynamic upscaling at the OS and API level, adopted by all games.
 
While exact console settings would make the lives of people like Alex and other tech reviewers much easier, Andrew raised some good points about why that's not always practical or possible once the differences in architectures are taken into account. Even still, if developers could put in the work to offer an "optimized" preset which in general produces the best performance/visual return based on internal knowledge of how the engine works, that would be great. Anyway, in general the more granularity for settings the better; it allows for a more user-definable experience, which is what PC gamers appreciate.

Borderless fullscreen is essential. Exclusive fullscreen can go in the garbage where it belongs. It's 2023; I don't want to alt-tab and have the entire thing stop for a few seconds while it attempts to switch to some other thing, and then ends up not working half the time and switching back. The game should run borderless at my monitor's native resolution, and it should have scaling options with a large percentage range both above and below that. I also want, and call for, more ultrawide support.
 
It's 2023; I don't want to alt-tab and have the entire thing stop for a few seconds while it attempts to switch to some other thing,

If you're running at the same res as your desktop, that's really not something that should have been occurring in most games for quite a while now, even with 'exclusive' fullscreen, since Windows 10 basically runs those games in borderless window mode by default; you have to specifically opt out of this behavior via the .exe properties. Setting exclusive in a game just allows stuff like DSR, custom resolutions and earlier HDR implementations that only worked in 'fullscreen' to still operate, but you're getting the benefits of flip-model presentation, one of which is minimal disruption when alt-tabbing, with no performance loss. Alt-tabbing should be instantaneous; it certainly is for me whenever I alt-tab in a game, even when exclusive fullscreen is set in its options (again, assuming the game is set to the same res/refresh as my desktop).

It's possible that some older games are whitelisted and still run in their 'true' exclusive fullscreen modes, which may exhibit that clunky behavior, but for most games you're really running in a form of borderless windowed mode on a modern MS OS.
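
For anyone curious what the modern path actually looks like under the hood, here's a minimal sketch (assuming Windows and Direct3D 11) of presenting through a borderless window with a DXGI flip-model swapchain, i.e. no display mode change at all. Error handling is omitted and the window/buffer details are illustrative, not how any particular game sets this up.

```cpp
// Minimal sketch: borderless window + DXGI flip-model swapchain, no mode switch.
#include <windows.h>
#include <d3d11.h>
#include <dxgi1_2.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int) {
    // 1) Borderless popup window sized to the primary monitor: the desktop
    //    resolution and refresh rate are left untouched.
    WNDCLASSA wc{ 0, DefWindowProcA, 0, 0, hInst, nullptr, nullptr, nullptr, nullptr, "BorderlessDemo" };
    RegisterClassA(&wc);
    int w = GetSystemMetrics(SM_CXSCREEN), h = GetSystemMetrics(SM_CYSCREEN);
    HWND hwnd = CreateWindowA("BorderlessDemo", "Borderless flip-model demo",
                              WS_POPUP | WS_VISIBLE, 0, 0, w, h, nullptr, nullptr, hInst, nullptr);

    // 2) D3D11 device.
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, nullptr, 0,
                      D3D11_SDK_VERSION, &device, nullptr, &ctx);

    // 3) Flip-model swapchain: the compositor handles presentation, so alt-tab
    //    stays instant and overlays keep working.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&dxgiDevice));
    IDXGIAdapter* adapter = nullptr;  dxgiDevice->GetAdapter(&adapter);
    IDXGIFactory2* factory = nullptr; adapter->GetParent(__uuidof(IDXGIFactory2), reinterpret_cast<void**>(&factory));

    DXGI_SWAP_CHAIN_DESC1 desc{};
    desc.Width = w;  desc.Height = h;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc = { 1, 0 };
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;   // the key part: flip model

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);

    // Present a few (empty) frames, then clean up.
    for (int i = 0; i < 60; ++i) swapChain->Present(1, 0);

    swapChain->Release(); factory->Release(); adapter->Release();
    dxgiDevice->Release(); ctx->Release(); device->Release();
    DestroyWindow(hwnd);
    return 0;
}
```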
 