Digital Foundry Article Technical Discussion [2025]

Question for my more technically minded friends here. Is there a reason why performance doesn't scale with resolution on consoles? On my laptop for instance I can easily boost framerates just by lowering the resolution. Yet if I lower the resolution on say my Series X it doesn't improve the framerate at all. I know developers can put in their own specific performance modes but not all of them include such an option. It just seems weird to me that if I want to run my Series X at 1080p in System settings I'm going to get the same performance as if I was running at 4k. Seems like a waste really.
 
MSAA + upscaling is almost like VRS with extra steps. Making VRS part of a neural upscaling/antialiasing solution could be interesting: it could apply a higher shading rate to the areas of the frame where the upscaler needs it most, like particle effects, thin geometry, disocclusions, and fast-moving objects, while applying a lower shading rate in the areas the upscaler can handle easily.
Yup, if MSAA/VRS is used the TAAU should be made aware of it.

Would be awesome to teach the neural upscaler to use a 4xMSAA buffer as an input; it should give amazing results, even if it's quite a low resolution for the upscaling output.

For VRS, 2 bits per subsample quad to indicate the coarse shading rate should be enough of a hint for it.
TAA jitter is not enough to get full coverage within the coarse sample area, so changing the active MSAA sample per frame might be needed as well.

A few-bit ID buffer to separate moving, animated, and static objects for sample rejection, and perhaps light/shadow visibility or changes as well, etc.

Might get quite expensive, and fast.

For very thin lines the Humus line methods certainly would still be the way to go.
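To make the per-frame active sample idea above concrete, here's a rough Python sketch. The 4x sample positions are the commonly cited D3D standard pattern in 1/16-pixel units, and the rotation scheme itself is just an illustration, not something any shipping title is confirmed to do:

```python
# Illustration only: cycle which of the 4 MSAA colour samples is actually
# shaded each frame, so a coarsely shaded area still accumulates distinct
# subpixel positions over time once TAA history kicks in.

MSAA4_OFFSETS = [(-2/16, -6/16), (6/16, -2/16), (-6/16, 2/16), (2/16, 6/16)]

def active_sample(frame_index):
    """Pick the single colour sample to shade this frame."""
    return MSAA4_OFFSETS[frame_index % len(MSAA4_OFFSETS)]

def effective_offset(frame_index, camera_jitter):
    """Subpixel position the shaded sample lands on once camera jitter is added."""
    jx, jy = camera_jitter
    sx, sy = active_sample(frame_index)
    return (jx + sx, jy + sy)

# Over 4 frames (no jitter here) the shaded sample walks the full 4x pattern.
for f in range(4):
    print(f, effective_offset(f, (0.0, 0.0)))
```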
 
Yup, if MSAA/VRS is used the TAAU should be made aware of it.
The camera jittering and the MSAA samples are orthogonal to each other. The MSAA samples will jitter along with the camera, so they are compatible out of the box. However, it is possible to leverage the programmable subsample offsets for MSAA to improve pixel coverage with fewer color samples when combined with TAA. And as I mentioned a few posts earlier, all of this works in the recent Assetto Corsa game with DLSS and MSAA even without network retraining. I'm not sure whether additional training would improve the results at all: the network is responsible for image segmentation, followed by per-pixel blending factors for different parts of the image, while what matters for the wires is the subpixel connectivity information provided by MSAA, in the form of additional shaded pixels that connect parts of the wires.
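For anyone wondering what "jitter along with the camera" means in practice, here's a tiny Python sketch. Halton(2,3) is just a common choice of jitter sequence, assumed here for illustration, not something stated about any particular game:

```python
# Illustration: per-frame camera jitter and fixed MSAA sample offsets are
# independent, and the samples simply translate with whatever jitter is applied.

def halton(index, base):
    """Radical-inverse sequence in [0, 1); index starts at 1."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def camera_jitter(frame_index):
    """Subpixel jitter in roughly [-0.5, 0.5) pixels."""
    return (halton(frame_index + 1, 2) - 0.5,
            halton(frame_index + 1, 3) - 0.5)

MSAA4_OFFSETS = [(-2/16, -6/16), (6/16, -2/16), (-6/16, 2/16), (2/16, 6/16)]

def jittered_samples(frame_index):
    """All four MSAA samples shifted by this frame's camera jitter."""
    jx, jy = camera_jitter(frame_index)
    return [(jx + sx, jy + sy) for sx, sy in MSAA4_OFFSETS]

for f in range(3):
    print(f, [(round(x, 3), round(y, 3)) for x, y in jittered_samples(f)])
```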
 
Question for my more technically minded friends here. Is there a reason why performance doesn't scale with resolution on consoles? On my laptop for instance I can easily boost framerates just by lowering the resolution. Yet if I lower the resolution on say my Series X it doesn't improve the framerate at all. I know developers can put in their own specific performance modes but not all of them include such an option. It just seems weird to me that if I want to run my Series X at 1080p in System settings I'm going to get the same performance as if I was running at 4k. Seems like a waste really.
There are several factors at play here.

A) where the bottleneck is. If you reduce a setting that relieves a bottleneck, then you get more performance.

B) typically console games have framerate caps, so you’re not really sure what the performance is above the cap. Reducing settings could get you more performance but you won’t really see it; it may just manifest as more stable frame rates near the cap.

C) some games are capped by the CPU. No amount of GPU-side changes will necessarily result in notably increased performance.

D) shared bandwidth, and perhaps one defining difference between consoles and PCs. The CPU and GPU share the memory bandwidth, and when they compete for bandwidth, the contention actually drains the available bandwidth a lot more than you’d expect. So even if you make changes that should run faster, your CPU may be eating up those resources to go faster and you’re back to not getting much more.
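For points A and C, a crude way to tell the cases apart (where you can actually change the render resolution freely) is just to compare frame times at two resolutions. This is a made-up sketch with an arbitrary 10% threshold, not a real profiling tool:

```python
# Rough heuristic, illustration only: if dropping render resolution barely
# changes frame time, the game is CPU-limited or frame-capped, so resolution
# changes won't buy you frame rate.

def likely_bottleneck(frame_ms_high_res, frame_ms_low_res, threshold=0.10):
    """Compare average frame times measured at two render resolutions."""
    if frame_ms_high_res <= 0:
        raise ValueError("frame time must be positive")
    improvement = (frame_ms_high_res - frame_ms_low_res) / frame_ms_high_res
    if improvement > threshold:
        return "GPU-bound: lower resolution helps"
    return "CPU-bound or frame-capped: resolution barely matters"

print(likely_bottleneck(16.6, 9.5))   # big win from lower resolution
print(likely_bottleneck(16.6, 16.1))  # almost no win
```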
 
Here’s the piece on Sony ports I don’t understand. You always release a game 3 months too early, to a flurry of issues and inconsistent experiences across the community.

At some point someone with a working brain has to look at that and extend the timeframe for port development based on every trend so far. Yet it hasn’t happened.
 
Question for my more technically minded friends here. Is there a reason why performance doesn't scale with resolution on consoles? On my laptop for instance I can easily boost framerates just by lowering the resolution. Yet if I lower the resolution on say my Series X it doesn't improve the framerate at all. I know developers can put in their own specific performance modes but not all of them include such an option. It just seems weird to me that if I want to run my Series X at 1080p in System settings I'm going to get the same performance as if I was running at 4k. Seems like a waste really.

Typically on console the render resolution is decoupled from the output resolution. So you might drop from, say, 4K to 1080p at a system level (in console-level settings), but the game is still rendering at the same resolution, with the system scaling the final output to the lower output resolution.

Edit: I seem to recall that for a while, at least one of the consoles didn't even let the game know what the system-level output was. I suppose 1080p, and now 4K, are so standardised that abstracting away the system output resolution from the software typically makes sense on console, where it would be a disaster on PC.
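To spell out the decoupling with a toy example (all numbers invented): the game keeps drawing the same number of pixels no matter what the system output mode is set to; only the final scale applied by the console's display pipeline changes.

```python
# Toy model of render vs output resolution on console (numbers invented).

GAME_RENDER_RES = (3840, 2160)   # what the game actually draws each frame

def shaded_pixels_per_frame():
    """Unchanged by the system-level output setting, which is why the frame rate doesn't move."""
    w, h = GAME_RENDER_RES
    return w * h

def system_scale(output_res):
    """Scale applied by the console's display pipeline, not by the game."""
    ow, oh = output_res
    rw, rh = GAME_RENDER_RES
    return (ow / rw, oh / rh)

for output in [(3840, 2160), (1920, 1080)]:
    print(output, "system scale:", system_scale(output),
          "pixels shaded:", shaded_pixels_per_frame())
```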
 
There are several factors at play here.

A) where the bottleneck is. If you reduce a setting that relieves a bottleneck, then you get more performance.

B) typically console games have framerate caps, so you’re not really sure what the performance is above the cap. Reducing settings could get you more performance but you won’t really see it; it may just manifest as more stable frame rates near the cap.

C) some games are capped by the CPU. No amount of GPU-side changes will necessarily result in notably increased performance.

D) shared bandwidth, and perhaps one defining difference between consoles and PCs. The CPU and GPU share the memory bandwidth, and when they compete for bandwidth, the contention actually drains the available bandwidth a lot more than you’d expect. So even if you make changes that should run faster, your CPU may be eating up those resources to go faster and you’re back to not getting much more.
Thanks for the detailed reply! These issues are somewhat unique to APUs and unified memory, correct? The PC can scale more easily due to the separation of CPU and GPU resources?

Also, in reference to D: does increasing the framerate require more bandwidth than increasing the resolution?
Typically on console the render resolution is decoupled from the output resolution. So you might drop from, say, 4K to 1080p at a system level (in console-level settings), but the game is still rendering at the same resolution, with the system scaling the final output to the lower output resolution.
Yeah this is what I figured. Thanks.
 
Thanks for the detailed reply! These issues are somewhat unique to APUs and unified memory, correct? The PC can scale more easily due to the separation of CPU and GPU resources?

Also, in reference to D: does increasing the framerate require more bandwidth than increasing the resolution?

Yeah this is what I figured. Thanks.
Doubling the framerate requires double the bandwidth. On console, more than double.

Doubling the framerate means you hit the entire GPU and CPU pipeline twice. Increasing the resolution only increases bandwidth usage in the areas where it’s needed, and only on the GPU side of things.
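As a back-of-the-envelope model (all the numbers below are invented, and CPU/GPU contention isn't modelled at all): per-frame traffic splits into a CPU share, a resolution-independent GPU share and a resolution-dependent GPU share. Doubling the frame rate multiplies the whole sum, while raising the resolution only scales the last term.

```python
# Invented numbers, illustration only.
CPU_GB_PER_FRAME = 0.15        # CPU-side traffic per frame
GPU_FIXED_GB_PER_FRAME = 0.25  # geometry, streaming, other resolution-independent GPU traffic
GPU_BYTES_PER_PIXEL = 120      # resolution-dependent GPU traffic per rendered pixel

def bandwidth_gb_per_s(width, height, fps):
    res_dependent = width * height * GPU_BYTES_PER_PIXEL / 1e9
    per_frame = CPU_GB_PER_FRAME + GPU_FIXED_GB_PER_FRAME + res_dependent
    return per_frame * fps

print("4K30   :", round(bandwidth_gb_per_s(3840, 2160, 30), 1), "GB/s")
print("4K60   :", round(bandwidth_gb_per_s(3840, 2160, 60), 1), "GB/s")   # ~2x the 4K30 figure
print("1440p60:", round(bandwidth_gb_per_s(2560, 1440, 60), 1), "GB/s")   # between 4K30 and 4K60
```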
 
Here’s the piece on Sony ports I don’t understand. You always release a game 3 months too early, to a flurry of issues and inconsistent experiences across the community.

At some point someone with a working brain has to look at that and extend the timeframe for port development based on every trend so far. Yet it hasn’t happened.

Maybe management thinks the noise works as marketing by making the game more infamous?
 
Doubling the framerate requires double the bandwidth. On console, more than double.

Doubling the framerate means you hit the entire GPU and CPU pipeline twice. Increasing the resolution only increases bandwidth usage in the areas where it’s needed, and only on the GPU side of things.
Thanks, this was why Metroid Prime 4 went from 4K 60fps to 1080p 120fps when theoretically it should have been able to do 1440p 120fps. Bandwidth limits.
 

DF Direct Weekly #209: Cyberpunk 2077 Switch 2 Hands-On, Quake 2 AI, RTX 5060 Ti Price Leak!


0:00:00 Introduction
0:01:11 News 1: Cyberpunk 2077 tested on Switch 2!
0:16:54 News 2: Metal Eden PC demo has issues
0:31:04 News 3: Asus teases Xbox PC handheld
0:37:59 News 4: RTX 5060 Ti prices allegedly leak
0:51:46 News 5: Microsoft launches Muse AI Quake 2 experience
1:04:53 News 6: Switch 2 portable mode simulated!
1:20:42 Supporter Q1: What will future Nvidia and AMD handhelds look like?
1:27:37 Supporter Q2: What will PhysX development look like after being made open source?
1:32:00 Supporter Q3: Why doesn’t Switch 2 support 4K120?
1:36:16 Supporter Q4: Should Nintendo unlock higher Switch 1 clocks?
1:40:43 Supporter Q5: Will Switch 2 run Switch 1 games in their docked configurations in portable mode?
 
Yup, if MSAA/VRS is used the TAAU should be made aware of it.
For VRS specifically, I think the converse is also important; VRS should be aware of the upscaler. With a regular VRS implementation the goal is to apply low shading rates to areas of the frame where the human viewer won't notice lower resolution. But when combined with an upscaler, the goal should instead be to apply lower shading rates to areas of the frame that the upscaler can still produce a good result with low input resolution, or where the human viewer won't notice upscaler artifacts caused by lower resolution inputs.
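Something like this (pure illustration, with made-up tile size, thresholds and rate encoding) is what I have in mind: keep full-rate shading where the upscaler has little usable history, e.g. disocclusions and fast motion, and drop the rate everywhere else.

```python
# Hypothetical sketch: build a screen-space shading-rate image by boosting the
# rate in tiles where a temporal upscaler tends to struggle and relaxing it
# elsewhere. Tile size, threshold and the rate encoding are all assumptions.

TILE = 8          # pixels covered per shading-rate texel on each axis (assumed)
RATE_1X1 = 0      # full-rate shading (placeholder encoding)
RATE_2X2 = 1      # quarter-rate shading (placeholder encoding)

def tile_needs_detail(tx, ty, width, height, motion, disocclusion, motion_thresh):
    """True if any pixel in the tile is disoccluded or moving fast."""
    xs = range(tx * TILE, min((tx + 1) * TILE, width))
    ys = range(ty * TILE, min((ty + 1) * TILE, height))
    return any(disocclusion.get((x, y), False) or
               motion.get((x, y), 0.0) > motion_thresh
               for y in ys for x in xs)

def build_rate_image(width, height, motion, disocclusion, motion_thresh=4.0):
    """motion: pixels/frame per pixel, disocclusion: bool per pixel.
    Plain dicts keyed by (x, y) keep the sketch dependency-free."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    return [[RATE_1X1 if tile_needs_detail(tx, ty, width, height,
                                           motion, disocclusion, motion_thresh)
             else RATE_2X2
             for tx in range(tiles_x)]
            for ty in range(tiles_y)]

# Tiny usage: a 16x8 "frame" with one fast-moving pixel forces full rate in its tile.
print(build_rate_image(16, 8, motion={(12, 3): 9.0}, disocclusion={}))
```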
Doubling the framerate requires double the bandwidth. On console, more than double.
Doubling the frame rate with frame generation shouldn't require more than double the bandwidth. Maybe even less than double. I suspect most single player next-gen games with 120FPS modes will be using frame generation to hit that target.
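Rough sketch of why, with invented numbers: with frame generation only every other displayed frame is fully rendered, and a generated frame mostly costs a flow/warp/blend pass over a few full-screen buffers, which touches far less memory than rendering from scratch.

```python
# Invented numbers, illustration only.
RENDERED_FRAME_GB = 1.4    # assumed memory traffic for a fully rendered frame
GENERATED_FRAME_GB = 0.25  # assumed traffic for an interpolated/extrapolated frame

def native_gbps(fps):
    return RENDERED_FRAME_GB * fps

def framegen_gbps(displayed_fps):
    rendered = displayed_fps // 2   # every other displayed frame is a real render
    generated = displayed_fps - rendered
    return RENDERED_FRAME_GB * rendered + GENERATED_FRAME_GB * generated

print("Native 60 :", native_gbps(60), "GB/s")
print("Native 120:", native_gbps(120), "GB/s")   # 2x the 60fps traffic
print("FG 120    :", framegen_gbps(120), "GB/s") # well under 2x
```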
 
Doubling the frame rate with frame generation shouldn't require more than double the bandwidth. Maybe even less than double. I suspect most single player next-gen games with 120FPS modes will be using frame generation to hit that target.
Yea this one I'm not sure about. Could really use a deep dive on the cost of Frame Generation.
 
Here’s the piece on Sony ports I don’t understand. You release a game always 3 months early to a flurry of issues and inconsistent experiences across the community.

At some point someone with a working brain has to look at that and extend the timeframe for port development based on every trend so far. Yet it hasn’t happened.
One reason would be revenue recognition. Your project is expected to deliver $X top line revenue on such and such a date. You have to make that date if you don’t want to get sued for securities fraud.

Another reason: Software is never done. You have to ship it at some point.
 
One reason would be revenue recognition. Your project is expected to deliver $X top line revenue on such and such a date. You have to make that date if you don’t want to get sued for securities fraud.

Another reason: Software is never done. You have to ship it at some point.

Yes, the people owning shares in Sony would sue Sony's board because the TLOU2 PC port was delayed by 2 months.

Maybe you should look up the definition of "securities fraud"?
 

I'm not sure why DF would think this is 540p without upscaling.

The graphics I'm seeing here are very clearly typical DLSS upscaling artifacts if you run at a crazy low resolution.

I predict this is pretty much DLSS Ultra Performance (so ~360p internally) at 1080p. It would also make the most sense in that power-constrained environment, and at that size it will still look very good.
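Quick sanity check on the numbers; the per-axis scale factors below are the commonly documented DLSS presets, treat them as an assumption here:

```python
# Commonly documented per-axis scale factors for DLSS quality modes (assumed).
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

def internal_res(output_w, output_h, mode):
    """Internal render resolution for a given output resolution and DLSS mode."""
    s = DLSS_SCALE[mode]
    return round(output_w * s), round(output_h * s)

# At a 1920x1080 output, Ultra Performance lands at 640x360, i.e. the ~360p
# internal figure mentioned above.
for mode in DLSS_SCALE:
    print(mode, internal_res(1920, 1080, mode))
```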
 

I'm not sure why DF would think this is 540p without upscaling.

The graphics I'm seeing here are very clearly typical DLSS upscaling artifacts if you run at a crazy low resolution.

I predict this is pretty much DLSS Ultra Performance (so ~360p internally) at 1080p. It would also make the most sense in that power-constrained environment, and at that size it will still look very good.
I struggle to believe that too. I can't believe that a third-party developer wouldn't use DLSS, even just as a way to advertise the game and sell a few more copies to people curious about how it looks.

We also have the Hogwarts Legacy developers saying that they are using DLSS.
 

Oh my goodness, what's happening here? At first I was excited: Nintendo sponsoring DF surely meant DF was given exclusive footage to analyse, but... it's literally just an ad reuploaded on DF.

Not a good look imo.

Really, wtf? This is a bizarre decision. Sponsored content is one thing, but this doesn't have any analysis by DF staff. It's just a full ad; what in the heck was the thinking here?

Edit: From the ResetEra thread, don't know what platform this is from (their Discord?) but...ugh:

[image attachment]
 