Digital Foundry Article Technical Discussion [2025]

So

If current-gen consoles had used a 3060-class GPU back in 2020, then today with DLSS 4 the image quality could easily beat the Series X with FSR, or even the PS5 Pro with PSSR.

Both console makers hugely underestimated NVIDIA's GPU advantages. They should have gone with NVIDIA, even at the cost of some backward compatibility.
 
The 3060 12GB is a god-tier GPU, honestly. But when they started working on the current-gen consoles (around 2016), DLSS and ray tracing weren't even a thought in their minds.

And we already know that Sony is going with AMD (Project Amethyst confirms that partnership). And that's a good thing: giving AMD the resources and the support to close the gap with Nvidia is good not only for gaming, but for computing in general. Nvidia is at the top of the world right now, and we know what monopolies entail.
 
So... RE2 Remake always stutters a few seconds before a cutscene is about to happen. Other than that, the game runs totally flawlessly.

So I disabled MPO using the NVIDIA file, in a place where RE2 Remake always stutters, just like in any other place preceding a cutscene.

With MPO disabled, the pre-cutscene stutter was still there. So I enabled MPO again using another NVIDIA file created for that purpose.

 

For some reason I can't see the image you posted. I never had anything like that when playing any of these games, so I'm curious to see this.
 
Here, just before you find the corrupt police chief. It ALWAYS happens seconds before any cutscene, at least for me; maybe it occurs when the game starts reading the cutscene's data.


[Screenshot: GhYiO5i.png]

Currently completing the game, for like the 15th time, playing it at 45 x 8 FG with 140% supersampling.
 
So

If current-gen consoles had used a 3060-class GPU back in 2020, then today with DLSS 4 the image quality could easily beat the Series X with FSR, or even the PS5 Pro with PSSR.

Both console makers hugely underestimated NVIDIA's GPU advantages. They should have gone with NVIDIA, even at the cost of some backward compatibility.
They avoid Nvidia for a reason: Nvidia won't offer price-competitive deals. Consoles need to be affordable, or else they cease being mass-market devices and lose all the massive benefits they get from that.

Heck, look at Nintendo, using roughly four-year-old Nvidia technology on a five-year-old process node, and that's still probably gonna require Nintendo to raise the Switch 2 price a fair chunk.
 
0:00:56 News 1: Developer Direct 2025 - Doom: The Dark Ages
0:17:36 Ninja Gaiden 4
0:22:54 Clair Obscur: Expedition 33
0:27:55 South of Midnight
0:32:38 News 2: RTX 5090 review reaction
1:14:39 News 3: Radeon 9070 cards arriving in March
1:27:53 News 4: Sony cancels Bluepoint and Bend Studios live service games
1:36:27 Supporter Q1: Could the Switch 2 pack unanticipated custom hardware?
1:42:18 Supporter Q2: Will GTA 6 cost $100 US?
1:48:11 Supporter Q3: Should the PS5 Pro be supported with new games for longer than the PS5?

 
Time-stamped the part discussing reviews and how they should handle things like DLSS. I imagine there will be a lot of disagreement here:


I think in general I'm kind of in agreement. The apples-to-apples stuff is definitely still relevant, but in some ways it's almost like a synthetic benchmark if you're a person who uses upscaling. Factoring in the actual user experience is a tough part of the review. Really interested in their 5080 review to see what they come up with.
 
I think there’s certainly a use case for summing up performance as “this card will give you a 30% gain in raster and 25% in ray tracing at 4K,” and then focusing on actual use cases.

“With the 4090, we were limited to 4K 60 using path tracing with DLSS upscaling and frame gen. And while the DLSS benefits will still be there if you’re on a 4090, those with high-refresh monitors can enjoy playing with the same settings at 75fps, which helps smooth out the latency from using frame gen. Going a step further with MFG, you can achieve the motion clarity of 240Hz, which will be welcome for those with high-refresh monitors.”

Just a crude example, but the point is: I’m now informing the reader how the new card will alter the experience for them in a game, and what that means in relatable terms.
 
It's really nice to see the improvements to ray reconstruction - IMO that's the area that really needed something the most. While it's theoretically appealing to have one-denoiser-to-rule-them-all, the previous version was just not good enough. Maybe y'all can stop calling me crazy for not using it now that the newer video does a good job highlighting how poor some of the old results were 😆

The new version is certainly an improvement in a lot of those areas, ghosting in particular. It still has quite a bit of weird swimming effects in the lighting but I think it may well tick over to something that is a net positive for me, while it was not before. I'm looking forward to trying it when I can get my hands on a 5-series card.

I did expect less of a performance hit on the 5-series cards in particular. It's certainly not a deal breaker, but that is definitely quite a lot of performance/time if you do the math on these high end cards, even with dedicated hardware. For RR I can more easily accept the notion that spending a lot of time and complexity doing a very fancy single denoiser is worth it in the long run. With super-resolution the "remaining quality" delta is obviously a lot smaller, but conversely improvements can directly trade off with render resolution so it's really just a question of which looks better at a given throughput, or which performs better at roughly similar IQ. Obviously that will still shift depending on how much motion and indirect lighting games have though.
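
(To put "doing the math" in rough terms: a fixed per-frame cost takes a bigger and bigger slice of the budget as the target frame rate goes up. Quick throwaway sketch below; the 1.5 ms figure is a made-up placeholder, not a measured ray reconstruction cost.)

Code:
#include <cstdio>

int main()
{
    // Hypothetical fixed per-frame cost of a feature like a denoiser (made-up number).
    const double featureMs   = 1.5;
    const double targetFps[] = {60.0, 120.0, 240.0};

    for (double fps : targetFps) {
        const double budgetMs = 1000.0 / fps;            // total frame budget at this rate
        const double share    = 100.0 * featureMs / budgetMs;
        std::printf("%3.0f fps: %5.2f ms budget, %.1f ms fixed cost = %4.1f%% of the frame\n",
                    fps, budgetMs, featureMs, share);
    }
    return 0;
}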
 
I think there’s certainly a use case for summing up performance as “this card will give you a 30% gain in raster and 25% in ray tracing at 4K,” and then focusing on actual use cases.

“With the 4090, we were limited to 4K 60 using path tracing with DLSS upscaling and frame gen. And while the DLSS benefits will still be there if you’re on a 4090, those with high-refresh monitors can enjoy playing with the same settings at 75fps, which helps smooth out the latency from using frame gen. Going a step further with MFG, you can achieve the motion clarity of 240Hz, which will be welcome for those with high-refresh monitors.”

Just a crude example, but the point is: I’m now informing the reader how the new card will alter the experience for them in a game, and what that means in relatable terms.

What would be interesting is if, instead of comparing GPUs in terms of frame rate output, it was instead visual output. As in, say, instead of looking at the 4090 as x% more frames than the 4080 at 4K, it's just that you play at DLSS Quality vs. Performance at the same frame rate, and how that relates to the user experience.

I remember when Rage came out with no graphics settings and just automatically adjusted visuals to maintain 60 fps, the hardware enthusiast circles/coverage sites had a meltdown. I do wonder, with the technology trends, if this would be the future going forward, in that what separates the highest-tier GPU from n-1 isn't the frame rate output but the visual quality.
 
With Nanite, geometry costs scale mainly with resolution, not triangle count. With importance sampled RT lighting like ReSTIR and MegaLights, lighting cost also scales with resolution and samples per pixel instead of light counts. With Neural Materials, material evaluation cost is fixed for a given resolution regardless of material complexity. We're looking at a future in which the only factor affecting performance is rendering resolution. Instead of Low - Medium - High - Ultra presets with plenty of individual options, there will just be a resolution slider. Games may well decide to set dynamic resolution scaling on by default.
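
For what it's worth, the logic behind that kind of resolution slider is simple enough to sketch. Below is a minimal example of a frame-time-driven dynamic resolution controller, assuming GPU cost scales roughly with pixel count; the names and constants are illustrative, not taken from any particular engine.

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Minimal dynamic-resolution controller: nudges the render scale each frame so
// the measured GPU time converges on a target budget. Assumes cost ~ pixel count.
struct DynamicResolution
{
    float scale    = 1.0f;   // render scale per axis (1.0 = native output resolution)
    float minScale = 0.5f;
    float maxScale = 1.0f;
    float budgetMs = 16.6f;  // target GPU frame time (~60 fps)

    void update(float gpuTimeMs)
    {
        // Pixel count scales with scale^2, so correct the scale by sqrt(budget / actual).
        const float ratio  = budgetMs / std::max(gpuTimeMs, 0.1f);
        const float target = scale * std::sqrt(ratio);

        // Damp the correction so noisy GPU timings don't make the resolution oscillate.
        scale = std::clamp(scale + 0.25f * (target - scale), minScale, maxScale);
    }
};

int main()
{
    DynamicResolution dr;
    const float fakeGpuTimesMs[] = {20.0f, 19.0f, 18.0f, 17.0f, 16.8f};

    // Feed in a few fake GPU timings and watch the scale settle toward the budget.
    for (float ms : fakeGpuTimesMs) {
        dr.update(ms);
        std::printf("gpu %.1f ms -> render scale %.2f\n", ms, dr.scale);
    }
    return 0;
}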
 
With Neural Materials, material evaluation cost is fixed for a given resolution regardless of material complexity.
I actually don't know that I understand how neural materials really solve current bottlenecks. Material shaders can be expensive because artists are basically putting giant procedural content-generation/mixing algorithms in them, not because there's some sort of explosion in BRDF evaluation cost. Obviously we need more ways for these material graphs to get baked down and fused in efficient ways going forward, but that is pretty purely a software engineering and tools problem... I'm not really sure I see much use for ML on the GPU to that end in the near future.

We're looking at a future in which the only factor affecting performance is rendering resolution. Instead of Low - Medium - High - Ultra presets with plenty of individual options, there will just be a resolution slider. Games may well decide to set dynamic resolution scaling on by default.
Resolution definitely matters, and more and more things scale with it, for sure... but the constant factors are still pretty big and will continue to be. It's definitely desirable to have sliders that can scale performance across a broad range of hardware without fundamentally changing the image as much as some of those classic settings do.

That said, I don't imagine they will necessarily go away any time soon. Even if they have a pretty irrelevant impact on performance/quality (which, let's be honest, at least half of these settings barely affect anything these days), I think PC players get a warm fuzzy feeling inside if we/they have a list of options to play with. I mean, it's 2025 and a non-trivial number of games still like to pretend that setting the level of anisotropic filtering is a relevant tweakable...
 
Ha that one always confuses me. Maybe texture filtering still matters on consoles?
Virtual textures need a border around their tiles to make AF work. The higher the AF, the larger the border. Depending on your tile size, having a big border can eat a chunk of VRAM. This can become problematic if you target the Xbox Series X and then later try to port your game to the Series S. Same if you want to run your game on a PC with a budget graphics card that has less VRAM.
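
Quick back-of-the-envelope sketch of that overhead (the tile and border sizes here are made up, real engines pick their own numbers): the smaller the tile and the wider the border, the bigger the relative VRAM hit.

Code:
#include <cstdio>

int main()
{
    // Illustrative virtual-texture tile sizes and border widths; wider borders are
    // needed to cover the larger filter footprint of higher anisotropic filtering.
    const int tileSizes[] = {64, 128, 256};
    const int borders[]   = {2, 4, 8};

    for (int tile : tileSizes) {
        for (int border : borders) {
            const int    padded   = tile + 2 * border;   // border texels on each side
            const double overhead = double(padded) * padded / (double(tile) * tile) - 1.0;
            std::printf("%3dx%-3d tile, %d-texel border: %5.1f%% extra texels\n",
                        tile, tile, border, overhead * 100.0);
        }
    }
    return 0;
}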
 

Ironically virtual textures are meant to reduce VRAM usage.
 
I mean they still obviously reduce texture usage vs... not using virtual textures. The extra border data comes from the source texture that would otherwise be entirely resident in VRAM. AF does increase memory bandwidth/usage in general because it drives up the mip level chosen, depending on the surface orientation. But really I don't know that AF is a primary consideration for scaling down; stuff like the primary render resolution is still going to be the main tweakable that affects the necessary VT page pool size.
 
Someone should look at the various individual settings across games and see what they actually contribute. Maybe we'd be able to declutter the settings down to just a few (perhaps combined) options instead of tweaking every tiny parameter.
 