Digital Foundry Article Technical Discussion [2024]


DF Direct Weekly #149: Xbox Multi-Plat Meltdown, Helldivers 2, Final Fantasy 7 Rebirth Hands-On

0:00:00 Introduction
0:00:52 News 01: Is Microsoft embracing multiplatform gaming?
0:43:52 News 02: Hands-on with Helldivers 2
0:54:41 News 03: Final Fantasy 7 Rebirth impressions!
1:05:03 News 04: Penny’s Big Breakaway preview
1:11:42 Supporter Q1: What’s the minimum advisable frame-rate when using frame gen with a controller?
1:17:50 Supporter Q2: Is Nvidia purposefully limiting frame gen to the RTX 4000 series to increase sales?
1:27:18 Supporter Q3: Given that the Series S is the best-selling current-gen Xbox, should we consider it the baseline experience this gen?
1:34:14 Supporter Q4: Has the use of FSR been good or bad for console gaming?
1:43:14 Supporter Q5: Why doesn’t AMD often pioneer new graphics techniques like Nvidia?
1:51:44 Supporter Q6: Can John recommend any 3D Sonic titles?
That last question feels like it pops up out of nowhere
 
Random thought: couldn't you use VRS to reduce shader aliasing by increasing the shading rate rather than decreasing it?
You can't increase the shading rate via VRS. The upper limit is bounded by the original resolution, which is a one-to-one pixel mapping; at a shading rate above 1x1 there is no extra geometry data to evaluate.
However, MSAA does give you that extra data to evaluate on selected instances, but then it just becomes deferred MSAA. And don't forget it doesn't help with post-processing effects like SSAO, SSR, etc.
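The 1:1 upper bound can be illustrated with a toy sketch (hypothetical, stdlib-only Python; `shade` is just a stand-in for a real pixel shader): at a coarse rate, one shader invocation is broadcast across an NxN pixel block, so the number of unique shaded values can never exceed the pixel count.

```python
# Toy illustration of VRS coarse shading: one shader invocation is
# broadcast across an NxN pixel block, so coarse shading can only
# reduce the number of unique shaded samples, never increase it.

def shade(x, y):
    # stand-in for a pixel shader: any pure function of position
    return (x * 31 + y * 17) % 256

def render(width, height, rate):
    """rate=1 -> full rate (1x1); rate=2 -> 2x2 coarse shading; etc."""
    image = [[0] * width for _ in range(height)]
    for by in range(0, height, rate):
        for bx in range(0, width, rate):
            value = shade(bx, by)          # one invocation per block
            for y in range(by, min(by + rate, height)):
                for x in range(bx, min(bx + rate, width)):
                    image[y][x] = value    # broadcast to the whole block
    return image

full = render(8, 8, 1)
coarse = render(8, 8, 2)
unique_full = len({v for row in full for v in row})
unique_coarse = len({v for row in coarse for v in row})
print(unique_coarse, "<=", unique_full)  # coarse never exceeds full rate
```

Going the other way (more than one shaded value per pixel) needs extra data per pixel, which is exactly what the MSAA path in the quote provides.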
 
You can't increase the shading rate via VRS. The upper limit is bounded by the original resolution, which is a one-to-one pixel mapping; at a shading rate above 1x1 there is no extra geometry data to evaluate.
However, MSAA does give you that extra data to evaluate on selected instances, but then it just becomes deferred MSAA. And don't forget it doesn't help with post-processing effects like SSAO, SSR, etc.
Didn't Nvidia have some type of VRS supersampling they introduced with Turing? From what I understand it would work on a portion of the screen. I'm not sure if it was ever implemented to any degree, because I think it was meant for VR use.
 
Isn't it effectively the same to raise the resolution across the board and then use VRS to down-shade the 90% of the image you didn't want to up-shade? :)
 
Didn't Nvidia have some type of VRS supersampling they introduced with Turing? From what I understand it would work on a portion of the screen. I'm not sure if it was ever implemented to any degree, because I think it was meant for VR use.
Yes, this was used in combination with eye tracking for VR: lower res for everything that is not in the direct line of sight, and higher res for what the eyes are focused on.

You could use this for console games as well, but without eye tracking it doesn't make that much sense.
 
I know people complain about TAA... but it really is peak, imo. As for image-quality concerns about things like FSR2, I think the solution is simply a higher internal-res target or a better upscaling technology like TSR; FSR is pushed well past its limits and then attacked for not living up to a certain standard.
 
I know people complain about TAA... but it really is peak, imo. As for image-quality concerns about things like FSR2, I think the solution is simply a higher internal-res target or a better upscaling technology like TSR; FSR is pushed well past its limits and then attacked for not living up to a certain standard.
Agreed. Death to aliasing! The pros far outweigh the cons for me personally. I want that image temporally stable with no shimmering before anything else!
 

Quite agree with their comments at the 0:19:40 mark. To stay in the hardware business, a somewhat more open platform and strategy might help. They own 100% of the OS, after all. Maybe an MSX-like thing; learn from the Japanese.
 
You can't increase the shading rate via VRS. The upper limit is bounded by the original resolution, which is a one-to-one pixel mapping; at a shading rate above 1x1 there is no extra geometry data to evaluate.
However, MSAA does give you that extra data to evaluate on selected instances, but then it just becomes deferred MSAA. And don't forget it doesn't help with post-processing effects like SSAO, SSR, etc.
It uses the MSAA hardware, and Nvidia has already shown demos doing just that.
On NV hardware you have a 16xMSAA mode, so you can go from 2x2 coarse shading to 4xSSAA at will (or 4x4 coarse, or from native to 16xSSAA).

In that case you set up the original/render buffer resolution as coarse as possible with 16xMSAA, then resolve to the screen resolution, which is an in-between resolution where you still have 4x subsamples.
Sample locations are programmable, so you can do an ordered grid for coarse shading and mimic the MSAA sample pattern for SSAA.

One problem with VRS is that you really should take the new sample locations into account when using it with TAA; I'm not sure any game has done so yet.
Intel had a paper on doing constant 2x2 coarse shading with TAA a while ago, but I'm sure tier 2 VRS would make things a bit more complicated.
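The sample-count bookkeeping in that setup reduces to simple arithmetic; a hypothetical sketch (the 16xMSAA mode and programmable sample locations are the NV-specific assumptions from the post):

```python
# Effective SSAA factor when a render-target pixel with N MSAA samples
# covers a KxK block of output pixels: each output pixel ends up with
# N / K^2 shaded subsamples after the resolve.

def effective_ssaa(msaa_samples, coarse_block):
    """coarse_block: side length of the output-pixel block covered by
    one render-target pixel (1 = native, 2 = 2x2 coarse shading, ...)."""
    return msaa_samples / (coarse_block * coarse_block)

for block in (1, 2, 4):
    rate = effective_ssaa(16, block)
    print(f"{block}x{block} coarse + 16xMSAA -> {rate:g} samples/output pixel")
# 1x1 gives 16x SSAA, 2x2 gives 4x SSAA, 4x4 lands back at native rate,
# matching the "2x2 coarse shading to 4xSSAA" trade-off described above.
```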
 
It uses the MSAA hardware, and Nvidia has already shown demos doing just that.
On NV hardware you have a 16xMSAA mode, so you can go from 2x2 coarse shading to 4xSSAA at will (or 4x4 coarse, or from native to 16xSSAA).

In that case you set up the original/render buffer resolution as coarse as possible with 16xMSAA, then resolve to the screen resolution, which is an in-between resolution where you still have 4x subsamples.
Sample locations are programmable, so you can do an ordered grid for coarse shading and mimic the MSAA sample pattern for SSAA.

One problem with VRS is that you really should take the new sample locations into account when using it with TAA; I'm not sure any game has done so yet.
Intel had a paper on doing constant 2x2 coarse shading with TAA a while ago, but I'm sure tier 2 VRS would make things a bit more complicated.
When has there been 16x MSAA?
 
...
Ya, this is what I meant by "via MSAA".
As for the TAA issue, I guess with more games using a visibility buffer, resolving subsamples would be a lot easier given instance and primitive IDs.
 

I am very glad they expressed their negative opinions about both FSR1 and FSR2 (at console settings) so strongly. They both look terrible, destroy image quality in many ways, and shouldn't be used at all by developers, period. And finally John is admitting that CBR, as used in many Pro games, was a better reconstruction solution overall.

And most interestingly, the best use cases were actually in multiplatform games like Monster Hunter World, Dark Souls Remastered or Shadow of the Tomb Raider. Those games still look great on Pro! Very few complaints about image quality, blurriness, instability in motion, etc. there.
 
I am very glad they expressed their negative opinions about both FSR1 and FSR2 (at console settings) so strongly. They both look terrible, destroy image quality in many ways, and shouldn't be used at all by developers, period. And finally John is admitting that CBR, as used in many Pro games, was a better reconstruction solution overall.

And most interestingly, the best use cases were actually in multiplatform games like Monster Hunter World, Dark Souls Remastered or Shadow of the Tomb Raider. Those games still look great on Pro! Very few complaints about image quality, blurriness, instability in motion, etc. there.
As much as I'm not satisfied with the quality and performance of FSR 2, I don't think CBR outperforms it. At least not the OG implementation, which is simple 1/2-internal-res rendering with diagonal samples, using a differential blend to reconstruct the missing information spatially.
The only reason I can think of that CBR might look better than FSR 2 is that most CBR titles render at 1/2 internal res, whereas many FSR 2 titles render at 1/4 internal res, which goes a long way toward explaining why they look worse.
Also keep in mind that even with FSR 2, a lot of games just target a lower output res. Quite a few games reach 1800p-2160p using CBR, but a lot of FSR 2 titles on PS5 only output at 1440p (and from 1/4 internal res at that).

Of course, when done right, a customized CBR implementation can look good. In Dark Souls Remastered, for instance, 2x temporal upscaling is combined with 2x spatial upscaling to achieve 4x supersampling. Given consoles' fixed hardware specs, devs can utilize information such as triangle and primitive data to better resolve the temporal/spatial samples (of course, this data is available on recent PC GPUs as well).
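The spatial half of that OG checkerboard idea can be sketched in toy form (hypothetical code; a plain neighbour average stands in for the real differential blend, and the temporal half is omitted entirely):

```python
# Toy checkerboard reconstruction: each frame shades only half the
# pixels in a diagonal/checker pattern; the missing pixels are filled
# spatially from their shaded neighbours. Every neighbour of a missing
# pixel is shaded, by construction of the checker pattern.

def shade(x, y):
    return x + 2 * y  # stand-in for the real shader (a linear ramp)

def render_checkerboard(w, h, parity):
    """Shade only pixels where (x + y) % 2 == parity; None elsewhere."""
    return [[shade(x, y) if (x + y) % 2 == parity else None
             for x in range(w)] for y in range(h)]

def reconstruct(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                nbrs = [img[y + dy][x + dx]
                        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                out[y][x] = sum(nbrs) / len(nbrs)  # stand-in blend
    return out

frame = render_checkerboard(6, 6, parity=0)
full = reconstruct(frame)
```

For this smooth (linear) test signal the interior pixels reconstruct exactly; it's high-frequency detail and motion where the blend heuristics (and the temporal half) start to matter.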

Personally speaking, I don't hate FSR2. It's a good, openly available toolkit that lets many small devs gain that extra performance. What I don't like is that many devs have given up developing more customizable upscaling technologies that utilize the extra data of their own render pipelines and better fit their own needs. All of a sudden everyone just switches to FSR 2, including the AAA studios who are the ones with the resources to advance the tech.
 
...

Of course, when done right, a customized CBR implementation can look good. In Dark Souls Remastered, for instance, 2x temporal upscaling is combined with 2x spatial upscaling to achieve 4x supersampling. Given consoles' fixed hardware specs, devs can utilize information such as triangle and primitive data to better resolve the temporal/spatial samples (of course, this data is available on recent PC GPUs as well).

Personally speaking, I don't hate FSR2. It's a good, openly available toolkit that lets many small devs gain that extra performance. What I don't like is that many devs have given up developing more customizable upscaling technologies that utilize the extra data of their own render pipelines and better fit their own needs. All of a sudden everyone just switches to FSR 2, including the AAA studios who are the ones with the resources to advance the tech.
This is exactly what I am talking about. I prefer a lower official resolution with something that at least looks good over a fake "4K" (reconstructed from a base resolution even lower than 1080p!) that always looks bad whatever the art and assets used in the game, all because of a completely botched AA/upscaling solution that works OK in maybe 10% of scenarios: no motion, no transparencies, no vegetation, etc.

In theory FSR2 might be a better solution than CBR because of the technical points you listed, but in practice it's far worse overall than a good CBR. And there are also very different CBR implementations, from awful to excellent.
 
What I don't like is that many devs have given up developing more customizable upscaling technologies that utilize the extra data of their own render pipeline and better fit their own need.
Yep. I still remember all the customised-to-death types of AA, such as HRAA and the other cool methods used a decade ago. Yet the development of the current upscalers is awfully, annoyingly slow, as if the past decade of research in the AA field was suddenly forgotten.

How is it that we are still enjoying those low-res aliased edges in motion when morphological AA methods were available a decade ago and cost nothing today? Just prefilter the damn input before upscaling and accumulating samples, as I suggested years ago, as SMAA 2X did a decade ago, and as STP has finally implemented now (ctrl+F "GEAA" - God, thank you!). Or use MLAA itself for the spatial upscaling (a simple search-and-replace problem) instead of relying on Lanczos or bicubic filtering for the spatial upsampling inside the TAA loop (are we in the stone age?).

There were cheap coverage samples a decade ago, which cost nothing and required just a couple of bits per sample - use them to achieve perfect spatial edge upscaling thanks to the higher-resolution coverage-sample frequency. Or at least use the barycentrics to calculate the distance to the edge (GBAA from Humus) to properly reconstruct it at higher resolution. Without this essential stuff we will never get upscalers with good enough geometry-edge quality in motion, or good enough quality at upscaling factors above 4x.
 
The reality is that temporal solutions are only going to increase. Sampling over time is the future: you're just not going to see real gains in rendering quality without temporal data. Spatial upscaling is very limited because it will always be some kind of interpolation, whereas a temporal upscaler has access to real, good samples that you've already generated. Downsampling from a higher resolution is just a dead end because it requires generating more samples per frame, which is brute force. I do think the real issue is pushing temporal upscalers past their capabilities: upscaling from 720p to 1440p and then applying an additional spatial upscale to 4K is never going to look sharp, at least with current solutions (and probably never).
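The core of "sampling over time" is just an exponential blend of each new sample into a history value; a hypothetical single-pixel sketch (the blend factor, sample counts and noise range are made-up numbers, and real upscalers add reprojection and history rejection on top):

```python
# Minimal temporal accumulation for a single pixel: exponentially
# blend each new noisy sample into a running history. The accumulated
# value has far less error than any single frame's sample, which is
# the "real samples you've already generated" advantage over pure
# spatial interpolation.
import random

def accumulate(samples, alpha):
    history = samples[0]
    for s in samples[1:]:
        history = alpha * s + (1.0 - alpha) * history  # exponential blend
    return history

random.seed(7)
true_value = 0.5
single_errors, temporal_errors = [], []
for _ in range(200):
    noisy = [true_value + random.uniform(-0.2, 0.2) for _ in range(64)]
    single_errors.append(abs(noisy[-1] - true_value))             # one frame alone
    temporal_errors.append(abs(accumulate(noisy, 0.1) - true_value))

mean_single = sum(single_errors) / len(single_errors)
mean_temporal = sum(temporal_errors) / len(temporal_errors)
print(mean_temporal < mean_single)  # accumulation averages the noise away
```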
 
Spatial upscaling is very limited because
If this was addressed to me: of course, I was referring to spatial upscaling in the context of temporal upscalers. Spatial upscalers are used in the core loop of TAAU and every other temporal upscaler. When TAAU/FSR/whatever else fails to accumulate samples for various reasons, you will see the low-resolution, spatially upscaled image with all the underlying low-res aliasing (on camera cuts, in disoccluded regions, on the periphery of the screen, or in motion when MVs have not been dilated). Robust spatial upscaling is a huge part of the puzzle on the way to better temporal upscalers, if not the main one right now.
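That fallback behaviour can be sketched as a per-pixel confidence blend (hypothetical names; real upscalers derive the confidence from disocclusion and history-clamping tests rather than taking it as a parameter):

```python
# A temporal upscaler's per-pixel output is a blend between the
# spatially upscaled current frame and the reprojected history,
# weighted by how much the history can be trusted. When the history
# is fully rejected (camera cut, disocclusion), the output IS the
# spatial upscale, so the spatial kernel sets the worst-case quality.

def temporal_output(spatial_upscaled, history, confidence):
    """confidence in [0, 1]: 0 = history rejected, 1 = fully converged."""
    return confidence * history + (1.0 - confidence) * spatial_upscaled

print(temporal_output(0.25, 0.9, 0.0))  # camera cut: pure spatial fallback
print(temporal_output(0.25, 0.9, 1.0))  # converged: pure history
```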
 