Nvidia DLSS 3 antialiasing discussion

What methodology did they use to measure the input latency? Are they using a high-speed camera to track and measure the delay between a control input and a change in the scene on the display?

As most of us would have expected, buffering multiple frames isn't optimal from an end-user-experience standpoint. Even with frames being used before they're presented, it's astonishing in itself how badly UI elements are affected. It's a bit concerning that all these comparisons were made using DLSS performance mode, where conveniently the most information is being generated in the time domain despite it having the lowest image quality. It will be interesting to see whether more visual anomalies crop up in the higher-quality upscaling modes, where the least information is being generated in the time domain ...

DLSS 3 is an explicit tradeoff between visual fluidity and end user experience ...
 
I understand this is just a 'First Look', and given the restrictions in terms of equipment on hand and time, they couldn't do the deep dive they really wanted to. You do, though, have to sit through what feels like an extended promotion before you get to any semblance of analysis - and that is not a function of restricted access. If the shackles placed upon you prevent you from doing the critical analysis you've staked your reputation on, maybe...don't do it quite yet? At the very least, edit it down so you've got 10 minutes of "High framerates are good!" material instead of the first ~20. I mean, I get it, people are desperate to peek behind the hype curtain and this video will likely do numbers, but I was hoping for a little more than an upturned flap.

I think it's a pretty valid split of the content because the additional frames can be argued to have a significantly greater impact on the overall experience than the occasional visible anomaly. It's the same reason why the majority of console gamers prefer 60fps performance mode vs 30fps fidelity mode, despite the quality loss in that console example likely being much greater than the loss of going from DLSS2 to DLSS3. Granted, the console example comes with improved latency as well as fluidity, which is also a factor.

FSR2 could be another example though. Many (to an extent including myself) would consider it "good enough" vs DLSS2 despite usually having worse quality when analysed closely. In a similar way, from this early analysis it looks like DLSS3 could be good enough from a quality perspective that the additional frames are more than worth the largely unnoticeable quality loss. Especially so for the wider consumer base as opposed to enthusiasts like us.
 
DLSS 3 is an explicit tradeoff between visual fluidity and end user experience ...

This happens all the time in video game graphics, would you not say so? Triple buffering, motion blur, ultra settings, etc.
Optional things that increase render latency or input latency to improve the subjective experience elsewhere.

And like anything optional... It comes down to taste and one's priorities there I guess.
 
A comparison of DLSS to, say, checkerboarding and spatial upscalers is completely understandable, as those were the actual choices gamers had available to them - they were DLSS's competition in reconstruction. You can then look at the efficiency in terms of performance/image quality by comparing against what had come before.

DLSS 3 isn't meant to replace the motion smoothing methods offline renderers use, and the methods offline renderers use were never going to be used for games. It's showing how it's an 'improvement' over something that will never be applicable to gamers. I guess I just don't see the value in comparing the 'advancement' over something in a different medium. The two methods are not in competition.

Fair enough. I found the comparison pretty interesting. I'd still like to see it compared to TV motion smoothing, but I don't know how viable that is.

Overall I just didn't have a problem with the video. They acknowledged the limitations of what they're able to talk about, they explained how the technology is supposed to work, and then outlined the pros (smoothness) and the cons (quality). They acknowledged the complexity of even being able to accurately showcase the technology because of YouTube limitations, and the complexity of the subjective impression of frame generation during gameplay. I mean, maybe they could have addressed the cons first, but I honestly don't care. If they'd cut those segments in a different order, would it really have changed the video?

DF is not just a benchmarking site. They do feature pieces on technology, new and old. It's as much an enthusiast site about game technology as it is a benchmarking site, if not more so, though console/platform warriors focus more on the latter content.
 
FSR2 could be another example though. Many (to an extend including myself) would consider it "good enough" vs DLSS2 despite usually having worse quality when analysed closely. In a similar way, from this early analysis it looks like DLSS3 could be good enough from a quality perspective to make the additional frames more than worth a largely unnoticeable quality loss. Especially so for the wider consumer base as opposed to enthusiasts like us.

In the future, perhaps - DLSS3 is really only going to be accessible by 'enthusiasts' atm with the pricing of Ada. :)

We all make compromises with every setting we alter, but my main concern with the compromises DLSS3 may bring is whether DLSS Performance mode is necessary for it to function, or whether the artifacts its new frames generate go up/down with increasing DLSS quality - that's just not clear to me atm.

I mean you can see in this very thread alone, some users will never touch DLSS performance mode - now you're potentially basing this new mode on those frames which were of unacceptable quality for some users, and potentially introducing more artifacts on top of that. Otoh...you have the higher framerate perhaps masking those (or as Alex alluded to, maybe making some of that worse). Those compromises are seemingly starting to pile up a bit though.
 
though console/platform warriors focus more on the latter content.

They did last time with DLSS too, when it made its debut. Same for ray tracing. These things aren't even applicable to them to begin with; they have TAAU and other solutions (which are good in their own right).
DLSS3 is, as DF says, a really, really impressive piece of technology. Considering it's the first iteration, it does very well, much better than DLSS1 did at the start. Especially when CPU usage is so high these days, this is a welcome feature. That, and RT performance has increased a lot with Lovelace. Not to forget normal rasterization, which sees a healthy improvement.

The only complaint one could have is price, and with that anyone can agree; however, there's absolutely no reason to bash the products and technologies, or Digital Foundry. Considering the lowest of the newly announced Ada GPUs is at RTX 3090 level of performance, it's not that bad either. A future 4070/4060 is going to offer enough performance. The upper limit has basically been raised.

It's too bad console warriors can have their way here; at least moderate it like you do in the console sections.
 
What methodology did they use to measure the input latency? Are they using a high-speed camera to track and measure the delay between a control input and a change in the scene on the display?

As most of us would have expected, buffering multiple frames isn't optimal from an end-user-experience standpoint. Even with frames being used before they're presented, it's astonishing in itself how badly UI elements are affected. It's a bit concerning that all these comparisons were made using DLSS performance mode, where conveniently the most information is being generated in the time domain despite it having the lowest image quality. It will be interesting to see whether more visual anomalies crop up in the higher-quality upscaling modes, where the least information is being generated in the time domain ...

DLSS 3 is an explicit tradeoff between visual fluidity and end user experience ...

Most likely they're using Nvidia's LDAT for latency measurements.
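For context on what that involves: as I understand it, LDAT is basically a luminance sensor mounted on the display plus a tap on the mouse button, and the latency is just the time between the two events - no high-speed camera needed. Purely as an illustration of the idea (synthetic data, made-up sample rate and threshold, not LDAT's actual tooling):

```python
# Illustrative click-to-photon sketch: timestamp the input, then find the first
# on-screen luminance change after it. A real rig uses a photodiode or high-speed
# camera pointed at a region that reacts to input (e.g. a muzzle flash).
import numpy as np

SAMPLE_RATE_HZ = 1000          # assumed sensor sampling rate (1 sample per ms)

def click_to_photon_ms(luminance, click_sample, threshold=0.5):
    """Milliseconds between the input event and the first luminance jump after it."""
    baseline = luminance[:click_sample].mean()
    changed = np.nonzero(np.abs(luminance[click_sample:] - baseline) > threshold)[0]
    if changed.size == 0:
        return None                        # no visible response captured
    return changed[0] * 1000.0 / SAMPLE_RATE_HZ

# Synthetic trace: input fires at sample 200, the display reacts 55 samples later.
trace = np.zeros(1000)
trace[255:] = 1.0
print(click_to_photon_ms(trace, click_sample=200))   # -> 55.0
```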

Edit: Also visual fluidity is part of the end user experience. Input lag, resolution, and smoothness are all part of the experience and can be traded off against one another. I'm the guy with an RTX 3080 that will set graphics on low or turn on DLSS to hit 230 fps because I have a 240Hz G-Sync compatible monitor. Some people would rather game at 8K 30Hz because they're monsters and they enjoy that kind of thing. It's all tradeoffs. DLSS 3 is just another option. You can toggle it on and off independently, so I'm not sure what the big deal is, other than people who are worried that it will be used to obfuscate benchmarks. People had the same reaction when DLSS 1 came out. "It's bad because benchmarks", but no one is confused about the real performance of GPUs just because DLSS 1 came out.
 
In the future, perhaps - DLSS3 is really only going to be accessible by 'enthusiasts' atm with the pricing of Ada. :)

We all make compromises with every setting we alter, but my main concern with the compromises DLSS3 may bring is whether DLSS Performance mode is necessary for it to function, or whether the artifacts its new frames generate go up/down with increasing DLSS quality - that's just not clear to me atm.

I mean you can see in this very thread alone, some users will never touch DLSS performance mode - now you're potentially basing this new mode on those frames which were of unacceptable quality for some users, and potentially introducing more artifacts on top of that. Otoh...you have the higher framerate perhaps masking those (or as Alex alluded to, maybe making some of that worse). Those compromises are seemingly starting to pile up a bit though.

It looked from the video like you could select your DLSS quality and your frame generation independently. Maybe @Dictator can confirm. There really aren't any games that I would ever use DLSS performance mode on. It doesn't look good enough for 1440p.

Edit:
[screenshot: in-game DLSS settings menu]

The DLSS upscale quality is set to QUALITY while frame generation is enabled, and the slider is not disabled. Looks like you have full flexibility.

Even with just enthusiast cards, I'm a person that would consider using it in pretty much any non-competitive game to hit 240 fps. A game like Spider-Man at 240Hz with roughly the input lag of 120fps would be great. In the future, if I had a 500 Hz display, I'd think about it for every game. Some games have internal fps limits, and being able to double fps for perceived smoothness outside the bounds of the engine would be pretty cool. Overall it would just come down to whether I thought the quality trade-off was worth it. I just played the open beta for Modern Warfare 2, and the game is pretty blurry as is, and DLSS just made it really hard to see what was going on. So even DLSS quality there was not an obvious win for me.
 
Overall I just didn't have a problem with the video. They acknowledged the limitations of what they're able to talk about, they explained how the technology is supposed to work, and then outlined the pros (smoothness) and the cons (quality). They acknowledged the complexity of even being able to accurately showcase the technology because of YouTube limitations, and the complexity of the subjective impression of frame generation during gameplay. I mean, maybe they could have addressed the cons first, but I honestly don't care. If they'd cut those segments in a different order, would it really have changed the video?

...Yes? Like I said though, it was also the length of it. We had 20+ minutes extolling this wondrous new future of 200+ fps before getting into critique. In both aspects, the written piece is far superior - the qualifiers are mentioned far earlier and it gives better context to its potential pitfalls vs Nvidia's press.

If you don't care you don't care, fine then.

DF is not just a benchmarking site. They do feature pieces on technology, new and old. It's as much an enthusiast site about game technology as it is a benchmarking site, if not more so, though console/platform warriors focus more on the latter content.

This is the argument I've heard sometimes whenever critique of the tone of some of their coverage is brought up, but as I've said before - we're not discussing SIGGRAPH papers here. This interest only exists because commercial products exist to bring these technologies into the hands of consumers.

We're primarily waiting for Digital Foundry and other reputable outlets to look at this because the only information we've had on this before is via advertisements from the company selling it. We expect a confined view of the technology from the commercial property producing it. It's impossible to be wholly agnostic on a 'technology' that only has valence because it's being sold as a product.

I don't expect DF to review these cards like, say, Gamers Nexus and blast us with a stream of charts; they're the site that actually takes the time to explain how the techniques they're looking at actually function - there's certainly a preamble expected to get the viewer to understand what's being attempted before picking at pixels.

They also don't float in the ether above the petty commercial concerns of their audience though - the fact of this technology existing as a function of the profit motive of a private company cannot be segregated. How you frame the coverage matters in this context. We expect skepticism from any review, and my contention is that you can't just say "Well we're only talking about the technology", especially when it's a proprietary tech from one vendor. It is inescapable that it is a product at that point.
 
This happens all the time in video game graphics, would you not say so? Triple buffering, motion blur, ultra settings, etc.
Optional things that increase render latency or input latency to improve the subjective experience elsewhere.

And like anything optional... It comes down to taste and one's priorities there I guess.
Yes, but I imagine that having proper visual cues (uncorrupted UI elements) and lower-latency input response are sacred foundations of interactive digital media that won't be easily compromised for many ...

Not only is DLSS 3 making the above sacrifices, but it's also objectively lowering visual quality in the name of more visual fluidity, as opposed to its prior iteration (DLSS 2), which could perceptibly increase image quality depending on the scene/implementation ...

DLSS 3 in comparison to DLSS 2 is going backwards in most measures (input latency, visual quality, UI/UX) ...
 
People had the same reaction when DLSS 1 came out. "It's bad because benchmarks", but no one is confused about the real performance of GPUs just because DLSS 1 came out.

I mean...they were right though? DLSS 1 was shit. It was indeed 'about benchmarks', because it was presented - like DLSS3 - as achieving a huge performance boost over standard rendering, when it wasn't. Only with DLSS 2, released ~18 months later, did it start to approach the early marketing.
 
We're primarily waiting for Digital Foundry and other reputable outlets to look at this because the only information we've had on this before is via advertisements from the company selling it. We expect a confined view of the technology from the commercial property producing it. It's impossible to be wholly agnostic on a 'technology' that only has valence because it's being sold as a product.

I don't expect DF to review these cards like, say, Gamers Nexus and blast us with a stream of charts; they're the site that actually takes the time to explain how the techniques they're looking at actually function - there's certainly a preamble expected to get the viewer to understand what's being attempted before picking at pixels.

They also don't float in the ether above the petty commercial concerns of their audience though - the fact of this technology existing as a function of the profit motive of a private company cannot be segregated. How you frame the coverage matters in this context.
"We"? I think everyone here knows how Gamersnexus and other outlets will see DLSS 3, Raytracing and other features of Lovelace. Their opinion havent changed since Turing. So why should i care about some youtuber telling me that DLSS 3 isnt worth it? They said the same about DLSS and temporal upscaling. The present has proven them wrong. I hope they have learned from it.
 
Yes, but I imagine that having proper visual cues (uncorrupted UI elements) and lower-latency input response are sacred foundations of interactive digital media that won't be easily compromised for many ...

Not only is DLSS 3 making the above sacrifices, but it's also objectively lowering visual quality in the name of more visual fluidity, as opposed to its prior iteration (DLSS 2), which could perceptibly increase image quality depending on the scene/implementation ...

DLSS 3 in comparison to DLSS 2 is going backwards in most measures (input latency, visual quality, UI/UX) ...

So don't turn it on? Some people will, some people won't. It's an option. It has pros and cons.

DLSS, TAA, MSAA, dynamic resolution, vsync. All of these things have compromises regarding things like blurriness, artifacts, hardware scaling, aliasing, input lag.

There are likely practical limits to spatial upscaling, so the tech will move into other domains. We can see that DLSS performance looks significantly worse than DLSS quality, because the source data from the current frame and past frames is just not enough to upscale in high quality at most resolutions. I'll be curious to see when spatial upscaling hits its limit, and they can no longer improve the quality for a performance target because there just isn't enough source data. Like it or not, the industry is going to move to solutions like DLSS 3 that generate frames, because hardware will not scale in a way that allows for new cutting-edge rendering or high frame rates.
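Just to put rough numbers on the "not enough source data" point - using the commonly cited per-axis render scales for the DLSS 2 presets (treat the exact figures as approximate), the share of freshly rendered pixels per output frame falls off quickly:

```python
# Approximate internal render resolutions for a 4K output target.
TARGET = (3840, 2160)
SCALES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

for mode, s in SCALES.items():
    w, h = round(TARGET[0] * s), round(TARGET[1] * s)
    ratio = (w * h) / (TARGET[0] * TARGET[1])
    print(f"{mode:17s} {w}x{h}  (~{ratio:.0%} of output pixels rendered per frame)")
# Performance mode reconstructs each 4K frame from roughly a quarter as many freshly
# rendered pixels as the output contains, versus ~44% in Quality mode.
```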
 
"We"? I think everyone here knows how Gamersnexus and other outlets will see DLSS 3, Raytracing and other features of Lovelace. Their opinion havent changed since Turing. So why should i care about some youtuber telling me that DLSS 3 isnt worth it? They said the same about DLSS and temporal upscaling. The present has proven them wrong. I hope they have learned from it.

Yes, "we" were waiting for DF to actually provide more in depth coverage than Nvidia's promotional materials (and I'm not saying they didn't). What exactly is the contention here? I'm specifically pointing out that DF takes a different approach than other youtubers and that's the value they bring, GN just doesn't cover the same things and DF doesn't do that GN does either. I don't go to GN to learn about rendering technologies, and I don't go to DF for a spicy exposé on a company selling a defective line of PSU's.

I'm saying that ultimately, they both are reviewing 'products' though.
 
I don't agree with this. This DLSS piece was not a review. Sometimes Digital Foundry reviews things, sometimes they don't. Even Gamers Nexus isn't solely about reviews. Sometimes they just do news reporting.

If you think Gamers Nexus has done anything remotely similar to a video where they cover a product announcement for 20 minutes+ and don't constantly pepper it with sarcastic quips and reminders of the bullshit they've heard before, I question how much of GN you've actually seen.
 
I think it's a pretty valid split of the content because the additional frames can be argued to have a significantly greater impact on the overall experience than the occasional visible anomaly. It's the same reason why the majority of console gamers prefer 60fps performance mode vs 30fps fidelity mode, despite the quality loss in that console example likely being much greater than the loss of going from DLSS2 to DLSS3. Granted, the console example comes with improved latency as well as fluidity, which is also a factor.

FSR2 could be another example though. Many (to an extent including myself) would consider it "good enough" vs DLSS2 despite usually having worse quality when analysed closely. In a similar way, from this early analysis it looks like DLSS3 could be good enough from a quality perspective that the additional frames are more than worth the largely unnoticeable quality loss. Especially so for the wider consumer base as opposed to enthusiasts like us.

Personally I hate visible anomalies, and don't care that much about framerate beyond "good enough". But then that's me; I refuse to "upgrade" my Galaxy S10 because they stopped putting 1440p screens in the smaller bodies, and I run my photos through an AI denoiser and then an AI upscaler just to get the quality I want out of my pretty good modern ff camera.

But hey, you can turn off the frame insertion, so whatever. It's also probably worth it for VR - you definitely notice low framerates a lot more there - at least if frame guessing/reprojection can be extended to head/camera movement latency. And realtime optical flow tracking is good for TAA/reprojection stuff; missing/reprojecting shading changes has been an issue for a while.

I do find it funny though that Nvidia needed to have a special hw accelerator for their optical flow tracking, while a hobbyist put up a pretty good one that works on any hardware on GitHub a while ago: https://github.com/JakobPCoder/ReshadeMotionEstimation
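To be fair, a fixed-function engine and a shader-based approach aren't doing quite the same amount of work per frame, but the basic idea of estimating motion from two frames alone is simple enough to sketch. This toy block-matching example is just an illustration that it's doable in plain software - it has nothing to do with how that ReShade shader or Nvidia's hardware unit actually work:

```python
# Brute-force block-matching motion estimation in NumPy (illustrative only).
import numpy as np

def estimate_flow(prev, curr, block=16, search=8):
    """Return one (dy, dx) vector per block, pointing back to where its content
    was in the previous frame, found by minimising sum-of-absolute-differences."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(np.int64)
            best, best_vec = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(prev[yy:yy + block, xx:xx + block] - ref).sum()
                    if sad < best:
                        best, best_vec = sad, (dy, dx)
            flow[by, bx] = best_vec
    return flow

# Toy test: a bright square moves 4 pixels to the right between frames.
prev = np.zeros((64, 64), dtype=np.uint8); prev[20:36, 20:36] = 255
curr = np.zeros((64, 64), dtype=np.uint8); curr[20:36, 24:40] = 255
print(estimate_flow(prev, curr)[1, 1])   # roughly (0, -4): that block's content came from 4 px to the left
```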
 
I do find it funny though that Nvidia needed to have a special hw accelerator for their optical flow tracking, while a hobbyist put up a pretty good one that works on any hardware on GitHub a while ago: https://github.com/JakobPCoder/ReshadeMotionEstimation

An FF (fixed-function) unit will generate an optical flow image quickly at a determined quality level, which I think is the point: reducing latency, not taking 10 milliseconds to work or whatever.

But even then, DLSS 3 is not just an optical flow image being generated and nothing being done with it. There is an ML program as well that decides how to combine the information from motion vectors and the optical flow image generated by that FF unit to make a convincing half-step in game time and motion.
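Conceptually you could picture that combination step something like the sketch below: warp the last rendered frame half a frame forward along per-pixel motion, preferring the engine's motion vectors where they exist and falling back to optical flow for the things they don't cover (particles, shadows, screen-space effects). To be clear, this is a hand-wavy illustration with made-up inputs, not Nvidia's actual algorithm - the real combining is done by a trained network:

```python
# Naive "half-step" frame warp (illustrative only).
import numpy as np

def interpolate_half_frame(prev, motion_vectors, optical_flow, mv_valid):
    """prev: (H, W) image; motion_vectors / optical_flow: (H, W, 2) per-frame pixel
    displacements; mv_valid: (H, W) bool mask of pixels the engine's vectors cover."""
    h, w = prev.shape
    # Per pixel, trust the engine's motion vectors where valid, optical flow elsewhere.
    motion = np.where(mv_valid[..., None], motion_vectors, optical_flow)
    ys, xs = np.mgrid[0:h, 0:w]
    # Splat each source pixel half of its displacement forward.
    ty = np.clip((ys + 0.5 * motion[..., 0]).round().astype(int), 0, h - 1)
    tx = np.clip((xs + 0.5 * motion[..., 1]).round().astype(int), 0, w - 1)
    out = np.zeros_like(prev)
    out[ty, tx] = prev[ys, xs]     # last write wins; real methods resolve conflicts and holes
    return out
```

The hard part, and presumably where the ML comes in, is deciding per pixel which source of motion to trust and how to fill the holes and overlaps that a naive warp like this leaves behind.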
 
An FF (fixed-function) unit will generate an optical flow image quickly at a determined quality level, which I think is the point: reducing latency, not taking 10 milliseconds to work or whatever.

But even then, DLSS 3 is not just an optical flow image being generated and nothing being done with it. There is an ML program as well that decides how to combine the information from motion vectors and the optical flow image generated by that FF unit to make a convincing half-step in game time and motion.
Yup.

Motion vectors are most likely the primary way opaque objects get constructed in reprojection.
Optical flow is for the other changes on screen.

It's a shame those early frame-doubling methods were never released. (Force Unleashed 2, etc.)

 
It looked from the video like you could select your DLSS quality and your frame generation independently. Maybe @Dictator can confirm. There really aren't any games that I would ever use DLSS performance mode on. It doesn't look good enough for 1440p.

Edit:
[screenshot: in-game DLSS settings menu]

The DLSS upscale quality is set to QUALITY while frame generation is enabled, and the slider is not disabled. Looks like you have full flexibility.

Even with just enthusiast cards, I'm a person that would consider using it in pretty much any non-competitive game to hit 240 fps. A game like Spider-Man at 240Hz with roughly the input lag of 120fps would be great. In the future, if I had a 500 Hz display, I'd think about it for every game. Some games have internal fps limits, and being able to double fps for perceived smoothness outside the bounds of the engine would be pretty cool. Overall it would just come down to whether I thought the quality trade-off was worth it. I just played the open beta for Modern Warfare 2, and the game is pretty blurry as is, and DLSS just made it really hard to see what was going on. So even DLSS quality there was not an obvious win for me.

An added question is whether or not frame generation can be enabled without enabling upscaling at all. If not, is it an actual technical limitation, or just a limitation of how it's set up currently? It does look like it can't be enabled in conjunction with DLAA currently.

I'm wondering if DF and @Dictator will be doing follow-up coverage specifically looking at how this functions at sub-4K resolutions, both 1080p and 1440p, but also at much higher framerates targeting 240fps or more (e.g. does it help with artifacts?). The greatest number of 144Hz+ displays currently in the field are going to be at those two resolutions (especially 1080p). Frame generation, in theory at least to me, has always had the greatest potential in somewhat decoupling performance from the rest of the system's limitations (making super-high fps essentially practical for AAA single-player games and not just esports titles), and coupled with very-high-refresh displays (especially future ones) it will push motion clarity beyond the limitations we currently have to work with.

Also how this works in conjunction with things such as v-sync or other frame rate limiters. This also has some implications in terms of usability in conjunction with BFI displays.
 