Nvidia DLSS 3 antialiasing discussion

I don't think it's accurate to think of it as a full frame of latency.

If I move my mouse to the right between frames 1 and 2, I will already see the camera panning to the right in generated frame 1.5 since the movement from 1 -> 2 is affecting that generated frame.

If input latency were truly one whole frame long as you suggest, the first frame where I'd see the camera pan to the right (as per my input) would be when frame 2 hits my screen.

Right, the question is how much later frame 2 is displayed when DLSS 3 is enabled. The total delay is the interpolation time plus any artificial delay for frame pacing. Ideally you want this total delay to be half the render time of a “real” frame, which should be easy assuming interpolation is much faster than rendering. Another assumption is that interpolation of frame 1.5 runs in parallel with rendering frame 3 and doesn't slow it down.
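To put rough numbers on that (purely illustrative figures, not measurements of DLSS 3):

```python
# Illustrative latency budget for frame generation (made-up numbers).
render_ms = 16.7                        # time to render one "real" frame (~60 fps)
interp_ms = 3.0                         # assumed cost of generating the in-between frame
pacing_ms = render_ms / 2 - interp_ms   # artificial delay so output frames land evenly spaced

total_added_ms = interp_ms + max(pacing_ms, 0.0)
print(f"pacing delay: {pacing_ms:.1f} ms, total added delay: {total_added_ms:.1f} ms")
# Ideally total_added_ms is about render_ms / 2 (half a "real" frame), which only
# works out if interpolation is much cheaper than rendering the frame itself.
```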
 
It's just strange to me. We get to the point where tech advancement begins to slow, where advancement in computer graphics begins to somewhat stagnate... and then there's this boom with AI and ML which allows us to do so much more than we traditionally have any right to... and some people are like "it's fake/not real"... I don't get it.

This area in computer graphics... is the most exciting it's been for a while now, at least in my opinion. You watch the people working on this stuff and they are visibly excited for the future. They feel like they have purpose and they know that this type of work is the key to bringing us to the next level, and they know there's lots of advancement to be made in these areas.

The box is open, and there's no closing it... With AI/ML/DL we can generate visuals far more efficiently than ever before, rendering fewer pixels and inferring the rest. It only makes sense that it advances and makes its mark on other areas of visuals. Looking past that, imagine what it could do for audio in games. There's so much advancement that could come to audio from this it's not even funny. And that of course leads into game development itself. AI will play an integral part in game development in the future... allowing developers to create more things at faster rates than ever thought possible.
 
I don't think it's accurate to think of it as a full frame of latency.

If I move my mouse to the right between frames 1 and 2, I will already see the camera panning to the right in generated frame 1.5 since the movement from 1 -> 2 is affecting that generated frame.

If input latency were truly one whole frame long as you suggest, the first frame where I'd see the camera pan to the right (as per my input) would be when frame 2 hits my screen.
It's a full frame of latency because the application polling the user input and giving you feedback is set back by at least that length. The generated frame isn't the application giving you actual feedback or letting you poll for user input. It's basically there to increase visual fluidity ...
 
It's a full frame of latency because the application polling the user input and giving you feedback is set back by at least that length. The generated frame isn't the application giving you actual feedback or letting you poll for user input. It's basically there to increase visual fluidity ...
So only when frame 2 hits the screen, one frame later than it would without DLSS 3 frame generation, will I see the first signs of having moved my mouse?
 
It's just strange to me. We get to the point where tech advancement begins to slow, where advancement in computer graphics begins to somewhat stagnate... and then there's this boom with AI and ML which allows us to do so much more than we traditionally have any right to.. and some people are like 'It's fake/not real"... I don't get it.

It's disruptive. Creating real-time graphics without a traditional graphics pipeline allows for more competition. Look at all these companies with ML accelerators; they don't need an efficient architecture for FP32 or FP64 TFLOPs.
 
You do know that input latency is decoupled from render time? That already makes most frames "fake".
Also, it's not as though people move the mouse in chaotic random directions every 16 ms or so while aiming. I'd bet 90% of motion, if not more, is linear anyway, and that's why people didn't complain about the pre-rendering queue for the last few decades. It's interesting how it's suddenly a problem now.
 
So only when frame 2 hits the screen, one frame later than it would without DLSS 3 frame generation, will I see the first signs of having moved my mouse?
Academic thought experiment here, but suppose the intervals between you creating the inputs (mid-frame) and the application polling for user inputs (every frame) never overlap. Does the game in question change scenes or not?

The answer is no, because the game is unable to read your inputs, so you can never enter the gameplay loop. By buffering an additional frame, this loop is now delayed by exactly that amount because it takes longer to restart user input polling ...
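A toy way to see the extra frame of delay (hypothetical frame times, just to illustrate the argument, not a measurement of DLSS 3):

```python
# Toy model: input sampled at the start of frame N reaches the screen one extra
# interval later if the rendered frame is held back behind a generated frame.
frame_ms = 16.7  # assumed render interval of a "real" frame

def input_to_display_ms(buffered_frames: int) -> float:
    render_time = frame_ms                    # frame N is rendered over one interval
    queue_time = buffered_frames * frame_ms   # time it waits in the queue before display
    return render_time + queue_time

print(input_to_display_ms(0))  # ~16.7 ms: shown as soon as it finishes rendering
print(input_to_display_ms(1))  # ~33.4 ms: held back one frame for interpolation
```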
 
Academic thought experiment here, but suppose the intervals between you creating the inputs (mid-frame) and the application polling for user inputs (every frame) never overlap. Does the game in question change scenes or not?

The answer is no, because the game is unable to read your inputs, so you can never enter the gameplay loop. By buffering an additional frame, this loop is now delayed by exactly that amount because it takes longer to restart user input polling ...
Why do we assume that application input polling won't happen at the "final" DLSS 3 framerate? Reflex is there; it can do this if it needs to: feed the input back into the application at the final post-FG framerate, no?
 
Academic thought experiment here, but suppose the intervals between you creating the inputs (mid-frame) and the application polling for user inputs (every frame) never overlap. Does the game in question change scenes or not?

The answer is no, because the game is unable to read your inputs, so you can never enter the gameplay loop. By buffering an additional frame, this loop is now delayed by exactly that amount because it takes longer to restart user input polling ...

Is this a relevant scenario? Input processing should be decoupled from, and run at a much higher frequency than, the graphics renderer.
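Something like this is what I mean by decoupled (a minimal sketch with a stubbed read_mouse_dx(); a real engine would hook the platform's raw input APIs instead):

```python
import threading
import time

# Shared snapshot of the most recent input sample.
latest_input = {"mouse_dx": 0.0}
stop = threading.Event()

def read_mouse_dx() -> float:
    """Stub standing in for a real OS/driver input read."""
    return 0.0

def poll_input() -> None:
    # Poll at ~1000 Hz, independent of how fast frames are rendered.
    while not stop.is_set():
        latest_input["mouse_dx"] = read_mouse_dx()
        time.sleep(0.001)

def render_loop(frames: int = 10) -> None:
    for _ in range(frames):
        dx = latest_input["mouse_dx"]  # consume the freshest sample available
        # ... update the camera with dx and render the frame ...
        time.sleep(1 / 60)             # pretend rendering takes ~16.7 ms

threading.Thread(target=poll_input, daemon=True).start()
render_loop()
stop.set()
```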
 
It's just strange to me. We get to the point where tech advancement begins to slow, where advancement in computer graphics begins to somewhat stagnate... and then there's this boom with AI and ML which allows us to do so much more than we traditionally have any right to... and some people are like "it's fake/not real"... I don't get it.

This area in computer graphics... is the most exciting it's been for a while now, at least in my opinion. You watch the people working on this stuff and they are visibly excited for the future. They feel like they have purpose and they know that this type of work is the key to bringing us to the next level, and they know there's lots of advancement to be made in these areas.

The box is open, and there's no closing it... With AI/ML/DL we can generate visuals far more efficiently than ever before, rendering fewer pixels and inferring the rest. It only makes sense that it advances and makes its mark on other areas of visuals. Looking past that, imagine what it could do for audio in games. There's so much advancement that could come to audio from this it's not even funny. And that of course leads into game development itself. AI will play an integral part in game development in the future... allowing developers to create more things at faster rates than ever thought possible.

I have to challenge this take a touch. Rendering tech advancements have not been slowing down. Creating techniques for APIs and GPUs to render frames more efficiently has been an area where we continually see research and results. This is just another bullet point to add to the list of checkerboard rendering, temporal reprojection, DLSS 2.X, FSR, etc. Rendering smarter, not harder, has been something the great minds of this space have been preaching for years.

More specifically, this area of frame generation has been seen before. Serious work on frame doubling, reprojection/interpolation and getting past hardware limitations has been an ongoing area of research since the launch of VR. Asynchronous Spacewarp 2.0 even added the ability to read data from the Z-buffer of previous frames to create a more accurate reprojected frame. Looking at the link provided, some of the main points mirror those of DLSS 3 (lessening CPU/GPU impact, increased framerate, low input latency). SteamVR is a different implementation that more accurately calls this type of tech what it is: "Motion Smoothing". To count and market these frames towards the game's total framerate is disingenuous and hand-waves away some technical integrity. (Admittedly, to date, the use cases have been limited to VR. I won't speculate why, since that would be a whole other post.)

While some of the posts here appear to be a bit inflammatory, the fears are valid. It's goalpost moving. We will be subject to "lower quality pixels" while seeing GPU vendors advertise these final frame outputs to lengthen their bar charts and embellish their performance gains, all while disregarding any pitfalls of the technology. In my opinion, saying a 4090 is 4x faster than a 3090 Ti while factoring in frame generation tech is a misleading way to represent the gen-over-gen leap.

This forum is a space where we will freeze frames, zoom in and examine pixel areas in chunks to champion certain render methods over others. So there is precedent for being critical.

It is nice that we are moving towards ML/DL to assist with this area, but the idea of motion smoothing and inserting frames isn't new and it isn't above skepticism.
 
This is where I know you are in fact LYING to yourself: TAA images contain the same amount of artifacts that you describe, often more, yet I don't hear you complain about them.
Rendering is full of shortcuts; there are so many half-resolution, quarter-resolution and 1/8-resolution effects in any given scene that you can't even count them anymore. Do you consider those fake too?
You might want to go search my post history. I've complained about TAA, except DLSS is like TAA but even worse. My disdain for TAA and other reconstructive techniques cannot be overstated.
 
What games and on what platforms do you play?
I own a PS5, Xbox Series X, Nintendo Switch, an RTX 3080/5900X PC, and I recently purchased a Steam Deck that's awaiting delivery. In terms of games, the last thing I played was the Modern Warfare 2 beta, but I spend more time reading forums than playing games.
 
You might want to go search my post history. I've complained about TAA, except DLSS is like TAA but even worse. My disdain for TAA and other reconstructive techniques cannot be overstated.

What do you believe is a currently existing method to deal with specular aliasing outside of TAA that is actually performant enough though? There are certainly poor TAA implementations out there, but more often than not their main detriment of texture blurriness can at least be addressed in some fashion by just adding sharpening.

Perfect solution? Of course not. There are always compromises, and TAA/DLSS just happens to be the latest attempt to address the problems of aliasing in a manner that is feasible, and does it far better than previous methods. I think DLSS has some detriments that haven't gotten enough attention personally, but it's always a case of compromises with anti-aliasing. I mean sure, I think TAA 'sucks' compared to 8X SSAA, but naturally that's not workable.

A title like Arkham Knight, for example, one that currently has no temporal component, is basically unfixable with respect to remedying the tremendous pixel popping and shimmering, outside of rendering at 8K and downsampling, which is obviously not feasible for the majority of cards out there. Christ, give me TAA or DLSS in that game, please.
 
While some of the posts here appear to be a bit inflammatory, the fears are valid. It's goalpost moving. We will be subject to "lower quality pixels" while seeing GPU vendors advertise these final frame outputs to lengthen their bar charts and embellish their performance gains, all while disregarding any pitfalls of the technology. In my opinion, saying a 4090 is 4x faster than a 3090 Ti while factoring in frame generation tech is a misleading way to represent the gen-over-gen leap.

Exactly. This is not simply a SIGGRAPH paper; it's being hyped as part of a promotional product announcement, and one that, at least in the short term, is only available in exorbitantly priced versions of said product. This technology does not exist in a vacuum, and it is perfectly reasonable to be skeptical about advertising. This is not just skepticism about DLSS 3 in and of itself, but about the rationale where it's used to signify a massive performance uplift in a new architecture by comparing it to something it doesn't quite do. It is not equivalent to 120 fps 'native' rendering, full stop. It may be perfectly fine, and perhaps excellent - if that's the only way you would be able to reach that performance otherwise! But it's being promoted as equivalent by reducing it to a simple performance metric vs. a previous product without it.

My 3060, for example, is not the equivalent of a Radeon 6800 just because it can run DLSS performance mode in a game at roughly the same FPS as a 6800 at native 4K. It's a value-add for my 3060, no doubt, and deserves mention; it certainly factored into my purchase decision beforehand. But I wouldn't stick it up on eBay and advertise it as having 'equivalent 4K performance' to a 6800 because of it.

Remember that many of the early claims about DLSS also took 18+ months to actually start to be validated, and it was further along after that before some rather significant ghosting issues were addressed. DLSS 1.0 was accompanied by the same level of hype from Nvidia, even though it sucked. DLSS 2.0 wasn't just 'improved'; it was like a wholly different technology by comparison.

I sincerely hope DLSS 3.0 impresses out of the gate, and I completely agree that AI methods for image reconstruction are the future. But I was also hoping to see DLSS advancement in quality too, not just another method to further increase performance at the cost of potentially more artifacting - I think there is definitely quite a bit of room for improvement in DLSS image quality myself, even before adding in completely reconstructed entire frames.
 
Exactly. This is not simply a SIGGRAPH paper; it's being hyped as part of a promotional product announcement, and one that, at least in the short term, is only available in exorbitantly priced versions of said product. This technology does not exist in a vacuum, and it is perfectly reasonable to be skeptical about advertising. This is not just skepticism about DLSS 3 in and of itself, but about the rationale where it's used to signify a massive performance uplift in a new architecture by comparing it to something it doesn't quite do. It is not equivalent to 120 fps 'native' rendering, full stop. It may be perfectly fine, and perhaps excellent - if that's the only way you would be able to reach that performance otherwise! But it's being promoted as equivalent by reducing it to a simple performance metric vs. a previous product without it.

My 3060, for example, is not the equivalent of a Radeon 6800 just because it can run DLSS performance mode in a game at roughly the same FPS as a 6800 at native 4K. It's a value-add for my 3060, no doubt, and deserves mention; it certainly factored into my purchase decision beforehand. But I wouldn't stick it up on eBay and advertise it as having 'equivalent 4K performance' to a 6800 because of it.

Remember that many of the early claims about DLSS also took 18+ months to actually start to be validated, and it was further along after that before some rather significant ghosting issues were addressed. DLSS 1.0 was accompanied by the same level of hype from Nvidia, even though it sucked.

I sincerely hope DLSS 3.0 impresses out of the gate, and I completely agree that AI methods for image reconstruction are the future. But I was also hoping to see DLSS advancement in quality too, not just another method to further increase performance at the cost of potentially more artifacting - I think there is definitely quite a bit of room for improvement in DLSS image quality myself, even before adding in completely reconstructed entire frames.
And the vocal Nvidia users still heralded it as amazing tech in the face of it being clearly terrible.
 
This is where I know you are in fact LYING to yourself: TAA images contain the same amount of artifacts that you describe, often more, yet I don't hear you complain about them.
Rendering is full of shortcuts; there are so many half-resolution, quarter-resolution and 1/8-resolution effects in any given scene that you can't even count them anymore. Do you consider those fake too?

How DLSS handles these lower-resolution effects, though, can be a significant mark against it in some games. It's not simply a problem of displaying them 'blurrier' than their full-resolution counterparts; that's what happens with TAA when playing at native res. You can tell the quality degradation of those effects, sure, but that's it: it's just lower res.

DLSS, however, can basically 'break' sometimes when it encounters these. Not necessarily from trying to upscale the lower-res effects; that's pretty evident in things like ray-traced reflections, but usually it's not that noticeable for me - it's just a little blurrier, and that's fine for the performance I'm getting and relative to the rest of the image quality. It's when lower-resolution buffers are combined with similar effects in certain ways that some implementations can kind of freak out. This is of course more evident with DLSS settings lower than Quality, but it's still evident in Quality mode too (I would argue regardless that DLSS in particular has been promoted in some circles as providing far better results than earlier methods of reconstruction due precisely to its ability to start from a lower native display res). I can see these types of issues in Spider-Man, Horizon Zero Dawn, Wolfenstein: Youngblood, and games like Rise of the Tomb Raider and Death Stranding, especially when motion blur is enabled (easy enough fix, though, to disable motion blur).

I don't think this is necessarily some inherent flaw with DLSS as a technology, mind you - there are certainly games with tons of lower-res alpha-blended effects that don't display these artifacts either, such as Crysis 2 Remastered and Observer: System Redux, at least so far in my experience. Those games have a constant stream of distorted transparency effects overlaid on one another and don't exhibit these glaring issues at all. They're not perfect either, but their rendering flaws are exceedingly rare by comparison.

But yeah, there are definite cases where DLSS can introduce artifacts that don't exist in TAA, even when equalizing their pre-reconstruction starting res. The lower the starting res, the less info DLSS has to base its interpretation on, and when an effect that's natively rendered at quarter res is encountered and then blended with another one, and you're using DLSS Performance mode on top of it, sometimes DLSS can get confused.
 
It doesn't need to conjure anything. The game reads your input and the CPU updates the camera, animations, physics, etc. The GPU renders the frame. This new frame is input to the OFA and the frame generation process. There's no guessing about what happens “next”; the neural network's guessing is all about what happened between the last two rendered frames.
If I press right on the analog stick to rotate the camera towards the right, how will the frame generator show the ~5-10% of screen area on the far right that was not present in the two previous frames that DLSS 3 analyzed? That part of the frame does indeed need to be conjured. And coincidentally, all current videos of DLSS 3 with frame generation show the player either traveling forward or the video being cropped [Spidey jumping out of his window sideways in the DF vid].
 
If I press right on the analog stick to rotate the camera towards the right, how will the frame generator show the ~5-10% of screen area on the far right that was not present in the two previous frames that DLSS 3 analyzed? That part of the frame does indeed need to be conjured.
DLSS 3 doesn't use 2 previous frames to generate a new frame. It generates an intermediate frame between 2 rendered frames.

As you pan right, it doesn't need to generate stuff out of thin air because the latest rendered frame is a step ahead already and that information will be used in the frame generation process.
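That ordering is easy to sketch; interpolate() below is just a stand-in for the optical flow + network step, not Nvidia's actual implementation:

```python
def interpolate(prev_frame: str, next_frame: str) -> str:
    # Stand-in for the step that blends two already-rendered frames.
    return f"gen({prev_frame},{next_frame})"

rendered = ["F1", "F2", "F3"]  # F2 already reflects the camera move made between F1 and F2
displayed = []
for prev, nxt in zip(rendered, rendered[1:]):
    displayed.append(prev)                    # show the real frame...
    displayed.append(interpolate(prev, nxt))  # ...then the generated in-between frame
displayed.append(rendered[-1])

print(displayed)  # ['F1', 'gen(F1,F2)', 'F2', 'gen(F2,F3)', 'F3']
# gen(F1,F2) can only be shown after F2 has been rendered, which is where the extra
# delay discussed above comes from, but it already contains motion taken from F2.
```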
 