Nvidia DLSS 3 antialiasing discussion

I have seen several artifacts in that video that Alex missed, but no doubt once they release their in-depth analysis they will acknowledge and cover them.

One pretty important thing that I try to stress at the end of the video is the persistence of the images. When you are watching on YouTube, you are seeing each image persist for twice as long. The subjective experience of artefacting is really different here than with a TAA solution. A TAA solution will have each and every consecutive frame show the same artefacts in a consistent line of motion; here, you are seeing an on/off flashing of artefacts. Kinda like how a light flickers... or how BFI works.
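To put rough numbers on that (my own back-of-envelope math, not something from the video): at 120 fps every frame, generated or not, is visible for roughly 8 ms, while a 60 fps YouTube capture holds each image for twice that.

```python
# Back-of-envelope persistence math (illustrative, not from the video).
def persistence_ms(fps: float) -> float:
    """How long a single frame stays on screen at a given framerate."""
    return 1000.0 / fps

print(f"live 120 fps output:  {persistence_ms(120):.1f} ms per frame")
print(f"60 fps YouTube video: {persistence_ms(60):.1f} ms per frame")
# live 120 fps output:  8.3 ms per frame
# 60 fps YouTube video: 16.7 ms per frame
```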

So part of actually analysing this is putting into words and images which artefacts actually present as noticeable at full speed, with this flashing behaviour, and then describing where they happen in a game.

As an example: I highlight the artefact around Spiderman's feet in the video. But in-game, at full speed, I literally could not see the artefact around Spiderman's feet in that cutscene I showed in the video. I only noticed it when manually combing my footage. That artefact took up a large portion of the screen, yet I did not notice it.

But conversely, much smaller artefacts - the lines that appear around Spiderman's legs when running up the building - were much more noticeable to me at full speed. I saw them without combing footage. I saw them even though they were smaller in screen-space size and severity than the comparatively much larger artefact on Spiderman's feet in the cutscene.

This strobing behaviour at 120 fps has a pretty intense effect on which artefacts are significant. It seems like one-off frame issues go unnoticed, yet issues that persist over multiple animation arcs produce "flicker" and are noticeable.
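A toy way to visualise that difference (my own sketch, assuming DLSS 3's pattern of every other displayed frame being generated):

```python
# Toy sketch of why persistent errors read as "flicker" (purely
# illustrative; assumes every other displayed frame is generated).
frames = 12

# One-off glitch: a single generated frame is bad -> one ~8 ms blip.
one_off = ["X" if i == 5 else "." for i in range(frames)]

# Persistent glitch: every generated (odd) frame is bad across an
# animation arc -> the artefact strobes on/off, like a flickering
# light or black frame insertion.
persistent = ["X" if i % 2 else "." for i in range(frames)]

print("one-off:   ", " ".join(one_off))     # . . . . . X . . . . . .
print("persistent:", " ".join(persistent))  # . X . X . X . X . X . X
```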

That is something I learned when making this, and something I will integrate into the method of analysing this.
 

New DF video.

Good video. I like how DF approached it, breaking down the fundamentals of the tech and its benefits for the viewing audience.

I wonder if, in future videos, they will talk about how it differs from VR's interpolated frame-doubling feature, as opposed to programs such as Adobe's and Topaz's, which aren't designed for this content, look bad, and make DLSS 3 look much better in comparison.

Either way, I still consider it (and will call it) DLSS 3: Motion Smoothing as opposed to frame amplification.

I wonder when we will actually start using DL or ML for better AI in games.
 

New DF video.

I think many of the people who level the charge at DF of being 'Nvidia shills' are silly, and regardless, fanboys of that nature are never really going to be dissuaded by anything. However... I also think taking until 27 minutes into a 32-minute video before you even start to get into image quality criticisms isn't exactly going to help matters either.

I understand this is just a 'First Look', and wrt the restrictions in terms of equipment on hand and time, they couldn't do the deep dive that they really wanted to. You do, though, have to sit through what feels like an extended promotion before you get to any semblance of analysis - and that is not a function of restricted access. If the shackles placed upon you prevent you from doing the critical analysis you've staked your reputation on, maybe... don't do it quite yet? At the very least, edit it down so you've got 10 minutes of "High framerates are good!" material instead of the first ~20. I mean, I get it - people are desperate to peek behind the hype curtain and this video will likely do numbers - but I was hoping for a little more than an upturned flap.

Still, there were at least - eventually - some somewhat interesting tidbits to draw from this at this early stage, unfortunately not all of them positive. Bear in mind you might as well mentally preface every observation here with "At this stage", as I understand this is pre-release, and we've certainly seen significant improvements from Nvidia with DLSS before.

Latency: For one, the wildly varying numbers demonstrate how much of latency is still in the hands of the engine/developer. Secondly, you can't necessarily just factor in framerate improvements and then calculate "Well, if this was native and we double the framerate, we should get half the latency" - it doesn't always scale that way, as this demonstrates:

[Attachment: framerate/latency comparison chart]

DLSS 2 here has improved framerates by +300%. However, latency 'only' decreased by around half, and by less with Reflex enabled at both native and DLSS. So you can't always scale a potential latency reduction linearly with framerate. DLSS 3 does relatively well here. There's not necessarily a reason to believe that if it were possible to get a 529% performance improvement without involving DLSS 3, we would be saving any significant latency regardless.
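A minimal sketch of why that is (made-up numbers, not DF's measurements): only the frame-time-dependent slice of the end-to-end chain shrinks as framerate rises, while input sampling, simulation, and display latency stay put.

```python
# Why latency doesn't halve when framerate doubles (toy model, made-up numbers).
def end_to_end_latency_ms(fps, fixed_ms=20.0, frames_buffered=2.0):
    """fixed_ms: input + simulation + display latency (assumed constant).
    frames_buffered: frame times sitting between input and photons."""
    return fixed_ms + frames_buffered * (1000.0 / fps)

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {end_to_end_latency_ms(fps):.1f} ms")
#  30 fps -> 86.7 ms
#  60 fps -> 53.3 ms   (double the framerate, nowhere near half the latency)
# 120 fps -> 36.7 ms
```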

OTOH:

[Attachment: latency comparison chart from a heavily CPU-limited game]

Yikes. I don't think this was necessarily expected. My understanding was that DLSS 3 can't really reduce latency from native (when I say 'native' in the context of DLSS 3 wrt latency, I mean DLSS 2), but not that it would increase it appreciably. This is a heavily CPU-limited game, mind you, but still - DLSS Performance shows a significant drop in latency from non-reconstructed 4K, and DLSS 3's frame generation increases it back to the point where it's like pure 4K without Reflex. 😟 Now look, 38ms latency is still good - it's certainly good enough to still feel very 'connected' to a 120+fps output. But I think I, and others, felt that perhaps some very slight latency increase over DLSS 2 was the 'worst case' scenario, not that it could push latency back up to where it was before the performance savings of DLSS were factored in at all.

In terms of image quality, from a pure technology perspective it was impressive to see how well DLSS 3 can perform its motion interpolation compared to far more expensive offline methods, and I get that's largely the point of looking at it in this way. It does speak to Nvidia's experience and the power of dedicated hardware, no doubt. From the perspective of a gamer, though, I'm not quite sure of the utility in this context. I mean, motion interpolation, offline or not, is relatively easy to detect in certain scenarios, hence the extreme skepticism from some camps about DLSS 3's announcement. So comparing against such technologies is not exactly alleviating those concerns - yes, it's very impressive what it can do in a tiny fraction of the time! But ultimately it's a comparison no one will ever really encounter when using a 40X0 GPU; it's not a choice they even have to consider. Bear in mind as well, as a YouTube commenter reminded me, that these offline interpolation methods don't have access to the same motion vector data that games are providing to DLSS. So it's not really comparing the same thing regardless.

Take, for example, the early shots where Spidey is jumping out the window: horrible ghosting with the Adobe/Topaz methods. Yet the hand area, which per the video "does not have that error" of those methods (and that's true - it certainly doesn't), still doesn't look that great:

[Attachment: screenshot of the hand-area artefact]

Now, is that from DLSS 2 or 3? Dunno. But we see further examples later on of DLSS's specific errors, and they can be pretty egregious:

[Attachment: screenshot of a more egregious frame generation artefact]

The argument that this kind of error is only on-screen for ~8ms is of course valid; in the greater context of a 120+fps game, the perceived temporal resolution increase from the higher framerate will further mask such errors. But clearly, these are not the same as 'native' frames. I get it as an argument specifically for Spiderman though, as the CPU limitation in that game makes DLSS 3 the only way to achieve these framerates on any system. But "you would hardly notice the artifacts" isn't something you usually hear in the context of playing on a high-end GPU; it's something you hear more with a budget product where you don't really have any other option.

Speaking of "sacrifices", what's up with every comparison being done using Performance DLSS? That's a mode I'm usually loath to drop down to on my 3060. Is it required for DLSS 3 to function at all, or does it just present it in the best light? DLSS 3 frame artifacts compared against fully native resolution are one thing, but it's another matter if they're being compared against DLSS Performance, which has plenty of artifacts of its own already. Do even 3080 users typically use Performance mode DLSS?

Overall, to me this just ultimately further illustrates how utterly ridiculous Nvidia's marketing wrt the 4090's 'performance' has been. DLSS 3, as a technology, is very interesting, not least for the fact that it can alleviate the CPU burden of reaching such framerates in certain games; we're not getting massively better CPU performance anytime soon, and as such, interpolation methods will likely remain the only option to reach these framerates in many cases. But as part of a product - which is why we're all interested in this in the first place - promoting it as providing a massive % increase over the previous gen was always suspect, and this small peek just makes that look even more laughable. You are getting that performance at considerable sacrifices; it's just comparing two different things.
 

From their own capture card. Otherwise do you think that every Zen 4 review was sponsored by AMD?
No, it's called an embargo date. I'm not aware of the embargo date being lifted for the RTX 4000 series; there isn't even an embargo date known to the public yet. So DF were provided a 4090 early and granted permission to create this feature piece and release it prior to anyone else being able to do so, I assume following rather strict guidelines. So I consider it splitting hairs as to what would be considered "sponsored". It's basically Nvidia using DF as a marketing tool to sell their product. And since it's not tagged as sponsored anywhere, Nvidia even got it for free!

Do I think this is great pre-release info for the tech prior to release? Yes, and I appreciate DF's work on this. But to think that Nvidia is doing this simply to inform gamers is disingenuous. DF chose to enter an exclusive agreement with Nvidia to do a piece prior to anyone else, to the benefit of both.
 
@Flappy Pannus The reason they compare DLSS 3 to Adobe etc. is to make a technical comparison of frame generation. In the realm of frame generation those kinds of tools would be the gold standard, except maybe the video processors built into TVs, but I don't know if you can capture those. Sure, those tools aren't available to gamers, but it's a qualitative comparison of frame generation methods.
 
From their own capture card. Otherwise do you think that every Zen 4 review was sponsored by AMD?
From their own capture card, from a card provided by NVIDIA long before the rest of the press, along with pre-release builds of games not available to others, and after they already did one NVIDIA-curated video?
As Flappy Pannus pointed out, there are several reasons why it can and does look like that to some.
 
@Flappy Pannus The reason they compare DLSS 3 to Adobe etc. is to make a technical comparison of frame generation. In the realm of frame generation those kinds of tools would be the gold standard, except maybe the video processors built into TVs, but I don't know if you can capture those. Sure, those tools aren't available to gamers, but it's a qualitative comparison of frame generation methods.

But those methods don't have access to the same data the game is providing to DLSS, so of course they look like shit by comparison. Their hands are tied behind their back from the start. It's a 'technical comparison of frame generation', but the starting point to determine the created frames is completely different.

From Eurogamer's own article

Richard from Eurogamer said:
Because DLSS 3 is integrated into the game, with access to crucial engine data and backed by specific hardware acceleration on the silicon, it achieves superior results.

I suspect without using any tensor cores, Adobe's method would still be glacial - but I also suspect with actually comparable data to work from, the results would be far closer.
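A minimal sketch of that difference (my own toy illustration, not Nvidia's algorithm or Adobe's): with an engine-supplied motion vector, the midpoint frame is a simple half-vector warp; without one, an interpolator has to estimate motion from the two images alone, and estimation errors show up as ghosting.

```python
import numpy as np

# Toy illustration: a 2x2 "object" moves 4 pixels right between frames.
H, W = 4, 8
prev = np.zeros((H, W)); prev[1:3, 1:3] = 1.0   # object at x=1..2
curr = np.zeros((H, W)); curr[1:3, 5:7] = 1.0   # object at x=5..6

# Engine-provided motion vector for the object's pixels: +4 px in x.
mv_x = np.zeros((H, W)); mv_x[1:3, 1:3] = 4.0

# Midpoint frame via a half-vector warp of the previous frame.
mid = np.zeros((H, W))
for y, x in zip(*np.nonzero(prev)):
    mid[y, x + int(mv_x[y, x] * 0.5)] = prev[y, x]
print(mid[1])   # [0. 0. 0. 1. 1. 0. 0. 0.] - lands correctly halfway

# A vector-less fallback that just blends the two frames produces a
# double image - the ghosting visible in the Adobe/Topaz comparisons.
print((0.5 * (prev + curr))[1])
# [0.  0.5 0.5 0.  0.  0.5 0.5 0. ]
```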
 
Don't you ever give up?

I reported those that can't behave maturely. Console users can disrupt and be totally childish here, yet in their section things get cleaned up accordingly. There needs to be some balance. Seriously, why are you even here if you game on console anyway?
 
But those methods don't have access to the same data the game is providing to DLSS, so of course they look like shit by comparison. Their hands are tied behind their back from the start. It's a 'technical comparison of frame generation', but the starting point to determine the created frames is completely different.

From Eurogamer's own article



I suspect without using any tensor cores, Adobe's method would still be glacial - but I also suspect with actually comparable data to work from, the results would be far closer.

I don't understand what you're arguing here. Of course they work differently. It's just a comparison of the quality of the output, and they even explain why DLSS 3 can have relatively better output while being real-time.
 
From their own capture card, from a card provided by NVIDIA long before the rest of the press, along with pre-release builds of games not available to others, and after they already did one NVIDIA-curated video?
As Flappy Pannus pointed out, there are several reasons why it can and does look like that to some.
That's what happens when other outlets don't care about ray tracing, upscaling, and other modern graphics features.
 
I don't understand what you're arguing here. Of course they work differently. It's just a comparison of the quality of the output, and they even explain why DLSS 3 can have relatively better output while being real-time.

A comparison of DLSS to, say, checkerboarding and spatial upscalers is completely understandable, as those were the actual choices gamers had available to them - they were DLSS's competition in reconstruction. You can then look at the efficiency in terms of performance/image quality by comparing against what had come before.

DLSS 3 isn't meant to replace the motion smoothing methods offline renderers use, and the methods offline renderers use were never going to be used for games. It's showing how it's an 'improvement' over something that will never be applicable to gamers. I guess I just don't see the value in comparing the 'advancement' over something in a different medium. The two methods are not in competition.
 
The DF preview confirmed that it is an intermediate frame generated from the current and previous frame, and not taking two frames and generating a future frame (this wouldn't make sense because it would have to "hallucinate" details when panning to new scenes).

It also kind of confirmed how the latency works. I think half a frame plus the cost of generating the intermediate is roughly what's going on. You can't generate the intermediate frame asynchronously because the current frame has to be completed.

End to end on your PC you'll have some input delay on your mouse/controller + game simulation/animation latency + game rendering/buffering latency + display latency. In some games the simulation, animation etc. add a lot of latency, so even at high framerates you'll still have high latency. Other games, like esports titles, have incredibly low simulation/animation latency, so if you double your framerate you'll reduce the game latency by almost half. I'm not sure how flexible Reflex is in its implementation, but it looks like we're going to see a big variety of results where DLSS 3 fares better or worse than native in terms of latency.
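Putting that decomposition into a rough model (my own toy numbers, built on the "half a frame plus generation cost" reading above, not Nvidia's documented pipeline):

```python
# Toy end-to-end latency model (all numbers made up for illustration).
def latency_ms(input_ms, sim_ms, render_ms, display_ms,
               frame_gen=False, gen_cost_ms=3.0):
    total = input_ms + sim_ms + render_ms + display_ms
    if frame_gen:
        # The finished frame is held back ~half a rendered-frame time
        # while the intermediate is generated and shown first.
        total += render_ms / 2 + gen_cost_ms
    return total

native = latency_ms(1, 10, 16.7, 5)                  # ~60 fps rendered
fg     = latency_ms(1, 10, 16.7, 5, frame_gen=True)  # ~120 fps displayed
print(f"no frame generation:   {native:.1f} ms")
print(f"with frame generation: {fg:.1f} ms (higher, despite 2x the fps)")
# A game dominated by simulation/animation latency hides the added cost
# better; a low-latency esports engine will feel it more.
```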

[Attachment: latency results chart]
 