Nvidia DLSS 3 antialiasing discussion

By opting out of frame interpolation, the third major iteration of DLSS invites more visual artifacts than its predecessor, by virtue of having less information to work with. Not buffering multiple frames also means that frame generation could increase the likelihood of inconsistent camera/scene transitions between frames due to non-linear animations; this is commonly described as "judder" ...

While DLSS in this case improved the frame rate, the "end user experience" arguably may not be an improvement and may even be a net negative ...
 
By opting out of frame interpolation, the third major iteration of DLSS invites more visual artifacts than its predecessor. Not buffering multiple frames also means that frame generation could increase the likelihood of inconsistent camera/scene transitions between frames due to non-linear animations; this is commonly described as "judder" ...

While DLSS in this case improved the frame rate, the "end user experience" arguably may not be an improvement and may even be a net negative ...
Without adaptive-sync there is either tearing or judder.
 
But I don't see the problem. They can easily calculate how long the previous frames took to render and place the generated frame at the "middle" position.
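Something like this, just as a sketch (names made up, obviously not NVIDIA's actual pacing code):

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Hypothetical pacing helper: given the timestamps of the two most recent
// rendered frames, present the generated frame halfway into the predicted
// next frame interval. Assumes frame times are roughly stable; a real
// implementation would smooth the estimate over several frames.
Clock::time_point scheduleGeneratedFrame(Clock::time_point prevFrame,
                                         Clock::time_point currFrame)
{
    const auto lastFrameTime = currFrame - prevFrame; // how long the last frame took
    return currFrame + lastFrameTime / 2;             // the "middle" position
}
```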
 
But I don't see the problem. They can easily calculate how long the previous frames took to render and place the generated frame at the "middle" position.
Animations and movements can vary in speed and direction mid-transition, so information from previous frames alone might not be able to prevent perceived "judder" ...
 
But that happens without DLSS 3, too. Rapid movement can break TAA or upscaling.
DLSS is just making it worse since animations and movements can be portrayed incorrectly. Building native or upscaled frames ensures that animations and movements can have smoother transitions ...
 
Bit more info on DLSS 3 here:


Interestingly it looks like all 3 components can be separated, and Nvidia recommends developers do just that. So you can turn on DLSS 2, Reflex, and/or frame generation completely independently.

Sounds like it's really easy to implement too and already integrated into the major engines. Sounds pretty cool to me.
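For illustration, the independence could look something like this in a game's settings code; a rough sketch with made-up names (the real integration goes through NVIDIA's SDK):

```cpp
// Hypothetical settings struct illustrating the point: the three parts of
// "DLSS 3" are independent toggles, not one switch. All names are invented
// for illustration only.
struct NvFeatureConfig {
    bool superResolution = true;   // the "DLSS 2" upscaler
    bool reflex          = true;   // latency reduction
    bool frameGeneration = false;  // optional, 40-series only
};

void applyGraphicsSettings(const NvFeatureConfig& cfg)
{
    // Each feature is enabled on its own; none requires the others.
    if (cfg.superResolution) { /* enable upscaling */ }
    if (cfg.reflex)          { /* enable Reflex */ }
    if (cfg.frameGeneration) { /* enable frame generation */ }
}
```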
 
Bit more info on DLSS 3 here:


Interestingly it looks like all 3 components can be separated, and Nvidia recommends developers do just that. So you can turn on DLSS 2, Reflex, and/or frame generation completely independently.

Sounds like it's really easy to implement too and already integrated into the major engines. Sounds pretty cool to me.
Without Frame Generation it's not really DLSS 3 and with Frame Generation it's artifacts galore
 
Do you have a problem with all these FSR 2.0 artefacts, too?
Yes I do; it doesn't matter which brand you put on your scaler, I have issues with artifacts. Based on the DF preview, DLSS 3 is currently in a league of its own, though.
 
These artefacts are not worse than FSR 2.0's. So I think for most people it is not a problem.
Seriously?
 
Animations and movements can vary in speed and direction mid-transition, so information from previous frames alone might not be able to prevent perceived "judder" ...

Yep, animation error should be noticeable and may increase judder in a way. All the animation will look like it's just continuing along a curve, farther than it would normally, before snapping to the correct position on the next "real" frame.
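Toy numbers to show the snap-back; purely illustrative and nothing to do with how DLSS actually predicts motion:

```cpp
#include <cstdio>

// Toy illustration of "continuing along the curve, then snapping back".
// An object decelerates (position deltas shrink: 10, 8, 6), but a linear
// prediction from the last two real positions keeps the old speed and
// overshoots. All numbers are invented for the example.
int main()
{
    const float real[] = { 0.0f, 10.0f, 18.0f, 24.0f }; // sampled positions

    for (int i = 2; i < 4; ++i) {
        float predicted = real[i - 1] + (real[i - 1] - real[i - 2]); // linear guess
        std::printf("frame %d: predicted %.1f, actual %.1f, overshoot %.1f\n",
                    i, predicted, real[i], predicted - real[i]);
    }
    // frame 2: predicted 20.0, actual 18.0, overshoot 2.0
    // frame 3: predicted 26.0, actual 24.0, overshoot 2.0
    return 0;
}
```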

It also won't help blur. I'm playing Fable 2 through MS's cloudplay (it was $1 so why not) and it's kinda amazing to see, even over cloud compression on an old game, how nice and sharp a non-TAA game looks. I'm so glad the Fable reboot guys are looking at using native res and relying on variable rate shading to claw perf back instead of upscaling.

All that being said, frame insertion does sound like the sort of thing that would be a selling point for some sort of mid-gen console upgrade. I can see a "PS5-S/Xbox Series X+" running RDNA4 with a Xilinx stacked inference accelerator. Imagine combining AI upscaling, AI frame insertion, and AI RT denoising (Intel is working on this) as a console upgrade. I'd think it'd be a relatively inexpensive 2024 upgrade: Bloodborne 2 running FSR 3 at 8K (4x upscaled) 60fps (2x upscaled) with better RT (upscaled) is a pretty good headline grabber.
 
DLSS is just making it worse since animations and movements can be portrayed incorrectly. Building native or upscaled frames ensures that animations and movements can have smoother transitions ...
While this can certainly be the case, we really need to see the tech working in person to assess how much of an issue this would be. Not all games have erratic camera movements which cannot be properly predicted. And even in those which do, the issue may be tolerable, depending on the "native" framerate the game is running at.

Also - people sound almost as if all games use DLSS 3 now and there is no option of not, you know, using the frame generation feature.
 
Without Frame Generation it's not really DLSS 3 and with Frame Generation it's artifacts galore

I think we should be thinking of DLSS 3 as a suite of products rather than a better version of DLSS 2. In fact I'm surprised that Nvidia haven't pushed the connection between DLSS 3 and its 3 elements more:
  1. DLSS 2 (upscaling)
  2. Reflex (latency reduction)
  3. Frame Generation
So seeing DLSS 3 replace DLSS 2 in all instances is a definite win all round IMO, because it brings the Reflex option to the table by default - benefiting most modern Nvidia users (not me) - and places the option of frame generation on the table for the very few that can use it now, and hopefully the many more that will be able to use it in the future.

I do have to say though that limiting the frame generation tech to the three fastest GPUs on the market seems a bit questionable from a GPU performance perspective, although its ability to get around CPU limitations could be a huge benefit. In my situation for example, if I got the 4080(70), I'd be able to run Spider-Man Remastered at well over 100fps despite having "only" a 3700X. In fact I could likely lock it to 120fps for the vast majority of the time.
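Rough back-of-envelope version of that reasoning, with made-up numbers for my setup (and an idealized 2x; real scaling will be lower):

```cpp
#include <algorithm>
#include <cstdio>

// Why frame generation can sidestep a CPU limit: generated frames are
// produced on the GPU without new CPU/simulation work, so the presented
// rate is roughly double the rendered rate. Numbers are hypothetical.
int main()
{
    const float cpuFpsCap = 70.0f;  // assumed CPU-bound sim rate (3700X-ish)
    const float gpuFps    = 150.0f; // assumed GPU headroom (4080-class)

    const float rendered  = std::min(cpuFpsCap, gpuFps); // CPU is the bottleneck
    const float presented = rendered * 2.0f;             // every other frame generated
    std::printf("rendered %.0f fps -> presented ~%.0f fps\n", rendered, presented);
    return 0;
}
```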

Obviously image quality needs to be considered in that equation though. I'm very much looking forward to @Dictator's analysis on that.
 
Seriously?
Doesn't look worse than this:
 

Attachments

  • spider_FSR_P.jpg
  • spider_TAA.jpg
Exciting stuff. The next step would be to get the DLSS requirements, i.e. the motion vectors and the system call, embedded into the DX12 API, so every compliant DX12 game could use deep learning without the programmer coding the driver call, because the API would generate the motion vectors implicitly.
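Purely hypothetical, but the hook might look something like this; none of these names exist in D3D12 today:

```cpp
#include <d3d12.h>

// Imaginary extension interface, invented for illustration only: the idea
// is that the runtime/driver exposes per-pixel motion vectors it can derive
// itself, so an upscaler could consume them without per-game integration.
struct ID3D12MotionVectorProvider
{
    // Fills 'mvTexture' with screen-space motion vectors for the frame
    // being built; the upscaler then binds it as a shader resource.
    virtual HRESULT GetMotionVectors(ID3D12Resource** mvTexture) = 0;
};
```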
 