Previous B3D DLSS thread covering 4 years: https://forum.beyond3d.com/threads/nvidia-dlss-1-and-2-antialiasing-discussion-spawn.60896/
ATM I am not working *directly* on a follow-up video, but I have already done some performance captures and such on non-Intel GPUs to prep for such a video in the future. I also made some quality comparison videos for the non-Intel Arc version of XeSS.
But to be blunt, I am a bit tired of working on IQ comparison videos. I feel like I have done soooo many in the last few months... I am looking forward to the eventual RTX 4*** series launch whenever that is because it hopefully means I can get away from all this IQ stuff for a bit lol
92 pages and 4 years is more than long enough.

Ehm, I'm not sure that it's wise to create a separate DLSS3 thread, as DLSS3 seems to be an extension of the DLSS2 SDK - meaning that games with DLSS will still be "games with DLSS", but the Ada frame interpolation feature will simply be greyed out on Turing and Ampere.
Then you should close the old thread and link to it from the OP in this one, I think.

92 pages is more than long enough.
But this should be where the new frame interpolation feature is discussed. There has to be some balance so new posters can find a reasonable jump-in point to a topic.
Optical Flow functionality in Turing and Ampere GPUs accelerates these use-cases by offloading the intensive flow vector computation to a dedicated hardware engine on the GPU silicon, thereby freeing up GPU and CPU cycles for other tasks. This functionality in hardware is independent of CUDA cores.
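For a feel of what that flow field is actually used for, here's a toy sketch (my own illustration, not NVIDIA's API; the function and variable names are made up, and the real frame generation is far more sophisticated than a nearest-pixel warp): given a per-pixel flow field between two rendered frames, warp the earlier frame halfway along the flow vectors to synthesize an in-between frame. This is the kind of per-pixel computation the dedicated engine accelerates.

```python
import numpy as np

def interpolate_midpoint(frame, flow):
    """Warp `frame` halfway along `flow` to approximate the frame
    sitting between two captured frames.

    frame: (H, W) grayscale image at time t
    flow:  (H, W, 2) per-pixel (dy, dx) motion from frame t to t+1
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample each midpoint pixel from half a flow step back in frame t
    # (nearest-neighbour, clamped at the borders; real interpolators
    # handle occlusion and sub-pixel positions properly).
    src_y = np.clip(np.round(ys - 0.5 * flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - 0.5 * flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# A 4x4 frame whose bright left column moves right by 2 px per frame:
frame = np.zeros((4, 4))
frame[:, 0] = 1.0
flow = np.zeros((4, 4, 2))
flow[..., 1] = 2.0  # dx = +2 everywhere
mid = interpolate_midpoint(frame, flow)
# In the midpoint frame the bright column has moved 1 px, to x = 1.
```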
Can it? I'm under the impression that you pretty much have to provide the same set of data as for DLSS2 (and DLAA?) for it to work.

This can be used in games that don't have DLSS2, or even TAA for that matter. Probably under a different branding.
But would it work? Or does the interpolation require the data fed into DLSS/DLAA to work?

Should have been clearer: DLSS3 would be using both motion vectors and optical flow, but Nvidia could expose only the latter in their drivers for any game, like they did with DLDSR.
@yamaci17 There is an improved hardware unit in the RTX 40 series, the Optical Flow Accelerator. It is a requirement of DLSS 3.0's frame generation. For some context:
Hm, as I understand it, supplying motion vectors with the frame is basically required for this feature to work, since it predicts movement based on them. This means it likely uses the same data as DLSS and wouldn't really work without it - at least the prediction approach wouldn't.

It is entirely separate from the motion vectors needed for DLSS2; the link mentions that optical flow is better for effects while motion vectors are better for geometry. Though it might not be as good without them, like the TAA implemented in one of the ReShade shaders that tried to work without motion vectors but ended up just blurring the image in most cases.
edit: from the Ada thread, it should be quite good even as a standalone.
DLSS 3 is putting GPUs on a nice path. Work smarter, not harder. Increasing raw performance is becoming more and more ridiculous. On top of that, it looks like they've addressed (hopefully) some of the cpu-side issues by making BVH building faster and less memory intensive with the 40 series. DLSS now helps in cpu limited situations, so with nvidia reflex added in you no longer have to care about anything if you're playing something competitive. You turn on reflex, you turn on DLSS 3 and away you go. Hopefully they actually release a 4050/4060 with capable DLSS 3 support for playing on 1080p screens. I bet with DLSS3 1080p500 monitors could be doable for quite a few games with low/medium settings.
Edit: To clarify this a bit more, a lot of people play competitive games with low settings for visual clarity, but then you end up CPU limited. In those cases, DLSS did nothing for you, so competitive gamers normally turned it off and considered it a useless feature. Now you can get an improvement in frame rate from DLSS 3 in that scenario, and you have Reflex on top to handle the input latency when your GPU gets near 100% because of particular visuals. It's a winning combo.
DLSS3 just makes input lag worse, so it's bad for any competitive gaming.
That it gives more FPS is irrelevant when those extra FPS come at the cost of worse input lag.
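To put a rough number on that trade-off, here's a back-of-envelope model (my own assumption for illustration, not NVIDIA's published figures; it assumes one generated frame per rendered frame and ignores Reflex, render queues, and interpolation cost): inserting a frame between rendered frames N and N+1 means frame N+1 must already exist, so presentation is delayed by roughly one native frame-time even as the displayed rate doubles.

```python
def native_frame_time_ms(native_fps):
    # Time between two consecutively rendered frames.
    return 1000.0 / native_fps

def displayed_fps(native_fps):
    # Interpolation inserts one generated frame per rendered frame,
    # doubling the rate shown on screen (under this simple model).
    return 2 * native_fps

# At 60 fps native: 120 fps on screen, but input-to-photon latency
# grows by roughly the ~16.7 ms needed to have the next real frame
# available before interpolation can happen.
extra_latency_ms = native_frame_time_ms(60)
shown_fps = displayed_fps(60)
```

So under this model the screen is smoother, but your inputs are reflected roughly one native frame later than without interpolation.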
Where did you find this out?

DLSS3 just makes input lag worse