Nvidia DLSS 3 antialiasing discussion

Can you use a frame rate limiter to cap the framerate at, say, 70fps and then use DLSS3 to double it? Obviously that's probably a waste if you can get near 144fps without frame generation anyway, although if you throw on max DLDSR I assume you could restrain even the 4090 enough.
DLSS3 frames are "real" frames to the OS or any other tool. You can limit them but you can't control the "native" framerate this way. Possibly something for Nv to add to drivers though?
 
DLSS3 frames are "real" frames to the OS or any other tool. You can limit them but you can't control the "native" framerate this way. Possibly something for Nv to add to drivers though?
But not for the game... I found out that the in-game frame limiter in Bright Memory limits the game's own rendered frames. So with a 60FPS cap the game renders a new frame every 16.6ms and FG does its magic and creates the intermediate frame. Frame pacing was 100% perfect. Latency was 5ms higher (latency was much lower this time: 8ms at 60FPS and 13ms with FG).

Latency with 60FPS and FG was the same as "native" 100FPS (more is not possible) with and without Reflex.
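For anyone curious about the pacing math: a minimal sketch of the cadence described above, assuming the generated frame lands exactly halfway between two rendered frames (where it is actually inserted is Nvidia's business; this is just the arithmetic).

Code:
RENDER_FPS = 60                                  # in-game limiter from the post
render_interval_ms = 1000 / RENDER_FPS           # ~16.67 ms between rendered frames

rendered = [i * render_interval_ms for i in range(6)]            # "real" frame times
generated = [t + render_interval_ms / 2 for t in rendered[:-1]]  # interpolated frames

presented = sorted(rendered + generated)
deltas = [round(b - a, 2) for a, b in zip(presented, presented[1:])]
print(deltas)   # [8.33, 8.33, ...] -> even ~120FPS output cadence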
 
I've had to disable it in Plague Tale; it creates very visible judder at 4K for whatever reason. Like uneven frame pacing.
 
Blur Busters posted in a thread on HardForum (didn't know it still existed) about how frame generation could incorporate mouse movement data for reprojecting frames, and basically have the Vulkan and DirectX APIs provide frame generation helpers so they could do B-frames and I-frames like in video. I'm a little unclear on some of the things he's trying to explain.
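In case it helps others picture what "reprojecting frames from mouse movement" could mean, here's a very rough sketch (my own guess at the idea, not Nvidia's or Blur Busters' actual algorithm). It assumes a pure yaw rotation driven by the mouse and a pinhole camera, and just shifts the last rendered frame sideways; a real implementation would warp with depth and full view matrices and fill the revealed regions.

Code:
import numpy as np

def reproject_yaw(frame, mouse_dx, sens_deg_per_count=0.022, hfov_deg=90.0):
    """Shift the last rendered frame to approximate a small yaw from mouse input."""
    w = frame.shape[1]
    yaw_deg = mouse_dx * sens_deg_per_count
    focal_px = (w / 2) / np.tan(np.radians(hfov_deg / 2))  # pinhole focal length in pixels
    shift_px = int(round(np.tan(np.radians(yaw_deg)) * focal_px))
    return np.roll(frame, -shift_px, axis=1)  # crude: wraps around instead of revealing new pixels

# usage: reproject_yaw(np.zeros((1080, 1920, 3), np.uint8), mouse_dx=40)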


What's a "parallax-reveal pixel"?

Edit: Oh, the parallax effect. Something in the foreground occluding something in the background, and then the foreground moves, revealing pixels behind it.
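For anyone else wondering: those are the pixels that get uncovered when the foreground moves, so the interpolator has no history for them. A tiny sketch of how such pixels could be flagged (my own illustration using a depth comparison, not how DLSS actually detects them):

Code:
import numpy as np

def reveal_mask(prev_depth_warped, cur_depth, tol=0.01):
    """True where background was newly uncovered: the warped history depth
    no longer matches the current depth, so there is nothing valid to blend."""
    return np.abs(prev_depth_warped - cur_depth) > tol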
 
After several hours playing Spider-Man with FG, I like it. It works very well; visual artefacts are minimal and only the big ones (running up a wall) are noticeable.

And when using it with an adaptive sync monitor, there are two more applications for FG:
  1. Reducing the drawbacks of no adaptive overdrive. Instead of running at 70fps, playing at 140fps helps the overdrive work much better.
  2. Black frame insertion. Limiting the in-game frame creation to 60FPS allows FG to push 120FPS to the display, which should allow for much better BFI (rough persistence numbers sketched below).
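Rough persistence numbers for point 2, assuming a simple 50% on/off BFI cycle (my own arithmetic, not measurements):

Code:
def persistence_ms(refresh_hz, visible_fraction=0.5):
    """How long each image stays lit per cycle with simple on/off BFI."""
    return 1000 / refresh_hz * visible_fraction

print(persistence_ms(60, 1.0))   # 16.7 ms held image: 60FPS, no BFI
print(persistence_ms(120, 0.5))  #  4.2 ms held image: FG-doubled 120FPS with BFI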
 
DLSS3 frames don't use the CPU at all (as expected), but frame pacing still relies on the CPU: the better the CPU, the better the pacing.
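To illustrate why pacing is still CPU work (a toy loop of my own, not how the driver actually schedules presents): the CPU decides when each frame, rendered or generated, gets handed to the display, so a slow or busy CPU hands them off at uneven intervals.

Code:
import time

def paced_presents(target_fps, frames):
    interval = 1.0 / target_fps
    next_present = time.perf_counter()
    for _ in range(frames):
        # ... rendered or FG frame becomes ready here ...
        next_present += interval
        delay = next_present - time.perf_counter()
        if delay > 0:
            time.sleep(delay)  # CPU-side wait determines the actual present time
        # present() would happen here; jitter in this loop shows up as uneven pacing

paced_presents(120, 10)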

Do you mean frame pacing or frame rate? 'Cos unless a gamer is on the competitive side of gaming, any CPU from the last 3 years with 12 to 16 threads is going to rest on its laurels running games at 60fps like it was nothing.

Imho, when I see the new CPUs by AMD and Intel, I enjoy watching how they perform, although the speeds are so insane that if you have a 60 to 120Hz screen, most of that processing is wasted. In fact I run most games at 60fps, as I prefer to have RT enabled.
 
F1 22 got the DLSS 3 patch yesterday. I played a few hours and FG is really good. I must say, after Spider-Man, Bright Memory, Superpeople and now F1 22, I find it more impressive than DLSS 2. Sounds strange, I know, but DLSS 2 still has a few issues which can be annoying. With FG I get the performance uplift from at least DLSS@Quality (and much more in CPU-limited scenarios) with the full native image quality.

In F1 22 FG is around 60% faster than native with Raytracing@Ultra - and I can get up to 170FPS for my 170Hz widescreen display. The game even supports a very good in-game frame limiter, so I limit my FPS to 85 and FG generates the other 85FPS. Frame pacing is very good, too.
 
With FG I get the performance uplift from at least DLSS@Quality (and much more in CPU-limited scenarios) with the full native image quality.
Seriously, you can be happy with it but stop these blatant lies. You don't have to raise it to some strange pedestal and pretend the artifacts don't exist to make it good for those who like it.
 
So you like artefacts? I don't really get your response. Native rendering is full of artefacts. Having a fast IPS display means I want to use my 170Hz. There is only one question: what is the best way to achieve it.
 
So you like artefacts? I don't really get your response. Native rendering is full of artefacts. Having a fast IPS display means I want to use my 170Hz. There is only one question: what is the best way to achieve it.
Artifacts, in the context of games, refers to various errors in rendering* that are visible on screen. They're not supposed to be there. DLSS FG creates extra artifacts; it doesn't offer native quality.
* (or in this case interleaving)
 
Artifacts, in the context of games, refers to various errors in rendering* that are visible on screen. They're not supposed to be there. DLSS FG creates extra artifacts; it doesn't offer native quality.
* (or in this case interleaving)

So, like playing a game without raytracing?
 

It's neat that the fake frames give the impression of more responsiveness in the right circumstances. Wasn't quite expecting that drop-off in perceived visual quality from a blind test though; guess it is that bad even in Spider-Man.

It's also neat to see the "native vs DLSS2" visual quality graphs. In motion the visual quality drops massively between native and upscaling, but hey, people's visual systems lose resolution when stuff moves, so who cares! That's why all this stochastic/TAA stuff works to begin with. So concentrating on good static reconstruction/TAA is valid.

I wish any of these reconstruction/TAA systems did better with pixel-sized details though. FSR does the best currently, but even that has issues. If games are going to render strands of hair, fuzz on clothing, whatever, TAA needs to be able to handle these 1-pixel-wide features without either the extreme "judder/ghosting" or the plain AAing-out of the detail that it currently exhibits.

For those wondering: we lose resolution under movement, so having stuff get blurry is fine. But sharp "wrong" features, like a double image from a strand of hair/fiber ghosting terribly, or from incorrect forward projection, still stand out enough to be pretty visible. To sum up: blurry/undersampled movement is fine; sharp incorrect movement is bad.
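A tiny illustration of the 1-pixel problem (a generic exponential history blend, not any specific vendor's algorithm): a sub-pixel hair only covers a given pixel on some frames, so the accumulator either averages it away or drags it along as a ghost.

Code:
alpha = 0.1                         # typical small blend weight toward the new sample
history = 0.0                       # accumulated pixel value
signal = [1, 0, 1, 0, 1, 0, 1, 0]   # thin hair covers the pixel only every other frame

for s in signal:
    history = (1 - alpha) * history + alpha * s
    print(round(history, 3))        # stays far below 1: the detail is dimmed/smeared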
 
It's neat that the fake frames give the impression of more responsiveness in the right circumstances. Wasn't quite expecting that drop-off in perceived visual quality from a blind test though; guess it is that bad even in Spider-Man.
If you watch the video, most subjects found it very difficult to discern between the three. And when running uncapped fps, most people preferred DLSS3's responsiveness, mistakenly associating increased smoothness with responsiveness of course.
I wish any of these reconstruction/TAA systems did better with pixel-sized details though. FSR does the best currently, but even that has issues
I thought DLSS enjoyed the advantage in this area?
 
That is probably the worst LTT video I have ever seen. What was the point of the comparative tests? In the first one they cap the frame rate at 120fps, so DLSS3 is running at a real 60fps while the others run at a real 120. That is crazy. Who would do that? Then they cap it at 60fps, so DLSS3 is running at a real 30fps. And then they run Plague Tale Requiem with uncapped frame rates. So finally, something that might make sense. But a 4090 runs Plague Tale Requiem at 97fps at 4K according to TechPowerup. And DLSS2 will push that to 120fps easily, right?

Then you finally get to a situation that makes more sense. Here they cap native at 30fps and compare that to DLSS3 at 60fps (30fps real). Here DLSS3, unsurprisingly, crushes it.

But that is the entire point of DLSS3! To double your frame rate. Under what other circumstance would you run it? In all of their tests except one (where DLSS3 wins hands down) the frame rate uplift is 0-20%.

I must not be understanding things, because their testing methodology is horrendous.
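For what it's worth, the arithmetic behind the complaint, using the numbers in the post: with FG on, an fps cap applies to presented frames, so the rendered ("real") rate is roughly half the cap while the non-FG runs render at the full cap.

Code:
def real_rendered_fps(presented_cap):
    return presented_cap / 2     # FG roughly doubles rendered frames into presented frames

print(real_rendered_fps(120))    # 60 rendered fps vs. a real 120 for the non-FG runs
print(real_rendered_fps(60))     # 30 rendered fps vs. a real 60 for the non-FG runs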
 