Nvidia DLSS 3 antialiasing discussion

That seems like a major stumble. I've seen a lot of people who really like HFR (as in, "I need HFR even for desktop work, non-HFR is terrible", etc.) say that going beyond about 144-165 Hz drops off sharply in appreciable benefit outside of VR.

DLSS 3 does seem neat, but it seems too limited for many to care much. It needs a $1k+ card to work at all(?), it isn't workable in some titles because you really don't want to see it on moving foliage, for the most part it makes latency worse rather than better, and it works best at boosting games past what most monitors support and past what even most HFR fans might care about.

It's cool that it exists, but I'm not sure it's a "killer app" that AMD and Intel will feel compelled to copy the way they did with DLSS 2.

I think that, in general, a really noticeable difference requires a doubling of Hz: 30 -> 60 -> 120 -> 240 -> 480 -> 960. The steps in between are not necessarily big, noticeable jumps; a 165 Hz monitor won't necessarily feel like a huge difference over 120 Hz for most people. The other problem is that a lot of high refresh rate displays don't actually have panels fast enough to take advantage of the refresh rate. Once you're in 240 Hz territory and higher, slow pixel transitions can pretty much ruin the experience. It's why first-generation 360 Hz monitors really weren't worth it compared to really well implemented 240 Hz monitors, and it'll be the same when 480 Hz screens come out compared to 360 Hz.
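To put rough numbers on why each doubling matters less in absolute terms, here's a trivial frame-time calculation (just arithmetic, no claims about where perception thresholds actually sit):

```cpp
// Frame-time math for the refresh-rate doublings mentioned above:
// each doubling halves the frame time, but the absolute saving shrinks
// every step (16.7 ms -> 8.3 ms -> 4.2 ms -> 2.1 ms -> 1.0 ms).
#include <cstdio>

int main() {
    const double rates[] = {30, 60, 120, 240, 480, 960};
    for (int i = 1; i < 6; ++i) {
        const double prev_ms = 1000.0 / rates[i - 1];
        const double cur_ms  = 1000.0 / rates[i];
        std::printf("%3.0f Hz -> %3.0f Hz: %5.2f ms -> %5.2f ms (saves %.2f ms per frame)\n",
                    rates[i - 1], rates[i], prev_ms, cur_ms, prev_ms - cur_ms);
    }
    return 0;
}
```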

Optimum Tech looks at this in his new BenQ 360 Hz review.

Overall, I think the industry finding ways to make 240-960 Hz displays more viable without mega computational power is a good thing. I'm hoping that in the future they can remove the CPU even further from rendering and basically have GPU-driven rendering, maybe generating multiple intermediary frames with neural networks.
 
It's always fun to watch them fumble and go back and forth in their reasoning: they are fine with sacrificing all of the ray tracing visual improvements for the sake of performance, yet they also refuse DLSS 3 and DLSS 2 because they sacrifice a little visual quality despite adding tons of performance.
It seems they cannot acknowledge that the balance between image quality and performance is either highly subjective or strictly utilitarian...
It is still a viable option to run ridiculous resolutions with extreme IQ settings at 30 fps, with the latency that comes with it.
I don't personally like it, but I don't have to... and neither do reviewers.
It is a balancing act; you cannot have it both ways.
If you play competitively, you adjust settings accordingly.
If you are prone to dizziness because of low frame rates, you adjust settings accordingly.
If your brain sees light where there should be none, well, you adjust your settings accordingly. :D
There is an argument to be made whether the image quality of Control, for example, is better native with no RT, or DLSS-upscaled with RT on (for me it's hands down the latter).
It doesn't mean my opinion is correct, just that this is what works for me, considering the alternatives.
What is certain is that max IQ settings rarely make sense.

I will say, though, that we should all acknowledge that all these options make the life of reviewers extremely hard.
Frame generation is another nightmare on top of image reconstruction, making things even more complex.
 
I just want to put my own thought into this: some of you "automatically" assume that we, or users in general, already play non-Reflex games with high latency. That is simply wrong. A nice, comfortable frame cap can achieve essentially what Reflex achieves. Acting like the only way to get Reflex-like latency is through Reflex-capable games is misleading.

The biggest latency reduction Reflex provides comes in heavily GPU-bound situations. But it is possible to avoid GPU-bound latency with more carefully tuned settings and a sensible frame cap. That's about it. Say Cyberpunk has no Reflex (it doesn't). A 5700 XT user can still achieve Reflex-like input lag by capping their framerate, or simply by using dynamic resolution scaling in many games (dynamic resolution scaling will also usually leave 5-10% of GPU power as spare headroom).

Personally, I'm accustomed to Reflex-like input lag because I use a frame cap in almost every game. I hated GPU-bound input lag before Reflex was a thing, so people like me are already used to low input lag at 60-80 fps.
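For anyone wondering what a frame cap actually does mechanically, here is a minimal sketch of a sleep-based limiter, assuming a plain single-threaded game loop (real limiters like in-game caps or RTSS do the same thing more precisely, usually with hybrid sleep/spin-waiting). The point is that the CPU stops submitting work early enough that the GPU never builds up a queue of pre-rendered frames, which is where GPU-bound input lag comes from:

```cpp
// Minimal sketch of a sleep-based frame limiter. poll_input(),
// update_simulation() and render_and_present() stand in for whatever the
// engine actually does each frame.
#include <chrono>
#include <thread>

void run_capped_loop(double target_fps, bool& running) {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / target_fps));

    auto next_frame = clock::now();
    while (running) {
        // poll_input(); update_simulation(); render_and_present();

        // Waiting here, instead of immediately starting the next frame,
        // keeps the CPU from racing ahead of the GPU, so frames don't pile
        // up in the render queue waiting to be displayed.
        next_frame += frame_budget;
        std::this_thread::sleep_until(next_frame);
    }
}
```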
 
Reflex lets you unlock your fps, so you get better latency through higher fps, and it also offers much less latency than frame capping.
Can you provide some evidence for this? Battlenonsense's findings contradict your claim:

[Attached screenshot from Battlenonsense's video "NVIDIA Reflex Low Latency - How It Works & Why You Want To Use It"]

The in-game FPS cap of 60 without Reflex is very close to the Reflex scenarios on the right side, and identical to the 60 fps capped Reflex one.
 
Can you provide some evidence for this? Battlenonsense's findings contradict your claim:


The in-game FPS cap of 60 without Reflex is very close to the Reflex scenarios on the right side, and identical to the 60 fps capped Reflex one.
I think David may be thinking of the scenario where your GPU is not always capable of hitting the target frame rate, like on a 144 Hz monitor. Reflex will keep GPU load at a high percentage, but below full load, regardless.

So roughly 120 fps on a 144 Hz monitor with Reflex will have better input latency than roughly 120 fps on a 144 Hz monitor with an FPS cap of 141.

Reflex is automated FPS capping, kinda, if it works.
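The "automated FPS capping" idea can be sketched roughly like the snippet below. To be clear, this is only a conceptual illustration of the effect people describe (keep the GPU just under saturation so the render queue stays empty); the actual Reflex SDK works through driver-side markers that pace CPU frame submission, and none of the names here are real NVIDIA APIs:

```cpp
// Conceptual illustration only: an adaptive pacing budget that keeps the
// GPU just under full load. This is NOT how the Reflex SDK is implemented;
// it only mimics the effect described above.
#include <algorithm>

struct FrameStats {
    double gpu_frame_ms;  // time the GPU spent rendering the last frame
    double cpu_frame_ms;  // time the CPU side spent preparing it
};

// Returns the pacing interval (ms) for the next CPU frame submission.
double next_frame_budget_ms(const FrameStats& s, double current_budget_ms) {
    if (s.gpu_frame_ms > current_budget_ms * 0.95) {
        // GPU is (nearly) the bottleneck: slow down CPU submission so the
        // GPU always finishes first and no pre-rendered frames queue up.
        return s.gpu_frame_ms * 1.05;  // leave ~5% headroom
    }
    // Otherwise we're CPU- or display-limited: tighten the budget gradually,
    // but never pace faster than the CPU can actually produce frames.
    return std::max(s.cpu_frame_ms, current_budget_ms * 0.98);
}
```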
 
The frame capping results are quite an eye-opener. I had no idea frame rate capping had such an impact on latency, although looking back I have certainly experienced this myself without realising it. Weird that the same doesn't seem to hold true under DLSS 3, though. I don't think the additional 10-20 ms of latency from DLSS 3 is a big deal, but the effective inability to cap your framerate for perfectly consistent frame delivery might well be a problem.
 
I think David may be thinking of the scenario where your GPU is not always capable of hitting the target frame rate, like on a 144 Hz monitor. Reflex will keep GPU load at a high percentage, but below full load, regardless.

So roughly 120 fps on a 144 Hz monitor with Reflex will have better input latency than roughly 120 fps on a 144 Hz monitor with an FPS cap of 141.
Certainly, but in a very typical competitive gaming scenario players run low graphics settings and the GPU never gets anywhere near 100% load; rather, the monitor, or occasionally the CPU, is the limiting factor. Reflex won't do anything there, and David's generalization completely omitted this.
 
I just want to put my own thought into this: some of you "automatically" assume that we, or users in general, already play non-Reflex games with high latency. That is simply wrong. A nice, comfortable frame cap can achieve essentially what Reflex achieves. Acting like the only way to get Reflex-like latency is through Reflex-capable games is misleading.

The biggest latency reduction Reflex provides comes in heavily GPU-bound situations. But it is possible to avoid GPU-bound latency with more carefully tuned settings and a sensible frame cap. That's about it. Say Cyberpunk has no Reflex (it doesn't). A 5700 XT user can still achieve Reflex-like input lag by capping their framerate, or simply by using dynamic resolution scaling in many games (dynamic resolution scaling will also usually leave 5-10% of GPU power as spare headroom).

Personally, I'm accustomed to Reflex-like input lag because I use a frame cap in almost every game. I hated GPU-bound input lag before Reflex was a thing, so people like me are already used to low input lag at 60-80 fps.
A cap, along with sensible settings, will certainly give amazing results. :)

Unfortunately, GPU reviewers are entering nightmare-level difficulty because of all these options, especially when they themselves are not actually playing the games.
Taking into account every piece of information available in this thread, how can anyone evaluate a feature when, in practice, the only objective metric they focus on is raw performance?

There is a difference between testing something for metrics and actually using it for what it was intended, and although "higher is better", sometimes it isn't.
To reach a conclusion without any hands-on experience, you need to combine huge amounts of different metrics.
And that would be a good way to reach conclusions, because although you are choosing what to measure, you are at least excluding most of your biases from the combined results you actually did measure.
Perhaps I'm wrong, but I feel this is not very feasible for small teams, given the size of the dataset.
Having the experience of actually using the device you are testing, on the other hand, at least gets you on the path to asking the right questions.

Anyway, what I'm trying to say is that it's a tough job, and it's OK to make mistakes.
No need for pitchforks and/or crucifixions, people!
 
@yamaci17 I also cap all of my games if they don't support Reflex. It can be tricky with some games if they have highly variable performance. Reflex just makes it automatic and easy.

Unfortunately a lot of people don’t know how much of an improvement capping can make. Battlenonsense was a gem.

Reflex must be doing more than simply capping GPU utilisation, though, or else we wouldn't see the big improvements shown in Richard's video when vsync is engaged, and also in Alex's video, where it reduces latency under DLSS 3 while an RTSS frame limiter doesn't. Although I guess DLSS 3 itself may be changing the behaviour there.
 
I was just thinking that a cool use case for DLSS 3 could be a "cutscene only" mode... since cutscenes can be very demanding, having DLSS 3 engage only during cutscenes, where input lag is of no importance, could be a neat way of boosting performance in taxing scenes without affecting input lag during gameplay.
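If an engine exposed such a toggle, the game-side logic would be trivial. Everything in this sketch is hypothetical: SetFrameGenerationEnabled() is a made-up engine wrapper around whatever frame-generation API the game integrates, not an actual DLSS 3 / Streamline call:

```cpp
// Hypothetical sketch of a "cutscene only" frame-generation mode.
// SetFrameGenerationEnabled() is an invented engine-side wrapper, not a
// real NVIDIA API; it stands in for the game's frame-generation toggle.
enum class GameState { Gameplay, Cutscene };

void SetFrameGenerationEnabled(bool enabled);  // hypothetical engine hook

void on_game_state_changed(GameState new_state) {
    const bool in_cutscene = (new_state == GameState::Cutscene);

    // Input latency doesn't matter while the player isn't in control, so
    // trade it for smoother presentation, and switch back off the moment
    // gameplay resumes.
    SetFrameGenerationEnabled(in_cutscene);
}
```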
 
Looks like A Plague Tale: Requiem may end up being something of a poster child for DLSS 3, considering it looks CPU-limited from reaching 60 fps on pretty much any previous platform, let alone 100+. The 4090 with DLSS 3 absolutely stomps this game at 4K, while every other platform completely struggles with it at 1440p.

Considering this is even before the RT patch(!), this title might overtake The Medium as one of the most poorly optimized games in recent memory. The visuals are impressive, but they just don't justify this insane demand imo; if this were with RT GI, that would be one thing, but not without it. The PS5 and Series X versions run at 1440p/30 with drops below, plus a "40 fps" mode at 120 Hz, but that seems more like "30 fps unlocked" than a true 40 fps mode.

Still, a pretty obvious win for DLSS 3 and the 4090 already. In most other demanding games you can usually lower settings/resolution to reach ~100 fps; this is one where even getting 60 looks to be a struggle without DLSS 3.

 
Did you... watch the video? The one that's hyperlinked in the post you quoted? It's a 30 fps game on consoles.

It does 40 fps if you pick 120 Hz VRR, from what I saw posted about Series X.
 
Did you... watch the video? The one that's hyperlinked in the post you quoted? It's a 30 fps game on consoles, with a 40 fps mode (with drops) for 120 Hz screens.
Yes, so? As I've said, it's targeting 60 on XSS. The fact that it drops down to 30 (and below) doesn't mean that it's CPU-limited; in fact, it is most likely GPU-limited on consoles.
 