Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

In this case, it's the DLSS image that's noticeably dimmer compared to the original image in the results above, since, on closer look, we're missing patches of high-intensity light on the couch and on the blanket.
https://forum.beyond3d.com/threads/...g-discussion-spawn.60896/page-95#post-2200999
https://imgsli.com/NTIxMzM
It's dimmer because the image with DLSS + RTXGI converges faster in proportion to frame rate, which is quite expected, right? :D

You are talking only about spatial convergence, while temporal convergence matters as much as spatial, if not more.

It is true that we don't have reference path traced results but I would at least think that the "unfiltered result" (DLSS disabled)
RT results are filtered in both cases - with and without DLSS, it's just that filtering in higher res space would likely introduce less bias.
 
At 1440p I got the same FPS with RTX on or off; something else is limiting performance... (around 100 fps with DLSS). But DLSS is making a big impact...
 
Ray-count per-pixel scales with resolution? That seems odd. Of course I’m only talking about actual rendering resolution here.
He, much like myself, never said "per pixel"

But it's possible it could. Perhaps the lower resolution allows them to cast more rays per pixel.

But assuming the same number of rays per pixel is being cast, then yes, ray count scales with resolution.
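To put a number on that, a quick back-of-the-envelope sketch (the resolutions and the one-ray-per-pixel budget below are purely illustrative assumptions):

```python
# Back-of-the-envelope: at a fixed rays-per-pixel budget, total rays
# per frame scale linearly with pixel count.
def total_rays(width, height, rays_per_pixel=1):
    """Total rays cast per frame for a given render resolution."""
    return width * height * rays_per_pixel

native = total_rays(3840, 2160)  # 4K native rendering
dlss = total_rays(2560, 1440)    # e.g. a lower DLSS internal resolution
print(native, dlss, native / dlss)  # the 1440p input casts 2.25x fewer rays
```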
 
He, much like myself, never said "per pixel"

Right, hence my reply that the lower resolution alone doesn't explain the difference in scene luminance, assuming rays per pixel is constant.

The total number of rays cast in the scene isn't really meaningful, as luminance is calculated per pixel.
 
Right, hence my reply that the lower resolution alone doesn't explain the difference in scene luminance, assuming rays per pixel is constant.

The total number of rays cast in the scene isn't really meaningful, as luminance is calculated per pixel.
Denoising biases the results in the final image in some way - fewer rays or more rays change the result of denoising.
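As a toy illustration of that point (a 3-tap box blur stands in for a real denoiser here - purely an assumed model, not any shipping denoiser):

```python
import random

random.seed(0)

def mc_pixel(n_rays, true_value=1.0, noise=0.5):
    """Noisy Monte Carlo estimate of one pixel's radiance from n_rays samples."""
    return sum(random.gauss(true_value, noise) for _ in range(n_rays)) / n_rays

def denoise(row):
    """3-tap box blur standing in for a 'denoiser'."""
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
            for i in range(len(row))]

# The same filter gives different results depending on the ray budget
# that produced its input - the 'denoised' image is biased either way.
few_rays = denoise([mc_pixel(1) for _ in range(8)])
many_rays = denoise([mc_pixel(16) for _ in range(8)])
print(few_rays)   # still deviates visibly from the true value of 1.0
print(many_rays)  # much closer to 1.0 after the identical filter
```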
 
Right, hence my reply that the lower resolution alone doesn't explain the difference in scene luminance, assuming rays per pixel is constant.
Yep, this, and it seems people don't realize that RTXGI doesn't even have any dependency on resolution (only on frame rate), since the lighting probes are updated separately from screen-space pixels, so energy losses can't exist in the case of RTXGI with DLSS; as a side joke - there can only be energy gains, lol
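To make the resolution independence concrete, a minimal sketch with assumed numbers (not actual RTXGI internals):

```python
# The per-frame GI ray budget of a DDGI-style probe volume is
# probes * rays_per_probe - there is no screen-resolution term at all.
PROBE_GRID = (22, 22, 22)  # illustrative probe volume dimensions
RAYS_PER_PROBE = 144       # illustrative fixed per-probe ray count

def probe_rays_per_frame(grid=PROBE_GRID, rays_per_probe=RAYS_PER_PROBE):
    """Rays traced for GI each frame, independent of render resolution."""
    nx, ny, nz = grid
    return nx * ny * nz * rays_per_probe

# The same probe ray count is traced per frame at 1080p, 1440p or 4K;
# resolution only changes frame rate, i.e. rays accumulated per second.
print(probe_rays_per_frame())  # 1,533,312 rays/frame in this sketch
```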
 
Yep, this, and it seems people don't realize that RTXGI doesn't even have any dependency on resolution (only on frame rate), since the lighting probes are updated separately from screen-space pixels, so energy losses can't exist in the case of RTXGI with DLSS; as a side joke - there can only be energy gains, lol

Then why is the image not only less bright but also blockier (the light streaming in from the window) when DLSS is enabled in that NV example? Instead of the nice straight edges on the light stream with DLSS disabled, it becomes blocky once DLSS is enabled. This leads to a significantly worse-looking image in NV's own example once DLSS is enabled.

Regards,
SB
 
Then why is the image not only less bright but also blockier (the light streaming in from the window) when DLSS is enabled in that NV example?
The image with DLSS is not less bright.
Apparently, whoever made those screenshots captured the screen without DLSS right after switching DLSS Off, but it takes some time for the RTXGI solution to fully converge so that there are no more changes in scene brightness - especially with DLSS Off, since the frame rate is 3 times lower, hence 3x more time is required to accumulate the same number of rays in the probes without DLSS.
As for the lower-res 3D texture volumes, they seem to be linked to screen resolution, i.e. lower-res volumes are used at lower screen resolutions.
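The arithmetic behind the "3x more time" claim, with assumed frame counts and frame rates for illustration:

```python
# If the probes accumulate a fixed ray budget per frame, wall-clock
# convergence time scales inversely with frame rate.
FRAMES_TO_CONVERGE = 900  # assumed number of frames for probes to settle

fps_dlss_on = 100  # illustrative
fps_dlss_off = 33  # ~3x lower, as described above

print(FRAMES_TO_CONVERGE / fps_dlss_on)   # 9.0  -> ~9 s with DLSS
print(FRAMES_TO_CONVERGE / fps_dlss_off)  # ~27.3 -> ~3x longer without DLSS
```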
 
https://imgsli.com/NTIxMzM
After a few seconds of waiting for the lighting to converge, as you can see, there is no more difference in GI (DLSS takes far less time to converge, as expected, due to the 3x higher frame rate), just a subtle difference in shadows.

The image with DLSS is not less bright.
Apparently, whoever made those screenshots captured the screen without DLSS right after switching DLSS Off, but it takes some time for the RTXGI solution to fully converge so that there are no more changes in scene brightness - especially with DLSS Off, since the frame rate is 3 times lower, hence 3x more time is required to accumulate the same number of rays in the probes without DLSS.
As for the lower-res 3D texture volumes, they seem to be linked to screen resolution, i.e. lower-res volumes are used at lower screen resolutions.
Your own comparison, where you waited for a few seconds, still has a brightness difference, with the non-DLSS image being brighter (see especially the couch's end/end table and the right side of that blanket-fort thingy). Also, the gaps between the "godrays" or whatever you call the light beams coming through the windows are a blocky, barely-there mess with DLSS.
 
The image with DLSS is not less bright.
Apparently, whoever made those screenshots captured the screen without DLSS right after switching DLSS Off, but it takes some time for the RTXGI solution to fully converge so that there are no more changes in scene brightness - especially with DLSS Off, since the frame rate is 3 times lower, hence 3x more time is required to accumulate the same number of rays in the probes without DLSS.
As for the lower-res 3D texture volumes, they seem to be linked to screen resolution, i.e. lower-res volumes are used at lower screen resolutions.

So without DLSS you have immediate brightness, but with DLSS ON you have to wait a few seconds for it to match the brightness without DLSS? And this is better? And even after waiting, it never achieves the same brightness as with DLSS OFF.

I mean they are constantly switching back and forth in that screen animation, so it doesn't matter when they started the screen capture.

If this isn't representative of DLSS, then NV shouldn't have chosen this scene as an example of why DLSS is good.

Regards,
SB
 
So without DLSS you have immediate brightness, but with DLSS ON you have to wait a few seconds for it to match the brightness without DLSS? And this is better? And even after waiting, it never achieves the same brightness as with DLSS OFF.

I mean they are constantly switching back and forth in that screen animation, so it doesn't matter when they started the screen capture.

If this isn't representative of DLSS, then NV shouldn't have chosen this scene as an example of why DLSS is good.

Regards,
SB
Who is "they" with regard to switching back and forth in the animation? It's just the Youtuber doing the On/Off switching.
In the demo, you can toggle ray-traced reflections, ray-traced translucency, DLSS, RTX Direct Illumination, and RTX Global Illumination to visualize the difference in real-time.
NVIDIA Unreal Engine 4 RTX & DLSS Demo | RTX ON (OFF) - YouTube
 
So without DLSS you have immediate brightness, but with DLSS ON you have to wait a few seconds for it to match the brightness without DLSS? And this is better? And even after waiting, it never achieves the same brightness as with DLSS OFF.
Sigh, DLSS does not affect brightness at all; if you have an Nvidia GPU, just download the demo and play with it.
Also, having more brightness doesn't mean anything; a bright image can be as biased as a dark one.
Whenever you reset the scene by switching DLSS On or Off, brightness resets, and the initial image after the reset is brighter (at least that was the case from the starting point of view in this demo); then the image becomes darker after a few seconds.
The scene becomes darker faster with DLSS On since there are more frames (bounces are accumulated over time) and the ray count per probe is fixed and independent of screen resolution - see the RTXGI presentations for details.
Rays are accumulated in the probes over hundreds and probably thousands of frames; after a while the scene becomes darker or brighter - this actually depends on light positions, surrounding objects, etc. The scene in this demo is never 100% static - there are small moving point lights - so you will never achieve 100% the same brightness between different frames, and that doesn't depend on whether DLSS is Off or On.
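A minimal sketch of what that per-frame accumulation could look like (an assumed DDGI-style exponential moving average, not the shipped RTXGI code):

```python
import math

# Convergence depends on the NUMBER of accumulated frames, so a higher
# frame rate reaches the settled value in less wall-clock time, while
# the settled value itself is the same with DLSS On or Off.
HYSTERESIS = 0.97  # illustrative blend weight toward the stored value

def blend(stored, new_sample, hysteresis=HYSTERESIS):
    """One frame of accumulation: EMA of new radiance into the probe."""
    return hysteresis * stored + (1.0 - hysteresis) * new_sample

def frames_to_close(fraction, hysteresis=HYSTERESIS):
    """Frames needed for the EMA to close `fraction` of the gap to convergence."""
    return math.log(1.0 - fraction) / math.log(hysteresis)

n = frames_to_close(0.99)      # ~151 frames, regardless of resolution
print(n, n / 100.0, n / 33.0)  # same frame count: ~1.5 s at 100 fps vs ~4.6 s at 33 fps
```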
 
So without DLSS you have immediate brightness, but with DLSS ON you have to wait a few seconds for it to match the brightness without DLSS? And this is better? And even after waiting, it never achieves the same brightness as with DLSS OFF.

No, this is not what's happening. With both DLSS on and off the scene starts off brighter and converges to the correct lower brightness over many frames. It just converges faster with DLSS on for obvious reasons. The end result is the same. It's easy to see what's happening if you run the demo.

If this isn't representative of DLSS, then NV shouldn't have chosen this scene as an example of why DLSS is good.

It certainly isn't the most impressive demo of DLSS, but it has nothing to do with brightness. The problem with DLSS in this scene is a very visible amount of shimmering and aliasing on the floorboards. If this were a game, I wouldn't play it with DLSS on.
 
https://www.metrothegame.com/news/the-metro-exodus-pc-enhanced-edition-arrives-may-6th

Interesting response in the Metro Exodus FAQ for their new RT version:
Will you be adding in AMD Super resolution later?

We will not be adding specific support for this, as it is not compatible with our rendering techniques. However we have our own Temporal based reconstruction tech implemented that natively provides the same or better image quality benefits for all hardware.

Who knows if they've seen the final version, but at least from this they give the impression that this is perhaps not something that will ultimately take on DLSS from a quality perspective.
 
https://www.metrothegame.com/news/the-metro-exodus-pc-enhanced-edition-arrives-may-6th

Interesting response in the Metro Exodus FAQ for their new RT version:


Who knows if they've seen the final version, but at least from this they give the impression that this is perhaps not something that will ultimately take on DLSS from a quality perspective.
Nah.. that doesn't appear to be a knock on AMD's tech at all, but rather them talking up their own tech.

I wouldn't read into it too much.
 
Nah.. that doesn't appear to be a knock on AMD's tech at all, but rather them talking up their own tech.

I wouldn't read into it too much.

The Digital Foundry video shows that the native temporal upscaling in the game, while good, still falls well short of DLSS. So if the developers do have access to FSR (and they must have some level of access to know it's incompatible with their pipeline) and believe their solution to be at least as good, then that's quite telling.

We already know FSR isn't machine-learning based, so perhaps it's just a temporal upscaling solution that devs can plug into their game rather than developing their own? So really just an evolution of CAS-based upscaling like that seen in CB2077.
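For context on what "CAS-based upscaling" refers to, a very rough one-dimensional sketch of the contrast-adaptive-sharpening idea (an assumption-level illustration, not AMD's actual CAS shader): the sharpening weight is damped where local contrast is already high, to avoid ringing.

```python
def cas_1d(pixels, strength=0.5):
    """Toy contrast-adaptive sharpen on a 1D row of values in [0, 1]."""
    out = []
    for i in range(len(pixels)):
        l = pixels[max(i - 1, 0)]
        c = pixels[i]
        r = pixels[min(i + 1, len(pixels) - 1)]
        contrast = max(l, c, r) - min(l, c, r)
        # damp the sharpening weight as local contrast grows
        w = strength * (1.0 - contrast)
        out.append(c + w * (2 * c - l - r) * 0.5)
    return out

print(cas_1d([0.2, 0.2, 0.8, 0.8, 0.2]))  # edges sharpened, flat areas untouched
```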
 
We already know FSR isn't machine-learning based, so perhaps it's just a temporal upscaling solution that devs can plug into their game rather than developing their own? So really just an evolution of CAS-based upscaling like that seen in CB2077.
We do? That's news to me; the last time I heard anyone from AMD comment on it, it was that they didn't know yet whether it would utilize machine learning or not, as they were (at least at the time) still exploring different options.

edit: I just googled how they supposedly "confirmed" it, but Herkelman didn't actually confirm it one way or the other in the interview.
He said you don't need machine learning to do it - you can do it in many different ways, and they're evaluating those many different ways (which quite surely include machine-learning approaches; it's just not a requirement for all the methods they could use).
 