Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I'm just wondering why you would do that anyway.
You run DLSS2 for a reason; I've not seen anyone say they would run it if they were getting everything they required at native resolution. Not yet anyway, although there can be situations where it may look better.

It's also still early days even if it doesn't feel like it.

I can bring you examples of some people who don't like DLSS because they prefer not to risk having artifacts at all, or because having the highest FPS number isn't always the most important thing.
 
DLSS 2.0 isn't better than native. In some games it's better in some aspects of the image. The quality still seems to be fairly variable from title to title.
 
I can bring you examples of some people who don't like DLSS because they prefer not to risk having artifacts at all, or because having the highest FPS number isn't always the most important thing.
This is why options are good.

I'm sure that some people would prefer having native non-RT at 30fps, compared to DLSS RT at 60.
Guess it depends on what you think the better overall experience is.
But it's early days and all smart upscaling techniques will continue to improve.
Non-native is definitely the direction of the industry one way or another.
 
This is why options are good.

I'm sure that some people would prefer having native non-RT at 30fps, compared to DLSS RT at 60.
Guess it depends on what you think the better overall experience is.
But it's early days and all smart upscaling techniques will continue to improve.
Non-native is definitely the direction of the industry one way or another.

I am not questioning options, nor the utility for many of having more FPS or having a title playable at a higher resolution thanks to upscaling.
I am questioning people saying "it's better than native" instead of "it is a very reasonable compromise between IQ and FPS numbers that sometimes is almost indistinguishable from native".
 
Is that list gonna hold? None of the promised RT/DLSS lists NVIDIA themselves put out did.
In October, Nvidia listed 12 games that would be available before year-end. Many have already been released, and expect more unannounced or already-released games like "War Thunder" to add support.
Ghostrunner - launching on October 27 with ray-traced reflections & shadows, and DLSS.
Pumpkin Jack - launching on October 23 with ray-traced reflections & shadows, enhanced lighting, and DLSS.
Xuan-Yuan Sword VII - launching on October 28 with ray-traced global illumination and DLSS.
Edge of Eternity - out now in early access, the game is adding DLSS support.
Mortal Shell - with ray-traced shadows and DLSS.
Mount & Blade II: Bannerlord - with DLSS.
World of Warcraft: Shadowlands - with ray traced shadows.
Enlisted - closed beta kicks off in November with ray-traced global illumination and DLSS.
Ready or Not - ray-traced reflections, shadows & ambient occlusion and DLSS will be available when the game launches in early access later in 2020.
Watch Dogs: Legion – launching on October 29 with ray-traced reflections and DLSS.
Call of Duty: Black Ops Cold War – launching on November 13 with ray tracing and DLSS.
Cyberpunk 2077 – launching November 19 with ray traced reflections, ambient occlusion, shadows & global diffuse illumination and DLSS.
 
Fidelity CAS is just a sharpening filter that tries to avoid over-sharpening high-contrast pixels. The new Nvidia sharpening that can be turned on at the driver level does the same thing. Does Fidelity CAS not run on Nvidia GPUs anyway, since it's part of GPUOpen?
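Roughly, that kind of contrast-adaptive sharpening could be sketched like this (a toy Python/NumPy approximation of the idea, not AMD's actual CAS shader; the function name and parameters here are made up for illustration):

```python
import numpy as np

def contrast_adaptive_sharpen(img, strength=0.5):
    """Toy CAS-style sharpen on a greyscale float image in [0, 1].

    Local contrast is estimated from the 3x3 neighbourhood (max - min);
    the unsharp-mask amount is scaled down where that contrast is already
    high, so edges that are already crisp are not over-sharpened.
    """
    padded = np.pad(img, 1, mode="edge")
    # Gather the 3x3 neighbourhood as a stack of shifted views.
    neigh = np.stack([padded[y:y + img.shape[0], x:x + img.shape[1]]
                      for y in range(3) for x in range(3)])
    local_min = neigh.min(axis=0)
    local_max = neigh.max(axis=0)
    contrast = local_max - local_min            # 0 = flat, 1 = max contrast
    blur = neigh.mean(axis=0)                   # cheap local average
    detail = img - blur                         # high-frequency component
    amount = strength * (1.0 - contrast)        # less sharpening on strong edges
    return np.clip(img + amount * detail, 0.0, 1.0)
```

The only point being illustrated is that the sharpening amount adapts to local contrast rather than being applied uniformly.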
 
Fidelity CAS is just a sharpening filter that tries to avoid over-sharpening high-contrast pixels. The new Nvidia sharpening that can be turned on at the driver level does the same thing. Does Fidelity CAS not run on Nvidia GPUs anyway, since it's part of GPUOpen?
You can use it with Nvidia GPUs (via the ReShade mod), though Nvidia has its own sharpening filter accessible via the driver control panel, which can be enabled on a per-game basis.
 
You take it up with Digital Foundry then?
Their stance is that DLSS 2.0 gives better-than-native image quality (overall) and CAS gives worse (overall), and they present good examples of why.

You do not deny that, but you try a "whataboutism", and that is not a "debate" I have any desire to go into (aka a waste of time).

Let's hope he doesn't take it up with DF, because they get enough over their heads these days.
 
I am not questioning options, nor the utility for many of having more FPS or having a title playable at a higher resolution thanks to upscaling.
I am questioning people saying "it's better than native" instead of "it is a very reasonable compromise between IQ and FPS numbers that sometimes is almost indistinguishable from native".
And I'm questioning the framing of the point.

I would say yes DLSS can be better than native.
Native can have its own artifacts, for example in SSR and shadowing.
Because unless you're only doing photo mode, you are configuring the settings for a native resolution and a framerate.

Otherwise the comparison is set up to favour one side.
I think you think I don't understand your view or that I'm being purposely awkward.
I'm not, I'm just saying IMO the framing matters, as that's the point of DLSS.
 
I am not questioning options, nor the utility for many of having more FPS or having a title playable at a higher resolution thanks to upscaling.
I am questioning people saying "it's better than native" instead of "it is a very reasonable compromise between IQ and FPS numbers that sometimes is almost indistinguishable from native".

Here is what one of the UE5 developers has to say about TAA, or the machine-learned variant of TAA which could be called DLSS. The thread is longer, click to see the full thread.

 
And I'm questioning the framing of the point.

I would say yes DLSS can be better than native.
Native can have its own artifacts, for example in SSR and shadowing.
Because unless you're only doing photo mode, you are configuring the settings for a native resolution and a framerate.

Otherwise the comparison is set up to favour one side.
I think you think I don't understand your view or that I'm being purposely awkward.
I'm not, I'm just saying IMO the framing matters, as that's the point of DLSS.

If native has artifacting then there is something inherently wrong with how it's programmed. Thus, DLSS "looking better" than native is only a way of saying that a game is programmed with the arse.
 
If native has artifacting then there is something inherently wrong with how it's programmed. Thus, DLSS "looking better" than native is only a way of saying that a game is programmed with the arse.

Temporal accumulation, assuming you can figure out how to use the samples, can have more information available than native. Another thing is that native might not have enough samples, leading to shimmering like that seen in Death Stranding. 4K and ray tracing are expensive, and there is no good brute-force way to solve this while maintaining performance. The rendering is adjusted so that every frame is sampled at a slightly different position. This leads to increased detail even when rendering the exact same frame multiple times (better than native).

Some games choose to use TAA to fight this by having more information available to construct a more stable image. Depending on how good the TAA is, the end result can be a blurry mess, better than native, or anything in between. See the link above to the tweet from Brian Karis.
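As a rough illustration of the jitter-and-accumulate idea (my own toy Python sketch, not DLSS or any engine's actual code; the Halton jitter and blend factor are just assumptions): a static scene sampled at a slightly different sub-pixel offset each frame, blended into a history buffer, ends up integrating more samples per pixel than any single native frame.

```python
import numpy as np

def halton(index, base):
    """Low-discrepancy Halton sequence value in [0, 1), used for sub-pixel jitter."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def render_frame(width, jitter, freq=37.0):
    """Point-sample a high-frequency 1D 'scene' at pixel centres plus a jitter offset."""
    x = (np.arange(width) + 0.5 + jitter) / width
    return 0.5 + 0.5 * np.sin(2.0 * np.pi * freq * x)

# Accumulate 16 jittered frames of a static scene with an exponential blend,
# the same basic trick a TAA-style accumulator uses.
width, alpha = 64, 0.1
history = render_frame(width, halton(1, 2) - 0.5)
for frame in range(2, 17):
    current = render_frame(width, halton(frame, 2) - 0.5)
    history = alpha * current + (1.0 - alpha) * history

# The accumulated result integrates many jittered samples per pixel, which is
# why a temporally stable image can shimmer less than one raw native frame.
print(history[:8])
```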
 
Temporal accumulation, assuming you can figure out how to use the samples, can have more information available than native. Another thing is that native might not have enough samples, leading to shimmering like that seen in Death Stranding. 4K and ray tracing are expensive, and there is no good brute-force way to solve this while maintaining performance.

Some games choose to use TAA to fight this by having more information available to construct a more stable image. Depending on how good the TAA is, the end result can be a blurry mess, better than native, or anything in between. See the link above to the tweet from Brian Karis.

Again, this is something that is completely a problem of how the game is implemented. Not having enough samples? Laughable, if you think about where the samples are taken (hint: the frames are produced by the game engine). I will look at the thread when I have access to Twitter again, don't worry. And again, you are taking samples from a lower-resolution image.
So instead of having

- frame production + post-processing + possibly gathering samples from more frames

I have

- frame production at a lower resolution + post-processing + gathering more frames + upsampling again

At least two of these steps introduce more approximation into the final result.
Again, is it an acceptable compromise? For many people yes, of course. But a lot of comparisons out there say that IQ is NOT the same as marketing wants you to believe, at least on today's titles.

And of course it depends on the title; some hold up better with DLSS.
But, for example, let's look at this:

https://www.dsogaming.com/pc-perfor...ops-cold-war-dlss-2-0-ray-tracing-benchmarks/

"Unfortunately, DLSS noticeably blurs the image, even on Quality Mode. Below you can find some comparison screenshots. As you can clearly see, the native screenshots appear sharper than the DLSS ones. Needless to say that all the other DLSS modes degrade the image quality even further, so we suggest avoiding them (at least for now)."

Watch Dogs: Legion had similar issues.
 
Again, this is something that is completely a problem of how the game is implemented. Not having enough samples? Laughable, if you think about where the samples are taken (hint: the frames are produced by the game engine).

It's a real issue if you want to do GI, lights, shadows, etc. with ray tracing. You can cast only so many rays per frame. You need temporal accumulation to collect enough samples. There is more temporal accumulation going on than just what DLSS does. Still, DLSS allowing a lower rendering resolution is great, as halving the resolution cuts the samples produced to a quarter. A higher framerate gives some of this back, as the difference between frames is smaller and hence easier to resolve from the temporally accumulated samples.

The time of brute-force solutions is over. HW is not going to magically get an order of magnitude faster, and being smarter is the game triple-A engines are playing.

As far as DLSS goes, the performance uplift has to be taken into account when comparing quality... or one would have to somehow compensate so that 4K and DLSS 4K performance were the same before doing comparisons.
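To put a toy number on the ray-budget point (my own illustrative Python, not any engine's actual denoiser; the noise level and blend factor are assumptions): with roughly one ray per pixel per frame, a single frame is far too noisy, and it's the frame-over-frame accumulation that pulls the estimate close to the converged value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_radiance = 0.35          # hypothetical converged value for one pixel
frames, alpha = 60, 0.1       # 60 accumulated frames, TAA-style blend factor

history = None
for frame in range(frames):
    # One noisy ray-traced sample per pixel per frame (toy noise model).
    sample = np.clip(true_radiance + rng.normal(0.0, 0.3), 0.0, 1.0)
    history = sample if history is None else alpha * sample + (1 - alpha) * history

print(f"single-sample noise ~0.3, error after accumulation: {abs(history - true_radiance):.3f}")
```

Halving the rendering resolution in both axes means a quarter of those per-frame samples, which is why the accumulation (and a higher framerate feeding it) matters so much.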
 
It's a real issue if you want to do GI, lights, shadows, etc. with ray tracing. You can cast only so many rays per frame. You need temporal accumulation to collect enough samples. There is more temporal accumulation going on than just what DLSS does. Still, DLSS allowing a lower rendering resolution is great, as halving the resolution cuts the samples produced to a quarter. A higher framerate gives some of this back, as the difference between frames is smaller and hence easier to resolve from the temporally accumulated samples.

The time of brute-force solutions is over. HW is not going to magically get an order of magnitude faster, and being smarter is the game triple-A engines are playing.

As far as DLSS goes, the performance uplift has to be taken into account when comparing quality... or one would have to somehow compensate so that 4K and DLSS 4K performance were the same before doing comparisons.

This is a completely different topic (ray tracing being too heavy on hardware without upscaling). And I don't understand at all the point about "you need more temporal accumulation". It depends on the game, and how the scene is rendered. Not having enough horsepower to ensure a frame is rendered at an acceptable frame rate is a completely different thing from saying "upsampling from a lower-sample source will look better even if there were enough horsepower".
Granted, there are games that would be unplayable on some cards at certain resolutions, and upscaling is for now a good compromise there. That does not mean IQ is magically better "because you upscale from a downsample" even when you have enough horsepower.
 
This is a completely different topic (ray tracing being too heavy on hardware without upscaling). And I don't understand at all the point about "you need more temporal accumulation". It depends on the game, and how the scene is rendered. Not having enough horsepower to ensure a frame is rendered at an acceptable frame rate is a completely different thing from saying "upsampling from a lower-sample source will look better even if there were enough horsepower".
Granted, there are games that would be unplayable on some cards at certain resolutions, and upscaling is for now a good compromise there. That does not mean IQ is magically better "because you upscale from a downsample" even when you have enough horsepower.

It's mostly ray tracing titles that require a DLSS-like solution. Anything else is plenty fast without enabling DLSS. Games like Minecraft, Control, Metro Exodus, Tomb Raider and so on come to mind. Cyberpunk 2077 performance with ray tracing on we will know in about a week when the reviews come out. There are also some odd legacy games like Death Stranding where DLSS is the best compromise, as none of the modes barring SSAA are "perfect". Native shimmers, TAA blurs, and DLSS has some bugs/issues in Death Stranding.
 
And of course it depends on the title; some hold up better with DLSS.
But, for example, let's look at this:

https://www.dsogaming.com/pc-perfor...ops-cold-war-dlss-2-0-ray-tracing-benchmarks/

"Unfortunately, DLSS noticeably blurs the image, even on Quality Mode. Below you can find some comparison screenshots. As you can clearly see, the native screenshots appear sharper than the DLSS ones. Needless to say that all the other DLSS modes degrade the image quality even further, so we suggest avoiding them (at least for now)."

Watch Dogs: Legion had similar issues.

I remember reading that article when it first released and, as then, I still fail to see any difference in blurriness. If that's the output of DLSS Quality then, as far as I'm concerned, that's as good as native.
 