Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

There are review sites that disable raytracing because of the performance impact. So DLSS is a different way to increase performance. IQ doesn't need to be equal, because that is a cop-out to disable features for a subjective parity.
DLSS 2.0 is great tech. I'm just trying to understand the logic behind a statement that claims any comparison not using RT and DLSS is moot. I suspect most would be quite vocally against such a sentiment if it wasn't in favor of Nvidia.
 
We already know from some devs that DXR is limiting what they can do on PC (on both AMD and Nvidia) versus what they're allowed to do on PS5 (maybe Xbox too, I don't remember). In that case it's, for now, an API "battle", but I guess DXR will evolve too, like DX did / does.
Did any of them try Vulkan? Or rather talk about whether or not those DXR limitations are gone with the vendor-specific Vulkan extensions?
 
DLSS 2.0 is great tech. I'm just trying to understand the logic behind a statement that claims any comparison not using RT and DLSS is moot. I suspect most would be quite vocally against such a sentiment if it wasn't in favor of Nvidia.

You could use frames instead of image quality. What is the image quality at a certain FPS point, or how many FPS do you need to achieve similar quality to DLSS?
 
Did any of them try Vulkan? Or rather talk about whether or not those DXR limitations are gone with the vendor-specific Vulkan extensions?

For the most part there's nothing interesting about ray tracing on Vulkan. The Nvidia extension is mostly a match for DXR 1.0. The multi-vendor KHR extension is equivalent to DXR 1.1.

The only remotely interesting ray tracing vendor extension on PC is on D3D12 with AMD AGS. It's already being used in titles like Dirt 5, Godfall, WoW: Shadowlands, and likely the upcoming RE Village and Far Cry 6.

I have no idea if NVAPI has functionality extending DXR like AGS does ...
 
So-called "native" resolution is just another approximation at producing a ground-truth image (which would be a hypothetical infinitely-supersampled render). It suffers from aliasing and temporal instability -- problems that TAA attempts to smooth over. Everything is trying to approximate the ground-truth. That's no different from what any of the reconstruction techniques (be it checkerboarding, TAAU, DLSS or anything else) are trying to do. So in general I would say it's fair to compare them, but of course they aren't the "same".

Now, we may observe that one technique consistently produces either (a) subjectively better-looking results or (b) objectively closer-to-ground-truth results than another. I would say this is true of checkerboarding vs. native/TAA. Therefore, the consistent inferiority of checkerboarding must be taken into account in any comparisons. However, DLSS-vs-native/TAA is a different story. They produce different results. Both are approximations, neither is perfect. Whether one appears "better" than the other is subjective (and none of us have objective mean-square errors vs. ground-truth images) and depends on the observer. And so perhaps one can make the argument that it's reasonable to consider them iso-quality while focusing on performance differences.
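The "objective mean-square errors vs. ground-truth images" mentioned above are easy to sketch. Below is a toy illustration (all numbers and images are made up for the example): a fine gradient stands in for the detail a supersampled render would capture, box-filtering it down gives a "ground truth" image, and we compare a one-sample-per-pixel "native" image and a naive nearest-neighbour upscale against it.

```python
import numpy as np

def mse(img_a, img_b):
    """Mean-square error between two equally sized images."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.mean(diff ** 2))

# Toy "scene" with sub-pixel detail: a 32x32 gradient standing in for the
# signal an infinitely supersampled render would capture.
hi_res = np.linspace(0.0, 1.0, 32 * 32).reshape(32, 32)

# "Ground truth" 8x8 image: box-filter every 4x4 block (16x supersampling).
ground_truth = hi_res.reshape(8, 4, 8, 4).mean(axis=(1, 3))

# "Native" 8x8 image: one point sample per pixel -- aliased, no filtering.
native = hi_res[0::4, 0::4]

# Naive upscale: render at quarter res (4x4), nearest-neighbour up to 8x8.
upscaled = hi_res[0::8, 0::8].repeat(2, axis=0).repeat(2, axis=1)

print(mse(native, ground_truth), mse(upscaled, ground_truth))
```

On this toy data both approximations have nonzero error against the ground truth, with the naive upscale farther away than point-sampled "native" -- which is the whole point: every practical image sits somewhere on that error scale.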
I can't follow your logic.
How do you justify allowing one scaled guessworking technology to be counted as "approximation", but discard other scaled technologies?
Checkerboarding is just as much an approximation as DLSS is, even if its IQ is in many ways worse.
Counting native as approximation is just BS; it was considered the ground truth (TAA or not) in game IQ discussions until the day DLSS came and certain people started forcing it as equivalent despite its clear issues (even with DLSS 2.0) because "it sometimes looks as good or even better" - none of which changes the fact it's upsampled guesswork with its own issues and drawbacks.

The moment you allow one scaling and/or guessworking method in, you need to allow them all, and it becomes a subjective playing field. Counting DLSS as native equivalent is just intellectual dishonesty: either you allow all and come up with your own personal IQ/FPS-grading system, or you allow none.
 
For the most part there's nothing interesting about ray tracing on Vulkan. The Nvidia extension is mostly a match for DXR 1.0. The multi-vendor KHR extension is equivalent to DXR 1.1.

The only remotely interesting ray tracing vendor extension on PC is on D3D12 with AMD AGS. It's already being used in titles like Dirt 5, Godfall, WoW: Shadowlands, and likely the upcoming RE Village and Far Cry 6.

I have no idea if NVAPI has functionality extending DXR like AGS does ...
Yes, of course. DXR is pretty closely tailored to Nvidia, who were the first and, for two years or so, the only ones to have hardware supporting RT. But it was posted that developers do not face the same restrictions on console hardware, which basically is AMD. So it would be interesting to see whether or not this is mitigated by vendor-specific Vulkan extensions, where each vendor can do what best suits their hardware's capabilities.

Speaking of AGS and its use in games: Is Dirt 5 RT out of private/closed beta yet? Last time I checked, it required a special key in order to activate it.
 
This is the point I've been trying to make all along. Unfortunately some believe that it's only fair to compare the upscaling technologies when it's the PC's upscaling tech.

@pjbliverpool I'll prepare a response, but it'll likely be during the mid week. If you're unable to see the PS5 genuinely performing better here, then I don't think we can have a rational discussion any longer.

I'm not getting involved in the argument of whether DLSS should or shouldn't be comparable to native; there have already been some excellent posts in that regard in this thread. I'm only commenting in response to your post that did just that, in comparing a 2060S with DLSS to PS5 performance. With respect to that particular comparison, the information we have available is:

  • PS5 with higher LOD @ 1440p - 2160p, avg 1728p - mostly locked 60fps
  • PS5 with same LOD and lower shadows @ 1440p - 2160p, avg 1876p - mostly locked 60fps
  • 2060 (not S) @ DLSS 2160p - seems to vary from 50-60fps with occasional drops into the 40's.
And in addition, your original comparison was to the 2060S, not the 2060; the S version is around 12% faster. So take the 2060 performance above, add 12% for the S version and then 2% more, per DF, for the change in shadow quality (about 7fps more in total), and you're looking at performance that's generally around 60fps with dips into the 50's, at a resolution that ranges from 33% higher on average to 125% higher during the most demanding scenes.

I really don't see how you can draw a conclusion from this that the PS5 is "flat outperforming" the 2060S.
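The arithmetic above can be checked with a quick sketch. The baseline 2060 framerate is an assumption (the observed range is only given as roughly 50-60fps); the +12% and +2% factors and the resolution figures are the ones quoted in the post.

```python
# Assumed baseline: low end of the 2060's observed 50-60fps range.
fps_2060 = 50.0
fps_2060s = fps_2060 * 1.12 * 1.02   # +12% for the S, +2% for shadow quality

def pixels(height, aspect=16 / 9):
    """Pixel count of a frame at the given vertical resolution (16:9 assumed)."""
    return height * round(height * aspect)

# DLSS output resolution (2160p) vs the PS5's dynamic resolution:
avg_ratio = pixels(2160) / pixels(1876)    # vs the PS5's ~1876p average
worst_ratio = pixels(2160) / pixels(1440)  # vs the PS5's 1440p low point

print(round(fps_2060s - fps_2060, 1),      # extra fps from the two factors
      round(avg_ratio, 2),                 # "33% higher on average"
      round(worst_ratio, 2))               # "125% higher" in the worst case
```

With a 50fps baseline the two factors add about 7fps, and the pixel-count ratios come out at roughly 1.33x and 2.25x, matching the "33% higher on average to 125% higher" claim.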
 
I can't follow your logic.
How do you justify allowing one scaled guessworking technology to be counted as "approximation", but discard other scaled technologies?
Checkerboarding is just as much an approximation as DLSS is, even if its IQ is in many ways worse.
Counting native as approximation is just BS; it was considered the ground truth (TAA or not) in game IQ discussions until the day DLSS came and certain people started forcing it as equivalent despite its clear issues (even with DLSS 2.0) because "it sometimes looks as good or even better" - none of which changes the fact it's upsampled guesswork with its own issues and drawbacks.

The moment you allow one scaling and/or guessworking method in, you need to allow them all, and it becomes a subjective playing field. Counting DLSS as native equivalent is just intellectual dishonesty: either you allow all and come up with your own personal IQ/FPS-grading system, or you allow none.

DLSS looks better than native. I guess this makes sense because native just uses one sample per pixel. So, do you really mean native or just something you have called "native"?
 
DLSS looks better than native. I guess this makes sense because native just uses one sample per pixel. So, do you really mean native or just something you have called "native"?
For you, it could look better than native, but that's a subjective opinion, not objective fact based on some established ruleset.

I mean what was considered and called native before DLSS and its promoters came marching in, claiming native should suddenly be some hypothetical infinitely supersampled version of the image that could never really exist in a game.
So no, not just what I have called "native".
 
I really don't see how you can draw a conclusion from this that the PS5 is "flat outperforming" the 2060S.

The PS5 is categorically and objectively outperforming the 2060S by a substantial margin.

A lot of this thread has discussion from many folks on why we should expect the PS5 to outperform it, due to tflops, etc., and why it shouldn't be considered as the PS5 punching above its weight, because the 2060S is in a lower weight class. My impression from reading the thread is that most are in agreement, just a bit of arguing over the semantics.

I'll perform all of the calculations for you and break down all of your points later. I need to wait for a slow day at work to write everything out.
 
For you, it could look better than native, but that's a subjective opinion, not objective fact based on some established ruleset. For many others it's not.

How many of these "many others" play games without any kind of AA?

I mean what was considered and called native before DLSS and its promoters came marching in, claiming native should suddenly be some hypothetical infinitely supersampled version of the image that could never really exist in a game.
So no, not just what I have called "native".

So like playing games without raytracing and just with screen space effects?
 
Yes, of course. DXR is pretty closely tailored to Nvidia, being the first and only ones by two years or so to have hardware supporting RT. But it was posted, that developers do not face the same restrictions on console hardware, which basically is AMD. So, it would be interesting to see, whether or not this is mitigated by vendor-specific vulkan extensions where each vendor can do as it best suits their hardwares' capabilities.

Speaking of AGS and it's use in games: Is Dirt 5 RT out of private/closed Beta yet? Last time I checked, it required a special key in order to activate it.

Theoretically, vendors can expose any hardware feature on Vulkan by implementing extensions. D3D can also have extensions too through driver hacks like AGS or NVAPI.

Ray tracing is still not in the public branch of Dirt 5.
 
How many of these "many others" play games without any kind of AA?
Probably many; you'd be surprised how big a portion of people never even open the graphics settings but instead run at whatever the game sets as default, so long as it runs well enough for them - so they're only using AA if the game sets AA on by default.
As for those, also many, who do use AA: one might prefer TAA, one injected SMAA, while the third guy is yelling Quincunx for life. They could each be subjectively better than the others, or than native without any AA. If the game doesn't give the option to run without AA, I would consider 'native' whatever the developer has chosen as default.

So like playing games without raytracing and just with screen space effects?
Huh?
 
Probably many; you'd be surprised how big a portion of people never even open the graphics settings but instead run at whatever the game sets as default, so long as it runs well enough for them - so they're only using AA if the game sets AA on by default.
As for those, also many, who do use AA: one might prefer TAA, one injected SMAA, while the third guy is yelling Quincunx for life. They could each be subjectively better than the others, or than native without any AA. If the game doesn't give the option to run without AA, I would consider 'native' whatever the developer has chosen as default.


Huh?

Luckily, many people then start to get DLSS + ray tracing by default in new games. For example, CP2077 defaults those settings to on for Nvidia RTX cards.
 
it was considered the ground truth (TAA or not) in game IQ discussions until the day DLSS
TAA was NEVER considered the ground truth; we just accepted it and moved on because it was the only available AA option in most games for the past 4 years. FXAA and MLAA were never considered ground truth either. You can't call something ground truth when it introduces so much blur in motion.

The moment you allow one scaling and/or guessworking method in, you need to allow them all and it becames a subjective playingfield.
This is just pure BS. AI upscaling is NOT the same as any basic upscaling filter .. I can't believe we are still debating such details on a technical forum. It doesn't just guess blindly like any run-of-the-mill basic upscaler; it uses info from a higher ground-truth image, previous frames and motion vectors to reconstruct the image. AMD wouldn't be having such a hard time coming up with one if it was that simple.

You want it to be equal to checkerboarding or simple upscalers? Then that's your subjective prerogative, but technically this couldn't be further from the truth.
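The "previous frames and motion vectors" mechanism described above is easy to sketch in miniature. This is a toy 1D illustration (the signal, sample pattern and blend weight are all made up, not any real DLSS/TAA implementation): the scene scrolls one pixel per frame, each frame renders only half the pixels, and yet the history buffer converges on the full-detail signal because old samples are reprojected along the known motion before blending.

```python
import numpy as np

N = 256
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, N))   # full-detail scene content

history = np.zeros(N)                               # temporal accumulation buffer
for frame in range(8):
    scene = np.roll(signal, -frame)                 # scene scrolls 1 px per frame
    mask = np.zeros(N)
    mask[0::2] = 1.0                                # only even pixels rendered ("half res")
    current = scene * mask

    history = np.roll(history, -1)                  # reproject history by the motion vector
    # Where a fresh sample exists, blend it with the reprojected history;
    # elsewhere, carry the reprojected history forward unchanged.
    history = np.where(mask > 0.0, 0.9 * current + 0.1 * history, history)

# Compare the accumulated buffer against the actual scene at the final frame.
error = float(np.max(np.abs(history - np.roll(signal, -7))))
print(error)
```

Because the motion shifts which scene content lands on the rendered pixels, every position gets covered over successive frames, and the reconstruction error shrinks rapidly - which is why this is categorically different from a spatial-only upscale of a single half-resolution frame.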
 
TAA was NEVER considered the ground truth; we just accepted it and moved on because it was the only available AA option in most games for the past 4 years. FXAA and MLAA were never considered ground truth either. You can't call something ground truth when it introduces so much blur in motion.
The "TAA or not" was there just for the games which simply won't allow you to disable it.

This is just pure BS. AI upscaling is NOT the same as any basic upscaling filter .. I can't believe we are still debating such details on a technical forum. It doesn't just guess blindly like any run-of-the-mill basic upscaler; it uses info from a higher ground-truth image, previous frames and motion vectors to reconstruct the image. AMD wouldn't be having such a hard time coming up with one if it was that simple.

You want it to be equal to checkerboarding or simple upscalers, then that's your subjective prerogative, but this technically couldn't be further from the truth.

No-one said they are the same or even equal, just that if you let one in, you need to let all of them in, because they all produce results which can be subjectively better than the others - different people prefer different things, and the method of scaling isn't relevant.

Even DLSS itself has gone from a NN learning from supersampled game images, to approximating what it would probably look like (an algorithm designed to emulate the results the neural network would come up with, no actual NN involved), to some generic training model which brought the neural network back without game-specific images.
Do all those count as AI, or just the first and last? And since we're here, don't forget we already know simple scalers can produce similar quality* to what DLSS 1 did, which definitely counts as AI as much as 2.0 does - why shouldn't they be included? Or is it a moving goalpost which moves along with DLSS whenever they come up with some new version of it, and if so, we're back at "what makes DLSS so special that it gets treated differently"?

*I don't think anyone thought they were worse in certain games, but there can always be a few who think otherwise, and certainly many agreed they were in fact better by a notable margin in some games.
 
Theoretically, vendors can expose any hardware feature on Vulkan by implementing extensions.
And exactly that was my point (and what I wrote, btw): did the developers who praised the greater possibilities on console hardware try Vulkan, where there is no one-size-fits-all solution (yes, caveat, there are slight differences) but individual extensions?

Ray tracing is still not in the public branch of Dirt 5.
:(
 
Regarding DLSS, TAA and ground truth:

I think many argue from their respective approaches: from a comparison approach, from a gamer's approach, from a fairness approach. We've been accustomed over the past 2.x decades to only compare performance achieved at the same resolution, because everyone had at least to render the same amount of pixels, even if not necessarily with the same effort at all times. For this there are a lot of examples: multisampling as a harmless economization of supersampling, as I have read it described, different color depths, different methods of determining the degree of anisotropy, and many more.

You can of course throw this aside and start from an approach where the ground truth would be the image of the game rendered at infinite resolution with no artifacts whatsoever. You can then follow the path to this ideal image and roughly point out how far along this scale each rendering device is for a given performance target.

This is of course much more prone to subjective bias than the "objective" average fps. But remember: average fps were thought to be the be-all end-all, until frametimes, percentiles and smooth-gameplay observation were done by more and more folks.

You could do the same with different upscaling techniques, but rendered resolution would eventually be a non-issue at some point, where you just judge a sequence of images by its result, not by how the result is obtained - i.e. removing the former equalizer of "same resolution".
 