Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Am I correct to assume this is per-axis resolution?
Meaning that setting the slider to 50% actually means rendering at 25% of the native pixel count internally, and setting it to 66% means ~44% of the native pixel count.

I get the lower threshold of 50%, but 66% as the upper threshold seems low. I wonder if the cases where DLSS 2 isn't working so well could benefit from a higher resolution (at the cost of lower performance, of course).

50% = Performance mode and 66% = Quality mode, so that scale would make sense. Interesting that it doesn't go down to 33% for Ultra Performance mode.
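
A quick sanity check on the arithmetic (a minimal sketch; the mode-to-scale mapping is the commonly cited DLSS 2 one, and the helper function is just for illustration):

```
# Per-axis render scale -> internal resolution and pixel-count fraction.
def internal_res(out_w, out_h, axis_scale):
    w, h = int(out_w * axis_scale), int(out_h * axis_scale)
    return w, h, (w * h) / (out_w * out_h)

# Commonly cited DLSS 2 per-axis scales, at a 4K output.
for mode, scale in [("Ultra Performance", 1 / 3), ("Performance", 0.5),
                    ("Balanced", 0.58), ("Quality", 2 / 3)]:
    w, h, frac = internal_res(3840, 2160, scale)
    print(f"{mode}: {w}x{h} ({frac:.0%} of the output pixels)")
```

Which gives 1920x1080 (25%) for Performance and 2560x1440 (44%) for Quality, matching the figures above.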
 
People need to get out of the mindset of trying to make apples-to-apples comparisons.

Would it be fair to compare Xbox Series X at native resolution to PS5 at checkerboard 4K from some undetermined lower resolution and say they're the same?
 
Would it be fair to compare Xbox Series X at native resolution to PS5 at checkerboard 4K from some undetermined lower resolution and say they're the same?
I'd say it's not relevant to the context. For some reason, perhaps my mistake of scrolling through posts too quickly, I was under the assumption that people saw the 2060 as the benchmark for the PS5 to beat, and thus that by beating it the PS5 was punching above its weight. But the PS5 is naturally significantly beefier than a 2060, so that was my confusion with the claim, or rather with the claim that it was outperforming the DLSS version, without the understanding that DLSS itself still has fixed costs on the rendering pipeline.
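
To put rough numbers on that fixed cost (a back-of-the-envelope sketch; the millisecond figures are illustrative assumptions, not measurements, and render cost is assumed to scale linearly with pixel count):

```
# Rough frame-time model: shading work scales with pixel count, while DLSS
# adds a roughly fixed per-frame cost however much work it saved.
# All figures below are illustrative assumptions.
native_ms = 16.7        # assumed cost of a native 4K frame
dlss_cost_ms = 1.5      # assumed fixed DLSS execution cost per frame

for mode, axis_scale in [("Quality", 2 / 3), ("Performance", 0.5)]:
    pixel_fraction = axis_scale ** 2
    frame_ms = native_ms * pixel_fraction + dlss_cost_ms
    print(f"{mode}: {frame_ms:.1f} ms vs {native_ms} ms native "
          f"({native_ms / frame_ms:.2f}x)")
```

So the speedup is always less than the raw pixel-count reduction would suggest, and the fixed cost bites hardest at the lowest internal resolutions.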

Checkerboarding is graphically inferior to native 4K. I don't think there has been a comparison where, with respect to image quality, checkerboarding was preferred to native.
With respect to frame rate, that is a different story.

The same cannot necessarily be said of DLSS versus native 4K; it would be a case-by-case basis, of course.

But when I wrote "apples-to-apples comparisons", I meant comparisons within the same vendor and the same architecture, to narrow the claims being made. I.e., people need to stop claiming apples-to-apples comparisons when they're really apples to oranges.

The PS5 and XSX cost the same, come from the same vendor, use the same architecture, and generally share the same specs. Comparing the two has more to gain from deduction (by eliminating the common elements) than, say, comparing the PS5 to a 2060 with DLSS on (the two have very little in common).
 
You would have to add tensor-core TFLOPS on top of that compute when running DLSS, right?

This shows that compute runs all the time, while ray tracing and DLSS run concurrently on top of it, adding performance beyond the normal compute throughput:
[attached screenshot]

Pure FLOPS were always a "rubber" metric... but now it's gotten a whole lot worse.
There's not a lot of value in making that type of comparison if one is using hardware the other is not; there's nothing to compare except price points and resulting performance.

You can't factor tensor FLOPS into the equation. They're only being used for one function.

Hard to make a statement about much else, really, at least on a technical level.
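
One way to see why the two numbers can't simply be added (a toy model; every figure is an illustrative assumption, not the spec of any real GPU):

```
# Toy model: tensor cores only run the DLSS step, so their peak FLOPS are
# not general-purpose throughput. All figures are illustrative assumptions.
shader_tflops = 10.0   # general-purpose shader throughput
tensor_tflops = 40.0   # tensor throughput, usable only for DLSS
shader_work = 0.95     # fraction of a frame's work only shaders can run
tensor_work = 0.05     # fraction the tensor cores can take over

# Serial case: each unit works through its own share of the frame.
serial_time = shader_work / shader_tflops + tensor_work / tensor_tflops
# Concurrent case: the units overlap, so the frame is bounded by
# whichever share takes longer.
concurrent_time = max(shader_work / shader_tflops, tensor_work / tensor_tflops)

print(f"Serial:     {1 / serial_time:.1f} effective TF")
print(f"Concurrent: {1 / concurrent_time:.1f} effective TF")
print(f"Naive sum:  {shader_tflops + tensor_tflops:.0f} TF")
```

Either way the effective figure lands around 10 TF, nowhere near the naive 50 TF sum, because the tensor cores can only ever touch their small slice of the frame.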
 
Would it be fair to compare Xbox Series X at native resolution to PS5 at checkerboard 4K from some undetermined lower resolution and say they're the same?
So-called "native" resolution is just another approximation at producing a ground-truth image (which would be a hypothetical infinitely-supersampled render). It suffers from aliasing and temporal instability -- problems that TAA attempts to smooth over. Everything is trying to approximate the ground-truth. That's no different from what any of the reconstruction techniques (be it checkerboarding, TAAU, DLSS or anything else) are trying to do. So in general I would say it's fair to compare them, but of course they aren't the "same".

Now, we may observe that one technique consistently produces either (a) subjectively better-looking results or (b) objectively closer-to-ground-truth results than another. I would say this is true of checkerboarding vs. native/TAA. Therefore, the consistent inferiority of checkerboarding must be taken into account in any comparisons. However, DLSS-vs-native/TAA is a different story. They produce different results. Both are approximations, neither is perfect. Whether one appears "better" than the other is subjective (and none of us have objective mean-square errors vs. ground-truth images) and depends on the observer. And so perhaps one can make the argument that it's reasonable to consider them iso-quality while focusing on performance differences.
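
For what it's worth, the objective metric mentioned above is straightforward to compute if one has a heavily supersampled reference capture of the same scene (a minimal NumPy sketch; the file names are hypothetical placeholders):

```
import numpy as np
from PIL import Image

# Compare a reconstructed frame against a supersampled "ground truth"
# capture of the same scene. Assumes both captures share a resolution.
truth = np.asarray(Image.open("ground_truth_16x_ssaa.png"), dtype=np.float64)
test = np.asarray(Image.open("dlss_quality.png"), dtype=np.float64)

mse = np.mean((truth - test) ** 2)        # mean-square error per channel
psnr = 10 * np.log10(255.0 ** 2 / mse)    # higher = closer to ground truth
print(f"MSE: {mse:.2f}, PSNR: {psnr:.2f} dB")
```

The hard part, of course, is obtaining that ground-truth capture in the first place, which is exactly why nobody in this thread has such numbers.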
 
Difficult to argue with that.

DLSS is definitely the best of the upsampling techniques, though I do wonder whether the difference between DLSS and other upscaling techniques is noticeable at normal viewing distances. I guess it depends on the technique that's used.

Hopefully over the course of the generation we'll see less reliance on native 4K and more use of scaling techniques appropriate to each platform.

As per usual, Nvidia is well ahead of the pack with this technology from what I can tell, especially since it can render from as little as 1/4 of the output resolution.

My only hesitation would be in saying that one upscaling technique is allowed to be compared to native resolution while others are not (not accusing you of this, btw). Despite there being a quality difference, we'd need at least consistent messaging, even if it's with the proviso that technique X isn't as good as technique Y.

So-called "native" resolution is just another approximation at producing a ground-truth image (which would be a hypothetical infinitely-supersampled render). It suffers from aliasing and temporal instability -- problems that TAA attempts to smooth over. Everything is trying to approximate the ground-truth. That's no different from what any of the reconstruction techniques (be it checkerboarding, TAAU, DLSS or anything else) are trying to do. So in general I would say it's fair to compare them, but of course they aren't the "same".

Now, we may observe that one technique consistently produces either (a) subjectively better-looking results or (b) objectively closer-to-ground-truth results than another. I would say this is true of checkerboarding vs. native/TAA. Therefore, the consistent inferiority of checkerboarding must be taken into account in any comparisons. However, DLSS-vs-native/TAA is a different story. They produce different results. Both are approximations, neither is perfect. Whether one appears "better" than the other is subjective (and none of us have objective mean-square errors vs. ground-truth images) and depends on the observer. And so perhaps one can make the argument that it's reasonable to consider them iso-quality while focusing on performance differences.
 
My only hesitation would be in saying that one upscaling technique is allowed to be compared to native resolution while others are not (not accusing you of this, btw).
I don't think there are any rules for comparison; people are welcome to make the case for any sort of comparison, provided the context makes sense.

The context of determining whether a device is punching above its weight is more about comparing its output relative to its own graphical prowess.

But the context of comparing pure graphical muscle between devices would require them to run the same software, rendering pipelines, and loads; this would be normal discourse for any sort of benchmarking.
 
I don't think there are any rules for comparison; people are welcome to make the case for any sort of comparison, provided the context makes sense.

The context of determining whether a device is punching above its weight is more about comparing its output relative to its own graphical prowess.

But the context of comparing pure graphical muscle between devices would require them to run the same software, rendering pipelines, and loads; this would be normal discourse for any sort of benchmarking.

My thoughts are that it needs to be contextually consistent, regardless of the platform. If we compare DLSS to native resolution, then we compare checkerboard to native resolution. It's why we have standardisation principles across all industries: it takes personal bias out of the equation.

I think most users of this forum are platform independent, but almost everyone slips into some "choice supportive bias" at times.
 
My thoughts are that it needs to be contextually consistent, regardless of the platform. If we compare DLSS to native resolution, then we compare checkerboard to native resolution. It's why we have standardisation principles across all industries: it takes personal bias out of the equation.

I think most users of this forum are platform independent, but almost everyone slips into some "choice supportive bias" at times.
Once again, I think it comes down to the claim.
If the claim is X using data point A, and someone has already done the hard work of walking people through the logical conclusions for you, you just link the evidence.

If the claim is X using data point B and people are not able to understand how you got to your conclusion, then you should be prepared to explain how you got there, or at least to defend it.

I am okay with accepting that perhaps my English comprehension is not up to snuff, this is an international forum after all, but if I read the claim that something is punching above its weight, you are making two claims that need to be resolved: first, that the thing you're comparing to has to weigh more; and second, that you've outperformed it at the same task.

If the claim is that something with less GPU power is performing above its weight as long as checkerboard rendering is enabled, then I'd logically ask to see how it compares against other devices running checkerboard rendering, because what's being compared is not the rendering technique but the weight of the cards.

If you want to compare rendering techniques, DLSS vs. native vs. CBR, then the claim should be about image quality vs. the amount of performance lost to achieve it. GPU hardware should not be a factor at all.
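
To make that last framing concrete, the comparison would look something like this (a toy sketch; every figure below is an invented placeholder, not a measurement):

```
# Toy framing: fix the hardware, then weigh each technique's image quality
# against the performance given up. All figures are invented placeholders.
techniques = {
    # name: (PSNR vs. ground truth in dB, frame time in ms)
    "Native + TAA": (34.0, 16.7),
    "DLSS Quality": (33.5, 9.0),
    "Checkerboard": (31.0, 10.5),
}

native_ms = techniques["Native + TAA"][1]
for name, (psnr_db, frame_ms) in techniques.items():
    speedup = native_ms / frame_ms
    print(f"{name}: {psnr_db:.1f} dB at {frame_ms:.1f} ms "
          f"({speedup:.2f}x native speed)")
```

Framed that way, the GPU cancels out of the comparison entirely: you are ranking techniques, not cards.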
 
Once again, I think it comes down to the claim.
If the claim is X using data point A, and someone has already done the hard work of walking people through the logical conclusions for you, you just link the evidence.

If the claim is X using data point B and people are not able to understand how you got to your conclusion, then you should be prepared to explain how you got there, or at least to defend it.

I am okay with accepting that perhaps my English comprehension is not up to snuff, this is an international forum after all, but if I read the claim that something is punching above its weight, you are making two claims that need to be resolved: first, that the thing you're comparing to has to weigh more; and second, that you've outperformed it at the same task.

If the claim is that something with less GPU power is performing above its weight as long as checkerboard rendering is enabled, then I'd logically ask to see how it compares against other devices running checkerboard rendering, because what's being compared is not the rendering technique but the weight of the cards.

If you want to compare rendering techniques, DLSS vs. native vs. CBR, then the claim should be about image quality vs. the amount of performance lost to achieve it. GPU hardware should not be a factor at all.

I don't think I've said anything about any hardware "punching above its weight"? I'm happy to express my opinion on it, though, if you'd like.

There have been a number of assumptions in the past that the latest consoles both perform roughly in line with an Nvidia 2060S, despite both machines having more teraflops available to them. It was reiterated in several Digital Foundry videos, usually around games using ray tracing.

I get the impression that a few people were surprised (myself included) that the same comparison wasn't repeated in a pure rasterisation test, where a console flat out outperformed the 2060S even with the best upscaling solution in the industry in use. As others have stated, we shouldn't be too surprised, since the console has significantly more teraflops available to it.

The point I'm trying to make - and I don't think it should be a controversial one - is that we need to be consistent. If one comparison point is made, we shouldn't then ignore data that contradicts it.

As I've stated previously, some analyses are unidirectional; that is, they report the data when it fits the previously defined hypothesis and ignore conflicting data.
 
I don't think I've said anything about any hardware "punching above its weight"? I'm happy to express my opinion on it, though, if you'd like.
Sorry, I had been following that claim through the whole thread; I know you didn't make it. I saw it, I had to respond, and from that point forward I was under the assumption that I had to defend my position.

There have been a number of assumptions in the past that the latest consoles both perform roughly in line with an Nvidia 2060S, despite both machines having more teraflops available to them. It was reiterated in several Digital Foundry videos, usually around games using ray tracing.

I get the impression that a few people were surprised (myself included) that the same comparison wasn't repeated in a pure rasterisation test, where a console flat out outperformed the 2060S even with the best upscaling solution in the industry in use. As others have stated, we shouldn't be too surprised, since the console has significantly more teraflops available to it.
Interesting, I guess I never felt that undertone coming from DF. I respect that there is going to be variation, but I don't think either GPU has consistently performed around 2060S levels. Generally speaking, I think the reason DF doesn't do those tests is that the 2060S obviously cannot compete except in scenarios that play to its hardware advantages: ray tracing, and tensor cores for DLSS.

The videos are an attempt to showcase how even a notably weaker GPU, with the right accelerators, can compete with stronger ones. I never saw that as a slight against either console, but rather as a showcase of rendering efficiency for exotic pipelines.
 
I don't think I've said anything about any hardware "punching above its weight"? I'm happy to express my opinion on it, though, if you'd like.

There have been a number of assumptions in the past that the latest consoles both perform roughly in line with an Nvidia 2060S, despite both machines having more teraflops available to them. It was reiterated in several Digital Foundry videos, usually around games using ray tracing.

I get the impression that a few people were surprised (myself included) that the same comparison wasn't repeated in a pure rasterisation test, where a console flat out outperformed the 2060S even with the best upscaling solution in the industry in use. As others have stated, we shouldn't be too surprised, since the console has significantly more teraflops available to it.

The point I'm trying to make - and I don't think it should be a controversial one - is that we need to be consistent. If one comparison point is made, we shouldn't then ignore data that contradicts it.

As I've stated previously, some analyses are unidirectional; that is, they report the data when it fits the previously defined hypothesis and ignore conflicting data.
https://www.eurogamer.net/articles/digitalfoundry-2020-assassins-creed-valhalla-ps5-vs-pc
"Based on tests with a 2080 Ti, it looks like a 2080 Super or RTX 3060 Ti would be required to match or exceed PlayStation 5's output. However, based on my tests with a Navi-based RX 5700, I'd expect a 5700 XT to get within striking distance of the console's throughput."
Call of Duty Black Ops: Cold War - what PC hardware do you need to match PS5? • Eurogamer.net
"Regardless, at the top end, an RTX 3090 delivers an 81.2 per cent boost to performance in this segment at equivalent settings, while RTX 3070 is just 8.6 per cent faster. An RTX 2070 Super can't match PlayStation 5 - in fact, it's 20 per cent slower. On the AMD side of things, I found the RX 6800 XT's result to be off-pace - it has 72 compute units vs the 36 inside PlayStation 5, it's based on the same architecture, and clock speeds are broadly equivalent, yet it delivered just 29.4 per cent of extra performance."
---
Each time, I show a range of GPUs, the 2060S is always among the pack, and I describe which GPU it performs closest to. Not exactly sure what inconsistency you are talking about?
 
https://www.eurogamer.net/articles/digitalfoundry-2020-assassins-creed-valhalla-ps5-vs-pc
"Based on tests with a 2080 Ti, it looks like a 2080 Super or RTX 3060 Ti would be required to match or exceed PlayStation 5's output. However, based on my tests with a Navi-based RX 5700, I'd expect a 5700 XT to get within striking distance of the console's throughput."
Call of Duty Black Ops: Cold War - what PC hardware do you need to match PS5? • Eurogamer.net
"Regardless, at the top end, an RTX 3090 delivers an 81.2 per cent boost to performance in this segment at equivalent settings, while RTX 3070 is just 8.6 per cent faster. An RTX 2070 Super can't match PlayStation 5 - in fact, it's 20 per cent slower. On the AMD side of things, I found the RX 6800 XT's result to be off-pace - it has 72 compute units vs the 36 inside PlayStation 5, it's based on the same architecture, and clock speeds are broadly equivalent, yet it delivered just 29.4 per cent of extra performance."
---
Each time, I show a range of GPUs, the 2060S is always among the pack, and I describe which GPU it performs closest to. Not exactly sure what inconsistency you are talking about?

My comment was more in reference to the Nioh analysis and some of the Hitman analyses. After Hitman, you stated that it was the truest measurement of performance and that you didn't believe the COD and Assassin's Creed analyses could have been accurate, since they yielded different results, likely due to settings differences between console and PC.

You created a vs. video for Nioh, but it left me puzzled afterwards, since there wasn't a similar analysis of which PC GPU was closest in performance. I could only conclude that the 2060S was a lot worse and the 3080 a lot better, which is quite a broad range.

I think your work is generally excellent. Some of the very best videos have been yours.
 
The PS5 sits a tad above the 5700 XT, where it belongs. It doesn't 'punch above its weight'; it's the XSX that underperformed in some early titles. There's a good reason for comparing against dGPUs to gauge performance.

Talking NV, that's around a 2070/2070S, if it's so important to compare against Turing and Ampere products (more so than AMD variants, for some reason).

Alex, don't take the criticism people stealthily direct at you here personally; it's the platforms that some care about, not the creator of the tech videos.
 
My comment was more in reference to the Nioh analysis and some of the Hitman analyses. After Hitman, you stated that it was the truest measurement of performance and that you didn't believe the COD and Assassin's Creed analyses could have been accurate, since they yielded different results, likely due to settings differences between console and PC.

You created a vs. video for Nioh, but it left me puzzled afterwards, since there wasn't a similar analysis of which PC GPU was closest in performance. I could only conclude that the 2060S was a lot worse and the 3080 a lot better, which is quite a broad range.

I think your work is generally excellent. Some of the very best videos have been yours.
I think for that video we did not spend the time doing the full analysis because, for one, the PC version does not have DRS, and because we started the Nioh 2 PC video knowing ahead of time that we could not invest more than 2.5 days into the capture, scripting, and production, as we knew the video would do poorly in terms of numbers. If I'd had 4 to 5 days to work on it, then yeah, it could have been a possibility.
In the end, not every PC video will end up including a performance comparison to consoles, as we may deem it not worth it given the viewership return. Sadly, people do not care about Nioh 2!
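
(For anyone wondering why missing DRS matters for like-for-like testing: console dynamic resolution continuously retargets the render scale from GPU frame time, roughly like the simplified controller below. This is a hedged sketch, not any engine's actual implementation.)

```
class DRSController:
    """Simplified dynamic-resolution controller: nudges the per-axis render
    scale so GPU frame time converges on the target. Real engines use richer
    heuristics and history windows; this is only a sketch."""

    def __init__(self, target_ms=16.7, min_scale=0.6, max_scale=1.0):
        self.target_ms = target_ms
        self.min_scale, self.max_scale = min_scale, max_scale
        self.scale = max_scale

    def update(self, gpu_frame_ms):
        headroom = self.target_ms / gpu_frame_ms   # > 1.0 means frame was cheap
        self.scale *= headroom ** 0.5              # damped adjustment
        self.scale = min(self.max_scale, max(self.min_scale, self.scale))
        return self.scale

drs = DRSController()
print(drs.update(20.0))   # over budget -> scale drops below 1.0
```

A fixed-resolution PC capture simply has no equivalent of this moving target, which is part of why matching settings across platforms is so fiddly.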
 
I think for that video we did not spend the time doing the full analysis because, for one, the PC version does not have DRS, and because we started the Nioh 2 PC video knowing ahead of time that we could not invest more than 2.5 days into the capture, scripting, and production, as we knew the video would do poorly in terms of numbers. If I'd had 4 to 5 days to work on it, then yeah, it could have been a possibility.
In the end, not every PC video will end up including a performance comparison to consoles, as we may deem it not worth it given the viewership return. Sadly, people do not care about Nioh 2!
I'll take the time to plug the DF subscription here: if people here want the full in-depth stuff, they should consider paying. Otherwise, I think the realities are clear: they will cut content to get their videos out earlier, because there is ever more competition in this space and so many games to get through. If DF aren't given a heads-up, or there is no embargo for these types of things, this is what happens: it's a rush to just deliver content.
 