Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Gen 5 TAA is a term Epic uses themselves, so Tom Looman is clearly talking about the 4.26 TAA ... and upscaled during the TAA step, i.e. what Epic also calls TAAU.
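For anyone unclear on the distinction, here's a rough sketch (hypothetical Python of my own, not Epic's code) contrasting "TAA at render resolution, then a separate spatial upscale" with TAAU, where the jittered low-res samples are accumulated directly into a full-resolution history during the TAA resolve itself:

```python
import numpy as np

# Hypothetical illustration only - real implementations run per pixel on the GPU
# with bilinear sampling, clamping, etc.

def taa_then_upscale(frame_lo, history_lo, alpha=0.1):
    """Classic path: temporal blend at render resolution, then a naive 2x spatial upscale."""
    resolved_lo = alpha * frame_lo + (1.0 - alpha) * history_lo  # exponential history blend
    upscaled = np.kron(resolved_lo, np.ones((2, 2)))             # nearest-neighbour 2x upscale
    return resolved_lo, upscaled

def taau_step(frame_lo, history_hi, jitter, alpha=0.1):
    """TAAU-style path: each jittered low-res sample is blended straight into the 2x history."""
    out = history_hi.copy()
    h, w = frame_lo.shape
    # The per-frame jitter decides which full-res pixel each low-res sample lands in,
    # so over several frames the whole full-res grid gets covered.
    ys = np.arange(h) * 2 + jitter[1]
    xs = np.arange(w) * 2 + jitter[0]
    out[np.ix_(ys, xs)] = alpha * frame_lo + (1.0 - alpha) * out[np.ix_(ys, xs)]
    return out
```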

I'm just wondering where he gets TXAA from; it could be from the source code, which I've never bothered to get access to. That wouldn't necessarily mean it has any similarity to NVIDIA's TXAA; acronyms get reused for different things all the time.
 
TXAA is originally a technique developed by NVIDIA that combines TAA and MSAA components. It preceded plain TAA, arriving during the Kepler architecture era, came in 2X, 4X and 8X levels, and was featured in games such as GTA V, Assassin's Creed Unity, and Crysis 3.

Imo the best implementation of TXAA was in Watch Dogs 2; I think that was the latest version we'll see in any released title.

The temporal stability offered by DLSS is very impressive though; it's hard to make a case for any other type of AA when DLSS is an option nowadays (considering the performance uplift).
 
Well, DLSS 2.0 anyway.

Talking about temporal stability, I've seen suggestions that DLSS originally didn't use motion vectors at all; is that confirmed, or just supposition?
 
Imo the best implementation of TXAA was in Watch Dogs 2; I think that was the latest version we'll see in any released title.

The temporal stability offered by DLSS is very impressive though; it's hard to make a case for any other type of AA when DLSS is an option nowadays (considering the performance uplift).
It wasn't temporally stable at all in WD2. There wasn't much difference compared to just using MSAA+SMAA. I think it was at its best in Crysis 3.
 
It wasn't temporally stable at all in WD2. There wasn't much difference compared to just using MSAA+SMAA.
No, it is the most stable option available in that game. There were other bad implementations, like Far Cry 4 and The Crew, but the rest were very solid in regards to temporal stability; you really can't play Assassin's Creed 3, Black Flag, Unity, or Syndicate without TXAA, otherwise the shimmering and pixel crawling would be too much. It also came at a time when most PC games had little to no TAA whatsoever, instead resorting to basic FXAA or MLAA, or the largely useless MSAA.

Of course, once TAA became widely available, TXAA was retired, as it was performance-intensive compared to TAA while offering only a mild IQ advantage.
 
No, it is the most stable option available in that game. There were other bad implementations, like Far Cry 4 and The Crew, but the rest were very solid in regards to temporal stability; you really can't play Assassin's Creed 3, Black Flag, Unity, or Syndicate without TXAA, otherwise the shimmering and pixel crawling would be too much. It also came at a time when most PC games had little to no TAA whatsoever, instead resorting to basic FXAA or MLAA, or the largely useless MSAA.

Of course, once TAA became widely available, TXAA was retired, as it was performance-intensive compared to TAA while offering only a mild IQ advantage.
It was technically very slightly more stable than MSAA+SMAA, but in practice the difference is so marginal it hardly matters. The accumulation buffer/stage was quite clearly broken in the WD2 implementation, as it was in FC4, The Crew, GTA V, Syndicate and in the majority of games with TXAA TBH. The only benefit you received from TXAA in those titles is the custom resolve it used, which provided a very slight improvement over just combining MSAA with some post-process AA. Ubisoft actually patched a TAA option into WD2 late in its life that has working accumulation, and the image stability is night and day over even the highest TXAA setting. Performance is also dramatically better, as you would expect.
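To make concrete what a "working accumulation" stage means here, a minimal sketch (hypothetical Python/NumPy, my own simplification, not any shipping implementation): the current pixel fetches its history via motion-vector reprojection, the history is validated against the current neighbourhood, and the result is blended into an exponentially weighted history. When that reprojection or blend is broken, you effectively only get the current frame's resolve, which is why stability in motion suffers.

```python
import numpy as np

# Minimal single-channel TAA accumulation sketch (hypothetical simplification).
# A real engine does this per pixel on the GPU with bilinear history sampling,
# YCoCg neighbourhood clamping, disocclusion handling, etc.

def taa_accumulate(current, history, motion, alpha=0.1):
    """current/history: HxW images; motion: HxWx2 per-pixel offsets into the previous frame."""
    h, w = current.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Reproject: fetch the history where this pixel's content was last frame.
    py = np.clip(np.rint(ys + motion[..., 1]).astype(int), 0, h - 1)
    px = np.clip(np.rint(xs + motion[..., 0]).astype(int), 0, w - 1)
    reprojected = history[py, px]

    # Reject stale history by clamping it to the current frame's 3x3 neighbourhood.
    neighbourhood = np.stack([np.roll(np.roll(current, dy, 0), dx, 1)
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    reprojected = np.clip(reprojected, neighbourhood.min(0), neighbourhood.max(0))

    # Exponential blend: a small alpha keeps most of the accumulated history,
    # which is what gives TAA its stability in motion.
    return alpha * current + (1.0 - alpha) * reprojected
```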
 
I don't remember how MSAA + SMAA worked in WD2, but TXAA was a massive improvement. I did a comparison in motion a while ago between that and temporal filtering.

Temporal filtering: https://abload.de/img/watchdogs212.02.2016-mtse6.jpg
TXAA 3.0: https://abload.de/img/watchdogs212.02.2016-9qs2g.jpg
Looking better than a very poor form of checkerboarding with little to no AA at all isn't much of an accomplishment. It's also of little use to compare AA in still images IMO, especially when the focus is on image stability in motion.
 
as it was in FC4, The Crew, GTA V, Syndicate and in the majority of games with TXAA TBH.
Strongly disagree with that statement. Far Cry 4 and The Crew were the only two implementations that were clearly bad; the rest provided a solid AA option.
It was technically very slightly more stable than MSAA+SMAA
Besides, you can't have MSAA + SMAA in these games without going through a lot of hassle; most of these games don't even offer MSAA to begin with, so this point is hardly relevant anyway.
 
TXAA has gone through some weird evolution over the years, culminating in essentially a pure TAA at some point, I think. Direct comparisons of games using "TXAA" may not be apples-to-apples.
 
Well, DLSS 2.0 anyway.

Talking about temporal stability, I've seen suggestions that DLSS originally didn't use motion vectors at all; is that confirmed, or just supposition?
I seem to recall it did not, and that motion vector support was introduced later. But that could have been supposition.

We can look back at DLSS 1.0 and examine the clarity of the image during motion.
Without motion vector support the edge detection algorithm would have a harder time, so I suspect that without it there would be less detail for it to apply AA to.

I.e., looking at a brick wall and panning back and forth would be a decent test case.

But I'm still not entirely sure what aspects of the image DLSS uses as inputs (everything? or just edges?). It clearly does more than edge detection, since we see it add detail where previously there was none. So I'm hesitant to make any claims here.
 
I seem to recall it did not, and that motion vector support was introduced later. But that could have been supposition.

We can look back at DLSS 1.0 and examine the clarity of the image during motion.
Without motion vector support the edge detection algorithm would have a harder time, so I suspect that without it there would be less detail for it to apply AA to.

I.e., looking at a brick wall and panning back and forth would be a decent test case.

But I'm still not entirely sure what aspects of the image DLSS uses as inputs (everything? or just edges?). It clearly does more than edge detection, since we see it add detail where previously there was none. So I'm hesitant to make any claims here.

DLSS 1.0 is quite unknown. DLSS 2.0 we know pretty well based on NVIDIA's GDC presentation. Basically the neural net is trained to pick/discard samples from previous frames and compensate for motion. Rendering is done jittered so that frames are sampled at slightly different positions, which is what makes super resolution possible. It's a fancy way to do TAA. Post-processing is done on the upscaled image.
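As a small concrete illustration of the jittered-sampling part (my own code based only on the public GDC-level description, not NVIDIA's pipeline): a low-discrepancy (Halton) sequence moves the sample point inside each pixel every frame, so accumulating reprojected frames over time gathers more samples per output pixel than any single frame contains, which is what makes the super-resolution part possible. The per-pixel pick/discard of those accumulated samples is what the trained network handles.

```python
def halton(index, base):
    """Radical inverse of `index` in the given base, in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def subpixel_jitter(frame_index, period=8):
    """Sub-pixel offset in [-0.5, 0.5)^2 for this frame, cycling every `period` frames."""
    i = (frame_index % period) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5

for f in range(8):
    print(f, subpixel_jitter(f))  # eight distinct offsets inside the pixel
```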

 
Strongly disagree with that statement. Far Cry 4 and The Crew were the only two implementations that were clearly bad; the rest provided a solid AA option.

Besides, you can't have MSAA + SMAA in these games without going through a lot of hassle; most of these games don't even offer MSAA to begin with, so this point is hardly relevant anyway.
I meant the MSAA + SMAA combo in WD 2 specifically. You should go back and check the TXAA in those games I listed as broken. I think you may be remembering them incorrectly.
 
I really can't follow the DLSS hype. I just tested it yesterday with Control. After activating DLSS the render resolution is in sub-1080p territory (I think I'd have to activate DRS to allow resolutions higher than my 1440p monitor supports) and the image looks really blurred. Using at least 1080p for a 1440p screen would be nice from the start.
It runs better, but the image is so blurry that it reminded me of the "qnix" days (or whatever that blur filter was called). Native 1080p was way sharper (the RTX 3070 can't hold a stable 60 at 1440p with all settings maxed out).
What is really still missing is the DLSS "downsampling" NVIDIA promised: running the game in 4K DLSS mode on a 1440p screen. That should add some sharpness to the image instead of making it blurrier.
 
I really can't follow the DLSS hype. I just tested it yesterday with Control. After activating DLSS the render resolution is in sub-1080p territory (I think I'd have to activate DRS to allow resolutions higher than my 1440p monitor supports) and the image looks really blurred. Using at least 1080p for a 1440p screen would be nice from the start.
It runs better, but the image is so blurry that it reminded me of the "qnix" days (or whatever that blur filter was called). Native 1080p was way sharper (the RTX 3070 can't hold a stable 60 at 1440p with all settings maxed out).
What is really still missing is the DLSS "downsampling" NVIDIA promised: running the game in 4K DLSS mode on a 1440p screen. That should add some sharpness to the image instead of making it blurrier.

DLSS works better when scaling to 4K. The resolution DLSS renders at is a ratio of the target resolution, and the exact ratio depends on the DLSS quality setting. DLSS is a way to get more fps: quality vs. performance. DLSS can approach or even overtake native, but it can also look worse; quality is scene- and game-dependent. You might want to try the different DLSS settings, as they provide different quality/perf trade-offs.
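For reference, a quick sketch of how the internal render resolution falls out of the quality setting. The ratios below are the commonly cited approximate DLSS 2.x per-axis scale factors; treat them as ballpark figures, as they can vary by game and version.

```python
# Commonly cited approximate DLSS 2.x per-axis scale factors (ballpark only).
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(target_w, target_h, mode):
    s = DLSS_SCALE[mode]
    return round(target_w * s), round(target_h * s)

print(internal_resolution(2560, 1440, "Quality"))      # (1707, 960)
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

That also lines up with the Control observation above: even Quality mode at a 1440p target renders below 1080p internally, whereas a 4K target gives the upscaler a lot more to work with.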

Some games, like CP2077, don't really offer a clean native presentation to compare against; in those cases it's DLSS vs. TAA, and TAA has its own downsides. It's a compromise you have to make for yourself as to which setting and performance you like best. If you are CPU-bound, DLSS wouldn't give you more perf, and things get muddier as you would lose the additional performance DLSS could otherwise provide.

I think the hype is about how much better DLSS can get. It's software; maybe it can get even better, maybe it cannot. DLSS 2.0 is nice as it's upgradeable without rebuilding and re-releasing the game, so there is hope that future DLSS versions will also improve existing games. Another vector for hype is that GPUs aren't getting that much faster anymore. If we want more perf and quality, software has to get smarter, and DLSS is one way to try to be smarter instead of relying on better silicon. It probably takes more than 4 years for new GPUs to double the performance of existing ones, so 4 years to go from 30fps to 60fps with new hardware. Even 4 years might be optimistic looking at TSMC's roadmap, at least while staying in the same price range.
 
I really can't follow the DLSS hype. I just tested it yesterday with Control. After activating DLSS the render resolution is in sub-1080p territory (I think I'd have to activate DRS to allow resolutions higher than my 1440p monitor supports) and the image looks really blurred. Using at least 1080p for a 1440p screen would be nice from the start.
It runs better, but the image is so blurry that it reminded me of the "qnix" days (or whatever that blur filter was called). Native 1080p was way sharper (the RTX 3070 can't hold a stable 60 at 1440p with all settings maxed out).
What is really still missing is the DLSS "downsampling" NVIDIA promised: running the game in 4K DLSS mode on a 1440p screen. That should add some sharpness to the image instead of making it blurrier.
But for sure DLSS 2 is better at scaling 1440p to 4K than checkerboarding is.
 
I really can't follow the DLSS hype. I just tested it yesterday with Control. After activating DLSS the render resolution is in sub-1080p territory (I think I'd have to activate DRS to allow resolutions higher than my 1440p monitor supports) and the image looks really blurred. Using at least 1080p for a 1440p screen would be nice from the start.
It runs better, but the image is so blurry that it reminded me of the "qnix" days (or whatever that blur filter was called). Native 1080p was way sharper (the RTX 3070 can't hold a stable 60 at 1440p with all settings maxed out).
What is really still missing is the DLSS "downsampling" NVIDIA promised: running the game in 4K DLSS mode on a 1440p screen. That should add some sharpness to the image instead of making it blurrier.

How does Control look to you at native resolution? Admittedly I've only tried it on PS5, and I know Remedy use an internal scaling tech, but to my eyes it looks a lot worse than other 1440p games. I'm not sure if that's because of lower-resolution effects on light shafts, reflections, etc.

As much as I loved the story I didn't think the game was a looker.
 
I really can't follow the DLSS hype. I just tested it yesterday with Control. After activating DLSS the render resolution is in sub-1080p territory (I think I'd have to activate DRS to allow resolutions higher than my 1440p monitor supports) and the image looks really blurred. Using at least 1080p for a 1440p screen would be nice from the start.
It runs better, but the image is so blurry that it reminded me of the "qnix" days (or whatever that blur filter was called). Native 1080p was way sharper (the RTX 3070 can't hold a stable 60 at 1440p with all settings maxed out).
What is really still missing is the DLSS "downsampling" NVIDIA promised: running the game in 4K DLSS mode on a 1440p screen. That should add some sharpness to the image instead of making it blurrier.

Cyberpunk 2077 with DLSS looks damn good. A must if you plan on maxing out IQ and RT settings at 4K/60fps.
 
Seeing the RTX 2060 struggle to hold a stable 60fps with DLSS on, PS5 performance is quite good in this game.
The 60fps modes for Nioh 2 are definitely quite well optimized on the PS5 (the 120fps modes are perhaps another story, too strong a visual hit imo). It's certainly not the best PC port, but when RTX isn't used a 2060 is usually not able to keep up with a PS5 in most games anyway, from what I've seen.

Albeit, bear in mind the lack of dynamic res hurts the PC in this comparison - the PS5 version can drop down to 1440p in spots according to VG Tech, and usually hovers around 80% of 4K (3072x1728). Native 1440p on the 2060 can't lock to 60 either, but it gets far closer to it than 4K with DLSS Performance. DLSS doesn't give you the performance of the native resolution it's rendering from - and that hit is proportionally greater on lower-end cards.
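A simple frame-time sketch of that last point (the millisecond figures are purely illustrative, not measurements of the 2060 or any particular game): DLSS adds a roughly fixed per-frame cost on top of rendering at the lower internal resolution, and that fixed cost is a proportionally bigger slice of the budget on a slower card or at higher frame rates.

```python
# Illustrative arithmetic only - numbers chosen to show the shape of the effect.

def fps(frame_ms):
    return 1000.0 / frame_ms

native_1080p_ms = 14.0   # hypothetical render time at the internal (1080p) resolution
dlss_overhead_ms = 2.5   # hypothetical fixed cost of the DLSS pass on a lower-end card

dlss_4k_perf_ms = native_1080p_ms + dlss_overhead_ms

print(f"native 1080p:        {fps(native_1080p_ms):.0f} fps")   # ~71 fps
print(f"4K DLSS Performance: {fps(dlss_4k_perf_ms):.0f} fps")   # ~61 fps

# The same 2.5 ms costs a 30 ms frame only ~8% of its budget, but a 14 ms frame ~18%,
# so the relative hit grows as frame times shrink (or as the upscale pass itself gets slower).
```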
 