Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Are we really that far removed from multisampling solutions built into the engine rendering pipeline, with great options like TXAA?
NVIDIA pushed TXAA heavily; it was featured in at least 20 games (23 by my count), but developers found TAA easier to integrate while costing a lot less performance, so they used it widely.

Anyway, here are a bunch of comparisons between TAA and DLSS; TAA adds a lot more blur to the scene, while DLSS appears sharper with more detail.

Interactive screenshots:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-3dmark-port-royal-benchmark/

Video made by 3DMark:

Also some performance comparisons:

[Image: 3DMark Port Royal NVIDIA DLSS GeForce RTX performance results]
 
The "1440p" DLSS is rendered at 1080p native
Yeah. It's labelled as the upscaled resolution, so 'visually the same as 1440p', but of course the higher framerates come from rendering fewer pixels. If the output is closer to 1440p than 1080p, it may be fair. People just need to learn to interpret it, as ever with marketing values.
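To put rough numbers on "rendering fewer pixels" (just back-of-envelope arithmetic, nothing official):

# Pixel counts for native 1440p vs the 1080p internal resolution DLSS renders
native = 2560 * 1440      # 3,686,400 pixels
internal = 1920 * 1080    # 2,073,600 pixels
print(internal / native)  # ~0.56, so DLSS shades roughly 56% of the pixels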
 
I've been sort of away from 3D tech stuff for a while, so I just ran across this. The idea is fascinating! I have many detailed thoughts on this that I want to get back to later (work now!), but wanted to post to remind myself to come back to this soon.

Short version: I don't think it's as simple as rendering at a lower resolution and intelligently upscaling. I think it's dynamically selecting the resolution to render different components of the scene using a contrast-trained learning algorithm. The lowest resolution is specifically selected to not be a power of two of the view resolution to limit the worst-case anti-aliasing (learning algorithms are notorious for bad tail behavior).

Anyway, hopefully I'll remember to come back to this this evening, because I think I know how to test my idea.
 
Yeah. It's labelled as the upscaled resolution, so 'visually the same as 1440p', but of course the higher framerates come from rendering fewer pixels. If the output is closer to 1440p than 1080p, it may be fair. People just need to learn to interpret it, as ever with marketing values.

I have to ask once again, who buys an RTX2080ti to game at 1440p? Tests seem flawed.

Logically, most people who game at 1440p would not be buying a $1,200 GPU. (Those "mid-level" gamers center mostly around the $350 ~ $750 range for high-performance gaming (i.e. 90 Hz ~ 144 Hz+) at 1440p.) I understand the point of using the 2080 Ti for all the comparisons. But DLSS uses GPU resources and time, so an RTX 2080 or 2070 would be a better platform to compare these results; it would be more typical of a 1440p use case.
 
I have to ask once again, who buys an RTX2080ti to game at 1440p? Tests seem flawed.
I did. I replaced my 2080 with a 2080Ti because some games are just too heavy to render even at 1440p. Few people game at 4K actually. 1440p with high refresh rate is more common.
 
I have to ask once again, who buys an RTX2080ti to game at 1440p? Tests seem flawed.
People who want greater than 60 fps for one.
Logically, most people who game at 1440p would not be buying a $1,200 GPU. (Those "mid-level" gamers center mostly around the $350 ~ $750 range for high-performance gaming (i.e. 90 Hz ~ 144 Hz+) at 1440p.) I understand the point of using the 2080 Ti for all the comparisons. But DLSS uses GPU resources and time, so an RTX 2080 or 2070 would be a better platform to compare these results; it would be more typical of a 1440p use case.
As their charts show, the 2060 gets a significant boost as well.

The only problem I have with this, of course, is that we're still using fixed demos for this stuff ~5+ months after launch, for a feature that was supposedly relatively easy for devs to support, at least compared to ray tracing. Really, this is getting pretty ridiculous.
 
Okay, here's what I'm pretty sure is happening. Clues:
1) nVidia says they're using machine learning and super sampling.
2) Some screenshots demonstrate upscaling, e.g. 1080p source for a 1440p display.
3) Other screenshots demonstrate better detail even than same-resolution with no AA.

What I suspect is going on here is that, in the example of a 1440p output resolution, they use a 1080p minimum resolution. However, each individual pixel in the 1080p framebuffer might actually include data for a bunch of different sub-pixels. They could be entirely flexible with this: each pixel in the framebuffer would store up to some maximum number of samples (say, 8), and they could reduce the storage size with some compression too. So they don't render a plain 1080p image and then upscale it to 1440p: they take the image with all the extra samples for some pixels, so that you get the full benefit of super-sampled 1440p where you need it.
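Very roughly, I imagine the storage side looking something like this (all names and numbers here are hypothetical, just to illustrate the variable-sample idea):

import numpy as np

# Hypothetical 1080p buffer where each pixel holds a variable number of
# color samples (1 to 8), decided per pixel.
H, W, MAX_SAMPLES = 1080, 1920, 8

# Per-pixel sample counts, e.g. as predicted by the learning model
# described further down (random here just as a placeholder).
sample_counts = np.random.randint(1, MAX_SAMPLES + 1, size=(H, W))

# Padded sample storage; unused slots simply stay zero.
samples = np.zeros((H, W, MAX_SAMPLES, 3), dtype=np.float32)

def resolve_pixel(y, x):
    # Average only the samples that were actually rendered for this pixel.
    n = sample_counts[y, x]
    return samples[y, x, :n].mean(axis=0)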

They would also output a set of numbers representing the rasterization inputs. These could be things like identifiers for both input data and shaders applied (or even just the number of them), colors of lights applied, distance to the location, etc. They don't have to output these values for every single pixel, just a representative sample of them. There would be a tradeoff between storing more data per pixel and storing the data for more pixels. This set of inputs is important, as it's necessary to train the learning model.

The final step is rescaling the image to 1440p. During this step, the rescaler has access to the colors of neighboring pixels, and is able to create an estimate of how much aliasing was found in the final image. A very simple score for aliasing would be color contrast. But they might do something a little different to ensure that more detail means a higher score.

The two data outputs from this process are combined each frame: the set of inputs and the per-pixel score are used to update the learning model. The learning model then takes the set of inputs to estimate how many sub-samples should be used for each pixel. This calculation is probably going to be the biggest limitation on the number of inputs they actually use. The actual calculation performed here is basically a matrix multiplication, which these cards are good at, but with too many inputs it would overwhelm the other rasterization calculations.
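To make the matrix-multiplication point concrete, here's a toy sketch of both pieces (nothing here is NVIDIA's actual algorithm, just the shape of the idea):

import numpy as np

# Toy aliasing score: local color contrast against the 4-neighbourhood.
def contrast_score(img):
    # img: (H, W, 3) float image in [0, 1]
    diffs = []
    for dy, dx in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        diffs.append(np.abs(img - shifted).sum(axis=2))
    return np.maximum.reduce(diffs)    # (H, W) per-pixel aliasing estimate

# Toy "learning model": a single matrix multiply from per-pixel
# rasterization inputs (shader id, light count, depth, ...) to a
# predicted sub-sample count.
def predict_samples(features, weights, max_samples=8):
    # features: (num_pixels, num_inputs), weights: (num_inputs,)
    raw = features @ weights            # the matrix multiply
    return np.clip(np.round(raw), 1, max_samples).astype(int)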

Finally, why 1080p? Why not have the minimum resolution be 720p? Or keep it at 1440p for quality?

Performance is surely part of the answer. But I think the bigger answer is simply that learning models always have problems with tail effects. Learning models make ridiculous errors, and it seems to be pretty much impossible to avoid them entirely. Performance suggests the minimum should be a lower resolution. The tail error issue with learning algorithms suggests it should not be 1440p, because some areas of the scene are going to end up with no anti-aliasing. And making the resolution too low will have the same issue only worse. So going down by a half-resolution step to 1080p is perfect: performance should be good, and you get a little bit of automatic anti-aliasing no matter how badly the ML algorithm fucks up.

Finally, the nature of this kind of algorithm is such that it would probably benefit greatly from pre-baked learning models for each game. Which might explain why game support is important.
 
As I understand it, this does just boil down to baked training data per-game. They train it with low-res versions and high-res supersampled versions of different frames, and that's it. I guess they use Z and velocity buffers as well as color.
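If that's right, the offline training step would be something like this in spirit (a toy PyTorch sketch with stand-in data and a stand-in network, not NVIDIA's actual pipeline):

import torch
import torch.nn as nn

# Stand-in training pairs: low-res frames and matching supersampled
# high-res "ground truth" frames (random tensors here, 2x scale for simplicity).
low_res  = torch.rand(8, 3, 135, 240)
high_res = torch.rand(8, 3, 270, 480)

# Tiny upscaling network standing in for the real model.
model = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                 # done offline, on NVIDIA's side
    pred = model(low_res)
    loss = loss_fn(pred, high_res)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained weights would then be shipped as the per-game "baked" data.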
 
As I understand it, this does just boil down to baked training data per-game. They train it with low-res versions and high-res supersampled versions of different frames, and that's it. I guess they use Z and velocity buffers as well as color.

I can't imagine how they would use any buffers like that since the supercomputer never has access to anything beyond the source and "ideal output" images to create the algorithm.
 
Okay, here's what I'm pretty sure is happening. Clues:
1) nVidia says they're using machine learning and super sampling.
Yep.
2) Some screenshots demonstrate upscaling, e.g. 1080p source for a 1440p display.
All DLSS screenshots demonstrate upscaling, unless it's DLSS X2 (which is only available in a couple of specific demos that aren't in open circulation).
3) Other screenshots demonstrate better detail even than same-resolution with no AA.
This can simply never be true; at best (DLSS X2) it can be equal, and "SSAA" makes things look smoother, but that's it. If it's DLSS and not X2, it is never even equal.
 
As I understand it, this does just boil down to baked training data per-game. They train it with low-res versions and high-res supersampled versions of different frames, and that's it. I guess they use Z and velocity buffers as well as color.
If it were as simple as that, then they wouldn't have the issue where the quality in the first few frames of a scene is lower than later on.

Pre-generated models can probably help some, but they aren't likely to be the whole thing. Also, the game-specific stuff may be more about selecting which variables are good for the learning algorithm, rather than training an actual model.
 
This can simply never be true; at best (DLSS X2) it can be equal, and "SSAA" makes things look smoother, but that's it. If it's DLSS and not X2, it is never even equal.
For areas of high detail, supersampling increases the amount of visible detail pretty substantially. You can see this effect in play in the last screenshot on Tom's Hardware's DLSS article from last October:
https://www.tomshardware.com/reviews/dlss-upscaling-nvidia-rtx,5870.html

The foliage in the background looks dramatically clearer in the DLSS image than in either of the other two (no AA or TAA).
 
Interesting, hope this feature lands in PS5 so 4K doesn't become too taxing again.
Is it better/more efficient than the existing very effective reconstruction techniques used on consoles? One touted plus for DLSS was that it seemed to be 'drop in' and work on any game, but that no longer seems to be the case AFAICS.
 
Certainly hope so, because it's not near native 4K on consoles with their "effective reconstruction". Like TechSpot mentions, on PC that tech won't suffice.
Since 4K is harder to achieve on consoles, things like DLSS are needed there even more.
 
For areas of high detail, supersampling increases the amount of visible detail pretty substantially. You can see this effect in play in the last screenshot on Tom's Hardware's DLSS article from last October:
https://www.tomshardware.com/reviews/dlss-upscaling-nvidia-rtx,5870.html

The foliage in the background looks dramatically clearer in the DLSS image than in either of the other two (no AA or TAA).
I'm pretty sure in the case of that FFXV shot it's not about SSAA bringing more detail, it's about DLSS breaking DoF, which happens elsewhere in the FFXV demo too.
In the very same shot you can see easily from the license plate, for example, how much detail 1440p DLSS "4K" actually loses.
 
3DMark released a new video using a free camera system to simulate a game camera and expose DLSS to new scenes.

This video compares the image quality of TAA and DLSS using a special version of the test that has a unique camera path that hasn't been shared with NVIDIA. The free camera movement simulates a game where the player can move freely around the environment producing new, unique images that the DLSS model cannot possibly have seen before in its training.


This can simply never be true; at best (DLSS X2) it can be equal, and "SSAA" makes things look smoother, but that's it. If it's DLSS and not X2, it is never even equal.
Wrong. See Port Royal benchmark.
 