Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

3DMark released a new video using a free camera system to simulate a game camera and expose DLSS to new scenes.
It looks like the DLSS algorithm has been worked on and has actually improved rendering compared to the initial FFXV demos.
 
Wrong. See the Port Royal benchmark.
The TAA is horrible in Port Royal; take it out of the equation and DLSS won't be equal. There are also a few anomalies where things get really blurry with DLSS, which makes you wonder if it's a DoF issue like in FFXV. Does DLSS break DoF by nature in specific cases?
 
I don't know where you got this idea of DLSS breaking DoF, because it's not true. DoF works as intended in FF15 with DLSS.
The background in the car shot is blurry without AA just like with TAA, which clearly indicates that's the depth look they were going for; the DLSS version doesn't follow suit. Are you saying that the game renders wrong on every other setting?
 
Certainly hope so, because it's not near native 4K on consoles with their "effective reconstruction". As Techspot mentions, on PC that tech won't suffice.
Since 4K is harder to achieve on consoles, things like DLSS are more needed there.
You shouldn't assert that 'DLSS is more needed on consoles' unless you can be sure it's better in quality than other options. Better reconstruction may be required (although I hear very few complaints from those gaming on PS4 Pro titles such as HZD), but it needs to be proven that DLSS is the way to go about it.
 
Is it better/more efficient than the existing, very effective reconstruction techniques used on consoles? One touted plus for DLSS was that it seemed to be 'drop in' and work on any game, but that no longer seems to be the case, AFAICS.

This is mostly related to it not being a work-on-any-game AA technique. I too would like to know how it compares to reconstruction.

A few years ago I read an article about machine learning and video compression. In it, they took thousands and thousands of videos and trained machine learning algorithms of various sorts, which all had varying performance and quality output. It seems likely that Nvidia is using similar technology in DLSS.

This is evidenced by its requirement to be trained, either by the developers or at Nvidia, with supersampled renders, and hence it isn't "drop in". That would also explain the much better performance than the more general-purpose encoder/decoder the compression folks were getting, although I'm sure the tensor cores help.

So my question would be: how well suited would the tensor cores be toward encoding a frame, say from 1080p, using their version of this ML compression and then decoding it into a higher resolution?
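Something like the toy sketch below (PyTorch, with made-up layer sizes, not Nvidia's actual network) is roughly what such a learned 1080p-to-4K encoder/decoder could look like; the tensor cores would accelerate exactly these kinds of convolutions:

```python
# Toy sketch of a learned upscaler: a small convolutional network that takes a
# 1080p frame and predicts a 2160p frame. Purely illustrative, not DLSS itself.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # predict scale^2 * 3 channels, then rearrange into a higher-res image
            nn.Conv2d(channels, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res):
        return self.body(low_res)

# 1080p input -> 2160p output (batch of 1, RGB)
net = ToyUpscaler(scale=2)
frame_1080p = torch.rand(1, 3, 1080, 1920)
frame_4k = net(frame_1080p)
print(frame_4k.shape)  # torch.Size([1, 3, 2160, 3840])
```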

The background in the car shot is blurry without AA just like with TAA, which clearly indicates that's the depth look they were going for; the DLSS version doesn't follow suit. Are you saying that the game renders wrong on every other setting?

I think that's evidence that it's doing exactly what they say it's doing, so you get a representation of details that were present in the supersampled gameplay data. So it might be breaking DoF, because it's adding in detail that didn't exist in, say, the 1080p render that was performed locally.
 
Hard to compare HZD since there's no native 4K comparison, other than that it's noticeable it's not 4K.
RDR2's One X 4K looks much better than the Pro's IQ.
DLSS has a more promising future.
 
We have seen a few native/checkerboard comparisons already. Based on the best cases done by third parties, mainly The Witcher 3 and the Tomb Raider games (so CBR could look better done by first parties, but we obviously can't compare), 4K CBR has a perceptual resolution of at least 1800p for roughly the cost of 1530p.
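For reference, here is the rough pixel arithmetic behind those figures (a quick sketch, assuming 16:9 frames and that 4K CBR shades about half the pixels of native 2160p per frame):

```python
# Rough arithmetic behind the "~1800p perceptual / ~1530p cost" comparison.
import math

def height_for_pixels(pixels, aspect=16 / 9):
    # For a 16:9 frame: width = aspect * height, so pixels = aspect * height^2
    return math.sqrt(pixels / aspect)

native_4k = 3840 * 2160                 # 8,294,400 pixels
cbr_cost = native_4k / 2                # ~4.15M pixels shaded per frame
print(round(height_for_pixels(cbr_cost)))   # ~1527 -> "about 1530p" cost
print((3200 * 1800) / native_4k)            # ~0.69: 1800p is ~69% of native 4K pixels
```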

What about DLSS now? I haven't seen many objective comparisons, just PR benchmarks.

And RDR2 isn't 4K CBR on Pro. The CBR solution is completely broken (probably because of their post process pipeline, but who knows) and the game is most of the time native 2160x1920 on Pro.
 
The background in the car shot is blurry without AA just like with TAA, which clearly indicates that's the depth look they were going for; the DLSS version doesn't follow suit. Are you saying that the game renders wrong on every other setting?
That was the benchmark; the DLSS implementation in the full game is decidedly better and doesn't exhibit this problem. One thing about DLSS is that it's an improving algorithm.
 
Hard to compare HZD since there's no native 4K comparison, other than that it's noticeable it's not 4K
In motion, when you're playing it, is the lack of 4K jarring, or is it only noticeable when looking at stills in comparison tools? What are the aspects that are noticeable, and how does DLSS compare? E.g. does DLSS have less dithering, or what?
RDR2's One X 4K looks much better than the Pro's IQ.
RDR2 isn't using the best possible reconstruction, or even close AFAIK. Look at this Spider Man frame. What's wrong with it that DLSS fixes?

Image1.png

(pieced together from Digital Foundry Spider Man images)

DLSS has a more promising future.
Why?
 
Maybe one reason to be excited about DNNs for graphics processing is things like this:


In essence, draw the metadata and let the DNN imagine the content. Maybe the question becomes: are the DNN-imagined details correct or incorrect, or is correctness perhaps irrelevant if users prefer how the graphics look?
 
I'm pretty sure in the case of that FFXV shot it's not about SSAA bringing more detail; it's about DLSS breaking DoF, which happens elsewhere in the FFXV demo too.
In the very same shot you can easily see, from the license plate for example, how much detail 1440p DLSS "4K" actually loses.
The expectation is that DLSS is going to increase details in some parts of the scene, and decrease them in other parts. Learning models in general have this problem of inconsistent results.

I suppose breaking DoF effects is one possible cause, but I doubt it. Other tests also show increased detail in parts of the scenes.

Summary of things to look for:
1) Some parts of the scene will simply be one resolution step lower, then upscaled. These should usually be low-contrast parts of the scene, but not always. Sometimes it'll fuck up.
2) Usually, the most detailed parts of the scene will show more detail than any non-supersampled rendering method (including TAA, MSAA, or no AA).
 
As I understand it, this does just boil down to baked trained data per game. They train it with low-res versions and high-res supersampled versions of different frames, and that's it. I guess they use Z and velocity buffers as well as color.
Color isn't likely, because the color isn't known until the pixel is rendered, and the colors of the input textures are likely to contain too much info for good performance. Colors of the lights applied to the scene are possible, though.
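If that guess is right, assembling a per-game training pair would look something like the sketch below (PyTorch-style; the buffer choice, the stand-in network and the plain L1 loss are assumptions for illustration, not Nvidia's actual recipe):

```python
# Sketch of a per-game training pair as speculated above: a low-res render plus
# auxiliary buffers as the network input, a supersampled frame as the target.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_training_pair(low_res_color, depth, velocity, supersampled_target):
    # color (3) + depth (1) + 2D motion vectors (2) = 6 input channels
    inputs = torch.cat([low_res_color, depth, velocity], dim=1)
    return inputs, supersampled_target

# Stand-in "network": anything mapping 6 low-res channels to 3 high-res channels
model = nn.Sequential(nn.Conv2d(6, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fake buffers at a small size, just to show the shapes involved
low_res_color = torch.rand(1, 3, 270, 480)
depth = torch.rand(1, 1, 270, 480)
velocity = torch.rand(1, 2, 270, 480)
target_ssaa = torch.rand(1, 3, 540, 960)   # pretend supersampled ground truth

inputs, target = make_training_pair(low_res_color, depth, velocity, target_ssaa)
prediction = model(inputs)
loss = F.l1_loss(prediction, target)       # reconstruction loss vs. ground truth
loss.backward()
optimizer.step()
print(loss.item())
```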
 
What's wrong with it that DLSS fixes?
This is just a really, really bad question. Improvements in rendering these days are typically not noticeable until you see disabled vs. enabled side by side, and even then they're not visible in every scene.

But in this specific case, because much of the scene is rendered at a lower resolution, part of the benefit is straight-up performance. The higher-detail parts can't really be apparent in a shot like this unless you can see a comparison scene that actually shows that detail. Ideally the lower resolution rendering of most of the scene won't be visibly apparent, but there may be some artifacts created.
 
Color isn't likely, because the color isn't known until the pixel is rendered, and the colors of the input textures are likely to contain too much info for good performance. Colors of the lights applied to the scene are possible, though.
What? Isn't this a post-process done at the end of the rendered frame? I feel like one of us has completely misunderstood how this technology works and what it's attempting to do.
 
What? Isn't this a post-process done at the end of the rendered frame? I feel like one of us has completely misunderstood how this technology works and what it's attempting to do.
It can't be done post-process, not if they're doing what I think they're doing, which is deciding when to apply or not apply super-sampling. The decision has to be made right at the start.

Besides, if it were post-process, deep learning would be completely unnecessary. They could simply measure the scene contrast and re-sample as needed.
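The contrast-measuring alternative mentioned here could be as simple as the sketch below (NumPy; the tile size and threshold are arbitrary illustration values, not anything a shipping renderer is known to use):

```python
# Sketch of contrast-driven adaptive sampling: measure local contrast on the
# rendered frame and flag high-contrast tiles as candidates for extra samples.
import numpy as np

def high_contrast_tiles(luma, tile=16, threshold=0.08):
    h, w = luma.shape
    flags = np.zeros((h // tile, w // tile), dtype=bool)
    for ty in range(h // tile):
        for tx in range(w // tile):
            block = luma[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile]
            # Standard deviation of luminance as a crude local-contrast measure
            flags[ty, tx] = block.std() > threshold
    return flags

frame_luma = np.random.rand(1080, 1920).astype(np.float32)
print(high_contrast_tiles(frame_luma).mean())  # fraction of tiles flagged
```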
 
DLSS requires the engine to support motion vectors and a UI rendering pipeline separate from the game-resolution rendering. So it's not a post-processing solution at all.
 
Pretty sure it's in a similar location in the pipeline to where TAA / temporal upsampling would be. (In UE4 it's after DoF, before motion blur.)
It can't be done post-process, not if they're doing what I think they're doing, which is deciding when to apply or not apply super-sampling. The decision has to be made right at the start.

Besides, if it were post-process, deep learning would be completely unnecessary. They could simply measure the scene contrast and re-sample as needed.
Not sure what you mean by supersampling.
The training data has supersampled images as the target, but DLSS doesn't have supersampled buffers to work with at runtime.

So for training they most likely use velocity, color, normals?, specularity?, etc., and the target is 64x SSAA color (perhaps the color of the previous frame too?).
At runtime DLSS gets the same inputs and is asked to produce the same result.
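In other words, the runtime side would look roughly like this sketch (same speculative buffers as the training guess above; nothing here is a confirmed Nvidia interface):

```python
# Runtime sketch: no supersampled buffer exists while the game is running, so
# the trained network gets the same per-frame inputs used during training and
# is asked to produce the supersampled-looking result on its own.
import torch
import torch.nn as nn

@torch.no_grad()
def upscale_frame(model, low_res_color, depth, velocity, prev_output):
    # Same channel layout as the speculated training input, plus temporal history
    inputs = torch.cat([low_res_color, depth, velocity, prev_output], dim=1)
    return model(inputs)

# 9 input channels: color (3) + depth (1) + motion vectors (2) + previous output (3)
model = nn.Sequential(nn.Conv2d(9, 3 * 4, 3, padding=1), nn.PixelShuffle(2))
frame = upscale_frame(model,
                      torch.rand(1, 3, 270, 480),   # low-res color
                      torch.rand(1, 1, 270, 480),   # depth
                      torch.rand(1, 2, 270, 480),   # motion vectors
                      torch.rand(1, 3, 270, 480))   # previous output, downsampled
print(frame.shape)  # torch.Size([1, 3, 540, 960])
```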
 
This is just a really, really bad question.
You're aware I'm using an upscaled image to enquire about alternative reconstruction methods, right? Vipa889 made some assertions that DLSS was better, more important for consoles, and had more room to grow. I tried to engage Vipa in a discussion to explain his thinking behind that: why is DLSS better than other consoles' upscaled games? Given an example of an upscaled game, what are the issues with Insomniac's implementation that DLSS solves?

Because at present, it appears other upscaling systems are just as capable as DLSS, and it's unclear what advantages, if any, DLSS has over other reconstruction techniques.
 