Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Just out of curiosity, does DLSS somewhat eliminate aliasing on its own? Is the image anti-aliased before upscaling with DLSS, or does it substitute for AA entirely? I guess what I'm asking is: will it reconstruct aliasing if it's present in the original image?

Also why does DLSS disable variable-rate shading in wolfenstein?
 
Also why does DLSS disable variable-rate shading in wolfenstein?
Likely because it hasn't been trained with VRS as an input.
DLSS does the anti-aliasing step first: the network estimates what the frame would look like if a supersampling AA algorithm had been applied to it (that inference is the distinct difference between a fixed algorithm and an AI).
After that step is completed, it upscales the result to the higher resolution.
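To make the claimed order of operations concrete, here is a toy sketch: resolve aliasing first (here, via a simple temporal blend standing in for the AA step), then upscale the anti-aliased result. All function names and the blending/upscaling math are illustrative placeholders, not NVIDIA's actual algorithm.

```python
def accumulate(current, history, alpha=0.1):
    """Blend the current frame with history samples.
    Stand-in for the anti-aliasing / temporal accumulation step."""
    if history is None:
        return [row[:] for row in current]
    return [[(1 - alpha) * h + alpha * c for h, c in zip(hr, cr)]
            for hr, cr in zip(history, current)]

def upscale(frame, factor):
    """Nearest-neighbour upscale. Stand-in for the learned upscaler."""
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

def dlss_like_frame(current, history, factor=2):
    antialiased = accumulate(current, history)  # step 1: anti-alias
    output = upscale(antialiased, factor)       # step 2: upscale
    return output, antialiased                  # history for next frame

# A 2x2 low-res frame becomes a 4x4 output.
low_res = [[1.0, 0.0], [0.0, 1.0]]
out, hist = dlss_like_frame(low_res, None)
```

The point of the sketch is only the ordering: the upscaler operates on an already anti-aliased image, so aliasing in the raw input isn't simply magnified.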
 
Is the image anti-aliased before upscaling with DLSS, or does it substitute for AA entirely?
Total AA substitution: TSSAA, for example, is completely disabled when using DLSS in the case of Wolfenstein.

Also why does DLSS disable variable-rate shading in wolfenstein?
Two possibilities: either the ground-truth images that DLSS was trained on were devoid of VRS, or the VRS algorithm is too variable for DLSS to work flawlessly. VRS adjusts the shading rate based on three or four factors, which I think include frame rate, and some of it is motion-based variable shading.
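As a rough illustration of why VRS output is so variable, here is a hypothetical per-tile heuristic of the kind a VRS implementation might use. The inputs (motion, contrast) and thresholds are assumptions for the sketch, not Wolfenstein's actual logic.

```python
def shading_rate(motion, contrast, motion_thresh=0.5, contrast_thresh=0.2):
    """Pick a coarse shading rate (NxN pixels per shaded sample) for a
    screen tile, based on how fast it moves and how detailed it is."""
    if motion > motion_thresh and contrast < contrast_thresh:
        return 4   # fast-moving, low-detail tile: shade in 4x4 blocks
    if motion > motion_thresh or contrast < contrast_thresh:
        return 2   # moderately safe to coarsen: 2x2 blocks
    return 1       # static, detailed tile: full-rate shading
```

Because the rate per tile changes frame to frame with motion and content, the shaded input a network sees is far less uniform than a full-rate render, which is one plausible reason to disable VRS under DLSS.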
 
I thought the new DLSS method doesn't use per-game training?
 
I thought the new DLSS method doesn't use per-game training?
Yeah, you're right. Training is continually fine-tuned on a variety of game images, so per-game training isn't necessary, though I imagine at some point it will be trained on images from games it hasn't seen yet. I wouldn't be surprised if NVIDIA is working on a process to extract images without the need for developer submissions.
 
I thought the new DLSS method doesn't use per-game training?
That would be quite an advancement. Normally I would expect them to create a base solution and add specialized layers at the end for each title. But if that isn't required, that's quite an AI they have now.
 
I thought the new DLSS method doesn't use per-game training?
All the more reason for the missing VRS input: a global model doesn't take into account VRS, which varies massively across different games.
That would be quite an advancement. Normally I would expect them to create a base solution and add specialized layers at the end for each title. But if that isn't required, that's quite an AI they have now.

According to NVIDIA, it isn't required anymore.
 
As I've said before, the amount of data you can capture in a net of limited size is pretty limited. Trying to do much more with it than fix up edges is just going to result in it making up bullshit ... which is unlikely to be temporally stable.
 
Irrespective of aspect ratio and RTX model?
I suspect aspect ratio would be one of many tweakable training variables adjusted over the course of continuous training.
I really look forward to a deep reveal of the process.
 
Is there anything official around on these supposed claims on the new DLSS? Or is it just smoke and mirrors to hide some of the drawbacks still?
 
Is there anything official around on these supposed claims on the new DLSS? Or is it just smoke and mirrors to hide some of the drawbacks still?
Nothing official yet, just tech-site reviews of the new DLSS 2.0. The competition (Intel and AMD) looks to have its work cut out in coming up with an alternative technology.
 
Nothing official yet, just tech-site reviews of the new DLSS 2.0. The competition (Intel and AMD) looks to have its work cut out in coming up with an alternative technology.
I hope they don't bother. I don't want some algorithm (tensors or not) guesstimating what I should see on my screen; I want artists deciding what should be on the screen. DLSS X2, on the other hand, is another story: that would be a more than welcome addition to the AA options.
 
I hope they don't bother. I don't want some algorithm (tensors or not) guesstimating what I should see on my screen; I want artists deciding what should be on the screen. DLSS X2, on the other hand, is another story: that would be a more than welcome addition to the AA options.
It honestly shouldn't matter at range, as long as things up close aren't disappearing. The reality of human perception is that we see, or don't see, a lot of things when we look at a distance anyway; we don't photographically memorize everything and expect to see it exactly that way up close.

I agree that, to comfort your heart and feelings, you want the perfect experience. But the mind is actually far from perfectly interpreting all the information it receives.

tl;dr: in a blind test between AI and non-AI, I think you'd be hard pressed to tell the difference, or wouldn't care. You'd likely just prefer whichever looks better to you, without really knowing whether the graphics are being hallucinated or not.
 
Is there anything official around on these supposed claims on the new DLSS?
NVIDIA gave this information to Hardware Unboxed in response to their inquiry.

I want artists deciding what should be on the screen
That's no longer possible since the age of TAA, Checkerboard Rendering and Temporal Reconstruction.

The next console cycle will feature heavy use of reconstruction techniques, VRS, AI upscaling, and smart sharpening to maneuver around the demanding requirements of 4K at 60 fps.
 
NVIDIA gave this information to Hardware Unboxed in response to their inquiry.
Encouraging, but we all know the reality of PR vs. accuracy. God knows we get enough NVIDIA marketing here, let alone on tech sites.
 
God knows we get enough NVIDIA marketing here, let alone on tech sites.
Thank God for that. Currently there are few interesting competitive features to discuss, though that might present itself in a different context to this thread in the near future.
 