Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I'm talking about DLSS 3.0, and how they could push adoption of the tech with it.

There is no "3.0". It's just 2.0; it's constantly updated and is their universal go-to for all upscaling.

That being said, I wonder what it looks like running on video? It no longer needs to be retrained for specific targets, so it should handle video just as well. I kinda wanna see it, at least for the hell of it. And heck, maybe they could make it a new feature, "deep learning realtime video upscaling"? That sounds kinda rad, actually: just throw whatever video files you have at it and see what it does. At the very least, no waiting to see if the studio does it for you.
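For what it's worth, a purely offline version of that idea is already doable with off-the-shelf parts. Here's a minimal sketch using OpenCV's dnn_superres module (a single-image super-resolution network, not DLSS, so no motion vectors or temporal reuse); the model file name and paths are placeholder assumptions:

```python
# Offline sketch of the "throw any video at a DL upscaler" idea, using OpenCV's
# dnn_superres module. Assumes opencv-contrib-python is installed and a
# pretrained FSRCNN_x2.pb model file is available locally (paths are placeholders).
import cv2

def upscale_video(src_path: str, dst_path: str, model_path: str = "FSRCNN_x2.pb"):
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel(model_path)
    sr.setModel("fsrcnn", 2)  # 2x upscale with the FSRCNN network

    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError(f"Could not read {src_path}")

    h, w = frame.shape[:2]
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w * 2, h * 2))
    while ok:
        out.write(sr.upsample(frame))  # per-frame upscale, no temporal accumulation
        ok, frame = cap.read()

    cap.release()
    out.release()
```

It runs nowhere near realtime on a CPU, but it gives a feel for what a "just feed it any file" feature would look like.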

Hells, now that I think about it, MS is doing deep learning HDR conversion in realtime. Combine the two and you could throw any video at it and see what kind of UHD conversion you get. That'd be awesome; I'd definitely spend way too long playing with that.
 
I think DLSS on its own will make AMD irrelevant.
I seriously can't understand the fanaticism behind DLSS. Yes, it scales images well, but it's still artifact-ridden and in places outright broken (like the Death Stranding birds case)
 
I think DLSS on its own will make AMD irrelevant.
Just overclock your N21 and get more performance across most games instead of a select few.
I seriously can't understand the fanaticism behind DLSS. Yes, it scales images well, but it's still artifact-ridden and in places outright broken (like the Death Stranding birds case)
The new sacred cow, nvidia edition.
 
I seriously can't understand the fanaticism behind DLSS. Yes, it scales images well, but it's still artifact-ridden and in places outright broken (like the Death Stranding birds case)

It's likely a game engine issue where motion vectors for the birds are not provided. Missing or incorrect motion vectors then cause errors when predicting/picking samples from previous frames.
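To make that concrete, here's a toy sketch of why motion vectors matter for any temporal technique (DLSS, TAA, checkerboarding): last frame's accumulated color is fetched through per-pixel motion vectors before blending. The array layout and blend factor below are illustrative assumptions, not how DLSS actually weights samples:

```python
# Toy temporal accumulation with per-pixel motion vectors (numpy, CPU, grayscale).
# If motion[y, x] is wrong or missing (e.g. zero for a moving bird), the history
# fetched below comes from the wrong location and the result smears/ghosts.
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """current, history: (H, W) float images; motion: (H, W, 2) pixel offsets."""
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: where was this pixel last frame?
    prev_y = np.clip(ys - motion[..., 1].round().astype(int), 0, h - 1)
    prev_x = np.clip(xs - motion[..., 0].round().astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]

    # Exponential blend of the new sample with reprojected history.
    return alpha * current + (1.0 - alpha) * reprojected
```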
 
You will understand it when you use it.
I have used it (enough to test, a few hours), including DLSS 2.0 titles, and those ringing artifacts are still an eyesore, among other issues.
It's likely to be game engine issue where motion vectors for birds is not provided. Missing/incorrect motion vectors then causes error in prediction/picking samples from previous frames.
DLSS is a later-added add-on; you can't blame the engine for not planning ahead for some proprietary scaling algorithm that requires certain information to work. Normally people would tell whoever made the add-on to either adapt it to what's available or not implement it at all, but instead we're getting image-quality-praising comments that completely ignore the issue (and the other typical DLSS issues)
 
I seriously can't understand the fanaticism behind DLSS. Yes, it scales images well, but it's still artifact-ridden and in places outright broken (like the Death Stranding birds case)
It's a combination of far viewing distances (4K TV gaming off a PC), the performance increases it provides, and the simple fact that native+TAA is no less "artifact ridden and in places outright broken" than DLSS.
 
Just overclock your N21 and get more performance across most stuff instead of select few.
A select few becomes a torrent over the next year.
The new sacred cow, nvidia edition.
Gameworks? This is better than Gameworks, because AMD just disappears off the bottom of the benchmark graphs. NVidia x60 beats AMD x800. And that's before turning on ray tracing.
 
DLSS is a later-added add-on; you can't blame the engine for not planning ahead for some proprietary scaling algorithm that requires certain information to work. Normally people would tell whoever made the add-on to either adapt it to what's available or not implement it at all, but instead we're getting image-quality-praising comments that completely ignore the issue (and the other typical DLSS issues)

Any kind of reconstruction algorithm trying to use data from multiple frames will benefit from correct motion vectors. It's a good thing faults are found and engines/algorithms are improved.
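One common mitigation when vectors are missing or wrong is to clamp the reprojected history to the range of colors in the current frame's local neighbourhood, so stale samples can't drift far from what's actually on screen. A rough sketch of that idea (the 3x3 window and scipy filters are illustrative choices, not what DLSS specifically does):

```python
# Neighbourhood clamping: limit reprojected history to the min/max of the
# current frame's 3x3 neighbourhood before blending, so samples fetched through
# bad motion vectors can't ghost too far from the new image.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def clamp_history(current, reprojected_history):
    lo = minimum_filter(current, size=3)
    hi = maximum_filter(current, size=3)
    return np.clip(reprojected_history, lo, hi)
```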
 
Gameworks? This is better than Gameworks, because AMD just disappears off the bottom of the benchmark graphs. NVidia x60 beats AMD x800. And that's before turning on ray tracing.
The moment these kinds of tricks are allowed on benchmark graphs that don't specifically test said trick(s) is the moment the graphs lose all meaning and become just another tool to practically lie to customers.
Image quality is a subjective matter, and as long as a feature affects IQ and can't be enabled for all participants, it should never be used. Otherwise you would have to allow every other trick in the book, and outside the book too, to the point where 240i bilinear scaling to 4K needs its spot on the graphs as well. Sure, it looks horrible to everyone but the blind, probably, but there's no objective scale for IQ (unless deviation from a reference image is used, but even then it comes down to what each person prefers, and even something everyone agreed looks better would still be rated lower because it alters the reference image), so it needs to be all or nothing.
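"Deviation from a reference image" does have standard formulations, PSNR being the simplest, and the catch described above is exactly that such metrics penalize any deviation, even ones viewers prefer. A minimal sketch, assuming two equally sized images as numpy arrays:

```python
# Peak signal-to-noise ratio: higher means closer to the reference.
# An upscaler's output is scored purely against the reference, so a result
# people subjectively prefer still loses points simply for differing from it.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```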
 
@Kaotik So if a game supports TAA and DLSS, why is DLSS any less of a "reference" image? You're not modding the game or hacking something into it that wasn't intended.

Edit: Put another way, every single option in a graphics menu is a tradeoff between image quality and performance. Why is DLSS different from any other option? I think anyone reviewing GPUs who doesn't include DLSS when it's supported is doing a disservice to their readers. They can offer screenshots and video and let readers decide for themselves. But excluding it altogether is absolutely the wrong choice.
 
Any algorithm that can do more with a lot less ought to be celebrated. Brute force, sampling every subpixel every frame, was fine when transistor counts and clock speeds were doubling every 18 months, but those days are probably never coming back. If we ever hope to achieve something that approaches the rendering equation for light transport, it's going to require aggressive algorithms that are inevitably lossy. If my brain can generate the world I perceive from a few megabits per second of bandwidth over the optic nerve, then conceivably there's room for DL-based methods that reduce the number of raw samples that need to be taken. Worrying about ruining the sanctity of video card benchmarks in this context is straight-up absurd.
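For reference, the rendering equation being approximated here is Kajiya's formulation; every real-time technique, DLSS included, is some lossy truncation of this integral:

```latex
% Kajiya's rendering equation: outgoing radiance at point x in direction omega_o
% is emitted radiance plus incoming radiance integrated over the hemisphere,
% weighted by the BRDF and the cosine foreshortening term.
L_o(x,\omega_o) = L_e(x,\omega_o)
  + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i \cdot n)\, d\omega_i
```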
 