Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

New Metro patch that improves DLSS sharpening and RTX performance. I tried it myself; DLSS @1440p now looks indistinguishable from native 1440p. It's a major improvement sharpness-wise.

It's new tech, like I said before; the idea is interesting, with a supercomputer and tensor cores helping out the reconstruction. I think Nvidia has something there. Let's hope the next consoles will see something similar, offloading work to supercomputers much faster than local hardware can ever be, but I would prioritize RT hardware :D
 
It's new tech, like I said before; the idea is interesting, with a supercomputer and tensor cores helping out the reconstruction. I think Nvidia has something there. Let's hope the next consoles will see something similar, offloading work to supercomputers much faster than local hardware can ever be, but I would prioritize RT hardware :D
You make it sound as though it's using the supercomputers real-time whilst playing... I hope that's not what you mean.
 
Comparison screenshots showing the much-improved sharpness. As I said, other than the occasional shimmering, native vs. DLSS is almost indistinguishable.

https://www.dsogaming.com/articles/...improves-dlss-quality-comparison-screenshots/

Thankfully, NVIDIA and 4A Games have significantly improved it, making DLSS now a viable option for those interested in it.

Below you can find some comparison screenshots between native 4K and 4K DLSS. As you will see, there is a significant performance boost, and the game now looks almost as sharp in 4K DLSS as in native 4K.

You mean 10% faster, right?
He probably meant 10% slower than the previous DLSS performance before the patch. It's not illogical to assume NVIDIA increased the load on the Tensor Cores to achieve better quality.
 
^^ Yeah I was just looking over some back-to-back screenshots today of 1440p linear upscaled to 4K, 4K reconstructed from DLSS, and native 4K, and found the 4K DLSS shot after the patch comparable and, in general, pleasing to the eye. These are from ISee on Resetera (thanks!)

1440p -> 4K
tMNzI3B.jpg


4K DLSS
gECKUPx.jpg


4K Native
mJKcILE.jpg


I would say they need to add a sharpening control for the user - some may prefer one look vs. another. I imagine it is using post-process sharpening of some kind - a rough sketch of that sort of pass is below. Quality should also get better with the newly trained data set that NV promised in that post, though that should need new drivers.

Sorry for the compression in the images; it messes them up a bit - anyone know a good 4K image host for PNG?
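For anyone curious what a user-tunable sharpen slider could look like under the hood, here is a minimal unsharp-mask sketch in Python. The function name and parameters are hypothetical - this is a guess at the kind of post-process pass involved, not 4A's actual implementation.

```python
# Minimal unsharp-mask sketch of a user-tunable post-process sharpen.
# Hypothetical illustration only -- not 4A's actual sharpening pass.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(frame: np.ndarray, amount: float = 0.5, radius: float = 1.0) -> np.ndarray:
    """frame: HxWx3 float array in [0, 1]; returns a sharpened copy."""
    blurred = gaussian_filter(frame, sigma=(radius, radius, 0))  # blur spatially, not across channels
    detail = frame - blurred                  # isolate the high-frequency component
    return np.clip(frame + amount * detail, 0.0, 1.0)

# "amount" would be the user's slider: 0.0 leaves the frame untouched,
# larger values push the high-frequency detail harder.
```

The appeal of exposing "amount" directly is that the sharpness-vs.-halo trade-off is subjective, which is exactly why a user control makes sense here.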
 
^^ Yeah I was just looking over some back-to-back screenshots today of 1440p linear upscaled to 4K, 4K reconstructed from DLSS, and native 4K, and found the 4K DLSS shot after the patch comparable and, in general, pleasing to the eye. These are from ISee on Resetera (thanks!)
4K DLSS is a very good compromise, and it can still improve further.

Thank you for the repost here. This is working out better than my expectations in both speed and quality. I had hoped ML could take us there, but I had to stay conservative to be rational.
 
The native 4K shot looks odd. Lighting is different (e.g. windows, door) and some textures are quite blurry (e.g. chimney, parts of roof).
 
^^ Yeah I was just looking over some back-to-back screenshots today of 1440p linear upscaled to 4K, 4K reconstructed from DLSS, and native 4K, and found the 4K DLSS shot after the patch comparable and, in general, pleasing to the eye. These are from ISee on Resetera (thanks!)
DLSS is definitely better than the 1440p upscale, but notably softer than native. Edge smoothing is great, but textures and small detail are still getting lost.
 
The native 4K shot looks odd. Lighting is different (e.g. windows, door) and some textures are quite blurry (e.g. chimney, parts of roof).
I imagine the sun moved, which is why the lighting looks different - and maybe the textures did not load perfectly on that chimney after the resolution change in the menu. I will post some more here after the next driver update, when NV adds more finalised trained NN weights (the current Metro DLSS network is still not trained on late-game content).
 
It's not going to be a very good experience for gamers if ongoing training is required post-release. For those that play it a year from now? They'll get the best experience. Those who bought it immediately? Sorry.
 
It's not going to be a very good experience for gamers if ongoing training is required post-release. For those that play it a year from now? They'll get the best experience. Those who bought it immediately? Sorry.
That’s true. But no other algorithm improves over time. Once it’s shipped, that’s it.
 
You will probably see less post-release training as studios and Nvidia (or internal studio AI training groups) start to collaborate much earlier in the game development cycle.
 
That’s true. But no other algorithm improves over time. Once it’s shipped, that’s it.
DLSS improves over time only if NVIDIA crunches more screens and updates the dataset for players, so it's more or less comparable to devs releasing a patch to a game with tweaked or new effects, etc.
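To make the "crunching more screens" point concrete, here is a toy sketch of the kind of offline loop that would sit behind a DLSS-style upscaler. Every name is illustrative, and it assumes training against high-quality reference frames, as NVIDIA's materials describe - it is not their actual pipeline.

```python
# Toy sketch of the offline training loop behind a DLSS-style upscaler.
# Everything here is illustrative -- this is not NVIDIA's actual pipeline.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Minimal 2x super-resolution net: conv features + pixel shuffle."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 12, 3, padding=1),  # 12 channels = 3 colours x (2x2) upscale
            nn.PixelShuffle(2),               # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Fake paired data stands in for (low-res render, high-res reference);
# the real dataset would be game frames plus heavily supersampled ground truth.
lo = torch.rand(8, 3, 64, 64)     # low-res stand-in crops
hi = torch.rand(8, 3, 128, 128)   # matching high-res stand-in crops

for step in range(100):           # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(lo), hi)
    loss.backward()
    opt.step()

# Only the trained weights ship to players (via a driver or game update);
# local Tensor Cores run inference on them, never this loop.
```

That split is the whole point: the loop runs offline on NVIDIA's hardware, and only the resulting weights reach players, which is why "more training" arrives as driver or game updates rather than anything real-time.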
 
Talk about a distributed workload opportunity/cryptocurrency: create a software suite that allows Turing owners to run a distributed training network, and pay them with NVtokens good for future hardware/game purchases.
 
DLSS improves over time only if NVIDIA crunches more screens and updates the dataset for players, so it's more or less comparable to devs releasing a patch to a game with tweaked or new effects, etc.
Isn't it closer to just nvidia releasing new drivers for the game? The developers aren't patching the game further for improved DLSS.
 
Isn't it closer to just nvidia releasing new drivers for the game? The developers aren't patching the game further for improved DLSS.
Isn't that what he's saying? He's just comparing it to devs patching a game for improved visuals.
 
You will probably see less post-release training as studios and Nvidia (or internal studio AI training groups) start to collaborate much earlier in the game development cycle.

Only in cases where NV determines it is in their best interest to train it for DLSS, i.e., only for the biggest titles, or for titles that NV thinks will get benchmarked on a review site. And once the game is out of the limelight, it's unlikely it'll get any further training. So a game with a lot of pre-launch hype will get training, but if sales for the game are disappointing, I can see further training coming to an abrupt end.

Otherwise, you're SOL.

Regards,
SB
 
Isn't that what he's saying? He's just comparing it to devs patching a game for improved visuals.
I guess for me it's about where the responsibility lies. The studio ships with its own versions of antialiasing and upscaling that do work well. They aren't under the gun to fix or improve DLSS because they've done all they can; it's really hands-off for them and on Nvidia.

It being an Nvidia feature means they have to keep pushing the boundaries of DLSS as a marketing point.
 
There were also "AI upscaling" projects trained on huge amounts of high-resolution movies so they could upscale older, lower-resolution movies. I'm curious in what way different games require different training. I mean, I'd imagine that most games have a somewhat similar diversity in rendered frames, which should make the training results similar, no?

If that's true, maybe in the future there will be some generic model that is good enough for practically all games, with only small improvements required for specific games.
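If that generic-model idea pans out, the per-game step could be as small as a quick fine-tune. A self-contained sketch of that, with hypothetical file names and an arbitrary choice of which layer to adapt:

```python
# Toy sketch: generic base upscaler, fine-tuned per game.
# File name, architecture, and frozen/trainable split are all hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 12, 3, padding=1),   # 12 channels = 3 colours x (2x2) upscale
    nn.PixelShuffle(2),                # rearrange channels into 2x resolution
)
model.load_state_dict(torch.load("generic_upscaler.pt"))  # shared weights, trained on many games

# Freeze the shared backbone; let only the final conv adapt to one game's look.
for p in model.parameters():
    p.requires_grad = False
for p in model[2].parameters():
    p.requires_grad = True

opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)
# ...then run a short training pass on paired frames from the specific game,
# exactly like the full offline loop but with far less data and compute.
```

That would square the two views in the thread: one generic network that mostly works everywhere, plus cheap per-game passes shipped through driver updates.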
 