Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Minus the fact that you likely need an entire team dedicated to training the model and a supercomputer (DLSS 2.0 doesn't change this requirement at all) to go with it ...

No, you don't...

(DLSS 2.0 doesn't change this requirement at all)

Yes it does. No training is required, no ML expertise, just a single guy capable of providing the low resolution frames and motion vectors as inputs for DLSS.
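For the curious, here's a minimal sketch of what that per-frame handoff amounts to. Every name in it (UpscalerContext, UpscaleInputs, EvaluateUpscale) is a hypothetical stand-in, not the actual NGX SDK API; it just shows the inputs the engine has to supply.

```cpp
// Hypothetical sketch of the per-frame inputs an engine hands to a
// DLSS-style temporal upscaler. All names here are illustrative
// stand-ins, not the real SDK entry points.
struct Texture;                // opaque engine render target

struct UpscaleInputs {
    Texture* lowResColor;      // aliased frame rendered at the internal res
    Texture* motionVectors;    // per-pixel screen-space motion
    Texture* depth;            // scene depth, used for disocclusion handling
    float    jitterX, jitterY; // this frame's sub-pixel camera jitter
};

struct UpscalerContext;        // created once with input/output resolutions

// Called once per frame, before display-resolution post-processing.
void EvaluateUpscale(UpscalerContext* ctx,
                     const UpscaleInputs& in,
                     Texture* highResOutput);
```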
 
This is literally what it took to get DLSS into a game. They needed an entire team of machine-learning specialists and a supercomputer to train the DLSS model for each game after its release. That's tons of resources going down the drain just to get a better hack for image upscaling.
What you mention is pre-DLSS 2.0, and there is a big difference between pre- and post-DLSS 2.0.

Based on the March 2021 releases of "The Fabled Woods" and "System Shock", implementing DLSS seems to take no more than a few hours, or at most a weekend.
“The Unreal Engine 4 plugin makes light work of adding NVIDIA DLSS to your game, in fact we dropped it in over the weekend,” said Matthew Kenneally, Lead Engineer at Night Dive Studios. “Bringing System Shock to a new generation of gamers has been a labor of love for our team, and the impact NVIDIA DLSS will have on the player’s experience is undeniable.”

"Implemented in less than a day with no assistance from NVIDIA.
...
Adding NVIDIA DLSS to The Fabled Woods was easy thanks to the Unreal Engine 4 plugin, and the impact it makes on performance is substantial,” said Joe Bauer, Founder of CyberPunch Studios. “With the Unreal Engine 4 plugin, adding DLSS to The Fabled Woods was a no-brainer; it really opens DLSS up to a whole new world of developers.”
System Shock Demo and The Fabled Woods Add NVIDIA DLSS At Lightning Speed Using Unreal Engine 4 DLSS Plugin | GeForce News
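For context on how small the integration surface is once the plugin is installed, enabling it comes down to a console variable. Here's a sketch from UE4 C++ game code; the cvar name below is an assumption, so check the plugin's documentation for the exact variable it registers.

```cpp
// Sketch: toggling a DLSS plugin from UE4 game code via the console
// variable system. "r.NGX.DLSS.Enable" is assumed here; the plugin's
// docs list the variables it actually registers.
#include "HAL/IConsoleManager.h"

void SetDLSSEnabled(bool bEnabled)
{
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.NGX.DLSS.Enable")))
    {
        CVar->Set(bEnabled ? 1 : 0); // 1 = on, 0 = off
    }
}
```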
 
Maybe I'm misreading things, but I thought what Lurkmass was getting at is that we likely won't be getting a DLSS equivalent from individual game developers, and there is a high cost of entry should anyone else try. So it may be some time until AMD/Microsoft/Sony can deliver on their version of it. It's not about what it costs for Nvidia to do refinements or for game developers to use Nvidia's DLSS.

It seems he’s saying (incorrectly) that DLSS is expensive for Nvidia to maintain and therefore they will abandon it.

Several people have tried to correct his misunderstanding but he doesn’t appear to be interested.
 
No, you don't...



Yes it does. No training is required, no ML expertise, just a single guy capable of providing the low resolution frames and motion vectors as inputs for DLSS.

https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

In short, yes you absolutely do need a supercomputer for training the model as Nvidia themselves say even for DLSS 2.0 ...

What you mention is pre-DLSS 2.0, and there is a big difference between pre- and post-DLSS 2.0.

Based on the March 2021 releases of "The Fabled Woods" and "System Shock", implementing DLSS seems to take no more than a few hours, or at most a weekend.


System Shock Demo and The Fabled Woods Add NVIDIA DLSS At Lightning Speed Using Unreal Engine 4 DLSS Plugin | GeForce News

DLSS plugin still needs to be updated regularly by Nvidia for it to work optimally. You can use the plugin as is but getting the best results could involve retraining the model to be more accurate ...

There's a risk that the model won't be very accurate if it's not being trained against your game ...
 
They provide the plugin for the public UE4 branch. You think they will be training the model for every UE4 project out there? You're still thinking of DLSS 1.
Yea he doesn't understand the difference between them.
So he believes that the training Nvidia does on the models for DLSS2 is on a per game basis.

They are continually training the general models, but that's not for individual games; like most things, it's to improve the model overall.
 
There's a risk that the model won't be very accurate if it's not being trained against your game ...
Instead of training weights per game, it should be perfectly doable to tune gamma, etc. in a game to fit the generalized DLSS weights.
Though for corner cases some additional training might be required, that's not something which happens very often at this point, I guess.
 
It seems he’s saying (incorrectly) that DLSS is expensive for Nvidia to maintain and therefore they will abandon it.

'Expensive' is a relative term and I don't think I've ever explicitly stated that it would be expensive for Nvidia to maintain the solution ...

Supporting ML upscaling technology is very far from trivial, as you seemingly imply, since no one other than Nvidia has done the same, so when are you going to concede on this point?

Yea he doesn't understand the difference between them.
So he believes that the training Nvidia does on the models for DLSS2 is on a per game basis.

They are continually training the general models, but that's not for individual games; like most things, it's to improve the model overall.

I'm well aware of the differences. It still doesn't guarantee that you'll get the same amount of accuracy across all games compared to the data set you trained on ...
 
I'm well aware of the differences. It still doesn't guarantee that you'll get the same amount of accuracy across all games compared to the data set you trained on ...
As I've said, it's as much of a guarantee as your ordinary TAA is - there may be edge cases but it's not a big issue.
 
I'm well aware of the differences. It still doesn't guarantee that you'll get the same amount of accuracy across all games compared to the data set you trained on ...
And that's fine. Such is the reality with such things and ML in general. They can have a base model that works for a majority of use cases (where it's 'good enough'), and for some titles they may need to transfer-learn from that base and optimize by including training for the title. At the least, it's not as big an endeavour as training from zero.

The major pain point for developers is likely avoiding the re-training while still needing to shuffle their render pipeline to produce the aliased frames at the precise point at which DLSS 2.0 will produce the best result, and then continue on afterwards. That's likely where the majority of help is required, and in the case that they cannot reorganize the pipeline, you can still fine-tune the model itself within the new pipeline to create a slightly better output than leaving it at generic values.
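To make that concrete, here's a rough sketch of the reordering being described. Every type and function is an illustrative stand-in for an engine pass, not a real API; the one constraint that matters is that the upscale runs after the aliased low-res scene passes and before any display-resolution post-processing.

```cpp
// Hypothetical frame loop showing where a temporal upscaler has to sit.
struct Texture {};
struct Camera  {};
struct Scene   { Camera camera; };
struct Res     { int w, h; };

static const Res kInternalRes{1920, 1080}; // low internal render resolution
static const Res kDisplayRes {3840, 2160}; // display/output resolution

// Stand-in passes (stub bodies; each represents a real engine pass).
void    ApplyCameraJitter(Camera&, float, float)  {}
Texture RenderScene(Scene&, Res)                  { return {}; } // aliased, no AA
Texture RenderMotionVectors(Scene&, Res)          { return {}; }
Texture SceneDepth(Scene&, Res)                   { return {}; }
Texture TemporalUpscale(const Texture&, const Texture&,
                        const Texture&, Res)      { return {}; }
void    Postprocess(Texture&)                     {} // grain, tonemap, UI
void    Present(const Texture&)                   {}

void RenderFrame(Scene& scene)
{
    ApplyCameraJitter(scene.camera, 0.25f, -0.25f); // sub-pixel jitter

    // 1. Render the scene aliased at the LOW internal resolution.
    Texture lowResColor = RenderScene(scene, kInternalRes);
    Texture motion      = RenderMotionVectors(scene, kInternalRes);
    Texture depth       = SceneDepth(scene, kInternalRes);

    // 2. The upscale pass must run exactly here: after the scene passes,
    //    before display-resolution post-processing.
    Texture highRes = TemporalUpscale(lowResColor, motion, depth, kDisplayRes);

    // 3. Everything past this point runs at display resolution.
    Postprocess(highRes);
    Present(highRes);
}
```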

All of this is super complicated whether we talk about it from a high level or a low level. I think as the technology matures in this space, the pain points for implementation will lessen. But there is no doubt that if a company were to support this type of thing in perpetuity, Nvidia and its workforce are well suited to do it.

Microsoft is in a unique position because of its ownership of DirectX: antialiasing techniques like MSAA are built directly into the API, and therefore the pipeline. I wonder if Microsoft could do something similar by building a DirectML-type upscale directly into the pipeline, reducing the pain points of implementing something like that. That won't be usable for all engines, however.
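Purely as a thought experiment, an API-level hook could look something like the sketch below. None of these names exist in DirectX or DirectML; they just illustrate the idea of the runtime owning the model while the application only binds resources, the way MSAA resolves are owned by the API today.

```cpp
// Entirely hypothetical: an "upscale built into the graphics API" interface.
// The runtime/driver ships and owns the ML model; the game never touches
// weights, it only binds the inputs and an output target.
struct ID3DTexture; // stand-in for a D3D resource

struct UpscalePassDesc {
    ID3DTexture* lowResColor;
    ID3DTexture* motionVectors;
    ID3DTexture* depth;
    ID3DTexture* highResOutput;
};

struct ID3DMLUpscaler {
    virtual void Dispatch(const UpscalePassDesc& desc) = 0;
    virtual ~ID3DMLUpscaler() = default;
};
```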
 
'Expensive' is a relative term and I don't think I've ever explicitly stated that it would be expensive for Nvidia to maintain the solution ...

Oh, I definitely misunderstood your last few posts then. You certainly appeared to be saying that it was expensive for them (team of ML experts, supercomputers, etc.).

Supporting ML upscaling technology is very far from trivial, as you seemingly imply, since no one other than Nvidia has done the same, so when are you going to concede on this point?

No one claimed that the bar of entry was low. The point is that Nvidia already has a massive ML ecosystem and DLSS is just a tiny part of it.
 
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

In short, yes you absolutely do need a supercomputer for training the model as Nvidia themselves say even for DLSS 2.0 ...

Who said that you don't need to train it on a supercomputer? But individual devs don't need to do anything, nor does Nvidia need to update it further for it to work on future games as well as it does on today's games. The model they have right now already works wonderfully in games as widely different in art direction as CP2077, Mount & Blade: Bannerlord, Fortnite and Minecraft.

DLSS plugin still needs to be updated regularly by Nvidia for it to work optimally.

Not true.

You can use the plugin as is but getting the best results could involve retraining the model to be more accurate ...

As said in previous posts, you are guessing. And wrong.

There's a risk that the model won't be very accurate if it's not being trained against your game ...

A fear completely invalidated by the plethora of different games with DLSS 2.0 already in the wild, some of which I mentioned above. DLSS 2.0 most likely works on shapes, color and contrast differences, etc. Art styles and/or whatever is being rendered are completely irrelevant at this point.
 
As I've said, it's as much of a guarantee as your ordinary TAA is - there may be edge cases but it's not a big issue.

The results need to start paying off to reflect this, because it's not a good sign if we're still seeing DLSS support being added post-launch, a year later ...

The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch which suggests that they are constantly retraining their model ...
 
The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch which suggests that they are constantly retraining their model ...
Or it's not a priority to create a secondary pipeline just for Turing-and-above users, and they would rather get the game out first to the majority of the player base than have this niche feature create delays.
The consoles are likely the largest market.
 
The results need to start paying off to reflect this, because it's not a good sign if we're still seeing DLSS support being added post-launch, a year later ...

The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch which suggests that they are constantly retraining their model ...
DLSS is being added post-launch for the same reason any PC-exclusive feature (let alone a feature exclusive to a single PC IHV) tends to be added post-launch: it's not in scope for the console launch timeframe.

I also wonder if there even is such a pattern. I'd say that most games which have DLSS 2.0 launched with it, excluding those which launched before DLSS was even a thing.
 
I'm well aware of the differences. It still doesn't guarantee that you'll get the same amount of accuracy across all games compared to the data set you trained on ...
I don't think an overfitted network is desirable in something like DLSS. In DLSS 1.0, they had no choice but to overfit the network, since it was working in low-res space and doing so many things at once: AA, image warping, etc.
In DLSS 2.0, the NN's field of work was restricted; instead of doing everything with the NN, it is now responsible just for picking details from history. This way they were able to make the NN far more general.
Judging by how DLSS 2.0 works in many games, the patterns are clearly the same in every game: it's capable of finding thin details and combining them into continuous higher-res details, and it's far more predictable than DLSS 1.0 ever was.
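That "picking details from history" job shares its skeleton with any temporal accumulator. Below is a minimal fixed-weight version for comparison, plain C++ with no NN; in DLSS 2.0, the blend and rejection heuristics this hardcodes are essentially what the network learns.

```cpp
// Minimal per-pixel skeleton of temporal accumulation: blend last frame's
// history (already reprojected with motion vectors) into this frame's
// aliased sample. The fixed alpha stands in for the learned heuristics.
struct Color { float r, g, b; };

Color ResolvePixel(Color current,      // this frame's aliased sample
                   Color history,      // last frame's output, sampled at
                                       // (uv - motionVector)
                   bool  historyValid) // false on disocclusion/camera cut
{
    if (!historyValid)
        return current;                // fall back to the new sample

    const float a = 0.1f;              // weight given to the new sample
    return { current.r * a + history.r * (1.0f - a),
             current.g * a + history.g * (1.0f - a),
             current.b * a + history.b * (1.0f - a) };
}
```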
 
DLSS is being added post-launch for the same reason any PC-exclusive feature (let alone a feature exclusive to a single PC IHV) tends to be added post-launch: it's not in scope for the console launch timeframe.

I also wonder if there even is such a pattern. I'd say that most games which have DLSS 2.0 launched with it, excluding those which launched before DLSS was even a thing.
Also, DLSS 2.0 is relatively new in the context of game development. It's a nice-to-have feature that isn't required for launch, as you say, especially if the game isn't doing RTRT.
The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch which suggests that they are constantly retraining their model ...
No, it doesn't mean that at all. Your whole premise is wrong.
 
Also, DLSS 2.0 is relatively new in the context of game development. It's a nice-to-have feature that isn't required for launch, as you say, especially if the game isn't doing RTRT.
No, it doesn't mean that at all. Your whole premise is wrong.
I think DLSS 2.0 will be a checkbox feature in both Unity and UE4 now.

So it can't be optimized per title. I suspect we'll see DLSS 2.0 shipped in a lot more new titles.
 
I think DLSS 2.0 will be a checkbox feature in both Unity and UE4 now.

So it can't be optimized per title. I suspect we'll see DLSS 2.0 shipped in a lot more new titles.
Yep, agree with you.
But to get to this point it was first DLSS 1, then having to implement it into the engine manually, and now, as you say, a checkbox feature in the two biggest engines.
 
The pattern so far is that DLSS 2.0 is mostly added after the game's initial launch which suggests that they are constantly retraining their model ...
It seems you are still stuck in the DLSS 1 era; again, your statement is wrong. The majority of DLSS 2 titles get the support at launch.

Call of Duty: Black Ops Cold War, Cyberpunk 2077, Pumpkin Jack, The Fabled Woods, Watch Dogs: Legion, Bright Memory, Ghostrunner, Xuan-Yuan Sword VII, The Medium, Death Stranding, Outriders, Control, the System Shock remake, etc. all got released with DLSS at launch. Once an engine pays the upfront cost of implementing DLSS 2, future iterations of games using that same engine will get DLSS 2 easily.

Some games got DLSS 2 a mere month after release, such as Marvel's Avengers and Nioh 2. Heck, Atomic Heart and Boundary have DLSS support even before they launch.
 