Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Has anyone done blind subjective testing between DLSS, checkerboarding, TAAU etc? Otherwise how do you distinguish strong from weak? Comparing against half a decade old TAA is a bit of a joke.
No blind test as far as I know.
As for what is strong or weak, I suppose the easiest metric is just how close the result gets to the full-resolution source, but that would only work for upscaling. AA is generally a matter of preference: I did read some people hating on TAA and others loving it. That's where a blind test would be useful, to see what people actually prefer.
 
I believe the devs are provided a tool that hooks into the game engine and generates the massive SSAA shots to be submitted to Nvidia for training.

I'm confused as to how DML compares to DLSS. Who is doing the training with DML? Is it using a general pre-trained model that is hoped to work across all games, with DML used client-side to apply the model for temporal upscaling? Surely this would be significantly inferior to a game-specific trained model like DLSS?
So the purpose of Direct ML is to run pre-trained models with the lowest amount of overhead.
Models are trained ahead of time and likely converted into a common format that most applications can consume; the most common open source one at this moment, I think, is the ONNX format. So however you build your neural network model (depending on which library, say Keras or TensorFlow vs PyTorch), it has to be converted to this open format for any 'NN application' to be able to just 'run it'.
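To make that conversion concrete, here's a minimal sketch of exporting a trained PyTorch model to the ONNX format; the tiny upscaler network, file name, and tensor names are hypothetical stand-ins for illustration only:

```python
# Hypothetical sketch: export a trained PyTorch upscaling model to ONNX so
# that any ONNX-capable runtime (e.g. one built on Direct ML) can run it.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):  # stand-in for a real super-resolution net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),  # rearrange channels into a 2x upscale
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler().eval()
dummy = torch.randn(1, 3, 720, 1280)  # a 720p RGB frame, NCHW layout
torch.onnx.export(model, dummy, "upscaler.onnx",
                  input_names=["frame"], output_names=["upscaled"])
```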

So in this case, if Nvidia builds a neural network to do super resolution, they train it and then save the model. They send the model to Microsoft to use. Microsoft uses Direct ML to interface with the model: the inputs from the screen buffer get passed to the model, and the model sends the results back to Direct ML. So it's the model that needs to be trained; Direct ML is just the interface.
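As an illustration of "Direct ML is just the interface": ONNX Runtime, for instance, exposes DirectML as an execution provider, so consuming a pre-trained model looks roughly like the sketch below. The file and tensor names carry over from the hypothetical export above; this is not DLSS itself, just the plumbing:

```python
# Hypothetical sketch: run a pre-trained ONNX model on the GPU through
# ONNX Runtime's DirectML execution provider.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Stand-in for a low-resolution frame grabbed from the screen buffer.
frame = np.random.rand(1, 3, 720, 1280).astype(np.float32)
upscaled = session.run(None, {"frame": frame})[0]
print(upscaled.shape)  # (1, 3, 1440, 2560) with the 2x model above
```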

In this sense, if Nvidia shared a model for, say, Metro Exodus, and they happened to tell me the makeup of the neural network, then I should be able to recreate the network in Direct ML, leverage their trained model, and basically be running DLSS.

What Nvidia does differently with DLSS is that it's likely written in CUDA and thus accessible directly by their drivers, perhaps with a proprietary Nvidia model. But aside from that, if we could read the trained model using Direct ML and I was provided the layout of how that model works, we should be able to generate the same output just using Direct ML, since we're using the same trained model.
 
So in this case, if Nvidia builds a neural network to do super resolution, they train it and then save the model. They send the model to Microsoft to use. Microsoft uses Direct ML to interface with the model: the inputs from the screen buffer get passed to the model, and the model sends the results back to Direct ML. So it's the model that needs to be trained; Direct ML is just the interface.
What's the impetus for Nvidia to share a compatible model with Microsoft for DirectML that can be used on competing products? Nvidia don't exactly have a track record of sharing anything.
 
What's the impetus for Nvidia to share a compatible model with Microsoft for DirectML that can be used on competing products? Nvidia don't exactly have a track record of sharing anything.
They wouldn't; it's more a discussion of whether it would work.
But that doesn't stop other companies, third-party ones included, from doing the same thing as Nvidia and just packaging it under Direct ML.
 
Has anyone done blind subjective testing between DLSS, checkerboarding, TAAU etc? Otherwise how do you distinguish strong from weak? Comparing against half a decade old TAA is a bit of a joke.
Not sure if the Metro updates made it that much better, but at least initially, just turning down the render scale (aka rendering at a lower resolution) to something like 75-80% yielded not only similar or better performance, but also better IQ than the DLSS option.
 
I'm not sure game-specific training will do all that much good unless you use some kind of classifier first so you can use a huge number of different NNs.

MLPs aren't magic, those weights can only store so much data.

Maybe even the features themselves are game-specific, as a generic solution to this problem may have yet to present itself.
Otherwise I do agree it seems kinda odd.
 
I'm not sure game-specific training will do all that much good unless you use some kind of classifier first so you can use a huge number of different NNs.

MLPs aren't magic, those weights can only store so much data.
It's likely transfer learning where they are getting efficient results. Take a generic AA or SR algorithm that has been trained on a generic and massive dataset, which perhaps nails something like 70% of the cases very well. Take that model as a base and begin layering on additional convolutions/weight changes for each specific game to get you the rest of the way.

There are only so many ways a game's image can alias. If the images are coming in without AA, and say always at 1440p for SR, I can't see why the cases would need massive retraining title to title. It's clear I'm overlooking something important though, since per-title retraining wouldn't be efficient, at least from a cost perspective.
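For what that transfer-learning idea might look like in practice, here's a minimal PyTorch sketch under the assumptions above: a frozen generic backbone plus a small trainable per-title head. The layer sizes are made up for illustration and imply nothing about the real DLSS network:

```python
# Hypothetical sketch of transfer learning: keep a generic pre-trained
# backbone frozen and fine-tune only a small per-game head.
import torch
import torch.nn as nn

backbone = nn.Sequential(  # stand-in for a generic, pre-trained SR/AA net
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad = False  # freeze the "70% of cases" generic knowledge

game_head = nn.Conv2d(64, 3, 3, padding=1)  # small per-title refinement

model = nn.Sequential(backbone, game_head)
optimizer = torch.optim.Adam(game_head.parameters(), lr=1e-4)
# A per-title training loop would update only game_head's weights, which
# is what would keep the per-game training cost down.
```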
 
I could imagine that with a couple hundred MB worth of codebook, an algorithm could learn the most common textures and high-frequency geometry for interpolation/hallucination. An MLP in and of itself can't scale like that though; a network storing that amount of data would be unusable.
 
I could imagine that with a couple hundred MB worth of codebook, an algorithm could learn the most common textures and high-frequency geometry for interpolation/hallucination. An MLP in and of itself can't scale like that though; a network storing that amount of data would be unusable.
MLP is fairly general. The goal of the CNN is to remove as much noise as possible so that the algorithm can focus in on the areas where it needs to do work.

I suspect the game is balancing how aggressive the algorithm is. Tune it too lightly and it won't capture all the aliased parts of the image; tune it too aggressively and it's too slow or introduces too much noise, and you may still miss things. I suspect they are still working out their weights here.

To give you an idea though, the SR model was only 6 layers, IIRC. AA is probably where more of the tuning is happening, I suspect.
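Some back-of-envelope arithmetic on why a 6-layer convolutional model is tiny; the channel counts here are assumed purely for illustration, not the actual DLSS layout:

```python
# Rough size estimate for a hypothetical 6-layer, 3x3-kernel CNN.
layers = [(3, 64), (64, 64), (64, 64), (64, 64), (64, 64), (64, 3)]
k = 3  # 3x3 kernels
params = sum(cin * cout * k * k + cout for cin, cout in layers)
print(params, "parameters ->", params * 4 / 1e6, "MB as float32")
# ~151k parameters, ~0.6 MB: tiny next to hundreds of MB of MLP weights
```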
 
It's likely transfer learning where they are getting efficient results. Take a generic AA or SR algorithm that has been trained on a generic and massive dataset, which perhaps nails something like 70% of the cases very well. Take that model as a base and begin layering on additional convolutions/weight changes for each specific game to get you the rest of the way.
This is where I imagine hybrid solutions would be better: take an ML solution and feed it into a reconstruction algorithm.
 
Take that model as a base and begin layering on additional convolutions/weight changes for each specific game to get you the rest of the way.

That sounds to me like encountering overfitting and trying to make it work regardless. Which would be a (if not `the`) textbook mistake.
 
That sounds to me like encountering overfitting and trying to make it work regardless. Which would be a (if not `the`) textbook mistake.
You can account for that while using transfer learning. Your data pool determines whether you are biasing or overfitting. One change you might make to the network is dropout, in which we drop out neurons to keep the network from overfitting.
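A minimal sketch of what dropout looks like in practice, using PyTorch with arbitrary layer sizes:

```python
# Dropout randomly zeroes activations during training so the network
# cannot over-rely on any single neuron, which curbs overfitting.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(128, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # each activation dropped with 50% probability
)

x = torch.randn(4, 128)
block.train()
print(block(x))  # roughly half the activations are zeroed
block.eval()
print(block(x))  # dropout is disabled at inference time
```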

The goal is to teach the algorithm how to infer a variety of situations. The second goal is to do it as cheaply and as quickly as possible.

Training only against a specific title would be overfitting regardless of how much data you feed it; it would only work for that title.
 
This I where I imagine hybrid solutions would be better. Take an ML solution to feed into a reconstruction algorithm.
Yes. This is where creativity will matter more than the underlying technology itself. It's one thing to know ML can do this or that; it's another whether it should be doing this or that. There are cases in which DLSS might make a lot of sense, namely older, already-released titles looking to gain some uplift with minimal change to their code.

And there are probably a great many other cases where a custom interweave will likely perform better if your game has not shipped yet.
 
MLP is fairly general.

That's the problem: being general is generally equivalent to being inefficient in a specific domain. MLPs in and of themselves can't efficiently handle hierarchy in classification; of course a hierarchical classifier is suboptimal, but it does speed things up. Hierarchical VQ with 100s of MBs of codebook is not necessarily a problem; 100s of MBs of weights for an MLP is.
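Rough numbers behind that point, assuming float32 weights and an arbitrary hidden-layer width:

```python
# Why hundreds of MB of dense MLP weights is a problem: unlike a codebook
# you merely index into, every dense weight costs a multiply-add per pass.
width = 8192             # hypothetical hidden-layer width
weights = width * width  # one fully-connected layer between two such layers
print(weights * 4 / 1e6, "MB per dense layer")  # ~268 MB as float32
# That single layer is also ~67M multiply-adds per forward pass, every
# frame, which is what makes an MLP at codebook scale unusable.
```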
 
That's the problem: being general is generally equivalent to being inefficient in a specific domain. MLPs in and of themselves can't efficiently handle hierarchy in classification; of course a hierarchical classifier is suboptimal, but it does speed things up. Hierarchical VQ with 100s of MBs of codebook is not necessarily a problem; 100s of MBs of weights for an MLP is.
But I don't think they are using an MLP, is what I'm saying. Most computer vision is handled by RNNs or CNNs now, and that's likely the case here as well, since images have sequential ordering. I can't see how an MLP network would outperform an RNN or CNN at this task; therefore I don't think they are using one. Perhaps I'm not understanding your point here.
 
A CNN is an MLP.
Yes, but a vanilla MLP network could require significantly more training and/or layers in attempting to produce similar results to a smaller CNN setup.
I'm not sure about the size of the weights here. I can go back and check the model sizes, but I doubt they are 100s of MB for this SR model.
 