Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

Tags:
  1. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    No blind test as far as I know.
    As for what is strong or weak, I suppose the easiest method is just how close to source it gets. But that would only work for upscaling. I think AA is generally preference. I did read that some people hate TAA and others love it; that's generally where we could do a blind test to see what people prefer.
     
  2. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    So the structure of DirectML is to run pre-trained models with the lowest amount of overhead.
    Models are pre-trained ahead of time and likely converted into a common format that most applications can consume. The most common open-source one at this moment, I think, is the ONNX format. So depending on which library you build your neural network models in (say Keras or TensorFlow vs PyTorch), they have to be converted to this open format for any 'NN application' to be able to just run them.

    so in this case, if Nvidia builds a Neural Network to do super resolution; they train it then they save the model. They send the model to Microsoft to use. Microsoft uses Direct ML to interface with the model and the inputs from screen buffer get passed to the model and the model sends the results back to Direct ML. So it's the model that needs to be trained. Direct ML is just the interface.

    In this sense, if Nvidia shared a model for, say, Metro Exodus, and they happened to tell me the makeup of the neural network, then I should be able to recreate the network in DirectML, leverage their trained model, and basically we'd be running DLSS.

    What Nvidia does differently with DLSS is that it's likely written in CUDA, and thus accessible directly by their drivers, perhaps as a proprietary Nvidia model. But aside from that, if we could read the trained model using DirectML, and I was provided the layout of how that model works, we should be able to generate the same output just using DirectML, since we'd be using the same trained model.
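    To make that separation concrete — the trained model is just data, and the runtime merely feeds buffers through it — here's a toy sketch in plain Python (all names here are hypothetical; a real pipeline would go through the actual DirectML/ONNX APIs):

```python
# Toy illustration: the "model" is just stored weights; the "runtime"
# applies them to an input buffer without knowing how they were trained.
# Nothing here is a real DirectML/ONNX API — names are made up.

def run_model(model, buffer):
    """Apply a tiny 1D convolution 'model' to one screen-buffer row."""
    weights = model["weights"]          # pretrained, shipped as data
    k = len(weights)
    out = []
    for i in range(len(buffer) - k + 1):
        out.append(sum(w * buffer[i + j] for j, w in enumerate(weights)))
    return out

# A pretend pretrained model: a 3-tap blur kernel.
model = {"weights": [0.25, 0.5, 0.25]}
row = [0.0, 0.0, 1.0, 0.0, 0.0]         # one row of a "screen buffer"
print(run_model(model, row))            # smoothed row
```

    Swapping in a different pretrained `model` dict changes the output without touching the runtime, which is the whole point of the model/interface split.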
     
  3. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
    Did you link to this in another thread?
     
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    heh yea, ESRGAN is pretty slick. When you get down to the qualitative results and see it blow past all the other AI-based algorithms, it's pretty awesome. Unfortunately, this sucker can take a long time to run ;)
     
    pharma likes this.
  5. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,031
    Likes Received:
    3,101
    Location:
    Pennsylvania
    What's the impetus for Nvidia to share a compatible model with Microsoft for DirectML that can be used on competing products? Nvidia don't exactly have a track record of sharing anything.
     
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    they wouldn't; it's more a discussion of whether it would work.
    but that doesn't stop other companies, like 3P ones, from doing the same thing as Nvidia and just packaging it under DirectML
     
    Malo likes this.
  7. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    Not sure if the Metro updates made it that much better, but at least initially, just turning down the render scale (i.e. rendering at a lower resolution) to something like 75-80% yielded not only similar or better performance but also better IQ than the DLSS option
     
    Silent_Buddha, CaptainGinger and BRiT like this.
  8. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,806
    Likes Received:
    473
    I'm not sure game-specific training will do all that much good unless you use some kind of classifier first so you can use a huge number of different NNs.

    MLPs aren't magic, those weights can only store so much data.
     
    entity279 and pharma like this.
  9. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Maybe even the features themselves are game-specific, as a generic solution to this problem may have yet to present itself.
    Otherwise I do agree it seems kind of odd.
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    It’s likely transfer learning where they are getting efficient results. Take a generic AA or SR algorithm that has been trained on a massive generic dataset; perhaps it nails 70% of the cases very well. Take that model as a base and begin layering on additional convolutions/weight changes for each specific game to get you the rest of the way.

    There are only so many ways a game should be aliased. If the images are coming in without AA, and say always at 1440p for SR, I can’t see how the cases would always need massive retraining title to title. It’s clear I’m overlooking something important, though; it wouldn’t be efficient, at least from a cost perspective.
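    As a hedged illustration of that transfer-learning idea — a frozen generic base plus a small per-title head — here's a toy sketch in plain Python (everything here is made up for illustration, not Nvidia's actual setup):

```python
import random

# Transfer-learning sketch: the "base" feature extractor stays frozen
# (pretend it was trained on a massive generic dataset); only a tiny
# per-game "head" is fitted on game-specific data.

random.seed(0)

def base_features(x):
    # Frozen base: never updated during per-title training.
    return [x, x * x]

def head_predict(head, feats):
    # head = [w0, w1, bias]
    return sum(w * f for w, f in zip(head, feats)) + head[-1]

def fit_head(data, lr=0.05, steps=5000):
    head = [0.0, 0.0, 0.0]              # only these weights are trained
    for _ in range(steps):
        x, y = random.choice(data)
        feats = base_features(x)
        err = head_predict(head, feats) - y
        for i, f in enumerate(feats):
            head[i] -= lr * err * f     # gradient step on the head only
        head[-1] -= lr * err
    return head

# "Game-specific" data: y = 2x + 1, a pattern the base alone doesn't encode.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
head = fit_head(data)
print(head_predict(head, base_features(0.5)))   # should land near 2.0
```

    The base did the expensive generic learning once; per title, only the three head weights get adjusted — which is the cost argument for transfer learning.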
     
  11. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,806
    Likes Received:
    473
    I could imagine that with a couple hundred MB worth of codebook, an algorithm could learn the most common textures and high-frequency geometry for interpolation/hallucination. An MLP in and of itself can't scale like that, though; a network storing that amount of data would be unusable.
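    To make the codebook idea concrete, here is a minimal vector-quantization sketch in plain Python — patches are encoded as the index of the nearest codeword, so the stored "knowledge" lives in the codebook rather than in network weights (toy sizes, purely illustrative):

```python
# Minimal vector quantization: encode a patch as the index of the nearest
# codeword, decode by looking that codeword up. A real codebook for
# textures would hold many thousands of larger entries; this is a toy.

def nearest(codebook, patch):
    def dist(cw):
        return sum((a - b) ** 2 for a, b in zip(cw, patch))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

codebook = [
    [0.0, 0.0, 0.0, 0.0],   # flat dark patch
    [1.0, 1.0, 1.0, 1.0],   # flat bright patch
    [0.0, 1.0, 0.0, 1.0],   # high-frequency edge pattern
]

patch = [0.1, 0.9, 0.0, 1.0]            # noisy edge-like input patch
idx = nearest(codebook, patch)
print(idx, codebook[idx])               # quantized reconstruction
```

    Lookup cost grows with codebook size but can be made hierarchical, which is the point about hierarchical VQ scaling where a flat MLP's weights would not.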
     
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    MLPs are fairly general. The goal of a CNN is to remove as much noise as possible so that the algorithm can focus on the areas where it needs to do work.

    I suspect the game is balancing how aggressive the algorithm is. Too light and it may miss capturing all the aliased parts of the image; too aggressive and it's too slow, or introduces too much noise. I suspect they are still working out their weights here.

    To give you an idea, though, the SR model was only 6 layers, IIRC. AA is probably where we are seeing more tuning happening, I suspect.
     
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,725
    Likes Received:
    11,196
    Location:
    Under my bridge
    This is where I imagine hybrid solutions would be better: take an ML solution and feed it into a reconstruction algorithm.
     
  14. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    That sounds to me like encountering overfitting and trying to make it work regardless, which would be a (if not `the`) textbook mistake.
     
  15. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    You can account for that while using transfer learning. Your data pool determines whether you are biasing or overfitting. One change you might make to the network is called dropout, in which we drop out neurons at random during training to keep the network from overfitting.

    The goal is to teach the algorithm how to infer in a variety of situations. The second goal is to do it as cheaply and as quickly as possible.

    Training only against a specific title would be overfitting regardless of how much data you fed it; it would only work for that title.
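    For reference, (inverted) dropout is simple enough to sketch in a few lines of plain Python — during training each activation is zeroed with probability p and the survivors are rescaled so the expected output is unchanged; at inference the layer passes values through (toy code, not any particular framework's implementation):

```python
import random

# Inverted dropout: zero each activation with probability p during
# training and scale survivors by 1/(1-p); at inference, pass through.

def dropout(activations, p, training):
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

random.seed(42)
acts = [0.5, 1.2, -0.3, 0.8, 2.0]
print(dropout(acts, p=0.5, training=True))    # some neurons dropped
print(dropout(acts, p=0.5, training=False))   # inference: unchanged
```

    Because each training pass sees a different random subnetwork, no single neuron can be relied on too heavily — which is what discourages overfitting.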
     
    pharma likes this.
  16. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    Yes. This is where creativity will matter more than the underlying technology itself. It's one thing to know ML can do this or that; it's another whether it should be doing this or that. There are cases in which DLSS might make a lot of sense for some games, namely older, already-released titles looking to gain some uplift with minimal change to their code.

    And there are probably a great many other cases where a custom interweave will likely perform better if your game has not shipped yet.
     
  17. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,806
    Likes Received:
    473
    That's the problem: being general is generally equivalent to being inefficient in a specific domain. MLPs in and of themselves can't efficiently handle hierarchy in classification; of course a hierarchical classifier is suboptimal, but it does speed things up. Hierarchical VQ with 100s of MBs of codebook is not necessarily a problem; 100s of MBs of weights for an MLP is.
     
    #517 MfA, Jul 18, 2019
    Last edited: Jul 18, 2019
  18. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    What I'm saying is that I don't think they are using an MLP. Most computer vision is handled by RNNs or CNNs now, likely the case here as well, as frames have sequential ordering. I can't see how an MLP network would outperform an RNN or CNN at this task; therefore I don't think they are using one. Perhaps I'm not understanding your point here.
     
  19. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,806
    Likes Received:
    473
    A CNN is a MLP.
     
  20. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,898
    Likes Received:
    6,184
    Yes, but a vanilla MLP network could require significantly more training and/or layers in attempting to produce results similar to a smaller CNN setup.
    I'm not necessarily sure of the size of the weights here. I can go back and check the model sizes, but I doubt they are 100s of MB for this SR model.
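    A quick back-of-the-envelope in Python shows the gap: a conv layer's weight count depends only on kernel size and channel counts, while a fully connected (pure MLP) layer scales with the full image size (the dimensions below are illustrative, not DLSS's actual ones):

```python
# Illustrative parameter counts: a 3x3 conv layer vs a fully connected
# layer, each mapping a 1920x1080 RGB image to a same-sized
# 32-channel activation.

h, w = 1080, 1920
c_in, c_out = 3, 32

conv_params = (3 * 3 * c_in) * c_out + c_out     # shared weights + biases
fc_params = (h * w * c_in) * (h * w * c_out)     # one weight per in/out pair

print(conv_params)       # 896
print(fc_params)         # hundreds of trillions -- utterly impractical
```

    Weight sharing is why a 6-layer CNN can stay small while a dense MLP over the same pixels could not.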
     
