Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,257
    Likes Received:
    1,948
    Location:
    Finland
  2. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,003
    Likes Received:
    1,687
    Control with RTX & DLSS tested: ray tracing, DLSS, and the verdict
    [Google Translation]
    The real problem with DLSS is simply that the AI algorithm seems to amplify graphical weaknesses that are already present in the game.



    https://www.computerbase.de/2019-08...-test/3/#abschnitt_die_bildqualitaet_von_dlss
     
    #562 pharma, Aug 29, 2019
    Last edited: Aug 29, 2019
    Lightman likes this.
  3. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,837
    Likes Received:
    2,670
    NVIDIA claims they developed a new algorithm for Control and intend to use it in future titles.

    https://www.nvidia.com/en-us/geforce/news/dlss-control-and-beyond/
     
    pharma and techuse like this.
  4. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    41,054
    Likes Received:
    11,670
    Location:
    Under my bridge
    Most importantly, it's not DLSS but a 'conventional' image-processing algorithm. "Hand engineered algorithms are fast and can do a fair job of approximating AI." Their improved DLSS system is too slow using ML, so they're using a compute-based solution that approximates the AI results.

    The plan is to get the AI fast enough to give better results, but Control's implementation isn't ML and has a much lower impact, which is by and large what games need. They don't need perfect, just good enough at fast enough speeds. So Control is actually a +1 for algorithmic reconstruction methods, with ML contributing to the development of the algorithm.
     
  5. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    568
    Likes Received:
    655
    So humans learn from machine learning. Sure, makes total sense.
    The message hidden behind this twisted, indirect marketing language is quite interesting, but no surprise :D
     
    iroboto likes this.
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,149
    Likes Received:
    6,417
    They wanted the effect of DLSS but wanted to bypass the NN for speed. Interesting to say the least.
     
    pharma likes this.
  7. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    568
    Likes Received:
    655
    Do you still plan to experiment with ML upscaling?
    I thought about how I would do it. The idea is to generate a line field from image brightness/hue, then use it to compute an elliptic filter kernel for sampling. That should give nice AA at least, and it's not much work to try.
    Currently I'm buried in other work, but if you ever get to it, I might take up the challenge for a comparison... :)
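    For the curious, the "line field → elliptic kernel" idea above can be sketched in a few lines of NumPy. This is only a toy illustration under assumed choices (gradient-based orientation from brightness only, a Gaussian kernel, a fixed elongation factor); the function name and parameters are hypothetical, not from any shipped implementation:

    ```python
    import numpy as np

    def elliptic_edge_filter(img, radius=2, elongation=3.0):
        """Toy sketch: estimate local edge orientation from brightness
        gradients (the 'line field'), then average each pixel with a
        Gaussian kernel stretched along the edge direction."""
        h, w = img.shape
        # Brightness gradients give the line field.
        gy, gx = np.gradient(img.astype(np.float64))
        out = np.empty((h, w), dtype=np.float64)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        for y in range(h):
            for x in range(w):
                # The edge tangent is perpendicular to the gradient.
                tx, ty = -gy[y, x], gx[y, x]
                n = np.hypot(tx, ty)
                if n < 1e-8:
                    tx, ty = 1.0, 0.0  # flat region: direction is arbitrary
                else:
                    tx, ty = tx / n, ty / n
                # Offsets decomposed along / across the local edge.
                d_along = xs * tx + ys * ty
                d_across = -xs * ty + ys * tx
                # Elliptic Gaussian: wide along the edge, narrow across it.
                k = np.exp(-(d_along / elongation) ** 2 - d_across ** 2)
                yy = np.clip(y + ys, 0, h - 1)
                xx = np.clip(x + xs, 0, w - 1)
                out[y, x] = np.sum(k * img[yy, xx]) / np.sum(k)
        return out
    ```

    Sampling wide along an edge but narrow across it is what smooths jaggies without blurring the edge itself; a real shader version would sample the framebuffer with the same anisotropic footprint.
    
    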
     
  8. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,130
    Likes Received:
    3,186
    Location:
    Pennsylvania
    Yeah that Nvidia article is really focused on promoting "DLSS" and Turing's tensor cores whilst dancing around the fact that Control doesn't seem to use either.
     
  9. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    12,829
    Likes Received:
    9,182
    Location:
    Cleveland
    They trained a neural net ( https://xkcd.com/2173/ )
     
    Lightman, Kej, w0lfram and 2 others like this.
  10. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,841
    Likes Received:
    481
    I think it's more the case that hand-engineered algorithms can use more efficient non-linear operators. An MLP really limits how efficient image filters can be.

    For huge filters a human obviously can't condense all the data into an image filter, but even there I'd say MLPs have a huge efficiency handicap compared to hierarchical approaches. The MLP craze will die down; it always does.
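    A concrete example of the kind of hand-engineered non-linear operator meant here is the classic 3x3 median filter: a single sort-and-select per pixel, which is not expressible as a linear layer, so a small MLP can only approximate it. A minimal NumPy sketch (the function name is illustrative):

    ```python
    import numpy as np

    def median3x3(img):
        """Hand-engineered non-linear operator: 3x3 median filter.
        Sorting-based selection is cheap on CPU/GPU but is not a
        linear combination of inputs, which is why a fixed-size MLP
        can only approximate it."""
        h, w = img.shape
        p = np.pad(img, 1, mode='edge')
        # Gather the 9 neighbours of every pixel into one axis,
        # then pick the middle value.
        stack = np.stack([p[dy:dy + h, dx:dx + w]
                          for dy in range(3) for dx in range(3)], axis=-1)
        return np.median(stack, axis=-1)
    ```

    One median pass removes isolated impulse noise outright, a job that takes a trained network many multiply-accumulates to imitate.
    
    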
     
  11. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    568
    Likes Received:
    655
    Likely we could get there without taking a step back.
    My current thought is: "I don't want DirectML. I'm no ML expert, so I don't care." But is the API finally ready at all, or still WIP? Any plans from Khronos as well?
    What I want instead is to expose tensor functionality to all regular shader stages. Not knowing exactly what that hardware is, I base this on these assumptions:
    It has instructions for mul/add, dot, and matrix multiply?
    Control flow comes from regular shader cores.
    Actual compilers will try to utilize those instructions automatically, for any kind of shader (fp16).

    So, all we want is support for all low-precision data types and some instruction abstraction?
    Maybe this is what we'll get, with future RDNA supporting low-precision dot products too. And I wouldn't be surprised if NV silently stepped back from hardware support if some features remain underutilized (e.g. matrix multiply, which would still be fast using dot products)?

    There may well be a future for ML, but I just fail to see the present. But whether that's right or wrong, exposing the hardware without dictating what we do with it shouldn't be a mistake?
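    To illustrate the point that matrix multiply "would still be fast using dot products": a matrix multiply decomposes into one dot product per output element, so per-lane fp16 dot instructions with fp32 accumulation would already cover it. A toy NumPy emulation of that decomposition (the fp16-inputs/fp32-accumulate split mirrors the assumed hardware behavior; this is not real shader code):

    ```python
    import numpy as np

    def matmul_via_dot_fp16(a, b):
        """Emulate a matrix multiply as one dot product per output
        element, with fp16 inputs and fp32 accumulation -- the split
        assumed for low-precision dot-product hardware."""
        a16 = a.astype(np.float16)
        b16 = b.astype(np.float16)
        m, k = a16.shape
        k2, n = b16.shape
        assert k == k2, "inner dimensions must match"
        out = np.zeros((m, n), dtype=np.float32)
        for i in range(m):
            for j in range(n):
                # One 'dot' instruction per output element,
                # accumulating in fp32 to limit rounding error.
                out[i, j] = np.dot(a16[i, :].astype(np.float32),
                                   b16[:, j].astype(np.float32))
        return out
    ```

    Dedicated matrix-multiply units mainly add operand reuse across the loop nest; the arithmetic itself is nothing beyond these dot products.
    
    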
     
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,149
    Likes Received:
    6,417
    I'll continue on it ;)
    I've actually been lazy, but I'm planning to open their driver pack to see if they have a model I can take a peek at and try to replicate.
     
    JoeJ likes this.
  13. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    568
    Likes Received:
    655
    Uhhh... those hacking skills :) I'd only do it with C++ on a still image :)
     
  14. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,149
    Likes Received:
    6,417
    oh, well, I'd probably start there as well, but I'm curious to see how they implemented their NN.
     
    pharma likes this.
  15. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    31
    Likes Received:
    11
    I take the article as stating that the Image Processing Algorithm is what DLSS currently is. They take images from developers, process them, and develop their NN algorithm to run on the GPU. Their AI Research Algorithm is a more advanced form that shows what is possible given a longer frame budget; however, it's not yet at the point where they can run it in real time on the tensor cores at the speed they want. Their goal is to optimize the AI Research algorithm to the point where they can maintain that detail and quality while fitting within the performance budget to run on the tensor cores.

    Where in the article does it say they aren't running Control's DLSS solution on the tensor cores, or that it isn't using machine learning? Machine learning means training the algorithm. This isn't a compute-based solution; it's still DLSS.
     
  16. techuse

    Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    97
    Likes Received:
    31
    "Deep Learning: The Future of Super Resolution
    Deep learning holds incredible promise for the next chapter of super resolution. Hand engineered algorithms are fast and can do a fair job of approximating AI. But they’re brittle and prone to failure in corner cases."

    They show several examples of their approximation breaking. They also end the article by stating that Turing's tensor cores are ready and waiting to be used.
     
  17. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    31
    Likes Received:
    11
    You take that as meaning they aren't being used in Control? lol no. They're saying those cores are there and capable of the next round of improvements coming to DLSS, which is a more optimized version of their AI research model. It's also a way of reassuring people that they won't need a next-gen GPU to handle these improvements when they come. Their AI model uses deep learning to train their image processing algorithm. The goal is to get the high quality of the AI model performant enough that it can run on the tensor cores.
     
  18. techuse

    Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    97
    Likes Received:
    31
    I disagree for several reasons.

    1. All previous DLSS implementations have been rather expensive. Suddenly this one has the same performance cost as basic upscaling.
    2. The various selectable resolutions to upscale from mean more combinations to train than in previous titles.
    3. It's now usable at all performance levels and provides an improvement regardless of how long other parts of the GPU take to process a frame.
    4. There would be no need for the article.
     
  19. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    31
    Likes Received:
    11
    A new algorithm could explain all of what you just wrote, though. I mean, they literally state that they need to get it to a performance budget small enough that it can run on the tensor cores. That implies that they are using them... and want to continue using them.
     
  20. techuse

    Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    97
    Likes Received:
    31
    They mention a performance budget, but they don't reference tensor cores at all in that context. It's certainly possible they are being used, but if I had to bet given the available info, I'd bet against it.
     