Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

Tags:
  1. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    Nice find, interesting tech.
     
  2. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,977
    Likes Received:
    3,054
    Location:
    Pennsylvania
    lol I like how they focus on "increasing the resolution" rather than mentioning that it's decreased initially.

    So when DLSS is disabled due to frame times being so low, does a game fall back to TAA?
     
  3. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    However bad the whole DLSS thing turns out, I think the idea behind the tech is nice: letting a supercomputer take over tasks like that could perhaps be applied to other features too? NV will probably keep refining the tech for supersampling, though.
     
  4. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,947
    Likes Received:
    2,504
    There we have it.
    Exactly what I imagined they were doing. No "on-the-fly" training to speak of, nor sophisticated use of other deferred rendering buffers. They probably only use motion vectors to improve temporal stability. Other than that, it's just AI hallucinating detail based on final color alone.
    While the more outrageous ideas here sure are interesting, they are the kind of thing each dev would need to consider on their own, for each game. Nvidia wanted a drop-in solution, and that's how they marketed this from the beginning. You can't make a quickly implementable plug-in that relies on too many specifics of how the frame is rendered.
     
    Silent_Buddha and BRiT like this.
  5. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,781
    Likes Received:
    6,065
    I like KimB's answer. And as per Milk's commentary, I've suspected that Nvidia's DLSS is exactly that. I expect we will get something better when DML is released and developers have control over how the ML pipeline affects the rendering pipeline, as opposed to just a package applied after the frame is complete.
     
    BRiT and milk like this.
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,606
    Likes Received:
    11,031
    Location:
    Under my bridge
    The best upscaling may well lie with hybrid ML and reconstruction techniques. That could simplify the ML's job to working out which bits to render and which to reconstruct, and allow more creative reconstruction. This is an area that needs to be explored by game devs uniquely, as offline imaging has zero need or ability to use rendering data. It contrasts with raytracing developments, where the tech is fundamentally the same everywhere (but of course hybrid rendering will be game-dev led).
     
    w0lfram, iroboto and milk like this.
  7. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,977
    Likes Received:
    3,054
    Location:
    Pennsylvania
    A critical review of DLSS in BFV



    tl;dw
    • DLSS provides a very blurry experience
    • Quality is far below simple render scaling + TAA at the same perf level
    • Far too restrictive an implementation, tied to DXR and resolution/GPU combinations
    • Brings nothing to game rendering, as opposed to raytracing
    • Goes as far as recommending it be removed from BFV entirely, as gamers will use it based on Nvidia's marketing and end up with a terrible experience compared to normal TAA
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,606
    Likes Received:
    11,031
    Location:
    Under my bridge
    That's a bit overboard. Options are always good, and players have the option to turn it off. If you don't want gamers being 'duped', disable it as the default and make gamers enable it in settings.

    The video itself talks about points raised with other RTX features. They didn't want to compare against FFXV's TAA solution because they felt it was poor versus other games, similar to Metro's non-RT lighting being poor versus other games. There is always going to be a conflict of interest when an IHV wanting to sell a new proprietary tech works with devs on an engine to implement that tech. We need data from independent parties whom nVidia isn't supporting directly.

    Another useful point is the time cost of DLSS. It's fixed per frame, and at higher framerates it costs more than rendering, which is why DLSS is disabled in BFV on the 2080 at 1080p - it's faster to render a frame than to upscale one. That suggests to me that DLSS is relatively slow - it's not a couple of ms.

    These images show DLSS really struggles...

    [Comparison screenshots: Image1.jpg, Image2.jpg]

    1685p gets you the same framerate at significantly higher quality.
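    To put numbers on the fixed-cost point, here's a crude, hypothetical frame-time model (the 4 ms upscale cost and the linear pixel-count scaling are assumptions for illustration, not measured DLSS figures):

```python
def fps_with_upscale(native_ms: float, pixel_fraction: float, post_ms: float) -> float:
    """Frame rate when rendering only a fraction of the native pixels plus a
    fixed-cost upscale pass. Crude model: render time scales linearly with
    pixel count; the post pass costs the same every frame."""
    return 1000.0 / (native_ms * pixel_fraction + post_ms)

native_1080_ms = 1000 / 140  # ~7.1 ms to render native 1080p (the ~140fps case above)

# Rendering 50% of the pixels, then paying a hypothetical 4 ms fixed pass:
print(fps_with_upscale(native_1080_ms, 0.5, 4.0))  # below the 140fps native path
```

    If the fixed pass costs more than the pixels it saves, upscaling loses outright, which matches DLSS being disabled exactly where native rendering is already fast.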
     
  9. Wall Street

    Joined:
    Feb 18, 2019
    Messages:
    1
    Likes Received:
    5
    Seeing the softer textures in the DLSS analysis cited in the post above made me wonder about the methodology of DLSS.

    It looks like they are running the full scene at the lower resolution with the corresponding low res mipmaps. A neat thing that Epic wrote about with TAA is that the TAA can actually run the mipmaps at the post-upscale target resolution (e.g. 4K mipmaps instead of 1885p), which is often why the TAA scenes look more crisp. Normally, running the scene with this negative LOD bias would result in texture shimmer because the lower initial render resolution would be undersampling the mipmaps creating aliasing artifacts. However, the pseudo-samples from the previous frame and the noise-reduction from the TAA filter let you 'get away with' using the higher detail mipmaps without shimmer.
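    As an aside, the bias for that trick is just log2 of the resolution ratio; a tiny generic sketch (not tied to any particular engine's API):

```python
import math

def mip_lod_bias(render_height: int, target_height: int) -> float:
    """Negative LOD bias that makes texture sampling pick mip levels as if
    the scene were rendered at the higher target resolution."""
    # log2 of the resolution ratio; negative whenever render < target
    return math.log2(render_height / target_height)

# e.g. a 1440p internal render presented at 2160p (4K):
print(mip_lod_bias(1440, 2160))  # ≈ -0.585
```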

    A recent statement from nVidia seems to indicate that they are not using any temporal information to compute DLSS, so they can't gain information via sample reuse and don't benefit from the temporal jitter/shimmer reduction, which should limit how much sharpening they would want to do.

    I suspect that under the hood DLSS works a lot like MLAA/FXAA, except that instead of just looking for pre-determined contrast edge shapes, the machine learning is used to find the characteristics of areas of the image which contain aliasing artifacts, then find the blend instructions which would lower the error values of those artifacts in the best way possible. I wonder how large a search area they are using around each pixel. The main issue with this method is that it is impossible to construct more information than exists in the original image (which is rendered at the lower resolution). This works OK at 4K, but I assume one of the reasons low resolution 1080p DLSS seems not to be widely available is that it would become readily apparent that the image contains too little information if it is constructed from an upsampled 720p-900p image.
     
    w0lfram, AlBran, Jozape and 2 others like this.
  10. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,329
    Likes Received:
    424
    Location:
    Finland
    Been thinking about the negative mipmap bias as well; part of the blurrier result may indeed come from teaching DLSS with the target-resolution mipmap range instead of what would be appropriate for 64xSSAA (and thus losing what would be the proper look of 64xSSAA for each pixel).
    DLSS seems to be very good at reducing dithering and such, so it could work decently with a somewhat grainy source image as well.
     
    AlBran and Jozape like this.
  11. troyan

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    120
    Likes Received:
    181
    But can we compare it when the solution is worse than traditional console features?
    FF15 and Metro are designed for the current console generation. Maybe 2013 tech isn't up to date anymore...
     
    vipa899 likes this.
  12. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    Makes me think they're using the wrong metric to train DLSS. If they're using absolute error from the ground-truth high resolution image, a blur filter is best, because if you try to reconstruct fine details they'll be wrong at the pixel level even if they perceptually have the same texture. Think what happens to per-pixel error if you take a high-frequency texture and shift it over by one pixel, for instance.

    What they need is a filter that "guesses" and tries to reconstruct texture, even if the result doesn't match the ground truth pixel for pixel. Maybe perform a Fourier transform or something and measure against that? It needs to be correct in spectral space more than in absolute pixel space.

    Oh, and the reconstruction needs to be consistent frame to frame or it'll shimmer like crazy. That's another metric the training has to keep track of.
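    keldor's shift example is easy to demonstrate; a quick numpy sketch (purely illustrative - it says nothing about DLSS's actual loss function):

```python
import numpy as np

rng = np.random.default_rng(0)
tex = rng.random((64, 64))         # a "texture" full of high-frequency detail
shifted = np.roll(tex, 1, axis=1)  # identical content, shifted one pixel

# Per-pixel error is huge even though the two images look the same
mae = np.abs(tex - shifted).mean()  # ~1/3 for uniform noise

# Magnitude spectra match exactly: a circular shift only changes the phase
spec_err = np.abs(np.abs(np.fft.fft2(tex)) - np.abs(np.fft.fft2(shifted))).max()

print(mae)       # large
print(spec_err)  # effectively zero (floating-point epsilon)
```

    A loss built on the magnitude spectrum (or any shift-tolerant perceptual metric) wouldn't punish the reconstruction for that one-pixel shift, which is keldor's point.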
     
    iroboto and vipa899 like this.
  13. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    That's why I thought DLSS, or some form of it, could be a thing for consoles. Doing reconstruction with compute, RT with compute, next-gen graphics, and maybe AI too. Everything in compute, and we'd have current-gen gfx with new features at at least 30fps.
    Things like reconstruction being handled on an external supercomputer is an idea, at least.
     
  14. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    The reconstruction theoretically should be able to be very good, but if you give it the wrong metric, it won't converge on the desired solution. Maybe some sort of adversarial network would work...
     
    pharma and iroboto like this.
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,606
    Likes Received:
    11,031
    Location:
    Under my bridge
    One can't determine that until it's proven itself.
    Reconstruction on a 1.8TF PS4 to Spider-Man/HZD quality takes a few ms. DLSS to inferior quality standards on 2080 takes longer than to render a 1080p frame, which is something like 140fps DXR off, so 7ms. Much longer with RTX enabled. There was no reason (evidence) to think DLSS is better than reconstruction; you should wait before jumping to conclusions about what are good ideas for consoles. ;)

    Well they won't be current gen graphics if they have raytraced lighting. However, this thread is about DLSS and determining how well it performs both in time and quality, and seeing how that changes over time if the algorithm/training advances. DLSS is a very interesting tech and a whole new paradigm being applied to realtime graphics. As I mentioned before, perhaps ML assisted reconstruction is the ideal? It'd also be nice to hear how PS4's ID buffer helps at all, if any. The issue at the moment seems to be the ML not creating any surface detail, but the source material isn't that blurry. Worst case, DLSS should look like 1080p upscaled. That with crisp edges (being recovered okay) would be a notable improvement.
     
  16. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,781
    Likes Received:
    6,065
    Perhaps. The interesting part is whether DICE and 4A Games are hands off at this point. I think it's likely, given that this is a driver-based solution. It will be on Nvidia to find a way to train the model so the results are better than TAA.
     
    vipa899 likes this.
  17. keldor

    Newcomer

    Joined:
    Dec 22, 2011
    Messages:
    74
    Likes Received:
    107
    DLSS is ML assisted reconstruction. That's exactly what it does under the hood. That's also why the Tensor Cores are so important to it. Evaluating neural networks in vanilla compute is too slow to hit the performance target they need for it to actually be useful.
     
    DavidGraham and vipa899 like this.
  18. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,781
    Likes Received:
    6,065
    I think he's referring to how it's being reconstructed. With DirectML, developers have full control over what the ML does in the pipeline and when. With Nvidia DLSS, the solution could be entirely black-boxed from developers, and we're seeing post-process reconstruction using ML.
     
    BRiT likes this.
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,606
    Likes Received:
    11,031
    Location:
    Under my bridge
    DLSS is ML based construction, not ML assisted. ML assisted would be combining ML alongside 'checkerboard' reconstruction, using both techniques to regenerate the missing data as appropriate. For example, off the top of my head, ML could combine with actual texture data, or maybe be applied across temporal samples where it might well do a better job than algorithms in extrapolating or predicting data.
     
  20. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,977
    Likes Received:
    3,054
    Location:
    Pennsylvania
    That's assuming DICE/4A has provided all the source screens already. Keep in mind that ultrawide resolutions aren't even an option yet since DLSS needs to be trained on every single resolution separately and ultrawide hasn't been done. Whether it's in a queue at Nvidia or DICE hasn't provided non-standard resolutions yet, who knows.

    And what happens when there's a significant change in rendering, enough to make the existing DLSS training obsolete, and they need to provide ALL the resolution sources to Nvidia again for re-training?
     
    #280 Malo, Feb 19, 2019
    Last edited: Feb 19, 2019
    BRiT and Shifty Geezer like this.
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.