Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,409
    Likes Received:
    10,776
    Location:
    Under my bridge
    "In addition to the DLSS capability described above, which is the standard DLSS mode, we provide a second mode, called DLSS 2X. In this case, DLSS input is rendered at the final target resolution and then combined by a larger DLSS network to produce an output image that approaches the level of the 64x super sample rendering – a result that would be impossible to achieve in real time by any traditional means. Figure 21 shows DLSS 2X mode in operation, providing image quality very close to the reference 64x super-sampled image."

    So it's nVidia's dumb naming. Apologies to Alex!
     
    #441 Shifty Geezer, Apr 4, 2019
    Last edited: Apr 4, 2019
    BRiT, pharma and iroboto like this.
  2. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,299
    Likes Received:
    388
    Location:
    Finland
Doubly dumb in the case of 2X, as the name implies super/oversampling twice, even though there is none.
     
  3. Dictator

    Newcomer

    Joined:
    Feb 11, 2011
    Messages:
    108
    Likes Received:
    248
    Hah, no problem of course. Believe me, I also think the name is incredibly silly, just as bad as the SS part of DLSS.

DSR is also painful when you think about it: what is DYNAMIC there? Huh?

I recommend anyone here who is interested check out the "Truly Next-Gen: Adding Deep Learning to Games & Graphics (Presented by NVIDIA)" presentation from GDC. It is available in video form on the website for free with email submission. They actually go over a lot of the integration problems, content problems, quality problems, etc. that we have all mentioned and talked about here. So they seem hyper-aware and, interestingly, very self-critical. Also some neat details about those 8x8 sampled images they feed into the network as comparisons. They are not just spatially higher resolution; they prefer that EVERY screen element be of higher fidelity: shadows, AO, motion blur, etc., all with crazy sample counts as well. One of the slides even implies that they would prefer games that use ray tracing to send in path-traced results of the screen. Also, the images are trained as a sequence of a few frames for the comparison, not just single shots (so motion is accounted for). I hope the slides become available for them all...
     
    turkey, pharma and AlBran like this.
  4. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,299
    Likes Received:
    388
    Location:
    Finland
It would certainly be interesting if DLSS learned to fix details like the failing edges of screen-space reflections.
     
  5. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    855
    Likes Received:
    258
  6. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,683
    Likes Received:
    445
    It also does AA, so NN-SRAA seems the most honest name. Or NN-TSRAA in case motion is taken into account.

PS. Not a fan of renaming multi-layer networks "deep"; marketing wank.
     
  7. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,833
    Likes Received:
    1,541
    Anthem RTX/DLSS PC Performance Review

    https://www.overclock3d.net/reviews/software/anthem_rtx_dlss_pc_performance_review/1
     
    BRiT likes this.
  8. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,299
    Likes Received:
    388
    Location:
    Finland
  9. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,844
    Likes Received:
    178
    Location:
    Seattle, WA
    I was thinking about this some more recently, and trying to see how they could use a neural network to directly estimate the final pixel color and still match the previous observations.

    The thing that convinced me that the neural net was just deciding how much sampling to perform was that sometimes it looked like there was no anti-aliasing at all (just rescaling). But this doesn't mesh with the statement that their training set uses 32x super-sampling: if they wanted to use DLSS to decide how much sampling to perform, they'd need to have a training set which contained multiple outputs at different sample densities.

    The idea that they'd use a neural network to directly estimate the final color still boggles my mind. It doesn't seem like it should be possible, but apparently it works. And it also makes sense if you realize that it probably doesn't work very well a lot of the time. That probably explains the no-AA outputs.

    My new bet is that what they're doing here is that DLSS is a two-stage process. First, it collects various inputs for each pixel (color, depth, normal, and motion) and feeds those inputs into the neural network. Then the neural network result is compared against the raw upscaled output using some heuristic for what constitutes a good anti-aliased image. If the result seems off, then it rejects the neural network result altogether and simply outputs the upscaled result.

    A simple way to accomplish this would be the following:
    1) Don't apply the neural network to single pixels. Apply it to groups of pixels, e.g. 8x8 groupings. Ideally, the neural network output will also be the scaled output.
    2) In parallel with the neural network, compute the rescaled raw output.
    3) Once you have the neural network result and the rescaled result, compare the two outputs to ensure that the neural network output "looks like" anti-aliasing. For example, the pixel color values should be between the color values for neighboring pixels in the raw output. If some pixels are outside some pre-determined bounds, throw the result out and just use the rescaled result.
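The rejection step speculated above could be sketched roughly as follows. This is a toy NumPy illustration of the idea for a single-channel image, not anything NVIDIA has published; the 3x3 neighborhood, the tolerance, and the 5% rejection threshold are all made-up parameters.

```python
import numpy as np

def reject_or_accept(nn_out, rescaled, tolerance=0.1):
    """Speculative sketch of the thresholding heuristic described above.

    For each pixel, check that the NN output lies within the min/max of the
    corresponding rescaled pixel's 3x3 neighborhood (plus a tolerance). If
    too many pixels fall outside those bounds, throw the NN result out and
    fall back to the plain rescaled image. Single-channel for simplicity.
    """
    h, w = rescaled.shape
    # Pad with edge values so every pixel has a full 3x3 neighborhood.
    padded = np.pad(rescaled, 1, mode='edge')
    # Nine shifted views give per-pixel neighborhood min/max without loops.
    shifts = [padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    lo = np.min(shifts, axis=0) - tolerance
    hi = np.max(shifts, axis=0) + tolerance
    outliers = np.logical_or(nn_out < lo, nn_out > hi)
    # Reject wholesale if more than 5% of pixels look wrong (made-up cutoff).
    if outliers.mean() > 0.05:
        return rescaled
    return nn_out
```

A real implementation would presumably run per-group rather than per-frame, and on the GPU, but the accept-or-fall-back structure is the point here.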

    My bet is that without that thresholding, DLSS would be incredibly ugly: because of how neural networks operate, you'd probably have a number of pixels in every single frame that were way off. Some edges would look great, while others might flash bright colors when the scene doesn't call for them. It probably still messes up sometimes, producing crap output that doesn't get rejected by this heuristic. This might explain some of the "muddy" appearance seen in some test images: if the output has much less detail than it should have, it'll be hard for any heuristic to detect that. There might be some information-based heuristics that could potentially detect loss of detail, but they won't be simple at all.

    The reason why the ideal case is using the neural network to do the rescaling is that when the neural network produces a good result, that result will be very close to the full-resolution, 32x super-sampled image, and will potentially be much better even than performing 32x supersampling then upscaling. I don't know if they do this, but if they're using a neural net to determine final pixel colors, this would definitely provide the best image quality. This would, however, limit the possible choices of resolution upscaling. They couldn't do arbitrary ratios, as the number of pixels processed at a time is limited. They could probably only support small-integer fraction ratios (e.g. 3/2 as opposed to 71/64). And the upscaling ratio would further be mostly fixed for each game because it would require a full retraining of the learning model for every upscaling ratio. Most games would probably only support a single upscaling ratio, and they might only support a single ratio for every game. If DLSS is implemented in this way, it would also conform with nVidia's claims that the final result is close to the 32x full-resolution supersampled image.
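The fixed-group-size argument can be made concrete with a tiny sketch: for an NxN input group and an upscale ratio p/q, the output group is N*p/q pixels on a side, which is only a whole number when q divides N*p. All numbers here are illustrative, not from NVIDIA's documentation.

```python
from fractions import Fraction

def output_group_size(group, ratio):
    """For a fixed NxN input pixel group and upscale ratio p/q, return the
    output group side length N*p/q if it is integral, else None.
    Hypothetical helper for illustration only."""
    out = group * ratio
    return int(out) if out.denominator == 1 else None

# 8x8 input groups with a 3/2 ratio (e.g. 1440p -> 2160p) give 12x12 output groups.
print(output_group_size(8, Fraction(3, 2)))    # 12
# An awkward ratio like 71/64 doesn't map 8x8 groups to whole output pixels.
print(output_group_size(8, Fraction(71, 64)))  # None
```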

    Note that choice of resolution doesn't necessarily require full retraining of the neural network. A group of 16x16 pixels at one resolution will be identical to a group of 16x16 pixels zoomed out but at a higher resolution. It makes sense to train the model at different resolutions, but it should be able to cope okay even if the play resolution hasn't ever been used for training.

    Finally, my earlier statement that the reason for the upscaling is to hide situations where the neural network fails is probably still accurate: upscaling limits the worst-case aliasing when the neural net outputs garbled nonsense.
     
  10. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,775
    Likes Received:
    2,200
Just noticed: Battlefield: Bad Company 2 allows you to set a resolution higher than the native res of your monitor,
e.g. 3000x2000 on a 1680x1050 monitor, and the image fits completely on the screen, so there appears to be downscaling going on.
(One problem is that the game gets confused about where the mouse pointer is relative to the options buttons.)
     
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,678
    Likes Received:
    5,980
    White paper on approximately how it should work.
    https://arxiv.org/pdf/1603.06078.pdf
     
    pharma likes this.
  12. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,155
    Likes Received:
    8,306
    Location:
    Cleveland
    milk, Kej, Lightman and 4 others like this.
  13. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,844
    Likes Received:
    178
    Location:
    Seattle, WA
    Probably. But with some substantial differences to make it work at speed. In particular they note:
It's not clear whether the performance drawbacks are theoretical in nature or could be reasonably overcome via specialized hardware.

    If my quick reading of the paper is accurate, it also sounds like their neural networks used the image alone as input. This is both better and worse than what DLSS would be capable of. Better because the entire framebuffer is the input to the learning algorithm (whereas DLSS likely must operate on a limited number of pixels at a time). Worse because DLSS is capable of using more than just color data to inform the learning model (information which they do indeed use according to their public documentation).
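The "more than just color data" point amounts to stacking extra per-pixel buffers as input channels. Here is a speculative sketch of assembling such an input; the buffer set and channel counts (color 3 + depth 1 + normal 3 + motion 2 = 9) are guesses, not NVIDIA's documented layout.

```python
import numpy as np

def build_network_input(color, depth, normal, motion):
    """Concatenate per-pixel G-buffer-style inputs into one channel stack
    of shape (H, W, 9). Hypothetical layout for illustration."""
    depth = depth[..., np.newaxis]  # (H, W) -> (H, W, 1)
    return np.concatenate([color, depth, normal, motion], axis=-1)

h, w = 4, 4
x = build_network_input(np.zeros((h, w, 3)), np.zeros((h, w)),
                        np.zeros((h, w, 3)), np.zeros((h, w, 2)))
print(x.shape)  # (4, 4, 9)
```

The paper's networks, by contrast, would see only the 3 color channels, which is the trade-off described above.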
     
  14. snarfbot

    Regular Newcomer

    Joined:
    Apr 23, 2007
    Messages:
    509
    Likes Received:
    177
    pharma likes this.
  15. Panino Manino

    Joined:
    Nov 27, 2017
    Messages:
    6
    Likes Received:
    6
    It's like over-processed photos from Samsung smartphones.
     
    Lightman and Kyyla like this.
  16. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,833
    Likes Received:
    1,541
The rock on the lower-left side of the cart seems slightly blurred in the 0.7x sharpened shot. Not evident in the native or DLSS shots.

    Edit: Will be interesting to see how much CAS and DLSS algorithms can be tweaked going forward.
     
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,409
    Likes Received:
    10,776
    Location:
    Under my bridge
    There's some motion blur/distortion thing happening in that bottom corner. Not a like-for-like comparison there, though the rest is good.
     
    CaptainGinger and BRiT like this.
  18. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,884
    Likes Received:
    1,753
    In-game motion blur. The 3 stitched video captures are not perfectly synced (the train bumps up/down which causes this):
    Time stamped link:

No blur: (screenshot)
     
    Lightman, CaptainGinger and BRiT like this.
  19. snarfbot

    Regular Newcomer

    Joined:
    Apr 23, 2007
    Messages:
    509
    Likes Received:
    177
Lol, yeah, kinda. I think in motion, though, it would be less noticeable. It looks almost exactly like xBRZ upscaling from 2D games, emulators, GZDoom and such.
     
  20. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,833
    Likes Received:
    1,541
    Monster Hunter DLSS implementation on July 17.
    July 13, 2019
    https://www.techspot.com/news/80937-nvidia-claims-50-percent-framerate-uplift-monster-hunter.html
     