Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,789
    Likes Received:
    2,596
    DLSS is coming for Metro Exodus and Shadow Of Tomb Raider
    https://wccftech.com/nvidia-dlss-out-now-3dmark-port-royal/

    DoF isn't broken in the current implementation of FF15 DLSS.
     
    #161 DavidGraham, Feb 4, 2019
    Last edited: Feb 4, 2019
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,789
    Likes Received:
    2,596
    NVIDIA pushed TXAA heavily; it was featured in 20 games at the very least (23 by my count), but developers found TAA easier to integrate while costing a lot less performance, so they adopted it widely.

    Anyway, here are a bunch of comparisons between TAA and DLSS. TAA adds a lot more blur to the scene, while DLSS appears sharper with more detail.

    Interactive screenshots:
    https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-3dmark-port-royal-benchmark/

    Video made by 3DMark:


    Also some performance comparisons:

    [image: TAA vs DLSS performance comparison]
     
    pharma likes this.
  3. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    The "1440p" DLSS is rendered at 1080p native
     
    CaptainGinger and Ike Turner like this.
  4. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,343
    Likes Received:
    443
    Location:
    Finland
    Yup, sadly no DLSS 2x.
    I really dislike the DLSS naming; usually the number in an AA mode's name indicates something about the subsample count.
     
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,735
    Likes Received:
    11,210
    Location:
    Under my bridge
    Yeah. It's labelled with the upscaled resolution, so 'visually the same as 1440p', but of course the higher framerates come from rendering fewer pixels. If the output is closer to 1440p than 1080p, that may be fair. People just need to learn to interpret it, as ever with marketing values.
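    The arithmetic behind "higher framerates come from rendering fewer pixels" is easy to check (illustrative numbers only, assuming the commonly cited 1080p internal resolution for the "1440p" DLSS mode):

    ```python
    # Pixel counts: 1080p internal render vs native 1440p.
    def pixels(width, height):
        return width * height

    native_1440p = pixels(2560, 1440)   # 3,686,400 pixels
    dlss_input   = pixels(1920, 1080)   # 2,073,600 pixels

    # The "1440p" DLSS mode shades only ~56% of the pixels of a native frame,
    # which is where the headroom for higher framerates comes from.
    ratio = dlss_input / native_1440p
    print(f"{ratio:.2%}")  # prints "56.25%"
    ```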
     
  6. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    I've been sort of away from 3D tech stuff for a while, so I just ran across this. The idea is fascinating! I have many detailed thoughts on this that I want to get back to later (work now!), but wanted to post to remind myself to come back to this soon.

    Short version: I don't think it's as simple as rendering at a lower resolution and intelligently upscaling. I think it's dynamically selecting the resolution at which to render different components of the scene, using a contrast-trained learning algorithm. The lowest resolution is specifically chosen not to be a power-of-two fraction of the view resolution, to limit the worst-case aliasing (learning algorithms are notorious for bad tail behavior).

    Anyway, hopefully I'll remember to come back to this this evening, because I think I know how to test my idea.
     
    pharma and DavidGraham like this.
  7. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    157
    Likes Received:
    33
    I have to ask once again: who buys an RTX 2080 Ti to game at 1440p? The tests seem flawed.

    Logically, most people who game at 1440p would not be buying a $1,200 GPU. (Those "mid-level" gamers center mostly around $350–$750 for high-performance gaming, i.e. 90 Hz–144 Hz+, at 1440p.) I understand the point of using the 2080 Ti for all the comparisons. But DLSS uses GPU resources and time, so an RTX 2080 or 2070 would be a better platform on which to "compare" these results. It would be more typical of the 1440p use case.
     
  8. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,789
    Likes Received:
    2,596
    I did. I replaced my 2080 with a 2080 Ti because some games are just too heavy to render even at 1440p. Actually, few people game at 4K; 1440p with a high refresh rate is more common.
     
  9. Flappy Pannus

    Newcomer

    Joined:
    Jul 4, 2016
    Messages:
    14
    Likes Received:
    4
    People who want greater than 60 fps for one.
    As their charts show, the 2060 gets a significant boost as well.

    The only problem I have with this, of course, is that we're still using fixed demos for this stuff ~5+ months after launch, for a feature that was supposedly relatively easy for devs to support, at least compared to ray tracing. Really, this is getting pretty ridiculous.
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    Okay, here's what I'm pretty sure is happening. Clues:
    1) nVidia says they're using machine learning and super sampling.
    2) Some screenshots demonstrate upscaling, e.g. 1080p source for a 1440p display.
    3) Other screenshots demonstrate better detail even than same-resolution with no AA.

    What I suspect is going on here is that, in the example of a 1440p output resolution, they use a 1080p minimum resolution. However, each individual pixel in the 1080p framebuffer might actually include data for a bunch of different sub-pixels. They could be entirely flexible with this: each pixel would be saved to a framebuffer that stores up to a maximum number of samples (say, 8). They could reduce the storage size using some compression too. And they don't need to resolve a flat 1080p image before producing the final 1440p output: they can take the image with all the extra samples for some pixels, so that you get the full benefit of super-sampled 1440p where you need it.

    They would also output a set of numbers representing the rasterization inputs. These could be things like identifiers for both input data and shaders applied (or even just the number of them), colors of lights applied, distance to the location, etc. They don't have to output these values for every single pixel, just a representative sample of them. There would be a tradeoff between storing more data per pixel and storing the data for more pixels. This set of inputs is important, as it's necessary to train the learning model.

    The final step is rescaling the image to 1440p. During this step, the rescaler has access to the colors of neighboring pixels, and is able to create an estimate of how much aliasing was found in the final image. A very simple score for aliasing would be color contrast. But they might do something a little different to ensure that more detail means a higher score.
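    The "very simple score for aliasing would be color contrast" idea can be sketched directly. This is a hypothetical metric for illustration, not anything NVIDIA has documented: score each pixel by its largest luminance difference to a horizontal or vertical neighbour, so hard edges (aliasing candidates) score high and flat regions score zero.

    ```python
    import numpy as np

    def contrast_score(img):
        """Max absolute difference to each pixel's horizontal/vertical
        neighbours (no wrap-around) -- a crude per-pixel aliasing estimate."""
        h = np.abs(np.diff(img, axis=1))   # horizontal neighbour differences
        v = np.abs(np.diff(img, axis=0))   # vertical neighbour differences
        score = np.zeros_like(img)
        score[:, :-1] = np.maximum(score[:, :-1], h)   # diff to right neighbour
        score[:, 1:]  = np.maximum(score[:, 1:], h)    # diff to left neighbour
        score[:-1, :] = np.maximum(score[:-1, :], v)   # diff to lower neighbour
        score[1:, :]  = np.maximum(score[1:, :], v)    # diff to upper neighbour
        return score

    # A hard vertical edge scores 1.0 on both sides; flat regions score 0.
    img = np.zeros((4, 8))
    img[:, 4:] = 1.0
    s = contrast_score(img)
    ```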

    The two data outputs from this process are combined each frame to update the learning model: the set of inputs and the per-pixel score are used to update the learning model. The learning model then takes the set of inputs to estimate how many sub-samples should be used for each pixel. This calculation is probably going to be the biggest limitation on the number of inputs they actually use. The actual calculation performed here is basically a matrix multiplication, which these cards are good at. But too many inputs and it will overwhelm the other rasterization calculations.
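    The "basically a matrix multiplication" step above can be sketched as a single linear layer mapping each pixel's rasterization inputs to a sub-sample count. Everything here (the feature count, the linear model, the 1–8 clamp) is illustrative, not NVIDIA's actual network:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.random((8, 6))   # 8 pixels x 6 per-pixel rasterization inputs
    weights  = rng.random((6, 1))   # learned model: one linear layer

    # One matmul per frame region -- the operation these cards are good at --
    # then round and clamp to the 1..8 sub-sample budget per pixel.
    raw = features @ weights
    samples = np.clip(np.rint(raw * 8).astype(int), 1, 8)
    ```

    The cost of this matmul grows with the number of inputs, which is the tradeoff the post describes: more inputs, better estimates, but more time stolen from the rest of rasterization.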

    Finally, why 1080p? Why not have the minimum resolution be 720p? Or keep it at 1440p for quality?

    Performance is surely part of the answer. But I think the bigger answer is simply that learning models always have problems with tail effects. Learning models make ridiculous errors, and it seems to be pretty much impossible to avoid them entirely. Performance suggests the minimum should be a lower resolution. The tail error issue with learning algorithms suggests it should not be 1440p, because some areas of the scene are going to end up with no anti-aliasing. And making the resolution too low will have the same issue only worse. So going down by a half-resolution step to 1080p is perfect: performance should be good, and you get a little bit of automatic anti-aliasing no matter how badly the ML algorithm fucks up.

    Finally, the nature of this kind of algorithm is such that it would probably benefit greatly from pre-baked learning models for each game. Which might explain why game support is important.
     
    w0lfram, DavidGraham and Ike Turner like this.
  11. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,995
    Likes Received:
    2,563
    As I understand it, this does just boil down to baked, per-game trained data. They train it with low-res versions and high-res supersampled versions of the same frames, and that's it. I guess they use Z and velocity buffers as well as color.
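    A toy version of those training pairs, purely to make the setup concrete (sizes, the box filter, and the nearest-neighbour baseline are all illustrative assumptions, not NVIDIA's pipeline): take a high-res "ground truth" frame, box-downsample it to get the low-res input, and the network is trained so that its upscale of the input matches the target.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    target = rng.random((8, 8))                              # high-res supersampled frame
    low_res = target.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 2x2 box downsample

    # The model is trained so upscale(low_res) ~= target; nearest-neighbour
    # upscaling gives the trivial baseline such a model has to beat.
    baseline = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    train_error = np.abs(baseline - target).mean()
    ```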
     
  12. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    Interesting. I hope this feature lands in PS5 so 4K doesn't become too taxing again.
     
  13. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    629
    Likes Received:
    493
    I can't imagine how they would use any buffers like that since the supercomputer never has access to anything beyond the source and "ideal output" images to create the algorithm.
     
    vipa899 likes this.
  14. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    Yep.
    All DLSS screenshots demonstrate upscaling, unless it's DLSS X2 (which is only available in a couple of specific demos that aren't in open circulation).
    This can simply never be true; at best (DLSS X2) it can be equal, and the "SSAA" makes things look smoother, but that's it. If it's DLSS and not X2, it's never even equal.
     
  15. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    If it were as simple as that, then they wouldn't have the issue where the quality in the first few frames of a scene is lower than later on.

    Pre-generated models can probably help some, but they aren't likely to be the whole thing. Also, the game-specific stuff may be more about selecting which variables are good for the learning algorithm, rather than training an actual model.
     
    vipa899 likes this.
  16. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    For areas of high detail, supersampling increases the amount of visible detail pretty substantially. You can see this effect in play in the last screenshot on Tom's Hardware's DLSS article from last October:
    https://www.tomshardware.com/reviews/dlss-upscaling-nvidia-rtx,5870.html

    The foliage in the background looks dramatically clearer in the DLSS image than in either of the other two (no AA or TAA).
     
    vipa899 likes this.
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,735
    Likes Received:
    11,210
    Location:
    Under my bridge
    Is it better/more efficient than the existing, very effective reconstruction techniques used on consoles? One touted plus for DLSS was that it seemed to be 'drop in' and work on any game, but that no longer seems to be the case, AFAICS.
     
    snarfbot and milk like this.
  18. vipa899

    Regular Newcomer

    Joined:
    Mar 31, 2017
    Messages:
    922
    Likes Received:
    354
    Location:
    Sweden
    I certainly hope so, because it's not near native 4K on consoles with their "effective reconstruction". Like TechSpot mentions, on PC that tech won't suffice.
    Since 4K is harder to achieve on consoles, things like DLSS are more needed there.
     
  19. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,184
    Likes Received:
    1,841
    Location:
    Finland
    I'm pretty sure that in the case of that FFXV shot it's not about SSAA bringing more detail, it's about DLSS breaking DoF, which happens elsewhere in the FFXV demo too.
    In the very same shot you can easily see, from the license plate for example, how much detail 1440p DLSS "4K" actually loses.
     
    w0lfram likes this.
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,789
    Likes Received:
    2,596
    3DMark released a new video using a free camera system to simulate a game camera and expose DLSS to new scenes.



    Wrong. See Port Royal benchmark.
     
    Cuthalu and vipa899 like this.