Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    126
    Likes Received:
    417
    It looks bad, low res. I don't think the algorithm is actually applied to the image at all.

    Less than 10 posts, can't post links.
     
    BRiT likes this.
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    You can embed media though?
     
  3. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Can you put links in code-tags? I seem to remember that working somewhere. ;)
     
  4. Radolov

    Newcomer

    Joined:
    Jul 30, 2019
    Messages:
    12
    Likes Received:
    13
    First of all, your 10th post is talking about having less than 10 posts. I just felt like pointing that out. :D

    Second of all, well done! It does indeed use the tensor cores, but for short bursts (I think I can count 5 or 6?). But there’s a part before the first tensor burst that only uses a small amount of FP16 from ~10.95ms-~11.15ms. I think it looks a bit like the part from ~11.27ms-~11.88ms in the old picture. So I believe some parts of the image processing approach are still in play with fewer iterations, and they use AI to fill in the gaps. As stated earlier:

    “With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.”

    It would also explain why the issues with the embers look so similar to those from the image processing approach. I’m not sure how they would appear in a purely AI-based approach, but then again I’m no expert.

    When it comes to image quality on the frame you have on the right side of the image, the only thing I think looks better is the letter ahead of you. In the original version it looks more like an X, while in the new one it looks more like a Y.

    As for why it may look low res, I got a message from someone that attended a briefing with NVIDIA:

    What we got to hear from NVIDIA at CES is that it’s not the same DLSS in Wolfenstein as in, for example BF V, but they have updated it quite a lot. However, the developers themselves have to choose to actively update DLSS in the game to gain access to these improvements, and it’s not all developers who are so eager on doing it for a game that they are already “finished with” (according to NVIDIA)

    So if they only needed to replace a file that they received from NVIDIA, they would update it in 5 seconds and everything would be fine. There’s probably more work than that involved for them to not bother to do it.

    Dorf's third post contains a full link... so I guess the warning is just there to scare away spam bots? Not that I've heard about polite spam bots that read warning messages. ¯\_(ツ)_/¯
     
    dorf, DavidGraham and Malo like this.
  5. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    126
    Likes Received:
    417
    Metro Exodus is getting a new DLC next week. If they don't update DLSS to the current version, it will be quite telling of how difficult it is to do so imo... and disappointing.

    Oh yea whoops, had completely forgotten about that. Just followed your example with the imgur linking. :mrgreen:

    In case people missed it, here's the video @Radolov mentioned earlier ( https://forum.beyond3d.com/threads/...g-discussion-spawn.60896/page-34#post-2102182 ):
     
    pharma likes this.
  6. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    So it seems like it might be possible to keep a library of nvngx_dlss.dll files per game to do comparisons as the game is updated.
     
  7. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    126
    Likes Received:
    417
    Possibly, but the older versions would quite likely stop working as intended after updates. If they'd work at all.

    For example, Youngblood crashes at launch with Control's nvngx_dlss.dll (version 1.3.8.0). Also, Youngblood's nvngx_dlss.dll (version 2.0.0.0) doesn't work with Star Wars (version 1.0.0.0): it runs, but produces no image.

    So if Metro (DLSS version 1.0.x) got updated to 2.0.0.0, it might just crash upon launch with the old .dll file.

    I've been able to run Youngblood's .dll in Control and Control's in Star Wars (btw, in this latter scenario peak tensor usage drops from ~70% to ~8%, as expected, and image quality is again poor).
     
    xz321zx likes this.
  8. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680

    I don't necessarily mean to run across different games. But say Metro comes out with dlss and then six months later they update it. It looks like it could be possible to swap the file to compare the two versions in Metro. Just as an example.
     
  9. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Reminds me of the old days of keeping different versions of Glide2x.dll
     
  10. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,894
    Likes Received:
    4,549
    What Is AI Upscaling?
    February 4, 2020
    https://blogs.nvidia.com/blog/2020/02/03/what-is-ai-upscaling/
     
    PSman1700 likes this.
  11. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    2,106
    Likes Received:
    380
    So this sounds like a service for devs where they submit their source and NV returns a DLSS dll for that game. Since they don't train for every available resolution, I would imagine they charge per resolution? Will they also charge for updates/patches? I would imagine so.
     
  12. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,894
    Likes Received:
    4,549
    DLSS training is free. As for patches/updates, I would imagine it's probably treated like any other game patch.
     
    PSman1700 likes this.
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    nVidia don't charge at this point as it's a USP for selling hardware. If they did charge, the concept would probably die completely - no dev anywhere is going to pay to have an optional AA mode enabled on a tiny subset of their market.

    Also, their PR on that page is classic marketing horse-shit.

    No-one does this. Upscaling is via more complex algorithms giving far better results than pixel duplication or bilinear filtering. I wish companies believed in their products enough to not have to lie about the alternatives. Compare a decent quality algorithmic upscale to your AI upscale and see if it really is better.

    The interesting bit in this blog though is that this is running on Shield TV, so not a Tensor-core device. I thought DLSS on RTX cards was explained as enabled by the Tensor cores (?). This shows machine-learnt solutions don't need anything but compute to implement and can be used on other cards if it proves a viable large-scale solution (which also makes you wonder what Tensor is actually enabling in their consumer cards?).
     
    milk, Miniature Kaiju and BRiT like this.
  14. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
    Anything like this can be run on compute, but it's seemingly faster on Tensor cores as long as the round-trip cost of using the Tensors is less than the overall cost via compute. The DLSS version in Control (?) was on compute, but that newer version of their training algorithm was later used on Tensors in more recent titles. Tough luck for older versions though: that requires the devs going back to Nvidia, training all over again for each resolution, and releasing a patch.
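Malo's break-even condition can be written as a back-of-envelope check. A sketch, where every throughput and overhead number is a made-up illustration rather than a measured figure:

```python
def prefer_tensor(workload_tflop: float,
                  compute_tflops: float,
                  tensor_tflops: float,
                  transfer_overhead_ms: float) -> bool:
    """Return True if routing the ML pass through tensor cores wins,
    i.e. tensor-core time plus the fixed round-trip overhead beats
    doing the same math on the general-purpose compute units.
    All inputs are illustrative assumptions, not measured numbers."""
    compute_ms = workload_tflop / compute_tflops * 1000.0
    tensor_ms = workload_tflop / tensor_tflops * 1000.0 + transfer_overhead_ms
    return tensor_ms < compute_ms

# e.g. a 0.2 TFLOP upscaling pass: 14 TFLOPS of shader compute vs
# 110 TFLOPS of FP16 tensor throughput with 1 ms of round-trip cost
print(prefer_tensor(0.2, 14.0, 110.0, 1.0))  # -> True
```

The same check flips back to compute for tiny workloads, where the fixed round-trip overhead dominates.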
     
  15. iroboto

    iroboto Daft Funk
    Legend Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    14,834
    Likes Received:
    18,634
    Location:
    The North
    ML can run on CPUs and we do this a lot.
    ML can run in GPUs and we do this a lot.
    Tensor Cores mainly support deep learning algorithms, and we use this a lot for image processing.

    tensor cores are an order of magnitude faster than GPUs, which are an order of magnitude faster than CPUs.

    upscaling video is much easier than upscaling a real time game.

    Just to note.

    One you can buffer and the other you cannot.
     
  16. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    The blog post in question talks about video upscaling done in a typical video player.
    Again, this comes back to the blog being specifically about upscaling videos, not upscaling games.
     
    pharma and PSman1700 like this.
  17. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    7,610
    Likes Received:
    825
    Has anyone taken a look at the Atomic Heart demo to see if you can hack it to enable TAAU?
     
  18. PSman1700

    Legend

    Joined:
    Mar 22, 2019
    Messages:
    7,119
    Likes Received:
    3,093
    Yes, DLSS does a great job offloading the CPU/GPU. I can imagine something like the tensor cores doing great in next-gen consoles, with 4K being the standard as opposed to 1080p. 4K is and will remain rather taxing.
     
  19. iroboto

    iroboto Daft Funk
    Legend Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    14,834
    Likes Received:
    18,634
    Location:
    The North
    It could also be significantly slower as well.

    AI is just an algorithm, and there are different things it is capable of. The challenge of AI isn't quality; quality can for sure outmatch any current upscaling algorithm.

    These are entirely AI generated photos
    https://www.thispersondoesnotexist.com

    The challenge is real time while keeping that high level of fidelity. The more layers the neural network has, the longer it takes to complete its job. More layers will definitely lead to better quality. But with real time graphics, you're under a serious constraint to bring the evaluation time down to a reasonable level so that games can still operate in the 16ms range.
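The budget argument can be made concrete with trivial arithmetic — a sketch assuming a 60 fps (~16.7 ms) frame and an invented share of the frame reserved for rendering itself:

```python
def inference_budget_ok(layer_times_ms, frame_budget_ms=16.7, reserve_frac=0.8):
    """Check whether a network's summed per-layer evaluation time fits
    in whatever slice of the frame is left after rendering.
    reserve_frac is the share of the frame assumed to go to rendering;
    all numbers here are illustrative, not measured."""
    inference_ms = sum(layer_times_ms)
    return inference_ms <= frame_budget_ms * (1.0 - reserve_frac)

# e.g. a 10-layer network at ~0.3 ms per layer: ~3 ms of inference
# against a ~3.3 ms slice of the frame -> just barely fits
print(inference_budget_ok([0.3] * 10))  # -> True
```

Add five more layers at the same per-layer cost and the same check fails, which is exactly the depth-versus-latency tradeoff described above.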
     
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    The context is about playing content on a TV box. I assume everyone feeds 1080p video to their 4K TV and lets it upscale with its sophisticated upscaler.

    Fair point. Is there any data on core utilisation for DLSS? Can we see how computationally demanding it is through game profiling or something?
     
