Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    2,078
    Likes Received:
    1,519
    Location:
    France
That's the part I don't get. How can DLSS work correctly without a learning phase?
     
  2. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,364
    Likes Received:
    284
    Location:
    San Francisco
    The short answer is that neural networks can be pre-trained.
     
    pharma and PSman1700 like this.
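    To make nAo's answer concrete, here is a minimal sketch (hypothetical code, not Nvidia's) of what "pre-trained" means in practice: training happens offline on the vendor's side, only the frozen weights ship, and runtime is pure inference, so there is no learning phase on the player's machine.

    ```cpp
    // Minimal sketch, assuming a toy single-layer model. The key point:
    // the weights were produced by an offline training phase and are
    // strictly read-only at runtime.
    #include <cstddef>
    #include <vector>

    struct FrozenModel {
        std::vector<float> weights;  // frozen at training time, shipped as data

        // Toy "inference": out[i] = sum_j weights[i*n + j] * in[j].
        // A real upscaler is a deep network, but the principle is the same:
        // nothing here ever modifies the weights.
        void infer(const std::vector<float>& in, std::vector<float>& out) const {
            const std::size_t n = in.size();
            for (std::size_t i = 0; i < out.size(); ++i) {
                float acc = 0.f;
                for (std::size_t j = 0; j < n; ++j)
                    acc += weights[i * n + j] * in[j];
                out[i] = acc;
            }
        }
    };
    ```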
  3. Putas

    Regular Newcomer

    Joined:
    Nov 7, 2004
    Messages:
    536
    Likes Received:
    177
Doesn't the pre-training create a danger of DLSS working well only in typical scenarios? What if somebody starts to use, for example, the camera in a very different way? Could that produce more DLSS artifacts?
     
  4. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,604
    Likes Received:
    851
    Location:
    Finland
It is now a temporal supersampling AA + upscaling method in which ML is used in the phase where samples are accepted or rejected from the history frames (sketched below).
     
    BRiT and VGA like this.
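    A hedged sketch of the structure jlippo describes (the names and the toy confidence function are illustrative stand-ins, not Nvidia's actual network): classic temporal accumulation, except the per-pixel accept/reject decision for reprojected history comes from a trained model instead of hand-tuned heuristics.

    ```cpp
    #include <algorithm>
    #include <cmath>

    struct Pixel { float r, g, b; };

    // Stand-in for the trained network's output; in the real pipeline this
    // value is learned, not hand-coded. 0 = reject history (disocclusion,
    // ghosting risk), 1 = fully trust history.
    float predictHistoryConfidence(const Pixel& cur, const Pixel& hist,
                                   float motionLength) {
        float diff = std::fabs(cur.r - hist.r) + std::fabs(cur.g - hist.g) +
                     std::fabs(cur.b - hist.b);
        return std::max(0.0f, 1.0f - diff - motionLength * 0.1f);
    }

    // Temporal resolve where the learned confidence replaces hand-written
    // accept/reject heuristics for the reprojected history sample.
    Pixel resolve(const Pixel& current, const Pixel& historyReprojected,
                  float motionLength) {
        float w = std::clamp(
            predictHistoryConfidence(current, historyReprojected, motionLength),
            0.0f, 1.0f);
        return { current.r * (1 - w) + historyReprojected.r * w,
                 current.g * (1 - w) + historyReprojected.g * w,
                 current.b * (1 - w) + historyReprojected.b * w };
    }
    ```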
  5. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,293
    Likes Received:
    473
    Location:
    Romania
I think the discussion had gone a bit off track by this point, since we've introduced terms like "pre-training" which don't exist in (general) machine learning, to my knowledge at least.

    The whole point of training is to provide sufficient coverage of inputs so that the neural network can generalise and handle the whole input space sufficiently accurately.
    Neural networks are very, very good at handling rotations, translations and similar geometric transformations that you seem to be worried about.
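    One common way that input coverage is achieved in practice is augmentation. A toy sketch (my example, not any particular vendor's training pipeline): each training image is pushed through random rotations and translations, so an "unusual camera" at runtime is still inside the distribution the network was trained on.

    ```cpp
    #include <cmath>
    #include <vector>

    // Nearest-neighbor rotation of a w*h single-channel image about its
    // center; applying this (plus translations) with random angles to every
    // training sample widens the input coverage the network sees.
    std::vector<float> rotate(const std::vector<float>& img, int w, int h,
                              float angleRad) {
        std::vector<float> out(img.size(), 0.f);
        const float c = std::cos(angleRad), s = std::sin(angleRad);
        const float cx = w * 0.5f, cy = h * 0.5f;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // Inverse-map each output pixel back into the source image.
                int sx = static_cast<int>( c * (x - cx) + s * (y - cy) + cx);
                int sy = static_cast<int>(-s * (x - cx) + c * (y - cy) + cy);
                if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                    out[y * w + x] = img[sy * w + sx];
            }
        return out;
    }
    ```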
     
  6. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    4,218
    Likes Received:
    3,419
    pre-training" machine learning - Google Scholar
     
    PSman1700 likes this.
  7. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,231
    Likes Received:
    1,652
    Location:
    msk.ru/spb.ru
One should consider that if one generic algorithmic approach like UE4's TAA is good enough for everything, then a similar ML-inference-based approach would be just as good for everything, if not better. That is basically what we see when comparing UE4 TAA and DLSS (the heuristic side is sketched below).
    And both have edge cases which require tweaking to produce optimal results.
     
    PSman1700 likes this.
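    For contrast with the learned approach sketched earlier, here is roughly what a generic hand-written TAA resolve does per pixel (greatly simplified, not Epic's actual code). The neighborhood clamp is exactly the kind of heuristic an ML model can be trained to replace.

    ```cpp
    #include <algorithm>

    struct Color { float r, g, b; };

    // nMin/nMax are the color min/max of the current pixel's 3x3 neighborhood.
    Color taaResolve(const Color& current, Color history,
                     const Color& nMin, const Color& nMax) {
        // Clamp reprojected history into the current neighborhood's range to
        // reject stale samples: the classic anti-ghosting heuristic.
        history.r = std::clamp(history.r, nMin.r, nMax.r);
        history.g = std::clamp(history.g, nMin.g, nMax.g);
        history.b = std::clamp(history.b, nMin.b, nMax.b);

        const float alpha = 0.1f;  // typical exponential blend weight
        return { current.r * alpha + history.r * (1 - alpha),
                 current.g * alpha + history.g * (1 - alpha),
                 current.b * alpha + history.b * (1 - alpha) };
    }
    ```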
  8. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,231
    Likes Received:
    1,652
    Location:
    msk.ru/spb.ru
  9. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    11,212
    Likes Received:
    1,791
    Location:
    New York
    I imagine AMD is concerned about this should developers start basing their performance targets on the availability of upscaling. It’s already happening with checkerboarding on consoles.
     
  10. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,526
    Likes Received:
    4,173
AMD is quite concerned already: they belittled DLSS in the beginning, citing weak game support and weak potential, and now, 3 or 4 years later, they are scrambling to make an alternative.
     
    PSman1700 likes this.
  11. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    317
    Likes Received:
    355
There's no concern, since DLSS will never proliferate widely throughout the industry. Developers are largely unable to maintain this solution because very few people have expertise in machine learning. If the model is producing unreliable results, developers practically have to wait for Nvidia to make the fixes, since they are incapable of doing it themselves. This is frequently the case, since DLSS support often comes after a game's initial release. And don't think for one moment that the job is ever 'finished', or that integrating the solution into one game/engine is enough: DLSS needs constant revisions to work well against new content, so it's practically a never-ending project that Nvidia has to undertake ...

    AMD has very good reasons not to introduce a comparable solution, because most developers don't have the necessary expertise to sustain it by themselves and aren't actually interested in taking up these responsibilities either. More power to Nvidia if they want to keep maintaining their own solution with no visible end in sight ...
     
    Ethatron likes this.
  12. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    768
    Likes Received:
    459
DLSS is here to stay. We are nearly at the end of node shrinks, and there is no road forward beyond them to increase performance for the foreseeable future.
     
  13. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,526
    Likes Received:
    4,173
That is no longer the case: the majority of games get DLSS and RTX support at launch, and that's before counting the native integrations into Unreal and Unity.
     
    PSman1700 likes this.
  14. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    317
    Likes Received:
    355
It'll stay for as long as Nvidia wants it to keep existing, but it's hardly 'sustainable' in that sense, because no other parties are willing to take on this burden; don't expect developers to ever take on this responsibility ...

    The long-term problem with DLSS and similar technologies isn't going to be the technical knowledge; it's going to be maintenance. Can DLSS realistically succeed where other experiments with a lone maintainer, like PhysX (deprecated in UE4 and Unity DOTS) or G-Sync, failed?
     
  15. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    4,218
    Likes Received:
    3,419
What burden? The current implementation is basically a no-brainer for developers. They are getting it working, with minimal prior exposure and little or no help from Nvidia, in less than half a day (see the sketch below).
     
    PSman1700 likes this.
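    A hedged illustration of why the integration can be that quick once an engine already has TAA-style inputs. The `dlss_evaluate` call below is a hypothetical placeholder, not the real NGX API; the point is the shape of the work, not the exact calls.

    ```cpp
    struct Texture;  // opaque engine render-target handle

    // Hypothetical one-call entry point standing in for the real SDK:
    void dlss_evaluate(Texture* color, Texture* depth, Texture* motionVectors,
                       float jitterX, float jitterY, Texture* output);

    struct UpscalerInputs {
        Texture* colorLowRes;      // jittered low-resolution color
        Texture* depth;            // depth buffer
        Texture* motionVectors;    // per-pixel motion vectors (the hard prerequisite)
        float    jitterX, jitterY; // this frame's subpixel jitter offset
    };

    void renderFrame(const UpscalerInputs& in, Texture* outputHiRes) {
        // An engine with TAA already produces every input above each frame,
        // which is why hooking up the evaluate call can take hours. Engines
        // without motion vectors need a much larger renderer change first.
        dlss_evaluate(in.colorLowRes, in.depth, in.motionVectors,
                      in.jitterX, in.jitterY, outputHiRes);
    }
    ```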
  16. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    317
    Likes Received:
    355
DLSS is a continuous project that can never be 'completed' in the true sense of the word. So if Nvidia badly wants to permanently assign dozens of employees to work on it forever, then good for them, but they are pretty much doing all of the work, with no sign that developers will ever do the same in the future ...

    If DLSS were as simple to implement as you say, then there'd be far more support in general, and at launch. Not even Crysis Remastered or Nioh 2 launched with DLSS initially, and then we have others, like Hitman 2 or PUBG, which never got the DLSS they promised ... (there are also games out there that don't even render motion vectors, so DLSS is virtually impossible to implement in those cases; see the reprojection sketch below)
     
    #1836 Lurkmass, Apr 15, 2021
    Last edited: Apr 15, 2021
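    To spell out why missing motion vectors are such a blocker, here is a standard reprojection sketch (assuming static geometry; skinned characters and per-object motion add more machinery): each pixel's previous-frame screen position is recovered from its world position and last frame's view-projection matrix. A renderer that never tracked previous-frame transforms has nothing to reproject against, and retrofitting that touches a lot of code.

    ```cpp
    #include <array>

    using Mat4 = std::array<float, 16>;  // row-major 4x4 matrix
    struct Vec4 { float x, y, z, w; };

    Vec4 mul(const Mat4& m, const Vec4& v) {
        return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
                 m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
                 m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
                 m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
    }

    // Screen-space motion vector (in NDC units) for a pixel whose world-space
    // position is known, for static geometry: project with this frame's and
    // last frame's view-projection matrices and take the difference.
    void motionVector(const Vec4& worldPos, const Mat4& viewProjNow,
                      const Mat4& viewProjPrev, float& mvX, float& mvY) {
        Vec4 now  = mul(viewProjNow,  worldPos);
        Vec4 prev = mul(viewProjPrev, worldPos);
        mvX = now.x / now.w - prev.x / prev.w;
        mvY = now.y / now.w - prev.y / prev.w;
    }
    ```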
  17. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,705
    Likes Received:
    1,945
If it helps sell hardware, then there is plenty of motivation. With DLSS, Nvidia can have the same motivation as Microsoft has with DirectX: DX doesn't generate revenue or profits directly, but it keeps gamers tied to Windows, which is motivation enough for MS to continue supporting its API.

    Given that RT is a performance hog and DLSS helps alleviate its impact, I imagine AMD will continue to struggle even if they are able to match Nvidia in general GPU and RT performance but lack a competitor to DLSS.

    In a world of high-resolution gaming, no one wants to spend hundreds to thousands of dollars on mid- to high-end GPUs only to be stuck at the resolutions of yesteryear. While DLSS is far from perfect, it attempts to provide performance where the high cost of silicon makes brute force prohibitive.
     
    #1837 dobwal, Apr 15, 2021
    Last edited: Apr 15, 2021
    pjbliverpool and PSman1700 like this.
  18. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,204
    Likes Received:
    16,085
    Location:
    The North
There are other ways to monetize DLSS, however: just sell/license the models to other platform holders.
    It's clear it works on other hardware; the majority of the IP lies in the models, not the hardware.
     
    PSman1700 likes this.
  19. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    768
    Likes Received:
    459
I don’t think Nvidia has a choice. They brought RT to market before hardware was fast enough to make it useful, and DLSS is the only thing making even these very nominal implementations usable. Performance scaling will hit the wall long before DLSS becomes unnecessary.
     
  20. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    317
    Likes Received:
    355
Then Nvidia should be prepared to take on the long-term burden of sustaining DLSS forever, without developers ever returning the favour in the future ...

    The best AMD is going to do is release a demo and open source the code, and what happens afterwards is the developers' problem, since AMD practically never updates their GitHub samples. Do people actually think AMD is interested in endlessly chasing down as many developers as possible and offering free support indefinitely, on a regular basis? I'm pretty sure this description doesn't match their profile at all, since they don't want to take on long-term commitments outside of their hardware projects or drivers ...
     