Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,716
    Likes Received:
    6,006
    Curious to see if the results just get better and better with each title. Perhaps transfer learning is happening.
    Or it really is on a per-title basis and they have to restart.
    I suspect it shouldn't matter. They are trying to train it against SSAA, so ideally there's enough in common for the AI to carry over from title to title.
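    Purely to illustrate the transfer-learning idea, here's a minimal sketch of per-title fine-tuning, assuming a PyTorch-style setup with a toy network; the checkpoint file, the dummy frame pair, and the network itself are hypothetical stand-ins, not Nvidia's actual pipeline:

    ```python
    import torch
    import torch.nn as nn

    class UpscaleNet(nn.Module):
        """Toy stand-in for a 2x super-resolution network (hypothetical)."""
        def __init__(self, scale=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),  # rearranges channels into a 2x upscale
            )

        def forward(self, x):
            return self.body(x)

    model = UpscaleNet()
    # Transfer learning: start from the previous title's weights instead of
    # restarting from scratch ("weights_prev_title.pt" is a hypothetical file).
    # model.load_state_dict(torch.load("weights_prev_title.pt"))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()

    # One fine-tuning step on a dummy frame pair; in a real pipeline the
    # target would be something like the 64x SSAA render of the same frame
    # from the new title, so the net only learns how this title differs.
    low_res = torch.rand(1, 3, 135, 240)
    ssaa_target = torch.rand(1, 3, 270, 480)

    optimizer.zero_grad()
    loss = loss_fn(model(low_res), ssaa_target)
    loss.backward()
    optimizer.step()
    ```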
     
    pharma likes this.
  2. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,888
    Likes Received:
    1,592
    Also curious how this DLSS "sharpness slider" works in Monster Hunter. Not sure if this is the first implementation or whether another game used this feature.
     
  3. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,955
    Likes Received:
    3,037
    Location:
    Pennsylvania
    Is there any indication the Sharpness slider is tied to DLSS? Many games have a sharpness slider.
     
  4. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,888
    Likes Received:
    1,592
    Yeah, it is in many games. I just noted that they mentioned "another DLSS feature with a new sharpness slider", so I wasn't quite sure.
     
  5. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,955
    Likes Received:
    3,037
    Location:
    Pennsylvania
    Yeah, not sure either. No idea if their article is purely based on the screenshots (the active Sharpness slider option just below DLSS being why they made that assumption) or whether it's something Nvidia has added. Since some of the negativity towards DLSS has been due to the blurring, one wonders if they (Nvidia) decided to add a post-process filter like CAS to help.
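    For reference, here's a rough sketch of the CAS idea, simplified from AMD's published FidelityFX CAS math into numpy rather than the shipping HLSL shader; the adaptive weight is the part a plain sharpness slider lacks:

    ```python
    import numpy as np

    def cas_like_sharpen(img, sharpness=0.5):
        """Simplified, CAS-inspired adaptive sharpen on a float image in [0, 1].

        A rough sketch of the idea behind AMD's CAS, not the shipping shader:
        the sharpening weight is scaled down wherever the local neighbourhood
        already has high contrast, so edges don't overshoot the way a fixed
        unsharp mask would.
        """
        # Pad so the 4-neighbour taps are defined at the image borders.
        p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        c = p[1:-1, 1:-1]                   # centre
        n, s = p[:-2, 1:-1], p[2:, 1:-1]    # north / south taps
        w, e = p[1:-1, :-2], p[1:-1, 2:]    # west / east taps

        # Local min/max over the cross neighbourhood (per channel).
        mn = np.minimum.reduce([c, n, s, w, e])
        mx = np.maximum.reduce([c, n, s, w, e])

        # Adaptive amount: full strength where there is contrast headroom,
        # falling towards zero across already-hard edges.
        amp = np.sqrt(np.clip(np.minimum(mn, 1.0 - mx) / (mx + 1e-5), 0.0, 1.0))

        # Map the user slider to a negative ring weight (-1/8 .. -1/5),
        # mirroring how CAS parameterises its sharpness knob.
        peak = -1.0 / (8.0 - 3.0 * sharpness)
        wgt = amp * peak

        # Weighted 5-tap filter, renormalised so flat areas pass through.
        return np.clip((wgt * (n + s + w + e) + c) / (4.0 * wgt + 1.0), 0.0, 1.0)

    frame = np.random.rand(108, 192, 3)  # stand-in for a rendered frame
    out = cas_like_sharpen(frame, sharpness=0.8)
    ```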
     
  6. PSman1700

    Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    66
    Likes Received:
    23
    What's with all the hate towards Nvidia's solutions regarding RT/DLSS? I think they are, or will be, very similar to what AMD has or is going to have.
     
    xpea, DavidGraham and pharma like this.
  7. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,716
    Likes Received:
    6,006
    If we're sticking technical here, DLSS will continue to improve as they get better at it. I'd rather this topic not become RIS vs DLSS (or the politics of feature support). They are two entirely separate techniques and can have very different outputs on some items where we would expect similarity. I might actually be able to talk about DLSS technically, especially if more screenshots and whatnot come up, but I'm not going to do this RIS vs DLSS thing.

    There are a lot of opportunities for machine learning to be leveraged in games, and DLSS is an interesting technique, but its greatest weakness is the fixed compute time to run through the NN (we would call DLSS an edge-deployed NN, similar to 'Hey Google' detection: those devices are locally equipped with a NN to spot the phrase). In a typical upscale/AA/sharpening scenario where we had either (a) way more compute or (b) much more time, DLSS would produce sufficiently superior results. But that's not the interesting topic to discuss in the field, since in a way that's already solved; what's interesting is getting superior results in a frame time that needs to fit in 16 ms or less.

    DLSS is a good topic for that type of discussion.
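    To put rough numbers on that fixed-cost point, here's a back-of-envelope budget check; every figure in it is an illustrative assumption, not a measured DLSS cost:

    ```python
    # Back-of-envelope check of whether a per-frame NN fits the frame budget.
    # All numbers are illustrative assumptions, not measured DLSS figures.

    def nn_frame_budget(fps, nn_gflops, sustained_tflops, other_work_ms):
        """Return (budget_ms, nn_ms, fits_in_frame)."""
        budget_ms = 1000.0 / fps
        # 1 TFLOP/s sustains 1 GFLOP per millisecond, so:
        nn_ms = nn_gflops / sustained_tflops
        return budget_ms, nn_ms, nn_ms + other_work_ms <= budget_ms

    # Hypothetical: 60 fps target, a ~150 GFLOP/frame network, 25 TFLOP/s of
    # sustained tensor throughput, and 10 ms of other rendering work.
    budget, nn_cost, fits = nn_frame_budget(60, 150, 25, 10)
    print(f"budget {budget:.1f} ms, NN {nn_cost:.1f} ms, fits: {fits}")
    # -> budget 16.7 ms, NN 6.0 ms, fits: True
    ```

    The key property is that the NN cost is roughly fixed per output resolution, so the same network that fits comfortably at 60 fps can blow the budget at 120 fps even though the rest of the frame got cheaper.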
     
    pharma likes this.
  8. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,955
    Likes Received:
    3,037
    Location:
    Pennsylvania
    None of us hate the technology, especially RT. What many of us dislike is marketing bullshit, which Nvidia are experts at.

    Anyone should be able to question how something is implemented rather than simply blindly bow to the green God.

    The jury is still out on DLSS IMO, especially given the CAS feature AMD has provided to devs, which is seemingly producing as good quality and performance with a lot less complexity. DLSS may have its strengths, but CAS wins out by far on implementation time/complexity.

    RT itself, well, it's the first implementation of dedicated hardware-accelerated RT for consumers we've seen, so there's not much we can compare it to or analyze. There has been some question as to the black-box approach and lack of flexibility, but it's still very early days in the gaming world.

    So is what we're seeing exciting for gamers? Definitely. Should we simply regurgitate Nvidia's marketing? No. Question everything.
     
  9. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,518
    Likes Received:
    10,896
    Location:
    Under my bridge
    There's no hate. It's about getting correct data. The impact of DLSS is measured both in quality of result and industry adoption. A list of titles quantifying industry adoption is only as useful as it is accurate; if that list is inaccurate, you get a misrepresentation of the state of industry adoption. There is, however, hate towards marketing numbers that obfuscate the truth and make correct analysis difficult/impossible.

    It's a given that some titles would get DLSS support at launch, heavily backed by nVidia to launch the tech. Since then, has there been any independent movement within the industry, or do devs see adding DLSS support as an inefficient use of time/resources? Is the external training requirement a significant barrier to entry? Is the limitation of the technique to RTX cards uneconomical? If so, does that mean NN-based solutions don't have much of a future?
     
    CaptainGinger and Ike Turner like this.
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,716
    Likes Received:
    6,006
    Still too early to judge this one. But with DirectML now out in the wild, developers could be aiming to produce their own in-house variants of AA or upscaling.
    I've no illusions about the number of machine-learning engineer jobs in game studios; I've been looking around and there are quite a few. That doesn't necessarily mean it's upscaling and AA, but we're definitely in for more usage of machine learning in games in the coming 4-5 years.
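    As a sketch of what a vendor-agnostic, in-house route could look like, here's ONNX Runtime's DirectML execution provider driving an upscaling model; the model file, its shapes, and the output size are assumptions, not a real product:

    ```python
    # Sketch: running an in-house upscaler via ONNX Runtime's DirectML
    # execution provider, which targets any DX12-capable GPU rather than a
    # single vendor. "upscaler.onnx" is a hypothetical model file.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "upscaler.onnx",
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],
    )

    # Feed one NCHW float32 frame, e.g. a 1080p render target.
    frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
    input_name = session.get_inputs()[0].name
    (upscaled,) = session.run(None, {input_name: frame})
    print(upscaled.shape)  # e.g. (1, 3, 2160, 3840), depending on the model
    ```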
     
    pharma likes this.
  11. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,884
    Likes Received:
    1,756
    Opinion (which I already expressed a year ago when DLSS was announced): ML AA/upscaling/hallucination for real-time gaming scenarios (not recorded/not pre-rendered) is the dumbest thing ever. Nvidia knows that, or else they wouldn't have demonstrated it only with pre-recorded demos/benchmarks; they had all the time in the world to train it on any other game of their choice, or even build a simple playable tech demo, and they never did. Top-of-the-line CBR implementations, and now things like CAS (which can be used for upscaling when directly implemented in-engine), are certainly the more logical and cost-effective way to go. DLSS is/was one way to justify the silicon cost of the Tensor Cores in consumer GPUs.
     
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,716
    Likes Received:
    6,006
    I'm not sure why it's considered the dumbest thing ever; it's clear that it works. Every new technology will have growing pains, and Nvidia was the one to take it on (foolishly, I would say). A lot of developers may not be interested in supporting a feature that is RTX-only. But that doesn't mean a third-party company couldn't come along and do the exact same thing using DirectML, supporting all GPUs.

    I don't see DLSS as being any more expensive or less cost-effective than other solutions. You build the game as you see fit, and you let the AI company do the work. You take their model, integrate it back into your own engine, and add it to the tail end of your pipeline. Effort on behalf of the developer is quite minimal.

    Solving this as a general problem sounds trivial, but there will be edge cases that require refinement on all titles. That will also still be on Nvidia to solve. But as they get better at solving these problems, they can apply the model forward to the next title and the next.
     
  13. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
    There is definite movement to include DLSS in most RTX titles.

    For the time being, both NVIDIA and Microsoft (through DirectML) think these solutions have a strong future. Microsoft repeatedly expressed that in various presentations.
     
  14. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,884
    Likes Received:
    1,756
    No sane developer will hand out its game code to a third party like that unless they are getting $$ in exchange. Which is literally what is being done now, unless I'm mistaken: every single one of the games with DLSS is part of Nvidia's TWIMTBP (whatever this crap is called now) program.
    There's not one advantage to this thing. Not one, especially given the poor results after months (!) of training relative to other, more straightforward solutions that get better results.
     
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,518
    Likes Received:
    10,896
    Location:
    Under my bridge
    I'd certainly like some clarification on the technical complaints rather than just a straight poo-pooing of the idea.

    In terms of total effort, it's apparently a lot, with the training being processor-intensive and time-consuming. I'm also not sure the developer-side effort is minimal; if it were, why aren't more titles adding it?

    But as a way to solve the upscaling problem, any progress in that department is in competition with the reconstruction methods that are proving far more efficient overall and with better adoption. At this point, one really needs to point to a legitimate reason to invest in ML solutions. What advantages will they bring to the table? We can look at the costs/benefits of ray tracing and see the place it has in the future of graphics, but I'm struggling to see any advantages to ML-based upscaling or IQ enhancement. If it could be advanced enough, it could be a one-size-fits-all solution for both upscaling and frame interpolation, but I doubt it could ever be advanced enough.
     
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,518
    Likes Received:
    10,896
    Location:
    Under my bridge
    nVidia can be discounted because they're trying to sell hardware, so they'll promote whatever USPs they have. As for MS, I don't recall anything in particular about game upscaling solutions. DirectML is about opening ML up to be used however devs want, without any particular emphasis. Upscaling is presented as a use for ML, but that doesn't constitute any evaluation of its suitability. Have you a link to MS suggesting ML-based upscaling is/will be superior to other solutions?
     
  17. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
  18. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,518
    Likes Received:
    10,896
    Location:
    Under my bridge
    That's referencing nVidia's work. The example is a static photo being upscaled. We're now past that SIGGRAPH 2018 talk and into the realm of real-world application of nVidia's ML upscaling. MS obviously believe there's a future in ML (there is), but I'm still not seeing anything to say they believe the future of realtime game upscaling is ML-based.
     
  19. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
    That's a gross overstatement. Sharpness filters (including CAS) have IQ problems in many areas; they exaggerate shimmering and noise as well. Also, their 1440p-to-4K scaling is atrocious. NN solutions provide better IQ in this specific area.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,720
    Likes Received:
    2,460
    Who here said it is the only solution? You somehow turned the argument from "DLSS has no future" into "DLSS is the only solution for upscaling". No one said it's the only solution; we are responding to the baseless, premature judgment that DLSS and NN solutions are dead in the water.
     
    pharma likes this.