Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,729
    Likes Received:
    11,200
    Location:
    Under my bridge
What elegance? Devs have to send nVidia a whole load of game data for their massive supercomputer to process, and that data then has to be applied in-game using large amounts of silicon. In-engine reconstruction techniques use perfect in-engine data for comparable results with a titchy silicon footprint. NN-based image reconstruction makes a lot of sense when applied to movies and the like, where there's no deep data to process, but it strikes me as grossly inefficient for game rendering, at least as it currently stands.
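    As a rough illustration of what "perfect in-engine data" buys you: a temporal reconstruction pass reprojects last frame's accumulated image using the engine's own per-pixel motion vectors and blends in the new, cheaper samples. The kernel below is a minimal sketch of that resolve step only, assuming the engine already produces colour, history and motion-vector buffers; the buffer names, the fixed blend weight and the dummy host setup are illustrative, not taken from DLSS or any shipping technique.

    ```cuda
    #include <cuda_runtime.h>

    // Resolve step of a temporal reconstruction pass: reproject last frame's
    // accumulated image with the engine's own motion vectors, then blend in the
    // new (cheap, possibly jittered/low-res) samples. Illustrative sketch only.
    __global__ void temporal_resolve(const float4* current,  // this frame's samples
                                     const float4* history,  // last frame's accumulated result
                                     const float2* motion,   // engine motion vectors, in pixels
                                     float4*       output,
                                     int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;
        int idx = y * width + x;

        // The engine knows exactly where this pixel was last frame -- no learned guess.
        float2 mv = motion[idx];
        int px = min(max((int)(x - mv.x + 0.5f), 0), width  - 1);
        int py = min(max((int)(y - mv.y + 0.5f), 0), height - 1);
        float4 prev = history[py * width + px];
        float4 curr = current[idx];

        // Exponential history blend: most detail comes from accumulated prior frames,
        // only a small fraction from the new samples. 0.1 is an arbitrary weight;
        // real implementations also clamp/reject history to avoid ghosting.
        const float a = 0.1f;
        output[idx] = make_float4(curr.x * a + prev.x * (1.0f - a),
                                  curr.y * a + prev.y * (1.0f - a),
                                  curr.z * a + prev.z * (1.0f - a),
                                  1.0f);
    }

    int main()
    {
        const int w = 1920, h = 1080;
        float4 *cur, *hist, *out; float2 *mv;
        cudaMallocManaged(&cur,  w * h * sizeof(float4));
        cudaMallocManaged(&hist, w * h * sizeof(float4));
        cudaMallocManaged(&out,  w * h * sizeof(float4));
        cudaMallocManaged(&mv,   w * h * sizeof(float2));

        dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
        temporal_resolve<<<grid, block>>>(cur, hist, mv, out, w, h);  // dummy data, structure only
        cudaDeviceSynchronize();

        cudaFree(cur); cudaFree(hist); cudaFree(out); cudaFree(mv);
        return 0;
    }
    ```

    The point of the contrast: a pass like that needs no training data and no tensor cores, just buffers the engine already renders.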
     
    Kej, Silent_Buddha, eloyc and 2 others like this.
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,584
    I meant the part where it requires minimal effort by the developers. It's all on NVIDIA. Developers on PC have been reluctant to engage with checkerboarding, temporal injection, and the like. Now NVIDIA is handling those efforts.
    Which circles back to the point that DLSS was never an afterthought experiment by NVIDIA. If they just wanted an afterthought experiment, there are far more efficient and cost-conscious ones to create than DLSS.
     
    #102 DavidGraham, Sep 23, 2018
    Last edited: Sep 23, 2018
    Heinrich4 and pharma like this.
  3. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I’ve always been a fan of ray tracing and I’m delighted to learn that it’s important and profitable enough in the HPC space to justify adding cost to gaming, Nvidia’s largest revenue generating segment by far.

    Would you mind clarifying which HPC application is so heavy on ray tracing?
     
  4. Jupiter

    Veteran Newcomer

    Joined:
    Feb 24, 2015
    Messages:
    1,421
    Likes Received:
    959
    So the RT cores are there only because of price and marketing? The RT cores don't cost that much, and the architecture wasn't changed for that alone. If you want yesterday's solution, there is already Pascal. Nothing moves forward if you just repeat the same thing again.
     
    matthias and Heinrich4 like this.
  5. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    Do you have any evidence that tensor cores can co-issue with shader cores, in view of this post?
    DLSS, as a piece of software, surely must be an afterthought, as tensor cores were created to speed up neural network training in Volta. Carrying the tensor cores over to Turing may well have been a deliberate consideration, if that is what you mean.
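    For reference, this is the primitive the tensor cores actually execute: a warp-wide 16x16x16 FP16 multiply-accumulate with FP32 accumulation, exposed through CUDA's WMMA API since Volta (sm_70). The snippet below is a minimal self-contained sketch, not anything from DLSS or NVIDIA's libraries; training frameworks and inference runtimes sit on top of exactly this kind of tile operation.

    ```cuda
    #include <cstdio>
    #include <cuda_fp16.h>
    #include <mma.h>

    using namespace nvcuda;

    // One warp computes one 16x16 tile of D = A*B + C (FP16 inputs, FP32 accumulate)
    // on the tensor cores. Requires Volta or later: nvcc -arch=sm_70 wmma_demo.cu
    __global__ void wmma_16x16x16(const half* a, const half* b, float* d)
    {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);               // C = 0
        wmma::load_matrix_sync(a_frag, a, 16);           // load the A tile (ld = 16)
        wmma::load_matrix_sync(b_frag, b, 16);           // load the B tile (ld = 16)
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // the tensor core operation
        wmma::store_matrix_sync(d, c_frag, 16, wmma::mem_row_major);
    }

    int main()
    {
        half  *a, *b;
        float *d;
        cudaMallocManaged(&a, 256 * sizeof(half));
        cudaMallocManaged(&b, 256 * sizeof(half));
        cudaMallocManaged(&d, 256 * sizeof(float));
        for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

        wmma_16x16x16<<<1, 32>>>(a, b, d);   // a single warp drives the unit
        cudaDeviceSynchronize();
        printf("d[0] = %.1f (expected 16.0: sixteen 1*1 products)\n", d[0]);

        cudaFree(a); cudaFree(b); cudaFree(d);
        return 0;
    }
    ```

    Whether that unit can overlap with the regular FP32 SIMT pipelines in practice is exactly the co-issue question being asked above; the API itself says nothing about scheduling.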
     
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,729
    Likes Received:
    11,200
    Location:
    Under my bridge
    Why didn't they create a compute-based upscaling system, same as they did with TXAA? Perhaps because they didn't need to, when they could sell their new hardware on its ability to upscale via an NN solution running on the already-present Tensor cores...

    Huh? Do you agree that DLSS is "grossly inefficient for game rendering, at least as it currently stands"? That there are leaner solutions that could have been employed instead?

    This discussion probably needs to be refocussed. Let's pose another question - if nVidia's goal was to produce a GPU capable of hardware reconstruction to get better framerates at comparable quality, would the best way to do that be to include NN learning-focussed cores, or to work on improving compute-based solutions and maybe add specific hardware improvements to that end?
     
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,584
    Of course it was, but it was brought to Turing to help with RTX, not as an afterthought or a marketing gimmick.
     
  8. troyan

    Newcomer

    Joined:
    Sep 1, 2015
    Messages:
    120
    Likes Received:
    181
    DLSS is an application, like Super-Res, Slowmo and every other possibility. And nVidia talked about it at Siggraph last year: https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/

    TensorCores are here to stay. nVidia is even rebranding their GPUs to "TensorCore" GPUs.
     
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,584
    Because it requires engine support and requires developers to implement it with them. TXAA is nothing like temporal reconstruction; TXAA is just TAA combined with MSAA.
    Yes, I agree, but for NVIDIA, not for the developers.
    NVIDIA's goal is to produce a GPU capable of ray tracing. RT and Tensor cores were needed for that.

    That again would require heavy developer involvement. DLSS doesn't require that.
     
    Heinrich4 likes this.
  10. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,566
    Likes Received:
    400
    Location:
    Earth
    Is deep learning the future of realtime rendering? Link to the presentation from last year's SIGGRAPH below. To me this looks like a long-term play that is in its infancy. Following the conferences, there is interesting research that seems to be turning into reality in shipping games right about now. It will be very interesting to see how GDC and SIGGRAPH 2019 turn out.

     
    matthias likes this.
  11. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    711
    Likes Received:
    282
    It's definitely a different strategy that mobile SoC vendors follow.
    They resort to a dedicated, more fixed-function coprocessor to accelerate neural network computation, outside of the CPU or GPU.
    Nvidia also does this with Xavier, where there are neural network coprocessors outside of the GPU; they are more power-efficient and can reduce memory bandwidth by using weight compression/decompression.
    Xavier, being based on Volta, kept the tensor cores in the GPU, but I would expect these to get removed once neural network technology matures.
     
    BRiT likes this.
  12. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,930
    Likes Received:
    1,626
    I think the tensor cores will be in Xavier for quite a while, based on recent AGX Xavier "wins" in the manufacturing industry. AI requires tensor cores, and the expansion to "autonomous machines" lends credence to the idea that there is a use case outside autonomous vehicles.
     
    #112 pharma, Sep 23, 2018
    Last edited: Sep 23, 2018
    Heinrich4 likes this.
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,729
    Likes Received:
    11,200
    Location:
    Under my bridge
    Yet realtime graphics aren't based on raytracing and won't be for a year or two minimum (it needs to become affordable for the mainstream), suggesting that the creation of this GPU wasn't for the game market. However, rather than create a new GPU designed for the game market, nVidia has looked at using their RT-focussed part in ways that augment non-raytraced rendering, resulting in the development of an upscaling system (building on the offline AI-based image reconstruction that's been going on for a while now, ever since the explosion of ML).

    A posit you dismiss out of hand as 'extremely silly'. :???: You can disagree with the theory, but to dismiss it as nonsense is just biased thinking. It's a very plausible option.
     
    Silent_Buddha and w0lfram like this.
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    That theory doesn't make any sense to me. Nvidia is certainly not shy about building chips that are fit for purpose. We also know that they can add/remove SM features at will, so there was literally nothing stopping them from creating Turing RT GPUs for Quadro and Turing non-RT chips for GeForce. I also think it's silly to believe they bent over backwards to invent gaming use cases and worked with Microsoft / game developers on DXR as a second thought, for hardware that they could've easily just cut out of the chip(s)... come on.

    Why is it so hard to accept that nVidia is just genuinely pushing real-time RT in games? Isn't that a simpler, more plausible version of reality? The argument that RT is "too early" because it runs at 1080p 60fps is a ridiculously arbitrary standard given that 1080p is still by far the most popular gaming resolution. It's here, it works and it's not going anywhere.
     
    OlegSH, xpea, Heinrich4 and 2 others like this.
  15. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,031
    Likes Received:
    3,102
    Location:
    Pennsylvania
    Well, the counter-argument is that it takes a $1200 GPU to run at that speed, and those enthusiast gamers aren't running at 1080p.

    Nvidia going Turing across the full stack (so far down to a 2050?) seems to be an indicator that they're going all-in on RT, even though the lower-end versions seemingly aren't going to be very useful for RT games. Perhaps that will change once we see some RT implementations with quality settings for sample counts, reflection resolution, etc.
     
    BRiT likes this.
  16. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,584
    Nope, it's not. Fact is, looking at history, NVIDIA NEVER does anything for pure marketing reasons; they do it because it will give them a competitive advantage. They did that with PhysX, Tessellation, CUDA, GSync, HBAO+, PCSS+, HFTS, VXAO, TXAA, and now RTX and DLSS. That's how they operate and that's how they do things. They are always pushing advanced effects that hammer the GPUs and add visual flair. RTX is no exception, and it will not be the last.

    When 11 games are already lined up to support RTX, I say it's about damn time. NVIDIA won't stop at 11 games though; there will be more on the way. And with both DXR and Vulkan being ready, I say it's about damn time as well.

    NVIDIA just doesn't care whether the mid-range GPUs support their tech: the 8600GT tanked with PhysX on, the 460 didn't do extreme Tessellation well, and the 970/1060 can't do VXAO and HFTS with decent fps. That's how NVIDIA introduces its tech. RTX is again no exception; it might be absent from mainstream cards this gen, but next gen it won't be.

    I will summarize all the points below:

    *RTX and DLSS are an afterthought marketing gimmick transported from the professional line to save costs.
    -Doesn't make sense given the increased support required from NVIDIA to properly implement and spread RTX and DLSS, which raises costs significantly.

    *NVIDIA splitting its gaming and professional lines increases costs.
    -NVIDIA made that model extremely profitable, and it has kept them in the leadership position far longer than before. It doesn't make sense to risk abandoning that model for a marketing trick that could potentially jeopardize their competitive advantage, with larger dies, higher power demands, high production costs, high prices AND the added burden of supporting RTX and DLSS.

    *RTX is created to sell more cards.
    -RTX is the culmination of a long line of GameWorks effects, ending in DXR. RTX is just the hardware acceleration of the standard DXR API, designed to gain the upper hand in that API.

    *DLSS is created as a marketing gimmick to justify the tensor cores.
    -Tensor cores were repurposed and brought in to assist with RTX; DLSS is just icing on the cake if the game doesn't use RTX.

    *DLSS could be replaced with a compute solution.
    -It could, but it would require the deep participation of developers, something they seem unwilling to give.
     
    #116 DavidGraham, Sep 23, 2018
    Last edited: Sep 23, 2018
    OlegSH, Heinrich4 and pharma like this.
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,729
    Likes Received:
    11,200
    Location:
    Under my bridge
    They are pushing RT in games. The question is, why? My suggestion is that it better serves their goals in the more lucrative markets, and that having settled on that design for those markets, nVidia looked at maximising profits by considering how to use the same design in a huge-margin 'prosumer' PC GPU. They also didn't 'bend over backwards to invent gaming uses' - they were already working on these as part of their extensive ML campaign, and realtime raytracing has value in productivity. Everything in Turing was happening anyway, whether the 2070/2080 were released or not. Releasing them to the gaming space helps in several ways.

    The reason to suggest it's too early - these dies are massive and expensive! Games have to target a viable mainstream tech level, and realtime raytracing is far beyond that mainstream (<$300 GPUs?). There are lots of techs that could have been implemented earlier in history if IHVs had ignored economic limits and crammed crazy amounts of silicon in. What's different now is nVidia can afford to cram crazy amounts of silicon in for the professional markets, creating a bonkers big chip, which they can also sell to a tiny portion of the PC gaming space by using that chip to drive a couple of features.

    But no-one ever suggested that. :???: It's not done for marketing. Turing was developed for their professional, highly lucrative businesses. They then looked at ways to use that same hardware in the gaming space, and came up with DLSS.

    When the only reason devs implement raytracing is because nVidia are funding it, and they wouldn't otherwise because the costs aren't recovered by the minuscule install base paying for it, it's too early. It's nice of nVidia to invest in realtime raytracing, but it's (probably) going to be a long time before raytracing becomes mainstream.

    With a hideously slanted, prejudiced, and confrontational interpretation.

    The real suggestion is that Turing was developed 100% for the professional markets - ML, automotive, and professional imaging - with that design also intended to occupy the new flagship range of gaming GPUs to make more profit from the same design in the PC space. nVidia have looked at the hardware features of Turing and considered how best to make them work in the PC space, resulting in the development of DLSS to make use of the Tensor cores. The end result is a profit-maximising strategy built on one uniform architecture.

    Whatever, I'm out of this discussion now. People are gonna believe what they wanna believe and no-one's going to change their mind; certainly not if any alternative to their opinions is whole-heartedly considered childish or ridiculous. I'm going back to the more intelligent and considered discussions of the console forums. ;)
     
    #117 Shifty Geezer, Sep 23, 2018
    Last edited: Sep 23, 2018
  18. dirtyb1t

    Newcomer

    Joined:
    Aug 28, 2017
    Messages:
    31
    Likes Received:
    27
    I agree 100% with what you just established, and I think it's the more informed/reasoned viewpoint, backed by technical facts and an understanding of basic business/profit maximization.
    At $800/$1200, these are essentially glorified Quadro cards. The lowest-entry Quadro was cheaper than $800.
    They essentially pushed every segment into higher prices and margins.
    None of the cards released are capable of doing ray tracing in parallel perfectly without an impact on FPS.
    The higher the FPS, the smaller the time window for ray tracing and the uglier the results ... thus the negative impact on FPS.
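    To put rough numbers on that shrinking time window (simple frame-budget arithmetic, not figures from any particular card or game):

    ```latex
    t_{\text{frame}} = \frac{1000~\text{ms}}{\text{target fps}}
    \qquad\Longrightarrow\qquad
    30~\text{fps} \approx 33.3~\text{ms},\quad
    60~\text{fps} \approx 16.7~\text{ms},\quad
    144~\text{fps} \approx 6.9~\text{ms}
    ```

    Everything - ray traversal, denoising and the rest of the frame's shading - has to share that budget, so every step up in the frame-rate target leaves less time for tracing rays.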

    Lastly, DLSS is nothing more than another bold push to tether people's hardware to a cloud service.
    It's a demonstration that their cards aren't up to snuff and thus they have to pre-bake compute.

    Across the board, I am learning to keep my opinions to myself when it comes to how foolish certain products are from a valuation standpoint.
    Let the people who want to blow their money do so.
    I'm better served in buying the stock of companies that have the biggest skews and milk their consumers the most.

    This is a big card release for developers and offline rendering.
    For realtime, it's a clear and present money grab, and the people who reason about it frankly don't care... which is fine.
     
    #118 dirtyb1t, Sep 23, 2018
    Last edited: Sep 23, 2018
    w0lfram likes this.
  19. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,786
    Likes Received:
    2,584
    If you expected ray tracing to come free of cost, you need to have your expectations checked.
    Apologies if I sounded confrontational; I didn't mean to. However, there is nothing prejudiced in what I said; they're just basic counter-arguments.
     
    Heinrich4 and pharma like this.
  20. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,884
    Likes Received:
    1,759
    I've been saying this (here) all along. Turing is a "Quadro" GPU and its architectural design was primarily based on the needs of content creators (a 10x increase in baking maps, replacing CPU-based render farms, etc.) and machine learning.
     
    Silent_Buddha likes this.