Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,439
    Likes Received:
    560
    Location:
    Texas
    Not exactly. This is the first iteration of using inferencing for reconstruction in a consumer product, at least. It's new.

    I think rendering itself can be viewed as a reconstruction technique. Fundamentally, you're just recreating an artificial image through algorithms. I'm a big fan of reconstruction/checkerboarding as done on consoles. I think the image quality is excellent and it's not worth spending the resources on "native" resolution. DLSS, I think, can further improve quality and performance.
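    To make the checkerboard idea concrete, below is a minimal sketch of the spatial half of that kind of reconstruction: only half the pixels are rendered each frame and the rest are filled in from their neighbours. Real console implementations also use the previous frame and (on PS4 Pro) the ID buffer to pick far better samples, so treat this as a toy illustration rather than how any shipping title does it.

    ```python
    import numpy as np

    def checkerboard_fill(rendered, mask):
        """Fill unrendered pixels by averaging their rendered 4-neighbours.

        rendered : HxW array, valid only where mask is True
        mask     : HxW bool array, True where the pixel was actually rendered
        """
        h, w = rendered.shape
        out = rendered.copy()
        for y in range(h):
            for x in range(w):
                if mask[y, x]:
                    continue
                neighbours = [rendered[ny, nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]]
                out[y, x] = sum(neighbours) / len(neighbours) if neighbours else 0.0
        return out

    # Toy usage: pretend only a checkerboard of pixels was rendered, rebuild the rest.
    full = np.random.rand(8, 8)
    mask = (np.indices((8, 8)).sum(axis=0) % 2) == 0
    reconstructed = checkerboard_fill(np.where(mask, full, 0.0), mask)
    ```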

    For me, neural network approaches have an extremely high ceiling for image quality. Think about this: suppose you were given an image rendered at 720p and told to upscale it in Photoshop, but you had to do the anti-aliasing by hand with a paintbrush. You could make that image look better than what a native render would be (and I say better looking, not just equal to native). Why? Because you know where the jaggies are, what ideal edges are, what textures should look like, etc. It's all based on experience. That's what a neural network will do as well, provided it's trained well enough and is big enough to handle all the variety of images thrown at it. This might not happen in the first go at it with DLSS, but I think it will eventually.

    The great thing about neural networks for inferencing is that they don't need a lot of precision, so you can do your ops at byte, nibble, or even lower precision, which is what the AI cores provide. You can save a lot of compute power and divert it elsewhere.
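    As a rough illustration of why low precision is so much cheaper (this is generic INT8 quantisation, not anything DLSS-specific): map the weights and activations to small integers plus a scale factor, do the multiply-accumulates in integer math, and rescale only the final result.

    ```python
    import numpy as np

    def quantize(x, bits=8):
        """Map a float tensor to signed integers plus a single scale factor."""
        qmax = 2 ** (bits - 1) - 1                     # 127 for int8
        scale = np.abs(x).max() / qmax
        return np.round(x / scale).astype(np.int32), scale

    # FP32 reference layer: y = W @ x
    W = np.random.randn(64, 64).astype(np.float32)
    x = np.random.randn(64).astype(np.float32)
    y_fp32 = W @ x

    # INT8 version: integer multiply-accumulate, one rescale at the end.
    Wq, w_scale = quantize(W)
    xq, x_scale = quantize(x)
    y_int8 = (Wq @ xq) * (w_scale * x_scale)

    print(np.max(np.abs(y_fp32 - y_int8)))             # small error, much cheaper math
    ```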
     
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,879
    Likes Received:
    11,469
    Location:
    Under my bridge
    Nah. Rendering is construction. You have the exact information available to generate the image. Reconstruction uses inbetweening for missing data.

    I guess you have aspects of reconstruction like texture interpolation, but there's a clear distinction between rendering a pixel based on the exact polygons that occupy it and their exact shader values (or whatever other maths you use, like raytraced CSGs), versus rendering a pixel from data inferred from its neighbours.
     
    bitsandbytes likes this.
  3. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,439
    Likes Received:
    560
    Location:
    Texas
    Fair enough.
     
  4. turkey

    Regular Newcomer

    Joined:
    Oct 21, 2014
    Messages:
    748
    Likes Received:
    435
    This is where I was thinking it seems a good bet. You can do it via regular compute, but if you ultimately leave a lot of performance on the table due to precision overhead under compute, it could be a better use of die space. Consoles specifically, where things are so controlled and optimising for a single piece of hardware is the norm.

    I think their PR numbers are something like 10x the normal compute flops for these lower-precision operations. That's a lot more bang per square mm of hardware; the caveat, as Shifty points out, is whether you actually use it (which for PC was possibly not a given, so they needed something like this).
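    As a back-of-the-envelope check, assuming the per-SM layout Nvidia has published for Volta/Turing (64 FP32 cores and 8 tensor cores per SM, each tensor core doing one 4x4x4 FP16 matrix multiply-accumulate per clock), the ratio comes out to roughly 8x, which is in the same ballpark as those PR numbers:

    ```python
    # Per-SM, per-clock throughput (Volta/Turing figures as published by Nvidia)
    fp32_cores_per_sm = 64
    fp32_flops_per_core = 2                    # one fused multiply-add = 2 flops

    tensor_cores_per_sm = 8
    tensor_flops_per_core = 4 * 4 * 4 * 2      # 4x4x4 matrix FMA = 64 FMAs = 128 flops

    fp32_rate = fp32_cores_per_sm * fp32_flops_per_core        # 128 flops/clock
    tensor_rate = tensor_cores_per_sm * tensor_flops_per_core  # 1024 flops/clock

    print(tensor_rate / fp32_rate)             # ~8x FP16 tensor throughput vs FP32
    ```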

    Edit
    Sony had the ID buffer and went for checkerboarding in a big way; I don't see why something like this wouldn't appeal to them. Again, my thought was simply that the on-screen result relative to the die space used is very attractive, not that it's revolutionary, etc.
     
  5. MrSpiggott

    Newcomer

    Joined:
    Feb 26, 2005
    Messages:
    103
    Likes Received:
    21
    Location:
    UK
    Comparisons with Insomniac's approach make Nvidia's look really expensive considering the amount of chip space the Tensor cores take up.
     
    egoless likes this.
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,879
    Likes Received:
    11,469
    Location:
    Under my bridge
    Has anyone done an in-depth analysis of R&C/Spiderman to reveal the weaknesses of Temporal Injection? All I hear is complaint-free raving about the image quality. ;)
     
  7. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,792
    Likes Received:
    5,877
    Location:
    ಠ_ಠ
    Perhaps quality isn't so much the issue for a comparison as it is the implementation, where Insomniac folks can work around the issues specifically per title vs a T800's 12-gauge autoloader approach? :p
     
  8. matthias

    Newcomer

    Joined:
    May 19, 2010
    Messages:
    32
    Likes Received:
    22
    Location:
    Germany
    How much of the die space do the Tensor cores occupy? Do we know this?
     
  9. turkey

    Regular Newcomer

    Joined:
    Oct 21, 2014
    Messages:
    748
    Likes Received:
    435
    Not that I've found. Also, given that this seems to be value-add and possibly not the real reason for their inclusion, we have no idea how many of them are required to deliver the DLSS results we are seeing.
     
  10. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    337
    Likes Received:
    89
    Unfortunately, the dimensionality of reconstructing a 3D moving image from a neural net standpoint just explodes in terms of work. Fundamentally, trying to reconstruct the entire image is not a good use of neural nets; the possibility space of taking just three dimensions into account (which is to say, not even including shading reconstruction) in a temporally coherent manner is way too vast for a neural net.

    I'd love to see it used in conjunction with temporal AA to try to clean up artifacts. Fundamentally, DLSS just can't produce the same quality as TAA; few people notice the trade-off in the time dimension from the usual exponential decay TAA uses, as it's just not something people look for. But there are problems with things like noise, missing information, or blur that neural nets could be great at fixing up.
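    For anyone unfamiliar, the "exponential decay" there is just the running blend most TAA implementations use: each frame the history buffer is mixed with the new reprojected sample, so older samples fade out geometrically. A minimal sketch (real TAA also reprojects with motion vectors and clamps the history against the current frame's neighbourhood to limit ghosting):

    ```python
    def taa_accumulate(history, current, alpha=0.1):
        """Exponential moving average: the newest sample gets weight alpha,
        and a sample from N frames ago has decayed to alpha * (1 - alpha)**N."""
        return (1.0 - alpha) * history + alpha * current
    ```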

    As for DLSS itself, after trying to find actual quality comparisons, it seems Nvidia itself let slip that it's not very good. The PSNR gain over simple bicubic upscaling, a rather bad upscaling algorithm, is just 1-2 dB. For reference, that's less than what 2x MSAA is able to offer, and far less than good temporal anti-aliasing or upscaling. For those that don't know, PSNR is a standard objective measure of how closely an image matches a reference: you take a reference image (what you want your image to look like, say 4x supersampling for a game), then compute the PSNR between the reference and your approximated image to see how well the approximation holds up.
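    Concretely, PSNR is just a log-scaled mean squared error against the reference, reported in dB (higher means closer to the reference). A minimal version, assuming images normalised to [0, 1]:

    ```python
    import numpy as np

    def psnr(reference, approximation, peak=1.0):
        """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
        mse = np.mean((np.asarray(reference, dtype=np.float64)
                       - np.asarray(approximation, dtype=np.float64)) ** 2)
        if mse == 0:
            return float("inf")                # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    # With whatever image arrays you have, e.g. psnr(supersampled_ref, dlss_output)
    # vs psnr(supersampled_ref, bicubic_upscale); the "1-2 dB" figure is the gap
    # between those two numbers.
    ```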

    Frankly, at 1-2 dB DLSS doesn't hold up very well, and to make it "better" you'd need exponentially more training time to produce a deployable net, which would draw more power, and thus more heat, and thus potentially less performance from the card. If a game has TAA or other clever upscaling, then DLSS is useless, as it doesn't appear to be able to run in concert with them. I guess it could be useful for games that don't have good upscaling or TAA, though.
     
    #70 Frenetic Pony, Sep 21, 2018
    Last edited: Sep 21, 2018
  11. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    1-2 dB over bicubic refers to AI super-resolution in general, not DLSS. And given that its purpose is not really to recreate the original picture so much as to create a higher-resolution picture that looks better, it is not a surprise to me that the PSNR isn't much higher. It probably overcompensates in many ways, like making the upscaled image crisper than the original and such, lowering the PSNR but looking good regardless.
     
    Geeforcer, pharma and DavidGraham like this.
  12. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,966
    Likes Received:
    1,650
    Nvidia clarifies DLSS and how it works

    https://www.kitguru.net/components/graphic-cards/dominic-moass/nvidia-clarifies-dlss-and-how-it-works/
     
    #72 pharma, Sep 22, 2018
    Last edited: Sep 22, 2018
  13. Gorgonzola

    Newcomer

    Joined:
    Feb 28, 2005
    Messages:
    5
    Likes Received:
    3
    Exactly. See http://arxiv-export-lb.library.cornell.edu/abs/1809.07517 for a state-of-the-art discussion of that.
     
    Benetanegia likes this.
  14. Gorgonzola

    Newcomer

    Joined:
    Feb 28, 2005
    Messages:
    5
    Likes Received:
    3
    Geeforcer and BRiT like this.
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,879
    Likes Received:
    11,469
    Location:
    Under my bridge
    I take this to support my view that this isn't intended for the games market and is nVidia looking for something to do with the silicon. That same die area given over to the Tensor cores could be given over to more CUs, processing the AI on the shaders and having them available for other things too. If the die area for the Tensor cores were insignificant, nVidia would add that to their PR. So they built a die, had some silicon sat idle (put on for cars and for raytracing in productivity), and said, "what are we going to use this for in gaming?" They found upscaling was their best option after seeing what other companies were doing with reconstruction and ML.
     
  16. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    2,966
    Likes Received:
    1,650
    What you quoted is the Kitguru article writer's statement, which is fine.

    Below is the detailed Q&A link on Nvidia's Blog regarding the importance of two emerging technologies for games: real-time ray tracing and AI.
    September 19, 2018
    https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/
     
    #76 pharma, Sep 22, 2018
    Last edited: Sep 22, 2018
  17. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    159
    Likes Received:
    33
    Nothing in that^ link is based on industry anything. It just illustrates what Shifty Geezer was saying: it's marketing hype about what else they did to make use of that non-gaming die space. (4K AA..?)

    Honestly, for games like Battlefield to get better, do they really NEED tensor cores (from either AMD or nVidia)? Or do they need more hardware ROPs?
    Ergo: most players don't need a more visual Battlefield experience, they want a better one (i.e. 140 frames @ 4K), with better physics and ballistics, more players, etc. Reflections in a water puddle are for strokers, not legit gamers.


    So I do not know how well Nvidia can market their proprietary technology when there are open standards such as Microsoft's DXR, and when a 4K monitor doesn't really need anti-aliasing, it needs frames per second. Ask most gamers: AA at 4K is not all that important. Panel speed, colour, and dynamic range matter more to their games than AA. I do not think these chips were made specifically with gaming in mind, and I think the public is going to react by waiting for AMD's 7nm cards coming in late 2018. For the die space, AMD's cards might have more of what gamers want at 4K.

    Given the latest reviews, many among the public (and here) do not find Turing's DLSS all that inspiring for gamers. It is just another form of AA, but does DLSS give us the best, highest-quality AA experience, or just another, cheaper way of doing things?

    Proprietary AA methods are not the way to move forward.
     
    Clukos and Shifty Geezer like this.
  18. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,087
    Likes Received:
    3,159
    Location:
    Pennsylvania
    From what we've seen so far it's a hit-and-miss TAA equivalent with very little rendering cost, which isn't very inspiring.
     
    Silent_Buddha and BRiT like this.
  19. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    31
    Likes Received:
    11
    I realize that it appears none of the current games in development which are implementing ray tracing are using the tensor cores for denoising, but isn't that one of the primary functions of those cores? Considering the resolution and shader performance hit when using ray tracing, does it not make sense to have those cores, which are suited to the job, not only denoising the ray-traced effects but also doing high-quality image reconstruction from lower base resolutions at the same time? I'm sure the possibilities with the tensor cores and the neural networks they accelerate are only beginning to be explored, and I'm sure Nvidia knows stuff we don't at this point.

    As for DLSS specifically, it seems to me that in cases where the TAA implementation in a given game isn't of the highest quality (which is often), DLSS shows its worth in visual quality alone, besides the obvious performance improvement. In FF15, the TAA solution was mediocre at best: the hair was often stippled and jagged looking, a problem with their implementation due to transparencies, such as the windows of the Regalia, which had an awful ghosting effect that is largely corrected with the DLSS implementation. On the other hand, the Infiltrator demo, which is really post-process heavy and uses a high-quality TAA solution, still looks a bit better; the DLSS side had more shimmering and a bit of loss in pixel detail, but it's still pretty impressive given the sometimes 30fps delta between the two.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,801
    Likes Received:
    2,615
    Tensor cores help with RTX by doing denoising; that was always their primary function, even in Volta.
     