Nvidia DLSS antialiasing discussion *spawn*

Discussion in 'Architecture and Products' started by DavidGraham, Sep 19, 2018.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    Which is a prevailing theory among lots of us: these cores were thrown in for non-gaming reasons and nVidia are looking for reasons to use them. The latest, greatest upscaling could have run on RTX cards without Tensor cores, with that silicon spent on more compute instead, which would be better for upscaling as it's fast enough. ;)

    Tensor cores got a bit of a slap-back in justification from this Control algorithm. Of course, if their AI Research Model can be run efficiently on tensor cores, it might still prove itself. Although in the comparison video, one feels just rendering particles in a separate pass on top would be the best of all worlds and the most efficient use of silicon.

    That's not what's described. The DLSS process runs on the Tensor cores and is not the 'algorithm' being talked of. DLSS as an ML technique is slow. nVidia found the ML training threw up a new way to reconstruct, but it's too slow to run in realtime as an ML solution. However, the engineers managed to take that new-found knowledge and create a new reconstruction algorithm running on compute*.

    The hope is to improve the NN technique so it can be run directly in game; this is what they term the AI Research Model. One of the reasons it's confusing to follow what's going on is that nVidia are calling the image-processing algorithm 'DLSS' alongside the NN-based DLSS. They showcase DLSS videos of Control that are running an image-processing algorithm rather than an NN, as an example of what their NN-based DLSS will probably be doing in the future, they hope.

    * Perhaps it's possible to run image processing on Tensor cores, but I've never heard of them used like that.
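
    (As an aside, the usual way image processing does land on tensor cores is by recasting a convolution as a matrix multiply, the im2col trick, and letting the MMA hardware do the heavy lifting. A rough CUDA sketch of just the layout step; the sizes and names here are purely illustrative, not anything from nVidia's code:)

    ```cuda
    // Illustrative sketch: rearrange image pixels so a 3x3 convolution becomes
    // a matrix multiply (im2col). The multiply that follows is the part tensor
    // cores could accelerate; this kernel only builds the column matrix.
    __global__ void im2col_3x3(const float* image, float* columns,
                               int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int out_col = y * width + x;                 // one output pixel per column
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int sx = min(max(x + dx, 0), width - 1);   // clamp to image border
                int sy = min(max(y + dy, 0), height - 1);
                int out_row = (dy + 1) * 3 + (dx + 1);     // 9 rows: one per filter tap
                // 'columns' is 9 x (width*height); a 1x9 filter row times this
                // matrix reproduces the 3x3 convolution, and that product is the
                // kind of GEMM tensor cores run (once converted to FP16).
                columns[out_row * width * height + out_col] = image[sy * width + sx];
            }
    }
    ```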
     
  2. Remij

    Newcomer

    Joined:
    May 3, 2008
    Messages:
    31
    Likes Received:
    11
    Ahh ok. I understand what you're saying now. Your last paragraph suddenly clicked for me. My bad. I see that I was wrong now.

    So, given that they hope to get their NN-based DLSS implementation close to the quality of their AI Research Model results... how much performance could that realistically free up over, let's say, the "image-processing algorithm DLSS" currently in Control? Would it be enough that it would even be worth the effort?
     
  3. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    It probably won't free up any meaningful performance, but it'll improve the quality of the results over the image-processing method. The question is more whether the inclusion of Tensor cores is better than using compute in their place, or whether an all-out compute solution would yield better overall performance (at somewhat reduced quality).

    I've just Googled this, though, which suggests RTX RT and ML add very little overhead (10%).
     
  4. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    459
    Likes Received:
    555
    It's hard to discuss because it's hard to differentiate compute vs. tensor. But what changed my view on this is the GTX 1660, which added a shrunk-down version of the tensor cores so it can at least do native fp16.
    Does this mean there is no need to differentiate at all?
    Because fp16 is useful in games, the question is not 'are tensor cores worth it?' but 'which features of tensor cores do we really want, and how can we access them?'.

    Usually people build an algorithm first, and if it's useful, hardware acceleration may follow. Seen from the perspective of gamedev, the opposite happened here: GPUs are mainly sold to gamers, and as yet, games do not use machine learning.

    So my opinion: drop int4+8, maybe drop matrix multiplies, keep fp16 + int16, expose it to gfx APIs, still call it AI / Tensor, and everybody is happy.
    I'd like to hear opinions from people who know about AI: do they see upcoming applications that justify more than that?
     
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,787
    Likes Received:
    2,588
    Yes. Also, the Tensor Cores are responsible for packed double-rate FP16 performance.
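
    (For the curious, "packed double-rate FP16" at the CUDA level just means two half-precision values share one 32-bit register and the half2 intrinsics process both per instruction; whether that path runs through the tensor cores, as claimed above for Turing RTX, or through whatever cut-down units the GTX 1660 uses instead, is a hardware detail. A minimal sketch, with made-up kernel and variable names:)

    ```cuda
    // Minimal sketch of packed FP16: a SAXPY-style kernel where each thread does
    // two FP16 fused multiply-adds with a single half2 instruction.
    #include <cuda_fp16.h>

    __global__ void saxpy_half2(int n, half2 a, const half2* x, half2* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                          // n counts half2 pairs, not scalars
            y[i] = __hfma2(a, x[i], y[i]);  // two FP16 FMAs in one instruction
    }
    ```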
     
  6. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,507
    Likes Received:
    8,711
    Location:
    Cleveland
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    Yeah, Tensor cores aren't required for RPM FP16; that's just how nVidia are using them.
     
  8. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,787
    Likes Received:
    2,588
    Only in GP100, not in GP104 and GP102.
     
  9. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,807
    Likes Received:
    473
    For the moment, Tensor operations seem really limited: everything which isn't matrix multiply-and-accumulate seems to be done with generic compute (even the sigmoid function). There seems to be no way to get from tensor to compute except through shared memory, at least for us non-NVIDIA plebs.

    It's going to make image processing with tensor cores a bit of a pain: a lot of extra pipelining and buffering.
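
    (Roughly what the exposed programming model looks like today via CUDA's WMMA API: a warp-wide 16x16x16 multiply-accumulate on "fragments", with everything else, even a sigmoid, done as ordinary compute on the result. A minimal sketch, not production code:)

    ```cuda
    // Minimal WMMA sketch: the only tensor-core operation exposed is a warp-wide
    // 16x16x16 matrix multiply-accumulate; activations and data movement are
    // plain compute, and results reach normal code via registers/shared/global memory.
    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    __global__ void tiny_mma(const half* A, const half* B, float* C)
    {
        // Fragments describing one 16x16x16 tile.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);
        wmma::load_matrix_sync(a, A, 16);   // leading dimension = 16
        wmma::load_matrix_sync(b, B, 16);
        wmma::mma_sync(acc, a, b, acc);     // the one tensor-core instruction: D = A*B + C

        // Anything beyond the MMA (e.g. a sigmoid) is ordinary compute on the result.
        for (int i = 0; i < acc.num_elements; ++i)
            acc.x[i] = 1.0f / (1.0f + __expf(-acc.x[i]));

        wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
    }
    ```

    (Launching it as `tiny_mma<<<1, 32>>>(dA, dB, dC);` runs one warp over one 16x16 tile; real kernels tile larger matrices and stage data through shared memory, which is exactly the extra pipelining and buffering mentioned above.)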
     
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    Given the small footprint of the Tensor cores, I imagine they're very simple and designed for their ML purpose as efficiently as possible. Like fixed-function units versus pixel shaders versus unified compute shaders, Tensor is starting at level 1: very low programmability.
     
  11. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,807
    Likes Received:
    473
    I wonder how much they really gain by reusing parts of the SM other than shared memory. If they made the tensor cores completely separate apart from that, even their old DLSS should have only a small impact.

    PS. ignoring power consumption of course.
     
    pharma and milk like this.
  12. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Between Control and FreeStyle Sharpening it seems Nvidia is trying hard to convince us that DLSS is a waste of time :)

    https://www.techspot.com/review/1903-dlss-vs-freestyle-vs-ris/

     
  13. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,277
    Likes Received:
    3,726
    I'm curious: when they're testing these sharpening filters in games like Division 2 and Battlefield V, are they making sure the in-game sharpening is completely disabled first? Both of those games tend to over-sharpen by default already.
     
    Silent_Buddha, BRiT and milk like this.
  14. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    459
    Likes Received:
    555
    I have just tried RIS in Control, upscaled from 1080p to 1440p, and I don't like it. I had to quit the game after 10 seconds to disable it.
    Good: yep, it's sharp. I can see detailed wood texture bumps on concrete walls really very sharply - hard to believe the game is upscaled.
    Bad: TAA flickering is exaggerated. It's acceptable without RIS, but now it's too distracting - I have to turn it off.
    (Also, the simple aliased debug visuals I use for programming become worse - staircasing is exaggerated. And menu fonts / logos in games lose their nice smooth appearance.)

    I'm one of those who hate the artificial sharpness of realtime CG. I think making stuff even sharper is totally wrong, so even without temporal artifacts I would not want this. But it surely has potential for those who think differently.
    I have worked with 4K footage from very expensive cameras at work - in the best case they capture about the high-frequency detail games show at 1080p. Sharpening is widely used here too, but it results in temporal artifacts unpleasant to watch in motion (crawling pixels).
    If we want smooth motion, details have to swim between pixels softly, so the contrast between individual pixels has to be low to hide them. In other words: blurry. You can only show frequencies at half resolution smoothly; frequency content matching full resolution crawls.

    That said, I see a big opportunity for upscaling to get even better images than native resolution, if we can improve TAA tech further and get rid of the damn flickering.
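
    (For reference, the reason sharpening and TAA fight each other falls out of the math of a basic unsharp mask: the "detail" term that gets boosted is exactly the per-pixel contrast, so any residual temporal shimmer or staircasing gets boosted along with texture detail. A generic CUDA sketch of the idea, not AMD's actual RIS/CAS shader; 'strength' and the 3x3 blur are illustrative choices:)

    ```cuda
    // Generic unsharp mask on a luma buffer: out = c + strength * (c - blur).
    // The boosted term is per-pixel contrast, so frame-to-frame noise in it flickers harder.
    __global__ void unsharp_mask(const float* luma, float* out,
                                 int width, int height, float strength)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        // 3x3 box blur around the pixel (clamped at the image border).
        float blur = 0.0f;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int sx = min(max(x + dx, 0), width - 1);
                int sy = min(max(y + dy, 0), height - 1);
                blur += luma[sy * width + sx];
            }
        blur /= 9.0f;

        float center = luma[y * width + x];
        out[y * width + x] = center + strength * (center - blur);
    }
    ```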
     
    pharma, iroboto and BRiT like this.
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    Which perhaps ties in with why gamers would often say a game with a vaseline lens 'looks better': softness is closer to real captured footage. It seems there are two mindsets when dealing with graphics quality: one preferring clinical sharpness and detail resolve, which is somewhat more quantifiable through metrics, and the other just preferring what looks good to their eyes, which often favours deliberately downgrading visual clarity through motion blur, chromatic aberration, DOF and even intended softness.
     
    #595 Shifty Geezer, Oct 16, 2019 at 9:00 AM
    Last edited: Oct 16, 2019 at 11:26 AM
    chris1515, pharma, entity279 and 2 others like this.
  16. JoeJ

    Regular Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    459
    Likes Received:
    555
    Yeah, exactly. It would be worth gathering some statistics across people to figure out the ratio. I think most prefer sharpness and accept some crawling for it, but I'm not sure.
     
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    Might well vary by platform. My feeling is sharpness is more important to PC gamers, and movie-like is preferred by console gamers, because the two have grown up playing different games with different requirements. On PC, in twitch shooters and fast RTS/MOBAs played up close on a monitor, clarity is essential for fast accuracy, whereas on console, with single-player adventures played on the living-room TV, emulating what's seen on TV is more fitting. I think. ;)
     
    milk, JoeJ, chris1515 and 2 others like this.
  18. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,152
    Likes Received:
    5,086
    I think it comes down to how each person's visual system works. People like me, who have heightened peripheral vision and thus tend to look around constantly, have a difficult time getting over the visual discontinuity that things like DoF and motion blur represent in games or media, as it doesn't reflect how we view the world.

    People with less sensitivity to peripheral motion are more likely to be comfortable focusing on single things at a time, versus looking at multiple things each second. I'd imagine that for people like this, motion blur and DoF aren't as distracting, as they aren't actively trying to resolve the blurred parts of the image multiple times a second.

    I think that's why people in the first camp prefer PC games: you can disable the things that don't accurately reflect how your visual system works in real life. Until there is fast and responsive eye tracking that can reflect how people like me look at the world, DoF and motion blur range from heavily distracting to eyestrain-inducing to potentially even headache-inducing, depending on how prevalent they are.

    That's a big difference from the focused blurring that you get from good AA solutions, however. That's where smart sharpening needs to happen. Things that need some blurring (like high-contrast edges) should get some blurring to avoid stair-stepping, while textures should get some sharpening to increase detail and reduce obvious blurring, IMO.

    Too much "dumb" sharpening is just as bad as too much blurring.

    Regards,
    SB
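
    (One illustrative way to do the "smart sharpening" described above: estimate local contrast and fade the sharpening out on strong edges, so antialiased silhouettes keep their gradient while texture detail still gets boosted. A hypothetical CUDA sketch, not any shipping filter; 'strength' and 'edge_knee' are made-up tuning knobs:)

    ```cuda
    // Edge-aware sharpen sketch: boost low-contrast texture detail, leave
    // high-contrast (already antialiased) edges mostly untouched.
    __global__ void edge_aware_sharpen(const float* luma, float* out,
                                       int width, int height,
                                       float strength, float edge_knee)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int xm = max(x - 1, 0), xp = min(x + 1, width - 1);
        int ym = max(y - 1, 0), yp = min(y + 1, height - 1);

        float c = luma[y * width + x];
        float l = luma[y * width + xm], r = luma[y * width + xp];
        float u = luma[ym * width + x], d = luma[yp * width + x];

        float blur     = 0.25f * (l + r + u + d);     // cheap cross blur
        float detail   = c - blur;                    // high-frequency term
        float contrast = fmaxf(fmaxf(fabsf(c - l), fabsf(c - r)),
                               fmaxf(fabsf(c - u), fabsf(c - d)));

        // Edge mask: 1 on flat/texture areas, falling toward 0 on strong edges,
        // so antialiased silhouettes don't get re-aliased by the sharpen.
        float edge_mask = 1.0f - fminf(contrast / edge_knee, 1.0f);

        out[y * width + x] = c + strength * edge_mask * detail;
    }
    ```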
     
    #598 Silent_Buddha, Oct 16, 2019 at 6:00 PM
    Last edited: Oct 16, 2019 at 8:27 PM
    CeeGee and JoeJ like this.
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,733
    Likes Received:
    11,206
    Location:
    Under my bridge
    How do you cope with TV and movies then? These are blurred all over the shop.
     
    milk likes this.
  20. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,152
    Likes Received:
    5,086
    It's bothersome, but far less so than in games. The big difference is that I'm in control of the game, which means I need to constantly be making decisions about where to go, what to do, what to respond to (enemies, threats, etc.) and all that kind of stuff. Just like in the real world, I need to be aware of everything, as almost anything could be a potential hazard.

    In a movie I'm just an observer and nothing else. I don't need to look around to decide where to go. I don't need to wonder where enemies are because the movie is going to make it obvious where they are. I don't need to decide what to do based on the circumstances.

    Basically, when watching a movie or show almost everything just turns off, especially the parts of my brain dealing with control and observation.

    That said, there are times when it's still bad in movies for me, especially the ones that try to do a POV style, like Hardcore Henry, for example. Basically any time it's overdone, or if there's too much DOF in a slow scenic pan of a landscape. Slow scenic pans are especially bad if there's much blurring, as a pan is telling me that the director wants me to look around the scene.

    [edit] Also, just thought about it, but I was probably also conditioned at a young age to expect movies and TV to be naturally EXTREMELY blurred compared to reality. When I watch shows from the '70s or '80s on VHS, for example... whooo boy, is that some blurry stuff.

    Regards,
    SB
     