Neural SuperSampling, Facebook Researchers [2020]

Discussion in 'Rendering Technology and APIs' started by Remij, Jul 2, 2020.

  1. Remij

    Regular Newcomer

    Joined:
    May 3, 2008
    Messages:
    250
    Likes Received:
    404
    Facebook developing an AI-assisted supersampling technique for real-time rendered content

    https://www.roadtovr.com/facebook-d...ring-performance-high-resolution-vr-headsets/

Comparison images:
[four low-res vs. upscaled vs. reference comparison images; see the linked article]

    PDF link to paper describing how it works:
    https://research.fb.com/wp-content/uploads/2020/06/Neural-Supersampling-for-Real-time-Rendering.pdf

Direct link to a video of the technique in action:
    https://research.fb.com/wp-content/...ersampling-for-RealTime-Rendering_vid.mp4.zip
     
  2. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    18,883
    Likes Received:
    21,271
    Does it require access to your entire contact list as well as personal messages in order to function?
     
  3. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    16,803
    Likes Received:
    4,102
They are not photorealistic; I look nothing like that ;)
     
  4. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    2,822
    Likes Received:
    2,002
    Location:
    Earth
This sounds seriously good. I hope they can get it to work outside the lab too.

    https://uploadvr.com/facebook-neural-supersampling/
     
  5. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,573
    Likes Received:
    2,927
    Location:
    Guess...
    Karamazov likes this.
  6. cheapchips

    Veteran Newcomer

    Joined:
    Feb 23, 2013
    Messages:
    1,814
    Likes Received:
    1,936
    digitalwanderer likes this.
  7. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    Neural SuperSampling Is a Hardware Agnostic DLSS Alternative by Facebook

    Includes link to the Facebook paper.
    https://wccftech.com/neural-supersampling-is-a-hardware-agnostic-dlss-alternative-by-facebook/

I know all this stuff is early research, but I hate that no hardware specs are given.
So, something like: currently able to achieve real time on an AMD RX 580 using x amount of resources.
It's nice that it's able to do a 16× upscale to 2160p in real time, but is that with an RTX 2080 Ti?
     
    #7 Jay, Jul 3, 2020
    Last edited: Jul 3, 2020
    digitalwanderer likes this.
  8. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    536
    Likes Received:
    810
It's given in the paper; here it is:
"After training, the network models are optimized with Nvidia TensorRT [2018] at 16-bit precision and tested on a Titan V GPU"
1920×1080 - 24.42 ms
1920×1080 "fast" version - 18.25 ms
That's an order of magnitude slower than DLSS 2.0 on a 2080 Ti (DLSS 2.0 takes ~1.5 ms for 1080p-to-4K temporal upscaling on an RTX 2080 Ti, which is itself slower than a Titan V at FP16 inference).
They are obviously comparing their method with DLSS 1.0, but if you take a look at the provided video you will notice that temporal stability sucks ass, which is expected for 16x scaling.
Not sure about generalization either; they include their test scenes in the training data set and say "although including the test scenes into training datasets seems to always improve the quality".
Of course it will improve the quality due to overfitting, but it will also likely worsen generalization.
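For scale, here's the arithmetic behind that order-of-magnitude claim; the 90 Hz VR frame budget is my own addition for context:

```python
# Back-of-the-envelope check of the timings quoted above.
fb_ms = 24.42        # Facebook network, 1080p output, Titan V (from the paper)
fb_fast_ms = 18.25   # "fast" variant (from the paper)
dlss_ms = 1.5        # rough DLSS 2.0 cost, 1080p -> 4K, RTX 2080 Ti

print(f"slowdown vs DLSS 2.0: {fb_ms / dlss_ms:.1f}x")      # ~16.3x
print(f"fast variant:         {fb_fast_ms / dlss_ms:.1f}x")  # ~12.2x

# VR context: a 90 Hz headset leaves ~11.1 ms per frame in total,
# so 18-24 ms for upscaling alone blows the whole budget.
print(f"90 Hz frame budget: {1000 / 90:.1f} ms")
```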
     
    Dictator, orangpelupa, Remij and 6 others like this.
  9. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,038
    Location:
    The North
Not quite apples-to-apples comparisons here.

Facebook is doing a 4x4 upscale here, vs Nvidia's 2x2.

Their results are very good considering how low a resolution they are starting from; it's remarkable that it can get something so close to the reference.

2x2 is reasonable in terms of what you can achieve by inference; in simplistic terms, you're only asking it to fill in three of every four pixels.

4x4 is a great deal more reliant on the network to guess what the results should be; simplistically, you're asking it to infer 15 of every 16 pixels.

What Facebook accomplished here is pretty massive; most networks wouldn't compare, let alone at the speed at which this renders.
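Putting rough numbers on that (my own arithmetic, not from the paper):

```python
# Fraction of output pixels the network must invent at NxN upscaling:
# only 1 of every N*N output pixels comes from an actually rendered sample.
def inferred_fraction(scale_per_axis: int) -> float:
    total = scale_per_axis ** 2
    return (total - 1) / total

for scale, label in [(2, "2x2 (DLSS-style, 4x the pixels)"),
                     (4, "4x4 (Facebook, 16x the pixels)")]:
    print(f"{label}: {inferred_fraction(scale):.1%} of pixels inferred")
# 2x2 -> 75.0% inferred; 4x4 -> 93.8% inferred
```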


That being said, the video in this post is total witchcraft: https://forum.beyond3d.com/posts/2136620/

Haha. Unfortunately I have no clue about the render time.
     
    #9 iroboto, Jul 3, 2020
    Last edited: Jul 3, 2020
  10. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
Thanks a lot.
I had a quick scan of the blog and downloaded the paper, but I haven't looked at it yet.
Also, I appreciate your commentary on the performance.
     
  11. Remij

    Regular Newcomer

    Joined:
    May 3, 2008
    Messages:
    250
    Likes Received:
    404
Yea, this tech is pretty amazing. I wonder how perceptible the difference between their output and the reference is when viewed through a headset?

All of these NN-based upscaling techniques are really breaking new ground. It's crazy to think that they're just getting started, too.
     
    digitalwanderer likes this.
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,038
    Location:
    The North
Indeed, the reason you're seeing such fast improvements is that computer vision using neural networks is considered solved. Getting it to run as fast as possible, as close to real time as possible, with the smallest footprint and the cheapest re-training costs is the new game here.

I believe that NLP is considered solved as well, but it's going to be a really long time before it gets anywhere near real-time speed on a single device.
     
  13. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,523
    Likes Received:
    4,168
This is strange; neither NVIDIA nor the developer has announced that this game supports any kind of DLSS.
     
    digitalwanderer likes this.
  14. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,038
    Location:
    The North
    Lol. No clue what that is LOL. Never heard of it either. I don’t know if it’s one of the screenshots in the comparison.
     
    digitalwanderer likes this.
  15. dorf

    Newcomer

    Joined:
    Dec 21, 2019
    Messages:
    108
    Likes Received:
    342
    Malo likes this.
  16. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    536
    Likes Received:
    810
I don't think the low input resolution matters.
For the accumulation algo, it doesn't matter whether you accumulate samples from 4x lower res or 2x, etc.; it doesn't hit performance at all.
The network is applied at the very end of the pipeline and dominates the execution time. Hence, what matters is the inference resolution, and I am pretty sure the 1080p-to-4K DLSS 2.0 inference resolution is not lower than the 270p-to-1080p one in the Facebook research paper.
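Simple pixel counts for the two output resolutions, assuming inference cost scales roughly with output pixel count (my arithmetic, not from either paper):

```python
# Output-resolution pixel counts for the two upscaling paths discussed above.
dlss_out = 3840 * 2160   # DLSS 2.0: 1080p -> 4K, output at 4K
fb_out = 1920 * 1080     # Facebook: 270p -> 1080p, output at 1080p

print(f"DLSS 2.0 output: {dlss_out / 1e6:.2f} MP")           # 8.29 MP
print(f"Facebook output: {fb_out / 1e6:.2f} MP")             # 2.07 MP
print(f"DLSS works on {dlss_out / fb_out:.0f}x the pixels")  # 4x
```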

I've read the paper further and noticed that there is no temporal loss function, which also explains the bad temporal coherency in the example video.
Once they add it, the image will become blurrier, but likely much more stable, without the wobbling effect.
Also, it seems the network has learned some directional blur and uses it to make edges smoother, but this also adds to the wobbling effect.
I wonder whether a temporal loss function will force the network to forget the directional-blur strategy, which should make the static image rougher.
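A temporal loss of this kind typically penalizes the difference between the current output and the previous output warped by the motion vectors. A minimal PyTorch sketch of the general idea (my own illustration, not code from the paper):

```python
import torch
import torch.nn.functional as F

def temporal_loss(curr_out, prev_out, flow):
    """Penalize flicker: compare frame t against the motion-warped frame t-1.

    curr_out, prev_out: (N, C, H, W) network outputs for frames t and t-1.
    flow: (N, 2, H, W) motion vectors in pixels, mapping frame t back to t-1.
    """
    n, _, h, w = curr_out.shape
    # Base sampling grid in the normalized [-1, 1] coords grid_sample expects.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(curr_out)
    # Convert pixel-space flow into normalized offsets and displace the grid.
    offset = torch.stack((flow[:, 0] * 2 / (w - 1),
                          flow[:, 1] * 2 / (h - 1)), dim=-1)
    warped_prev = F.grid_sample(prev_out, base + offset, align_corners=True)
    # A real implementation would also mask disoccluded pixels here so the
    # loss doesn't punish legitimate frame-to-frame changes.
    return F.l1_loss(curr_out, warped_prev)
```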
     
    DavidGraham likes this.