Next Generation AA using deep learning

Discussion in 'Rendering Technology and APIs' started by CNCAddict, Nov 15, 2016.

  1. CNCAddict

    Regular

    Joined:
    Aug 14, 2005
    Messages:
    290
    Likes Received:
    2
  2. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,822
    Likes Received:
    246
    Location:
    Taiwan
    It's an interesting idea and I think it could work very well for some specific domain of images (e.g. anime).
It might even evolve into some kind of better video compression (as you can make predictions much more accurately from a smaller amount of data).
     
  3. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    389
    Likes Received:
    337
I have thought many times about using DNNs specifically for AA in games. Learned filters might end up being something like SMAA's features baked into textures, and if we are lucky enough, it should be possible to teach another network to do temporally stable inter-frame AA even without motion vectors, as pure post-processing, with the possibility of injecting it into arbitrary apps.
     
    sebbbi likes this.
  4. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,293
    Location:
    Helsinki, Finland
    Unfortunately without motion vectors the network would need to scan a huge area (fast movement = up to 1/4 of the screen = 1000 pixels at 4K). With motion vectors you could simply use a 4x4 neighborhood for the history (similar to bicubic history sampling in temporal AA).

    I have also thought about replacing hand tweaked temporal AA with neural network based version. Feed exactly the same data to the network as you feed to your TAA filter: 3x3 current frame neighborhood (color.rgb + z), 4x4 previous frame neighborhood (color.rgb + z) around the non-pixel-centered motion vector endpoint, fractions of motion vector (to interpolate previous frame neighborhood correctly). That's only 100 input values. The network needs to generate only 3 output values (rgb). As long as you could fit the network inside the LDS (32 KB per thread block), it should be very fast.
     
    Alucardx23, chris1515, Pixel and 2 others like this.
  5. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,314
    Likes Received:
    140
    Location:
    On the path to wisdom
That's if you want temporal AA, including motion blur. But if you just want temporally stable AA, e.g. no slowly crawling/swimming edges, fast-moving objects should be less relevant.
     
  6. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    869
    Likes Received:
    277
NNs often don't deal well with outliers. It's a major problem to find the right error metric. Probabilistic or MSE-based metrics won't do, and shape-based metrics are super complicated. It's easier to construct a context-driven adaptive predictor by hand, observing the statistics under each context and tuning for the outliers. SMAA is a bit like this, but super simplistic, with its handful of 'contexts' and only one available predictor.
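As a toy illustration of the error-metric problem (a made-up example, not from the post): MSE can score a blurred edge as *better* than a sharp edge that is merely off by one pixel, even though for AA purposes the blur is usually the worse artifact:

```python
# Toy 1D "edge" signals; values are pixel intensities.
true_edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
shifted   = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   # sharp edge, one pixel off
blurred   = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]   # smeared-out edge

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# The shifted edge pays one full 1.0 error on a single pixel, while the
# blur spreads many small errors -- so MSE prefers the blur.
mse_shifted = mse(true_edge, shifted)   # ~0.167
mse_blurred = mse(true_edge, blurred)   # ~0.070
```

A network trained to minimize MSE is therefore pulled toward the blurred reconstruction, which is exactly the kind of outlier-averaging behavior being described.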
     
    OlegSH likes this.