AMD FSR antialiasing discussion

Discussion in 'Architecture and Products' started by Deleted member 90741, May 20, 2021.

  1. Say hello to AMD FSR

    20210150669 : GAMING SUPER RESOLUTION

    https://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=/netahtml/PTO/srchnum.html&r=1&f=G&l=50&s1="20210150669".PGNR.&OS=DN/20210150669&RS=DN/20210150669
     
    Tags:
  2. Gubbi

    Gubbi Veteran

    Since it's upscaling, it should be called subresolution.

    Cheers
     
    PSman1700 likes this.
  3. techuse

    techuse Veteran

  4. chris1515

    chris1515 Legend

    They talk about neural-network super resolution; they use AI like DLSS does, if I understand correctly

     
    Lightman, Krteq, BRiT and 1 other person like this.
  5. techuse

    techuse Veteran

    [0008] and [0009] made it unclear to me since they specifically talk about the problems with deep learning and other generalized ML approaches. They also mention a wholly learned environment. Curious to know if/how this differs from DLSS in practice. AMD GPUs don't have nearly the matrix math capability of Nvidia GPUs. But I also don't know if that is the bottleneck for DLSS performance.
     
    PSman1700 likes this.
  6. chris1515

    chris1515 Legend

    But they speak of training afterwards. The problem with the current methods is that they do not take the non-linear information into account.

    EDIT: This is in the RDNA 3 speculation thread, but AMD said it will be available for RDNA 2 GPUs, Xbox Series and PS5. Maybe it will work with RDNA 1 GPUs too?
     
  7. PSman1700

    PSman1700 Legend

    Too bad consoles missed this one.
     
  8. chris1515

    chris1515 Legend

    w0lfram likes this.
    [Attached patent figures: upload_2021-5-20_11-9-52.png, upload_2021-5-20_11-10-13.png]

    They downsample the image at 302 (1/2 image resolution) and run two independent networks, one linear and one non-linear, which afterwards feed into a combined network.
    They could possibly configure/tweak the depths of the independent networks, and avoid running all activation functions across the layers as they would have to in a single combined network, saving some compute/time.

    Downsampling from the current resolution most likely also reduces the data set that is fed to the network.
    All in all, just another way to model an ML problem. There is basically no more information.
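A rough NumPy sketch of the two-branch structure described above; the layer widths, the use of 1x1 convolutions and the pixel-shuffle output are illustrative assumptions on my part, not details from the patent:

```python
import numpy as np

def conv1x1(x, w):
    # 1x1 "convolution": (H, W, Cin) @ (Cin, Cout) -> (H, W, Cout)
    return x @ w

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
H, W, C = 8, 8, 3
x = rng.standard_normal((H, W, C))        # downsampled input (302)

# Linear branch (304): no activation functions between layers
w_lin = rng.standard_normal((C, 16))
lin = conv1x1(x, w_lin)

# Non-linear branch (306): activations between layers
w_nl1 = rng.standard_normal((C, 16))
w_nl2 = rng.standard_normal((16, 16))
nl = relu(conv1x1(relu(conv1x1(x, w_nl1)), w_nl2))

# Combined network: fuse both branches, then pixel-shuffle to 2x resolution
w_out = rng.standard_normal((32, 4 * C))
fused = conv1x1(np.concatenate([lin, nl], axis=-1), w_out)
out = (fused.reshape(H, W, 2, 2, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(2 * H, 2 * W, C))
print(out.shape)  # (16, 16, 3)
```

Keeping the linear branch free of activations keeps it cheap, which matches the idea above of tweaking each branch's depth independently.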
     
    milk, w0lfram, Lightman and 2 others like this.
  10. One thing that strikes me is that you could add an RNN into the mix, in addition to the linear (304) and CNN (306) branches, provided the GPU has enough memory and horsepower; then you could get some correction from temporal data points as well.
    For such an RNN only a short-term memory is needed, something like a Gated Recurrent Unit.
    Probably for RDNA3 and beyond.
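A minimal Gated Recurrent Unit cell in NumPy, just to illustrate how the previous activation feeds back into the next step; the dimensions and random weights are arbitrary, and none of this comes from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    # The previous hidden state h carries short-term temporal
    # context into the current step.
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(1)
d_in, d_h = 4, 8
params = [rng.standard_normal(s)
          for s in [(d_in, d_h), (d_h, d_h)] * 3]

h = np.zeros(d_h)
for _ in range(3):            # three "frames" of features
    x = rng.standard_normal(d_in)
    h = gru_step(h, x, *params)
print(h.shape)  # (8,)
```

The state is only one vector per pixel/feature, which is why a GRU counts as "short-term memory" rather than a full history buffer.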

    One interesting tidbit from the patent

     
    T2098, orangpelupa, Krteq and 2 others like this.
  11. OlegSH

    OlegSH Regular

    RNNs are heavy and store state in weights; there is no need to store state in weights if you can feed it explicitly, i.e. feed two consecutive frames to the CNN at once.
    It seems the thing described in the patent is just a spatial upscaler, so there must be a temporal part as well, otherwise it wouldn't be able to converge to a higher resolution like DLSS does.
     
  12. Feeding the same frame again means performing the computation again, and that is not the principle of an RNN.
    In an RNN, the result of the past activation is fed back into the next activation calculation.
     
    Last edited by a moderator: May 20, 2021
  13. Xmas

    Xmas Porous Veteran Subscriber

    I don't think this is a useful way of thinking about RNNs; after all, there is generally nothing preventing you from passing past inputs to the network. The crucial point is that the RNN state is a learned, distilled representation of significant features encountered in the recent past, and you don't want to recalculate that from scratch every frame.
     
  14. OlegSH

    OlegSH Regular

    RNNs learn the probability of the next event from previous ones via the hidden state. There is no need for this in temporal image processing, because two consecutive frames are explicitly aligned with motion vectors, so you would gain nothing from an RNN.
     
  15. Image processing and NNs are different things.
    I am speaking about the model being able to predict the current image based on the previous image.
    RNNs are being used for video reconstruction outside of gaming:
    https://ieeexplore.ieee.org/document/9098327
    Whether it is feasible, I don't know. But it is definitely something to consider while modelling.
     
  16. OlegSH

    OlegSH Regular

    Of course they are, nobody argues about that.

    Why would you want an RNN for something like this?
    You can simply store the previously upscaled high-res image, warp it via motion vectors and then combine it with the current low-res image; that's how TAAU works.

    That's irrelevant for gaming. In video you don't have precise motion vectors, just a mere approximation (optical flow); on the other hand, there are no such time constraints in video processing as in gaming, so you can brute-force some problems by throwing more math at them.
    Other than this, RNNs would not help in recovering additional detail. It's the camera jittering that adds individual details to every frame in games; it has nothing to do with RNNs.
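The warp-and-blend history scheme described above can be sketched in NumPy; the nearest-neighbour sampling, the fixed 2x scale factor and the blend weight are simplifying assumptions, not how any shipping TAAU actually samples:

```python
import numpy as np

def taau_step(history, low_res, motion, alpha=0.1):
    # history: (H, W) previous high-res output
    # low_res: (H//2, W//2) current frame
    # motion:  (H, W, 2) per-pixel motion vectors, in high-res pixels
    H, W = history.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Reproject: fetch the history pixel each output pixel came from
    py = np.clip(ys - motion[..., 0].round().astype(int), 0, H - 1)
    px = np.clip(xs - motion[..., 1].round().astype(int), 0, W - 1)
    warped = history[py, px]
    # Upsample the current frame (nearest neighbour) and blend it in
    current = low_res.repeat(2, axis=0).repeat(2, axis=1)
    return (1 - alpha) * warped + alpha * current

rng = np.random.default_rng(2)
hist = rng.random((8, 8))
low = rng.random((4, 4))
mv = np.zeros((8, 8, 2))      # static scene: motion vectors are zero
out = taau_step(hist, low, mv)
print(out.shape)  # (8, 8)
```

With camera jitter, each low-res frame samples slightly different sub-pixel positions, which is what lets the accumulated history converge toward higher resolution over time.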
     
    w0lfram, manux, PSman1700 and 2 others like this.
  17. Why is this in the RDNA3 thread though?

    Seems to me that this is FSR which should be available for every DX12 architecture. The patent only mentions the presence of compute units made up of parallel SIMD units. They don't mention tensor cores, matrix multiply units or anything of the like.

    [Attached: simd.png (patent figure showing compute units built from parallel SIMD units)]




    It also seems to be missing any kind of temporal data.
     
  18. DegustatoR

    DegustatoR Veteran

    The patent is from 2019 though. I have doubts about it being relevant to FSR.
     
    PSman1700 likes this.
  19. PSman1700

    PSman1700 Legend

  20. Bondrewd

    Bondrewd Veteran

    Idk.
    Rename DLSS thread into DLSS + FSR and paste all discussion there?
    No shit.
    Stuff takes time to get outta the oven.
    Every time.
    AMD isn't bolting MFMA engines to client GPUs.
    This has no relation to RDNA3 at all.
    Not that the latter needs upscaling techniques at large.
     
    Deleted member 13524 likes this.