Non-DLSS Image Reconstruction techniques? *spawn*

Discussion in 'Architecture and Products' started by sonen, Sep 10, 2020.

  1. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,793
    Likes Received:
    2,682
Mods: Consider creating a *spawn thread* for console-related "Shader Based Image Reconstruction" techniques not using Nvidia's DLSS (tensor core) methodology.
     
  2. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    17,275
    Likes Received:
    17,678
Paging @TheAlSpark, if you have the time to go through tagging posts to migrate to such a new discussion.

Pharma, thanks for the suggestion. It seems worthwhile.
     
    PSman1700 and pharma like this.
  3. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,062
    Likes Received:
    1,664
    Location:
    Guess...
My point was more that the S doesn't seem to be capable of running a DLSS-style model at all. It's simply too slow. The X, on the other hand, is fast enough, if only barely. That could lead to some really awkward market segmentation issues for Microsoft. The S is supposed to be a 1440p version of the X, but give the X the advantage of ML upscaling and the gap between them becomes much, much harder to bridge.

To put it into context, I just learned the S is only a 4 TFLOP GPU, significantly less than the 1660 Ti, which Nvidia launched without tensor cores on account of the card not being fast enough for them to be of any use. A single-frame upscale on the S should take around 15 ms!

    So either Microsoft would need a vastly more efficient model than DLSS to make it feasible, or far more likely, it'll never happen, which in turn casts doubt on it seeing the light of day on the X either.
     
    pharma likes this.
  4. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    11,713
    Likes Received:
    2,693
    DLSS started as a shader implementation and now uses tensor cores

MS can train the models using Azure and then run them on the Series S.

Also, again, AMD has FidelityFX CAS + upscaling; there's also Radeon Image Sharpening, which seems to work really well for image quality and performance.



- Death Stranding from DF showing off FidelityFX CAS + sharpening

If you go to the end of the video, he talks about Playground Games using ML upscaling on Forza 3. So if you combine what AMD already has with ML, while it may not be as good as DLSS 2.1, it may be more than passable for someone buying a $300 console.

    Here DF did some math


The RTX 2060 has 103.2 INT8 TOPS and the render time would be ~2.5 ms.
The Series X would be 49 INT8 TOPS and the render time would be ~5 ms, with them assuming similar, near-linear scaling.

The question is: what about INT4? Will that be enough to do it, and will the Series S have enough to make it worthwhile?
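Putting DF's figures and the INT4 question together, here's a rough back-of-the-envelope in Python. The Series S number is my assumption (4 TFLOPS FP32 x 4 via INT8 dot product, roughly 16 TOPS), and everything assumes the same near-linear scaling DF did:

```python
# Rough scaling of a DLSS-style upscale pass, assuming the cost is inversely
# proportional to INT8 TOPS (the near-linear scaling DF assumed).
rtx2060_tops, rtx2060_ms = 103.2, 2.5     # DF's reference point
series_x_tops = 49.0                      # Series X INT8 TOPS (per DF)
series_s_tops = 4.0 * 4.0                 # assumption: 4 TFLOPS FP32 * 4 (INT8 dot product) ~= 16 TOPS

def upscale_ms(tops):
    return rtx2060_ms * rtx2060_tops / tops

for name, tops in (("Series X", series_x_tops), ("Series S", series_s_tops)):
    t = upscale_ms(tops)
    print(f"{name}: ~{t:.1f} ms at INT8, ~{t / 2:.1f} ms if INT4 doubles throughput")
# Series X: ~5.3 ms at INT8, ~2.6 ms if INT4 doubles throughput
# Series S: ~16.1 ms at INT8, ~8.1 ms if INT4 doubles throughput
```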

DLSS 2 is great, but if I'm buying a $300 piece of hardware I wouldn't even mind FidelityFX CAS + sharpening. If AMD adds in machine learning to help it out, I could easily see it being a big hit on the Series S and even the Series X.

I am sure Navi 2 will have hardware-assisted features.
     
  5. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,794
    Likes Received:
    713
    Location:
    msk.ru/spb.ru
DLSS actually started as an ML solution; 1.0 was using tensor cores, but the approach was different.
DLSS "1.9", used in Control at launch, was the only DLSS which wasn't using tensor cores - and it was basically TAAU.
DLSS 2.0 took this TAAU and added ML back into it, which allowed them to again clean up a lot of the issues which TAA has.
     
    Dictator, DavidGraham, pharma and 3 others like this.
  6. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,062
    Likes Received:
    1,664
    Location:
    Guess...
I'm not able to watch the videos at the moment as I'm on a limited connection, but are they talking about the Microsoft DirectML demonstration? If so, that's using Nvidia's DLSS model and hardware.

Yes, this is what my own analysis is based on. If the X is 5 ms and the S has 1/3 the throughput, then the S would be 15 ms. Obviously way too slow for 60 fps, where you only have 16.6 ms for the entire frame. 30 fps might make it worthwhile, but the net benefit would be very small, if it exists at all.
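To make the frame-budget point concrete, a quick sketch taking that ~15 ms estimate at face value:

```python
# Share of the frame budget a ~15 ms upscale pass would consume on the S.
upscale_ms = 15.0
for fps in (30, 60):
    frame_ms = 1000.0 / fps
    left = frame_ms - upscale_ms
    print(f"{fps} fps: {frame_ms:.1f} ms budget, upscale eats {100 * upscale_ms / frame_ms:.0f}%, {left:.1f} ms left")
# 30 fps: 33.3 ms budget, upscale eats 45%, 18.3 ms left
# 60 fps: 16.7 ms budget, upscale eats 90%, 1.7 ms left
```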

INT4 is just double the INT8 rate on both RDNA2 and Turing as far as I'm aware, so the end result would be the same. I'd say Microsoft's only route to an ML upscaling solution on the S is to produce a model that's significantly more efficient than DLSS, and I'm not sure how possible that is with their current hardware within the timescales of this generation.

As someone who thinks native 1440p with good TAA offers near-perfect image quality, I would have to agree, especially if playing on a TV rather than a monitor.
     
    neckthrough, pharma, Rootax and 2 others like this.
  7. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    11,713
    Likes Received:
    2,693
It's a mix of different upscaling methods.



Except if INT4 is double the rate of INT8, then, if it's usable for a DLSS-type program, the X would only need 2.5 ms and the S would be 7.5 ms. That would put it right back in the ballpark of being worthwhile to use.

I mean, on a $300 all-in piece of hardware, yes, 1440p with FidelityFX CAS + sharpening, or even 1080p, would be a great value imo. I would rather spend the $200 more, $500 in total, to get something native 4K. However, for many, that $200 is bank-breaking.

Of course, on an $800 video card that I'm adding to thousands of other dollars' worth of hardware, I would surely want more.

I think AMD will have an answer to DLSS, or at least an upgraded version of FidelityFX CAS. I guess we have over a month to wait and find out lol
     
  8. Benetanegia

    Regular Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    376
    Likes Received:
    385
    I don't think that one can just assume that an "INT4 DLSS" network would be 2x as fast as an INT8 one. The INT4 one might need extra steps to reach similar results, thus negating part of the gains.
     
  9. techuse

    Regular Newcomer

    Joined:
    Feb 19, 2013
    Messages:
    409
    Likes Received:
    227
Devs surely can find a use for INT4/8 in some graphics and/or compute work outside of AI, no?
     
    milk likes this.
  10. neckthrough

    Newcomer

    Joined:
    Mar 28, 2019
    Messages:
    32
    Likes Received:
    52
I may be misremembering, but I think @nAo used a LogLuv format to pack HDR info into a 32bpp RGBA format for a PS3-era game.
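For anyone curious what that kind of packing looks like, here's a minimal sketch of the general idea in Python/NumPy. To be clear, this is not the exact LogLuv layout @nAo used, just the principle: chromaticity shares in two 8-bit channels and log2 luminance split across the other two.

```python
import numpy as np

def encode_hdr_rgba8(rgb):
    """Pack linear HDR RGB into four 8-bit channels (illustrative only):
    chroma shares in two channels, log2(luminance) in 8.8 fixed point in the other two."""
    rgb = np.maximum(np.asarray(rgb, dtype=np.float64), 1e-6)
    lum = rgb.sum()
    r_share, g_share = rgb[0] / lum, rgb[1] / lum          # blue share = 1 - r - g
    log_l = np.clip(2.0 * np.log2(lum) + 128.0, 0.0, 255.996)
    code = int(round(log_l * 256.0))                       # 16-bit (8.8) fixed point
    return np.array([round(r_share * 255.0), round(g_share * 255.0),
                     code >> 8, code & 0xFF], dtype=np.uint8)

def decode_hdr_rgba8(rgba):
    r8, g8, hi, lo = (int(v) for v in rgba)
    lum = 2.0 ** (((hi * 256 + lo) / 256.0 - 128.0) / 2.0)
    r, g = r8 / 255.0, g8 / 255.0
    return np.array([r, g, 1.0 - r - g]) * lum

hdr = np.array([8.0, 3.0, 0.5])                            # well outside the 0..1 LDR range
print(decode_hdr_rgba8(encode_hdr_rgba8(hdr)))             # ~[8.0, 3.0, 0.5], minus quantisation error
```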
     
  11. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,513
    Likes Received:
    699
    Location:
    Finland
It wasn't really doing the math in INT4/8 though, just storing the results in a buffer at int8 per channel.

INT4/8 may be very restrictive, but perhaps there are ways to use them for things which really do not need a lot of precision. (Fixed point should give some additional precision.)
What was amazing in the 386 or MMX era is amazing again.
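As a toy illustration of how much precision 8-bit fixed point actually gives you over a known [0, 1] range (nothing API-specific, just the arithmetic):

```python
import numpy as np

# 8-bit "UNORM"-style fixed point over [0, 1]: 256 levels, uniform step of 1/255.
def quantize_u8(x):
    return np.round(np.asarray(x, dtype=np.float64) * 255.0).astype(np.uint8)

def dequantize_u8(q):
    return q.astype(np.float64) / 255.0

x = np.random.default_rng(0).random(10_000)                # values in [0, 1)
err = np.abs(dequantize_u8(quantize_u8(x)) - x)
print(f"worst-case round-trip error: {err.max():.5f} (bound is 0.5/255 = {0.5 / 255:.5f})")
```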
     
    TheAlSpark likes this.
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    11,992
    Likes Received:
    13,339
    Location:
    The North
A generation is a long time. This is really the infancy of ML in games. Once it happens, maybe more will pile in. But the challenge is getting things done in less than 1-2 milliseconds, which is not particularly easy. You've got DLSS at 2 ms. If you have game AI or other things on top, how many actors on screen need to run their AI? Take driving games: more cars = more AI being run. It will eat up your budget very fast.
     
  13. hughJ

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    802
    Likes Received:
    335
I really doubt we're going to see much ML/DL for client-side game logic/AI, especially for stuff like driving games where the existing methods are plenty sufficient, not to mention robust/deterministic enough that QA isn't a total shit show. I'd expect the biggest impact this generation to come on the development side (tools to assist and accelerate the art pipeline).
     
    pjbliverpool likes this.