Support for Machine Learning (ML) on PS5 and Series X?

Discussion in 'Console Technology' started by Shifty Geezer, Mar 18, 2020.

  1. vjPiedPiper

    Newcomer

    Joined:
    Nov 23, 2005
    Messages:
    118
    Likes Received:
    72
    Location:
    Melbourne Aus.
    Honestly, it seems like MS is perfectly placed to enable a large-scale ML-based upscaling mechanism for every game on the Xbox platform.

    For each game:
    Generate frames at native 4K (or even 8K, then downsample). This is your known-good set.
    Your inputs then become renders of the same game at 1080p, 1440p, or whatever resolution.
    Then just chuck the input data at a great big AI training system to produce the outputs that become the upscaling algorithm.
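
    Something like this, roughly, for the pairing step (a minimal sketch; the capture paths and layout are made up, and it assumes frames from both passes have been dumped to disk):

```python
# Minimal sketch of building (low-res, high-res) training pairs from two
# capture passes of the same game. Paths and filenames are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image  # Pillow

HI_DIR = Path("captures/native_4k")     # "known good" ground-truth frames
LO_DIR = Path("captures/render_1080p")  # same frames rendered at 1080p

def training_pairs():
    """Yield (low_res, high_res) float arrays for supervised training."""
    for hi_path in sorted(HI_DIR.glob("*.png")):
        lo_path = LO_DIR / hi_path.name  # same frame index, lower resolution
        if not lo_path.exists():
            continue
        hi = np.asarray(Image.open(hi_path), dtype=np.float32) / 255.0
        lo = np.asarray(Image.open(lo_path), dtype=np.float32) / 255.0
        yield lo, hi  # the network learns the mapping lo -> hi
```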

    Given they already have so much infrastructure with XCloud etc, it seems like a natural fit.

    Even if they don't do it on a per-game basis, they could offer a bunch of different presets to smaller devs;
    it would add so much value to the XDK/GDK environment.

    I'm actually quite surprised at how long it is taking.
    Maybe the upscaling algorithm does need to be trained on a game-by-game basis, and a more general approach will not suffice,
    but I would still expect it to be part of the GDK pipeline pretty damn soon, if not already.
     
  2. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,039
    Location:
    The North
    Which is a possibility I think.

    Though from my viewpoint, building the training set, optimizing it, the training compute time, the number of runs, the tuning, and then the labour to support it all can add up to quite a lot when you have to build every part of that pipeline end to end.

    The amount you'd invest to do this custom for one title, and one title only, suggests the upscaler is intrinsically the selling feature of your game; I'm not sure we are there quite yet.

    There are likely existing 3rd-party studios that work on this type of stuff all the time, catering to a variety of industries, with a decent number of pre-trained models and tuning options to get a baseline; they then transfer-learn the baseline model to tune it specifically for the customer.

    Taking on the endeavour from scratch just sounds incredibly expensive for a studio: you're pretty much assuming that no one out there has done it, so the only choice is to roll up your sleeves.
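
    A rough sketch of that transfer-learning flow, assuming PyTorch, a baseline model saved as a complete module, and feature layers named with a "features" prefix (the file name, layer prefix, and data loader are all hypothetical):

```python
# Fine-tune a pre-trained baseline upscaler on one customer's frames.
import torch
import torch.nn as nn

baseline = torch.load("baseline_upscaler.pt")  # vendor's pre-trained model

# Freeze the generic feature-extraction layers; only the later layers
# adapt to this customer's content.
for name, param in baseline.named_parameters():
    if name.startswith("features"):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in baseline.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.L1Loss()  # per-pixel error against the ground-truth frame

game_specific_pairs = []  # stand-in: a DataLoader of (low-res, high-res) pairs

for lo, hi in game_specific_pairs:
    optimizer.zero_grad()
    loss = loss_fn(baseline(lo), hi)  # predict high-res, compare to truth
    loss.backward()
    optimizer.step()
```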
     
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,039
    Location:
    The North
    To get a good upscaler, you would want a generic one, not something that needs to be tuned per title. That is too time-consuming (thus costly), adoption would be low, and it would not be very adaptive either; the training sets would be as large as playing the whole game through.

    It's taking so long because it's typically very difficult to get the upscale done in a very short amount of time, and more so on weaker devices. You're going to have to make a large number of compromises, or find a very specific type of rendering pipeline that this solution slots into nicely.

    This is a very challenging problem to tackle, imo. Nvidia has it covered, but they seem to be the only one; others have tried, Facebook among them, and no one has yet produced a competitive model.
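
    To put numbers on the time constraint, a back-of-the-envelope frame budget (the 2 ms upscale cost is an illustrative guess, not a measurement):

```python
# How much frame time is left after a hypothetical ML upscale pass.
for fps in (30, 60, 120):
    frame_ms = 1000.0 / fps
    upscale_ms = 2.0  # illustrative cost of the upscale pass
    print(f"{fps} fps: {frame_ms:.1f} ms/frame, "
          f"{frame_ms - upscale_ms:.1f} ms left for everything else")
```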
     
    mr magoo, Dictator and PSman1700 like this.
  4. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,138
    Likes Received:
    7,100
    AFAIK Nvidia is also implementing DLSS2 on a per-title (and per-resolution / screen-ratio) basis, so their upscaler isn't generic either. There's now a module for implementing it in Unreal Engine, but I haven't seen any claims that it does away with training.

    I imagine that a single NN serving all games at all resolutions would be something gigantic. What you could call generic is e.g. Contrast-Adaptive Sharpening, which obviously doesn't provide the same performance and IQ benefits as DLSS, but it just works.

    Finding a non-black-box algorithm that can be internally tweaked yet works as universally as CAS, without having to feed massive datasets to a deep NN (i.e. one that is actually generic), may be the reason why AMD and game devs are apparently working on an FSR that doesn't involve ML.
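
    For reference, the CAS idea is simple enough to sketch. A simplified single-channel version in NumPy (illustrative only, not AMD's exact kernel):

```python
import numpy as np

def cas_like_sharpen(img: np.ndarray, sharpness: float = 0.5) -> np.ndarray:
    """Simplified contrast-adaptive sharpening on a greyscale image in [0, 1]."""
    p = np.pad(img, 1, mode="edge")           # clamp at the borders
    n, s = p[:-2, 1:-1], p[2:, 1:-1]          # north/south neighbours
    we, e = p[1:-1, :-2], p[1:-1, 2:]         # west/east neighbours
    c = img

    mn = np.minimum.reduce([n, s, we, e, c])  # local minimum
    mx = np.maximum.reduce([n, s, we, e, c])  # local maximum

    # Adaptive amount: sharpen less where local contrast is already high.
    eps = 1e-5
    amp = np.sqrt(np.clip(np.minimum(mn, 1.0 - mx) / (mx + eps), 0.0, 1.0))
    w = -amp * (0.125 + 0.075 * sharpness)    # negative lobe weight

    out = (c + w * (n + s + we + e)) / (1.0 + 4.0 * w)
    return np.clip(out, 0.0, 1.0)
```

    The whole thing is a handful of arithmetic ops per pixel over a 3x3 neighbourhood: no model, no training data, which is exactly why it's generic.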
     
  5. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    DLSS 2.0 is general as far as I'm aware. (Edit, to be clear: the models are generic, but titles still need to implement motion vectors etc.)

    MS has everything in place to do ML upscaling:
    servers, middleware, and, as platform owner, access to thousands of games. They could even add a store-certification requirement to supply x pairs of training images, to keep on improving it.

    Nvidia must have got its training material from somewhere; I can only think it would be easier for MS.
     
    PSman1700 likes this.
  6. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,138
    Likes Received:
    7,100
    Source?

    Any dev can generally apply DLSS2 to their games, but can they do it without any training?


    EDIT: I do see that nvidia's page on DLSS2 claims "one network for all games", suggesting it doesn't need per-game training.

    But if so, why does it need to be enabled on a per-game basis?
     
    #246 ToTTenTranz, Apr 14, 2021
    Last edited: Apr 14, 2021
  7. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

     
    PSman1700, Arwin, HLJ and 2 others like this.
  8. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    14,987
    Likes Received:
    11,086
    Location:
    London, UK
    The last time Microsoft tried to do something consumer-facing with AI, it became a Nazi. I'm not convinced my games would benefit from added swastikas. :nope:
     
  9. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    Because you don't just feed it an image and get the upscaled version out.
    It also needs motion vectors.

    It's not a system-level setting that you can change in Windows. Games need to implement it.
     
    PSman1700 likes this.
  10. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    193
    Likes Received:
    322
    Location:
    Stockholm
    So you're saying it's perfect for the new Wolfenstein game? ;) Lesson learned: don't let stupid people train your AI, m'kay.
     
    DSoup likes this.
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,182
    Likes Received:
    16,039
    Location:
    The North
    If I recall correctly,

    Developers typically build their main pipeline around the best anti-aliasing methods, like TAA. But DLSS 2.0 requires two particular inputs:
    a) a completely untouched, aliased image
    b) motion vectors

    IIRC motion vectors are already produced by TAA-type techniques, so there isn't much additional work there. But with respect to aliased frames, most developers keep iterating on their engines, and it's a step backwards for them to recreate the standard render path where no anti-aliasing occurs, so there is development work required on the part of the developer to support it. I suspect there is also some trial and error over when to call the DLSS step, and which steps come before and after the image is upscaled.
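
    A schematic of where that slots into a frame, per the above; every function here is a stub standing in for an engine stage, not a real API:

```python
# Hypothetical frame flow with a DLSS-style upscale replacing the TAA resolve.
def rasterize(scene, res, antialiasing): ...   # raw colour buffer
def motion_vectors(scene, res): ...            # per-pixel motion
def ml_upscale(color, motion, out_res): ...    # DLSS-style pass
def post_process(color, res): ...              # tonemap, grain, UI

def render_frame(scene, render_res, out_res):
    # 1) Render WITHOUT anti-aliasing: the upscaler wants the untouched,
    #    aliased colour buffer, so the usual TAA resolve is bypassed.
    color = rasterize(scene, res=render_res, antialiasing=None)

    # 2) Motion vectors: a TAA-era engine already produces these,
    #    so this input is mostly free.
    motion = motion_vectors(scene, res=render_res)

    # 3) The ML upscale consumes aliased colour plus motion vectors and
    #    outputs at display resolution, replacing the TAA resolve.
    color = ml_upscale(color, motion, out_res)

    # 4) Resolution-dependent post-processing (tonemapping, film grain, UI)
    #    runs after the upscale, at output resolution.
    return post_process(color, res=out_res)
```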
     
    Dictator and PSman1700 like this.
  12. vjPiedPiper

    Newcomer

    Joined:
    Nov 23, 2005
    Messages:
    118
    Likes Received:
    72
    Location:
    Melbourne Aus.
    I totally agree, but I do think MS could likely speed up the entire process by providing a series of "presets", where a preset is a NN trained to a medium-to-high level on a specific, similar set of training data.
    E.g. a fast-paced FPS preset, a 3rd-person RPG preset, a 2D side-scroller preset, etc. Providing developers with these presets to start from would likely deliver a significant decrease in the per-game training required.
    Other inputs, such as dominant colours, max movement speeds, expected input resolution etc., could also help in going from a generic FPS NN to a game-specific one.

    Your comments about the need for any upscaling tech to work on weaker devices are totally valid. I haven't really messed with DLSS 2.0, so I have no idea how it performs on low-end systems,
    and sadly my AI days are well behind me, so I'm not really familiar with how well these networks map to GPUs.
    But surely even using the XSS as the low-end target leaves a half-decent amount of time to do the required calculations.
    (Yeah, I know AMD has said they want to support all RDNA 1 gen cards too; again, not sure what the bottom of that range is...)

    I still think MS is uniquely placed to create a great solution / implementation, and to pass that advantage on to users via the GDK environment.
    But it may end up being at the engine level rather than the SDK level.
    On consoles, and MS consoles specifically, I wonder if they could also leverage some of the colour output pipeline to assist in the process, i.e. the part that does the AutoHDR and the final tone-mapping of the image.
    I know they are talking about doing dynamic tone-mapping on a per-scene or even per-frame basis, which actually requires a fair bit of somewhat specialised processing.
    If they could combine the auto-upscale with AutoHDR and per-scene/frame metadata, there is the potential for a very powerful system...

    I also want to point out that they already have the ability to generate masses of training data, by using back-compat games at their original resolution vs the same games rendered through the 4x or 9x upscale mechanism from the back-compat system.
     
    iroboto likes this.
  13. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    17,786
    Likes Received:
    7,836
    Only because they allowed people on the internet to teach it. Hmmm, what does that say about people on the internet? :p

    Regards,
    SB
     
    DSoup, BRiT, PSman1700 and 2 others like this.
  14. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    14,987
    Likes Received:
    11,086
    Location:
    London, UK
    I think it confirms everything you suspected about people on the internet. :yep2:
     
    Silent_Buddha likes this.
  15. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    193
    Likes Received:
    322
    Location:
    Stockholm
    “I hadn't known there were so many idiots in the world until I started using the Internet.”

    ― Stanislaw Lem
     
  16. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    14,987
    Likes Received:
    11,086
    Location:
    London, UK
    I began using the internet in the 1990s and it wasn't that easy. Most operating systems did not include a TCP/IP stack capable of communicating with ISPs, so you had to buy a TCP/IP stack package and be able to configure it. DSL modems were far from plug and play. The cost and effort generally kept a lot of idiots off the internet.

    Good times.. :yes:
     
    JPT, Silent_Buddha and mr magoo like this.
  17. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    193
    Likes Received:
    322
    Location:
    Stockholm
    Yeah, the internet was more 1337 back in the day. I'm not BBS-old; I jumped online quite late in the 90s because my parents weren't wealthy enough to afford a PC. But I remember the joy of configuring non-plug-and-play modems and TCP stacks and trying to get online on Win9x.
     
    DSoup likes this.
  18. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    17,786
    Likes Received:
    7,836
    Ah, the good old BBS days, when speed was measured in baud and you could watch each character appear on the screen. It was honestly amazing back when I first started: I could almost instantly send messages across great distances to other people.

    There was a time I could type faster than my modem could transmit. :p

    Regards,
    SB
     
    AzBat, JPT and mr magoo like this.
  19. mr magoo

    Newcomer

    Joined:
    May 31, 2012
    Messages:
    193
    Likes Received:
    322
    Location:
    Stockholm
    We are "slightly" OT but for maximum nostalgia i recommend book "Masters of Deception: The Gang That Ruled Cyberspace". If you havent read it ofc
     
  20. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,512
    Likes Received:
    2,855
    I think the XSS could be the biggest beneficiary of ML upscaling.

    The lower the resolution, the worse other upscaling/CB/reconstruction techniques look.
    I think ML upscaling from below 1080p to 1080p could give decent results by comparison.
     