Machine Learning: WinML/DirectML, CoreML & all things ML

Discussion in 'Rendering Technology and APIs' started by Ike Turner, Mar 5, 2019.

  1. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    2,105
    Likes Received:
    2,290
Since the first big commercial use case of WinML is now publicly available (in the Adobe Lightroom CC Feb. 2019 release), I thought it would be better to have a dedicated thread for all things machine learning instead of polluting the Nvidia DLSS thread with semi-OT content.

    Anyway, here are the goodies:

Adobe Lightroom CC (Feb. 2019) Enhance Details, using WinML & CoreML:

    https://theblog.adobe.com/enhance-details/

Performance (spoiler: AMD's GCN is fast):

    https://www.pugetsystems.com/labs/a...C-2019-Enhanced-Details-GPU-Performance-1366/

In other ML news: Unity has developed its own ML inference engine, which is fully cross-platform and hardware-agnostic! No need for TensorFlow, WinML, CoreML or any other inference engine... "it just works" on anything:

    Unity ML-Agents Toolkit:
    https://blogs.unity3d.com/2019/03/0...v0-7-a-leap-towards-cross-platform-inference/
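
    Barracuda itself is a C# API inside Unity, but the core idea of a cross-platform inference engine (one exported model file, run on whatever backend the local hardware offers) is easy to sketch. A minimal Python illustration of that idea using onnxruntime, not Unity's actual API; the model filename is a placeholder:

    ```python
    import numpy as np
    import onnxruntime as ort

    # Load a network exported to the portable ONNX format; the runtime
    # picks from whatever execution backends this machine offers.
    session = ort.InferenceSession(
        "upscaler.onnx",  # placeholder model file
        providers=ort.get_available_providers(),
    )

    inp = session.get_inputs()[0]
    # Build a dummy NCHW tensor matching the model input (1 for dynamic dims).
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    x = np.random.rand(*shape).astype(np.float32)

    outputs = session.run(None, {inp.name: x})
    print(outputs[0].shape)
    ```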
     
  2. Max McMullen

    Newcomer

    Joined:
    Apr 4, 2014
    Messages:
    20
    Likes Received:
    105
    Location:
    Seattle, WA
    Unity's done a great job integrating ML into their product so far, and it makes sense for them to have a layer that can provide ML functionality without any platform-specific frameworks. That said, the DirectX platform does have some unique hardware acceleration support in DirectML. At this year's GDC we announced the public release of the DirectML API, and Unity announced support for DirectML, leveraging it where they can for increased performance:

    https://devblogs.microsoft.com/directx/gaming-with-windows-ml/
    https://devblogs.microsoft.com/directx/directml-at-gdc-2019/
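
    DirectML itself is a C++ API on top of D3D12. For a rough feel of what tapping it from application code can look like, here is a hedged Python sketch using the onnxruntime-directml package, which exposes DirectML as an execution provider; the model path is a placeholder:

    ```python
    import onnxruntime as ort  # pip install onnxruntime-directml

    # Ask for the DirectML execution provider first; onnxruntime falls back
    # to the next provider in the list (plain CPU) if DML isn't available.
    session = ort.InferenceSession(
        "model.onnx",  # placeholder ONNX model
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],
    )
    print("Running on:", session.get_providers()[0])
    ```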

    Thanks,
    Max McMullen
    Development Manager
    Compute, Graphics, & AI
    Microsoft
     
  3. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    2,105
    Likes Received:
    2,290
    The ESRGAN image upscaler (Enhanced Super-Resolution Generative Adversarial Networks) is finally free from its CUDA "shackles".
    It has been ported to Unity's cross-platform inference engine (named Barracuda :cool:) and now works on Intel/AMD/Nvidia hardware directly inside Unity.
    Download the Unity package here (& import into Unity 2018.3+).
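
    The Barracuda port runs as C# inside Unity, but the inference step itself is easy to sketch outside it. A rough Python/PyTorch version of pushing one image through an ESRGAN-style 4x generator, assuming a pretrained model that was saved as a whole module; the file names are placeholders, not part of the package above:

    ```python
    import numpy as np
    import torch
    from PIL import Image

    # Assumption: the checkpoint holds a complete ESRGAN-style 4x generator
    # module (not just a state_dict), so torch.load returns a callable model.
    model = torch.load("esrgan_4x.pth", map_location="cpu")  # placeholder path
    model.eval()

    # Load the image as float RGB in [0, 1], then HWC -> NCHW.
    img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

    with torch.no_grad():
        y = model(x).squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy()

    Image.fromarray((y * 255).round().astype(np.uint8)).save("output_4x.png")
    ```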
     
    #3 Ike Turner, May 10, 2019
    Last edited: May 10, 2019
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,055
    Likes Received:
    15,817
    Location:
    The North
    nice! Unity is getting pretty awesome. What a fantastic feature for developers.
     
  5. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,531
    Location:
    ಠ_ಠ
    Is it too much to ask for some freeware where I can just input a batch of images/textures/etc., select a SuperScale amount, and presto-output-folder? :p

    Kind of curious to see if MS can have that as an option when taking screenshots on Xbox or Win10, for example. It'd be like having Ansel (so hot right now) without needing to inject it or have developer intervention (only select games).

    I'd love to see an experiment for 90s 2D/sprite games. Is the performance there for real-time superscaling, where older-school games are in the 320x200-800x600 range :?: I guess they'd have to be on D3D first... and maybe there'd be crazy artefacting in motion around moving objects (e.g. isometric games). :s zoidblerg.
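
    That batch tool is pretty close to a dozen lines of Python today. A rough sketch using OpenCV's dnn_superres module (ships in opencv-contrib-python); the pretrained .pb model files (ESPCN, FSRCNN, EDSR, LapSRN) are published by the OpenCV project, and the folder and file names here are made up:

    ```python
    from pathlib import Path
    import cv2

    # Load a pretrained 4x super-resolution model (placeholder file name).
    scale = 4
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")
    sr.setModel("espcn", scale)

    # Presto-output-folder: upscale every PNG in ./input into ./upscaled.
    out_dir = Path("upscaled")
    out_dir.mkdir(exist_ok=True)
    for path in Path("input").glob("*.png"):
        img = cv2.imread(str(path))
        cv2.imwrite(str(out_dir / path.name), sr.upsample(img))
    ```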
     
    #5 TheAlSpark, May 11, 2019
    Last edited: May 11, 2019
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,055
    Likes Received:
    15,817
    Location:
    The North
    Not quite just freeware with some customizations ;)
    I suspect someone could make a tool like that. What are you looking to do?
     
  7. zed

    zed
    Legend Veteran

    Joined:
    Dec 16, 2005
    Messages:
    5,339
    Likes Received:
    1,375
    Well, considering it took over a minute on a single 256x256 texture on my (admittedly onboard) graphics, I'd say having this running at good framerates is a long way off.
    Also, it doesn't handle JPEG artifacts well, it just makes them worse, so probably not the best for photos; but for old CGI, yes, it could be good.
     
  8. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,680
    Likes Received:
    3,731
    I think a more workable way of doing this would be to rip the game's source art, superscale that up, and inject the higher res art into the game. Some emulators allow such fan-made texture-patch injection.
     
  9. Per Lindstrom

    Newcomer Subscriber

    Joined:
    Oct 16, 2018
    Messages:
    52
    Likes Received:
    53
    Location:
    Sweden
    It's been very quiet around DirectML. Any news?
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,055
    Likes Received:
    15,817
    Location:
    The North
    It's released. It's up to developers to take advantage of it.
     
    Per Lindstrom likes this.
  11. Per Lindstrom

    Newcomer Subscriber

    Joined:
    Oct 16, 2018
    Messages:
    52
    Likes Received:
    53
    Location:
    Sweden
    Found out yesterday. :).
     
    BRiT, orangpelupa and iroboto like this.
  12. Per Lindstrom

    Newcomer Subscriber

    Joined:
    Oct 16, 2018
    Messages:
    52
    Likes Received:
    53
    Location:
    Sweden
    pharma, Dictator, BRiT and 1 other person like this.
  13. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,055
    Likes Received:
    15,817
    Location:
    The North
    Pretty fun thing here.
    The background is static; only the players are AI-generated.
     
  14. brawndolicious

    Joined:
    Sep 5, 2020
    Messages:
    1
    Likes Received:
    1
    Hey, long time lurker, first time poster. I have a question about this whole ML/DLSS type stuff. If this isn't the right thread, I apologize and would appreciate it if you could point me to the right one.

    Anyway, I was wondering about the DLSS implementation of AI upscaling: AFAIK it starts from the low-res 2D image as the input to the algorithm, but doesn't that seem like an inefficient way to go about upscaling and filling in detail? It mostly works well and gives an amazing performance boost compared to native rendering, but to me as a layman it sounds too inefficient to become the standardized method of the future, since the algorithm doesn't have a lot of information that it "knows"; it is just working off a 2D image. It's just seeing a pattern of pixels but doesn't have a label to say "this is an office chair" or "this is the main character's ponytail" or whatever.

    I would think a better implementation would be for the game engine to tell the algorithm which object it is working on, as well as its location, size and orientation on screen. Things like explosion effects, foliage, semi-transparent clouds, particle effects, etc. are really dynamic, and I can imagine them tricking the DLSS algorithm or at least making it less efficient. I dunno, just a question I wanted to ask, as there's a lot of hype about Tensor cores being crucial to graphics in the future, but I'm skeptical the same way I was of SSAA and PhysX fifteen years ago. It just feels like there's a better way to go about it.
     
    Per Lindstrom likes this.
  15. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    3,484
    Likes Received:
    2,839
    That's not the only input. The inputs into the ML upscaling are (as sketched below):
    • the current low-resolution frame
    • the previous frame
    • motion vectors
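
    A toy illustration of how those inputs typically reach a network: each per-pixel signal is stacked along the channel axis into one tensor. The shapes and channel layout here are illustrative guesses, not NVIDIA's actual implementation:

    ```python
    import torch

    H, W = 1080, 1920                 # low-resolution render size
    current = torch.rand(3, H, W)     # current low-res frame, RGB
    previous = torch.rand(3, H, W)    # previous (reprojected) frame, RGB
    motion = torch.rand(2, H, W)      # per-pixel motion vectors (x, y)

    # Stack everything channel-wise and add a batch dimension, giving the
    # upscaling network far more context than a single 2D image.
    x = torch.cat([current, previous, motion], dim=0).unsqueeze(0)
    print(x.shape)  # torch.Size([1, 8, 1080, 1920]) -> 8 input channels
    ```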
     
  16. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,055
    Likes Received:
    15,817
    Location:
    The North
    DLSS is two-fold: its first job is to anti-alias, its second job is to upscale.

    The AI is responsible for detecting aliasing in the image; this is how it is trained. Using 16K images as training data, it tries to figure out what the anti-aliasing would look like if 16K supersampling were done on the image and it were reverted back to the source resolution.

    Per-object would be too slow, and it would be too difficult.
    Once it's done with the anti-aliasing, it uses a second AI NN to upscale from the target resolution up to 4K.

    The speed gains come from rendering at a lower resolution (1080p) vs. native. By doing so you're working with 4x fewer pixels, and in general 4x less workload. As we continue to layer on more things like ray tracing, which costs massively more with more pixels (as each pixel will shoot rays), the costs keep climbing. Keeping the render resolution locked at 1080p, and using AI to extrapolate the rest and shift the output to 4K, is faster than the raw calculations; if it weren't, there wouldn't be any speed gains. The arithmetic is worked out in the quick sketch below.

    Hope that helps.
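
    The "4x fewer pixels" point above, worked out (simple arithmetic, nothing DLSS-specific):

    ```python
    # Pixel counts at the render resolution vs. the native output resolution.
    low = 1920 * 1080     # 2,073,600 pixels at 1080p
    native = 3840 * 2160  # 8,294,400 pixels at 4K
    print(native / low)   # 4.0 -> per-pixel shading (and ray) work scales with this
    ```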
     