BUILD 2020: Direct3D12 and DirectML are coming to Linux

Ike Turner

Welp...
https://devblogs.microsoft.com/directx/directx-heart-linux/

This is the real and full D3D12 API, no imitations, pretenders, or reimplementations here… this is the real deal. libd3d12.so is compiled from the same source code as d3d12.dll on Windows but for a Linux target. It offers the same level of functionality and performance (minus virtualization overhead). The only exception is Present(). There is currently no presentation integration with WSL as WSL is a console-only experience today. The D3D12 API can be used for offscreen rendering and compute, but there is no swapchain support to copy pixels directly to the screen (yet).
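(Not from the blog post, just my own illustration of the "offscreen rendering and compute only" point: a minimal headless D3D12 sketch that never touches DXGI or a swapchain. Per the blog, the same code would link against libd3d12.so on WSL instead of d3d12.dll on Windows.)

```cpp
// Minimal headless D3D12 sketch: device + compute queue, no swapchain, no Present().
// Build on Windows against d3d12.lib; adapter selection is left to the runtime.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Passing nullptr picks the default adapter, so no DXGI is involved at all.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12-capable adapter found.\n");
        return 1;
    }

    // A compute queue is all that offscreen rendering/compute workloads need;
    // there is no swapchain and nothing to present to a display.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> queue;
    if (FAILED(device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue))))
        return 1;

    std::printf("Headless D3D12 device and compute queue created.\n");
    return 0;
}
```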

Earlier this year Microsoft also announced OpenCL & OpenGL mapping layers on top of DirectX 12:

https://www.collabora.com/news-and-...introducing-opencl-and-opengl-on-directx.html

Microsoft is also porting TensorFlow to DirectML for WSL & Windows (!)

https://devblogs.microsoft.com/comm...m-for-linux-build-2020-summary/?sf234145990=1

What a time to be alive...
 
But only for the Windows Subsystem for Linux; it still relies on Windows being there too.
Well that's still better than nothing.
The OpenCL & OpenGL port to DX12 is interesting as it will finally allow Windows ARM devices to run OGL & OCL apps.
 
Well, not native Linux, rather a custom Linux kernel in the Windows Subsystem for Linux 2 (WSL2).

This will require Windows Insider Preview 21H1 Iron with WDDM 2.9 drivers, which will come packaged with Linux user-mode binaries and kernel drivers for WSL2 to support DXCore (the DXGI stand-in on Linux), Direct3D 12 and DirectML, as well as OpenGL, OpenCL and Vulkan (via Mesa3D mapping layers), and NVIDIA CUDA libraries.

https://devblogs.microsoft.com/directx/directx-heart-linux/
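To make the DXCore part concrete, here's a rough sketch of my own (not from the blog) of enumerating D3D12 compute-capable adapters through the public dxcore.h API, which is what replaces DXGI-style enumeration in the stack above:

```cpp
// Enumerate adapters capable of D3D12 core compute via DXCore (dxcore.h / dxcore.lib).
#include <dxcore.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXCoreAdapterFactory> factory;
    if (FAILED(DXCoreCreateAdapterFactory(IID_PPV_ARGS(&factory))))
        return 1;

    // Filter for adapters that can at least run D3D12 compute; display output
    // is optional, which matches the headless WSL scenario.
    const GUID attributes[] = { DXCORE_ADAPTER_ATTRIBUTE_D3D12_CORE_COMPUTE };
    ComPtr<IDXCoreAdapterList> adapters;
    if (FAILED(factory->CreateAdapterList(1, attributes, IID_PPV_ARGS(&adapters))))
        return 1;

    for (uint32_t i = 0; i < adapters->GetAdapterCount(); ++i)
    {
        ComPtr<IDXCoreAdapter> adapter;
        if (FAILED(adapters->GetAdapter(i, IID_PPV_ARGS(&adapter))))
            continue;

        size_t size = 0;
        if (FAILED(adapter->GetPropertySize(DXCoreAdapterProperty::DriverDescription, &size)))
            continue;

        std::vector<char> description(size);
        if (SUCCEEDED(adapter->GetProperty(DXCoreAdapterProperty::DriverDescription,
                                           size, description.data())))
            std::printf("Adapter %u: %s\n", i, description.data());
    }
    return 0;
}
```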

By design, the Linux kernel driver (dxgkrnl) simply redirects all calls from the user-mode runtime/driver to the WDDM kernel-mode driver (DXGK) over Hyper-V interfaces. That said, I suppose it's only a matter of time before libd3d12 and libdirectml end up on native Linux, with the dxg thunk interfaces implemented by proprietary Linux kernel drivers from AMD and NVIDIA; that could also help bring other Microsoft frameworks to Linux, like WPF and Direct2D/DirectWrite, which are all built on Direct3D...

https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-DX12-WSL2
https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-DXGKRNL-Uphill-Battle
https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-Writing-Wayland-Comp
 
Interesting ...

Does the DirectML backend for TensorFlow have all the same features as the CUDA/ROCm backends, including XLA compiler support, or is it limited to TensorFlow Lite, which only does inferencing as far as I know?
 
The merge proposal:

Objective
Implement a new TensorFlow device type and a new set of kernels based on DirectML, a hardware-accelerated machine learning library on the DirectX 12 Compute platform. This change broadens the reach of TensorFlow beyond its existing GPU footprint and enables high-performance training and inferencing on Windows devices with any DirectX12-capable GPU.

Motivation
TensorFlow training and inferencing on the GPU have so far been limited to Nvidia CUDA and AMD ROCm platform, with limited availability on ROCm only on Linux. Bringing the full machine learning training capability to Windows on any GPU has been one of the most requested features from the Windows developer community in our recent survey. By implementing a new set of TensorFlow kernels based on DirectML, Windows developers, professionals, and enthusiasts will be able to realize the full breadth of TensorFlow functionality on the vast hardware ecosystem of Windows devices, significantly expanding TensorFlow's availability on both the edge and the cloud.

User Benefit
  1. Users will be able to use TensorFlow on any DirectX12-capable GPU on any Windows device.
  2. It works right out of the box. DirectML is an operating system component that works with any GPU on the device. The users do not need to go through the process of finding the right combination of CUDA/cuDNN runtime dependencies and graphics driver that works with a version of TensorFlow.
  3. The size of our current TensorFlow-DirectML packages is roughly 20% of the current TensorFlow-GPU 1.15 package size of the comparable build. We expect our package size to double at launch, which will still be no bigger than half the current size. The smaller package size simplifies the requirements for the containers and other deployment mechanisms.
  4. DirectML starts up considerably faster than the CUDA runtime. This improves the developer's experience in the model development process.
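Not part of the proposal, but to show what "based on DirectML, on the DirectX 12 Compute platform" means in practice, here's a minimal sketch of my own (assuming the public DirectML.h header) of standing up a DirectML device on top of a D3D12 device:

```cpp
// DirectML is layered on D3D12 compute: an IDMLDevice wraps an existing ID3D12Device.
// Build against d3d12.lib and DirectML.lib.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> d3dDevice;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&d3dDevice))))
        return 1;

    // Every DirectML operator is ultimately recorded into D3D12 command lists
    // and executed on a compute/direct queue, which is why any DX12-capable
    // GPU qualifies, with no CUDA/cuDNN runtime hunt required.
    ComPtr<IDMLDevice> dmlDevice;
    if (FAILED(DMLCreateDevice(d3dDevice.Get(), DML_CREATE_DEVICE_FLAG_NONE,
                               IID_PPV_ARGS(&dmlDevice))))
        return 1;

    std::printf("DirectML device created on top of D3D12.\n");
    return 0;
}
```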
 
Interesting ...
Does the DirectML backend for TensorFlow have all the same features as the CUDA/ROCm backends, including XLA compiler support
From the proposal, the exceptions are no XLA compiler support and no use cases that depend directly on CUDA-specific kernels.

This change is designed to be a drop-in replacement of the current TensorFlow-GPU 1.15 package with the following exceptions:
  • Use cases taking direct dependencies on CUDA-specific kernels are not supported. No interoperability between the CUDA and DirectML kernels is allowed.
  • Use cases requiring XLA compilation are not supported by the DirectML device and kernels.
 
It looks like an implementation of Direct3D on native Linux (if there ever is one) would have to use DRI/DRM interfaces directly, not dxg/dxgkrnl, which is just a redirector to the familiar WDDM DDIs.
And if Microsoft is not open-sourcing its libd3d12 implementation, kernel developers are unlikely to accept the required DDI changes.
https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-Writing-Wayland-Comp


Maybe they could refactor their upcoming Vulkan to Direct3D 12 translation layer for Mesa 3D/Gallium to work the other way round and provide Direct3D 12 on top of Vulkan?
https://www.collabora.com/news-and-...introducing-opencl-and-opengl-on-directx.html

There's also the VKD3D project which implements a subset of Direct3D 12 on top of Vulkan.
 
It's coming to WSL(2), not to Linux.
But only for the Windows Subsystem for Linux; it still relies on Windows being there too.
This. Only the kernel-mode driver (a subset of DXGK) is open source, and it's probably meant to better handle the different distros running on WSL. Everything else is closed. OpenGL is limited to version 3.3 AFAIK, since they will be using the translation layer initially developed for the crappy Qualcomm drivers, same for OpenCL 1.2 (another dead API). Everything is then passed to WDDM through a hardware virtualization interface, no X.org/Wayland/Mir clusterfuck involved. CUDA support will run through an updated proprietary driver for WSL (so Windows)... What about AMD ROCm & co? Who knows, probably not even AMD, since they do not have a clear strategy on GPGPU..

This move is meant to attract more Linux developers to Windows and WSL, and possibly Azure. This move is not to favour gaming on Linux.
 
This move is not to favour gaming on Linux.
Nobody ever thought or expected it to be about gaming.
All these projects are about facilitating Linux development on Windows, bringing OpenGL/CL support to Windows ARM devices (whether those APIs are "dead" is off-topic, as this is meant to support legacy apps/software that will never be updated or ported to run on ARM) & finally bringing TensorFlow to hundreds of millions of devices (as of right now it's Linux & CUDA/ROCm only & an exceptional pain to get up & running).
 
What about AMD ROCm & co? Who knows, probably not even AMD, since they do not have a clear strategy on GPGPU..

ROCm won't work with this model. CUDA on WSL2 uses the WDDM kernel driver, which is already the case on Windows itself. ROCm needs AMDKFD (the kernel fusion driver). If WSL3 does come to fruition, it will have to allow custom kernel drivers if ROCm is to work on Windows in some form ...

As a sidenote, not having the XLA compiler supported with DirectML might be a deal breaker for some, but then there are other advanced features that CUDA/ROCm support, like cuDNN/MIOpen and NCCL/RCCL, and I'm not sure there's even an equivalent for DirectML ...
 
ROCm won't work with this model. CUDA on WSL2 uses the WDDM kernel driver, which is already the case on Windows itself. ROCm needs AMDKFD (the kernel fusion driver). If WSL3 does come to fruition, it will have to allow custom kernel drivers if ROCm is to work on Windows in some form ...

As a sidenote, not having the XLA compiler supported with DirectML might be a deal breaker for some, but then there are other advanced features that CUDA/ROCm support, like cuDNN/MIOpen and NCCL/RCCL, and I'm not sure there's even an equivalent for DirectML ...

ROCm will use PAL on Windows according to @bridgman
 