We demonstrate near-instant training of neural graphics primitives on a single GPU for multiple tasks.
...
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations.
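The encoding referred to here is the paper's multiresolution hash encoding: each input coordinate indexes several hash tables of trainable feature vectors at increasing grid resolutions, and the concatenated features feed a much smaller MLP. A minimal NumPy sketch of one lookup follows; the constants, table sizes, and helper names are illustrative, not the paper's implementation:

```python
import numpy as np

# Per-dimension primes for the spatial hash (values from the paper's hash
# function; everything else below is an illustrative simplification).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(ijk, table_size):
    """Spatial hash of integer 3D grid coordinates into a table index."""
    h = np.uint64(0)
    for d in range(3):
        h ^= np.uint64(ijk[d]) * PRIMES[d]
    return int(h % np.uint64(table_size))

def encode(x, tables, base_res=16, growth=1.5):
    """Concatenate trilinearly interpolated features from each hash level."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth**level)
        pos = np.asarray(x) * res
        i0 = np.floor(pos).astype(np.int64)
        acc = np.zeros(table.shape[1])
        # Trilinear interpolation over the 8 surrounding grid corners.
        for corner in range(8):
            offset = [(corner >> d) & 1 for d in range(3)]
            w = 1.0
            for d in range(3):
                frac = pos[d] - i0[d]
                w *= frac if offset[d] else (1.0 - frac)
            idx = hash_coords(i0 + offset, len(table))
            acc += w * table[idx]
        feats.append(acc)
    return np.concatenate(feats)

# Example: 4 levels, 2^14 entries per table, 2 trainable features per entry.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)) for _ in range(4)]
enc = encode([0.3, 0.7, 0.5], tables)
print(enc.shape)  # (8,) — 4 levels x 2 features, fed to a small MLP
```

Because most of the model capacity lives in the hash tables rather than the network weights, the MLP after this encoding can be tiny, which is what reduces the floating point and memory-access cost.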
We propose a novel neural compression technique specifically designed for material textures. We unlock two more levels of detail, i.e., 16× more texels, using low-bitrate compression, with image quality that is better than advanced image compression techniques such as AVIF and JPEG XL. At the same time, our method allows for on-demand, real-time decompression with random access, similar to block texture compression on GPUs. This extends our compression benefits all the way from disk storage to memory. The key idea behind our approach is compressing multiple material textures and their mipmap chains together, and using a small neural network that is optimized for each material to decompress them. Finally, we use a custom training implementation to achieve practical compression speeds, whose performance surpasses that of general frameworks such as PyTorch by an order of magnitude.
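The random-access property described above can be sketched as follows: compact latent features plus a tiny per-material MLP let a shader decode a single texel of every channel on demand, with no full-image decompression. The latent layout, network sizes, and channel assignments below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: a low-resolution latent grid (the compressed payload)
# and a tiny two-layer MLP optimized per material (weights here are random
# stand-ins for trained parameters).
LATENT_RES, LATENT_DIM = 64, 4
HIDDEN, OUT_CHANNELS = 32, 9   # e.g. albedo(3) + normal(3) + roughness/metal/AO(3)

latents = rng.normal(size=(LATENT_RES, LATENT_RES, LATENT_DIM)).astype(np.float32)
W1 = (rng.normal(size=(LATENT_DIM + 3, HIDDEN)) * 0.1).astype(np.float32)
W2 = (rng.normal(size=(HIDDEN, OUT_CHANNELS)) * 0.1).astype(np.float32)

def decode_texel(u, v, mip):
    """Randomly access one texel: fetch local latents, run the tiny MLP."""
    # Nearest-neighbour latent fetch for brevity (a real decoder interpolates).
    i = min(int(v * LATENT_RES), LATENT_RES - 1)
    j = min(int(u * LATENT_RES), LATENT_RES - 1)
    x = np.concatenate([latents[i, j], [u, v, mip]]).astype(np.float32)
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h @ W2                # all channels of the material, one texel

texel = decode_texel(0.25, 0.75, mip=0)
print(texel.shape)  # (9,) — every material channel, no full-image decode
```

Decoding all of a material's textures (and, via the mip input, its mipmap chain) from one shared latent representation is what lets the channels share redundancy, which is the source of the compression gain over compressing each texture independently.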
It seems NVIDIA is left alone optimizing DirectML along with Microsoft. AMD should be present in the scene, but they are not; I feel this is a repeat of DXR. I don't see any thread on DirectML where this would be a better fit.

NVIDIA and Microsoft Drive Innovation for Windows PCs in New Era of Generative AI
At the Microsoft Build developer conference, NVIDIA and Microsoft showcased a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs. (blogs.nvidia.com)
In response to the above, AMD claims that they are participating.