Machine Learning: WinML/DirectML, CoreML & all things ML

New faster material rendering using Neural Rendering.

This will be released as RTX Neural Materials.

This is even more interesting than DLSS 4 in my opinion because it makes neural rendering a core, integral part of the graphics pipeline, not something applied at the end like DLSS.
 
In a way it might be better to apply it at the end. Having golden samples which must be displayed makes things hard for the AI; it's a direct cause of instability.

Maybe it's time for deferred hallucination: have a subsampled G-buffer act as a mere suggestion to the neural renderer, then let the Deep Learning Hallucinating Renderer make something up (for frame generation there would also be frame time and the view matrix, and the G-buffer might be skipped entirely).

PS. not being sarcastic.
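
A rough sketch of what the conditioning inputs to such a network could look like, assuming a small per-pixel feature vector built from a subsampled G-buffer plus the view matrix and frame time. Everything here is hypothetical (the struct layout, names, and channel choice are mine, not any real API); it just shows how little "golden" data the network would actually be handed.

```cpp
#include <array>
#include <cstddef>

// One subsampled G-buffer texel: a suggestion, not ground truth.
struct GBufferSample {
    std::array<float, 3> albedo;
    std::array<float, 3> normal;
    float depth;
    float roughness;
};

// 8 G-buffer floats + 16 view-matrix floats + 1 frame-time float = 25 inputs.
std::array<float, 25> pack_inputs(const GBufferSample& g,
                                  const std::array<float, 16>& viewMatrix,
                                  float frameTimeMs)
{
    std::array<float, 25> v{};
    std::size_t i = 0;
    for (float a : g.albedo)   v[i++] = a;   // colour "suggestion"
    for (float n : g.normal)   v[i++] = n;
    v[i++] = g.depth;
    v[i++] = g.roughness;
    for (float m : viewMatrix) v[i++] = m;   // camera pose, for framegen-style use
    v[i++] = frameTimeMs;                    // temporal conditioning
    return v;
}
```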
 
Perhaps this should be another thread, but with the "Cooperative Vectors" API coming to DirectX, devs can now access tensor cores from regular pixel and compute shaders and thus leverage fast ML performance in games.
Enabling Neural Rendering in DirectX: Cooperative Vector Support Coming Soon

What are Cooperative Vectors, and why do they matter?

Cooperative vector support will accelerate AI workloads for real-time rendering, which directly improves the performance of neural rendering techniques. It will do so by enabling multiplication of matrices with arbitrarily sized vectors, which optimize the matrix-vector operations that are required in large quantities for AI training, fine-tuning, and inferencing. Cooperative vectors also enable AI tasks to run in different shader stages, which means a small neural network can run in a pixel shader without consuming the entire GPU. Cooperative vectors will enable developers to seamlessly integrate neural graphics techniques into DirectX applications and light up access to AI-accelerator hardware across multiple platforms. Our aim is to provide game developers with the cutting-edge tools they need to create the next generation of immersive experiences.

Intel, AMD, and Qualcomm support is due, as the blog mentions. After having seen the neural rendering demos I am 100% on board with this and would love to see how devs manage to use this type of feature, even if it just means better universal upscaling in the mid-term or other smaller things.
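
To make the quoted description concrete, here is a minimal CPU-side reference of the matrix-vector operation cooperative vectors are meant to accelerate: one small MLP layer, the kind of thing that could run per pixel inside a shader. This is not the actual HLSL intrinsic or DirectX API, just plain C++ showing the math.

```cpp
#include <array>
#include <cstddef>

// One MLP layer: y = ReLU(W * x + b), for a small, arbitrarily sized vector x.
template <std::size_t Out, std::size_t In>
std::array<float, Out> mlp_layer(const std::array<std::array<float, In>, Out>& W,
                                 const std::array<float, In>& x,
                                 const std::array<float, Out>& b)
{
    std::array<float, Out> y{};
    for (std::size_t r = 0; r < Out; ++r) {
        float acc = b[r];
        for (std::size_t c = 0; c < In; ++c)
            acc += W[r][c] * x[c];          // matrix-vector product
        y[r] = acc > 0.0f ? acc : 0.0f;     // ReLU activation
    }
    return y;
}
```

The cooperative vector path would hand that inner loop to the AI-accelerator hardware the blog mentions, which is what makes running a small network inside a pixel shader practical without consuming the entire GPU.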
 