It still is. Anyway, it seems to me that Jensen Huang's "every pixel could be AI generated in 10 years" comment is no longer a pipe dream.
This will be released as RTX Neural Materials.

New, faster material rendering using Neural Rendering:
NVIDIA Showcases Real-Time Neural Materials Models, Offering Up To 24x Shading Speedup
NVIDIA unveils a new real-time neural materials model approach, offering a huge 12-24x speedup in shading performance vs. traditional methods. (wccftech.com)
What are Cooperative Vectors, and why do they matter?
Cooperative vector support will accelerate AI workloads for real-time rendering, which directly improves the performance of neural rendering techniques. It will do so by enabling multiplication of matrices with arbitrarily sized vectors, which optimizes the matrix-vector operations that are required in large quantities for AI training, fine-tuning, and inferencing. Cooperative vectors also enable AI tasks to run in different shader stages, which means a small neural network can run in a pixel shader without consuming the entire GPU. Cooperative vectors will enable developers to seamlessly integrate neural graphics techniques into DirectX applications and light up access to AI-accelerator hardware across multiple platforms. Our aim is to provide game developers with the cutting-edge tools they need to create the next generation of immersive experiences.
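For intuition, here is a minimal CPU-side sketch in C++ (not HLSL and not the DirectX cooperative vector API) of the operation being accelerated: one small matrix-vector multiply per pixel, i.e. a single layer of a tiny MLP. The layer sizes, weights, and the `layer` function are made up for illustration; in a real shader the multiply would go through the cooperative vector intrinsics and be routed to the GPU's matrix/tensor units.

```cpp
// Illustrative CPU sketch (hypothetical, not the DirectX API): the core
// operation cooperative vectors accelerate is a small matrix * vector
// multiply, evaluated independently per pixel/sample.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int IN  = 16;  // input feature width (made-up layer size)
constexpr int OUT = 16;  // output feature width (made-up layer size)

// One hidden layer: y = relu(W * x + b). On real hardware W would typically
// be stored in low precision (e.g. FP8/FP16) and the multiply offloaded to
// matrix units via cooperative vector intrinsics.
std::array<float, OUT> layer(const std::array<float, IN>& x,
                             const float (&W)[OUT][IN],
                             const std::array<float, OUT>& b) {
    std::array<float, OUT> y{};
    for (int o = 0; o < OUT; ++o) {
        float acc = b[o];
        for (int i = 0; i < IN; ++i)
            acc += W[o][i] * x[i];       // matrix-vector accumulate
        y[o] = std::max(acc, 0.0f);      // ReLU activation
    }
    return y;
}

int main() {
    float W[OUT][IN] = {};               // weights would come from a trained network
    std::array<float, IN>  x{};          // per-pixel input features (e.g. UV, view dir)
    std::array<float, OUT> b{};
    auto y = layer(x, W, b);
    std::printf("y[0] = %f\n", y[0]);
}
```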
> DeepSeek has just flipped the script. It used to be believed that MoE could not compete for the highest-end frontier models, yet they are competing ... at over an order of magnitude lower computational complexity, due to the combination of MoE, native FP8, and multi-token prediction.

It was revealed that DeepSeek has been trained on a cluster of 50k H100s. Does that still count as lower computational complexity?
GPU for NTC decompression on load and transcoding to BCn:

- Minimum: Anything compatible with Shader Model 6
- Recommended: NVIDIA Turing (RTX 2000 series) and newer.

GPU for NTC inference on sample:

- Minimum: Anything compatible with Shader Model 6 (will be functional but very slow)
- Recommended: NVIDIA Ada (RTX 4000 series) and newer.
> Otherwise, the NTC textures are transcoded to BCn format at load time and the classic sampling method is used. This does not save VRAM and results in quality loss.

No, it still uses the new neural rendering format, but it's just quite slow without Ada+ hardware. I was playing around with the demo a little bit on my 2060 laptop. The closer you get to the textures, the more performance it costs. But the memory savings are huge.
> NVIDIA today released several SDKs for neural rendering as part of the RTX Kit.

So I just noticed that while the SDK for neural materials is not out yet, the underlying library for neural shading is out: https://github.com/NVIDIA-RTX/RTXNS
The interesting part about Neural Texture Compression is that it provides a fallback method depending on the ML capabilities of the GPU.
RTX 40 and higher GPUs can use realtime sampling directly from NTC-compressed textures.
Otherwise, the NTC textures are transcoded to BCn format at load time and the classic sampling method is used. This does not save VRAM and results in quality loss.
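To make that fallback concrete, here is a small hedged sketch of the decision in C++. The capability flags, `NtcMode`, and `chooseNtcMode` are hypothetical names used for illustration, not the actual NTC SDK API.

```cpp
// Hedged sketch of the NTC fallback logic described above; names are
// placeholders, not the real RTX NTC SDK interface.
#include <cstdio>

enum class NtcMode {
    InferenceOnSample,   // sample directly from NTC-compressed data (saves VRAM)
    TranscodeToBCn       // decompress once at load time into BCn (no VRAM savings)
};

struct GpuCaps {
    bool supportsCooperativeVectors;  // fast path: RTX 40-series class hardware
    bool supportsShaderModel6;        // minimum requirement for either path
};

// Pick the texture path based on what the GPU can do (assumed policy).
NtcMode chooseNtcMode(const GpuCaps& caps) {
    if (caps.supportsCooperativeVectors)
        return NtcMode::InferenceOnSample;
    // Fallback: still works on any Shader Model 6 GPU, but textures are
    // transcoded to classic block-compressed (BCn) formats at load time.
    return NtcMode::TranscodeToBCn;
}

int main() {
    GpuCaps caps{/*supportsCooperativeVectors=*/false, /*supportsShaderModel6=*/true};
    NtcMode mode = chooseNtcMode(caps);
    std::printf("Using %s\n",
                mode == NtcMode::InferenceOnSample ? "NTC inference on sample"
                                                   : "BCn transcode at load");
}
```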