Direct3D feature levels discussion

This is probably a silly question. One of the promises of neural shaders is that they will approximate complex materials at lower cost than standard compute shaders. Where exactly will developers obtain the neural models that represent the materials they want in their games? Is every studio going to start training their own ML models? All of the ML models in games to date are shipped in proprietary IHV libraries.
 
Where exactly will developers obtain the neural models that represent the materials they want in their games? Is every studio going to start training their own ML models? All of the ML models in games to date are shipped in proprietary IHV libraries.
I suspect there will eventually be libraries of these available for purchase or licensing, with the big third-party engines having them built in, similar to how materials and textures can be bought now.
 
Today, NVIDIA and Microsoft announced that neural shading support will be coming to DirectX 12 through the Agility SDK Preview in April 2025. The DirectX update will enable you to access RTX Tensor Cores from within shaders to achieve incredible image quality and performance gains. It will help enable AI workloads by optimizing matrix-vector operations, which are crucial for AI training, fine-tuning, and inference.
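For a sense of what those matrix-vector operations look like, here is a minimal CPU-side C++ sketch of one small fully connected layer, the kind of per-sample work a neural material decoder does. The sizes, names, and activation are illustrative only, not anything from the Agility SDK or the cooperative-vector HLSL API.
Code:
#include <array>
#include <algorithm>

// Illustrative sizes only: a tiny hidden layer like a neural material
// decoder might use. Real networks and the cooperative-vector HLSL
// path will differ; this just shows the matrix-vector work involved.
constexpr int kIn  = 16;
constexpr int kOut = 8;

using InVec  = std::array<float, kIn>;
using OutVec = std::array<float, kOut>;

// One fully connected layer: y = ReLU(W * x + b). Tensor-core paths
// evaluate this same product, but in reduced precision and across
// many shader threads at once.
OutVec DenseLayer(const float (&W)[kOut][kIn], const OutVec& bias, const InVec& x)
{
    OutVec y{};
    for (int o = 0; o < kOut; ++o)
    {
        float acc = bias[o];
        for (int i = 0; i < kIn; ++i)
            acc += W[o][i] * x[i];
        y[o] = std::max(acc, 0.0f); // ReLU activation
    }
    return y;
}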
 
DXR 1.2 announced. Standardizes opacity micromaps and shader execution reordering.

 
DXR 1.2 announced. Standardizes opacity micromaps and shader execution reordering.

So DXR 1.2 supports RTX 40 and 50 only? Arc only supports a variant of SER (with no OMM), and RX 9000 supports a variant of OMM (with no SER). Or am I missing something?

Also DXCV seems to standardize AI upscaling and denoising as well as neural texture compression.
 
So DXR 1.2 supports RTX 40 and 50 only? Arc only supports a variant of SER (with no OMM), and RX 9000 supports a variant of OMM (with no SER). Or am I missing something?
Arc Celestial is going to support OMM, or at least something similar to it.
Sub-Triangle Opacity Culling (STOC) subdivides triangles in BVH leaf nodes, and marks sub-triangles as transparent, opaque, or partially transparent. The primary motivation is to reduce wasted any-hit shader work when games use texture alpha channels to handle complex geometry. Intel’s paper calls out foliage as an example, noting that programmers may use low vertex counts to reduce “rendering, animation, and even simulation run times.”

BVH geometry from the API perspective can only be completely transparent or opaque, so games mark all partially transparent primitives as transparent. Each ray intersection will fire an any-hit shader, which carries out alpha testing. If alpha testing indicates the ray intersected a transparent part of the primitive, the shader program doesn’t contribute a sample and the any-hit shader launch is basically wasted. STOC bits let the any-hit shader skip alpha testing if the ray intersects a completely transparent or completely opaque sub-triangle.
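To make the win concrete, here is a rough C++ sketch of the decision that classification enables during traversal. The enum and function names are made up for illustration; they are not Intel’s or Microsoft’s actual API, just the three-way classification described above.
Code:
#include <cstdint>

// Hypothetical per-sub-triangle opacity state; two bits per sub-triangle
// are enough to encode the three categories described above.
enum class SubTriOpacity : uint8_t
{
    Transparent,          // ray passes through; no any-hit shader needed
    Opaque,               // accept the hit directly; no alpha test needed
    PartiallyTransparent  // unknown: fall back to the any-hit alpha test
};

enum class HitAction { Ignore, Accept, RunAnyHitAlphaTest };

// Only the "partially transparent" case still pays for an any-hit shader
// launch and its texture alpha test; the other two are resolved in traversal.
HitAction ClassifyIntersection(SubTriOpacity state)
{
    switch (state)
    {
    case SubTriOpacity::Transparent: return HitAction::Ignore;
    case SubTriOpacity::Opaque:      return HitAction::Accept;
    default:                         return HitAction::RunAnyHitAlphaTest;
    }
}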
Also DXCV seems to standardize AI upscaling and denoising
One potential application for DXCV is that developers can create their own neural upscaler/denoiser/frame generator instead of relying on the IHV solutions (DLSS/FSR4/XeSS), just like Epic and certain other devs created their own non-neural upscalers instead of relying on FSR2. But it isn't going to change the existing IHV solutions; Microsoft already has DirectSR for standardizing how those are integrated (currently upscaling only, though it might support denoising and frame generation one day).
 
DXR 1.2 announced. Standardizes opacity micromaps and shader execution reordering.

That’s very nice, but I do miss the days when Microsoft was driving DirectX instead of playing second fiddle to IHVs years later. They’re even using Nvidia’s terminology. I wonder how much the API differs, if at all, from Nvidia’s version.
 
That’s very nice, but I do miss the days when Microsoft was driving DirectX instead of playing second fiddle to IHVs years later. They’re even using Nvidia’s terminology. I wonder how much the API differs, if at all, from Nvidia’s version.
Which makes their hesitance to go all the way and just add all features Nvidia has now even stranger.
 
So DXR 1.2 supports RTX 40 and 50 only? Arc only supports a variant of SER (with no OMM), and RX 9000 supports a variant of OMM (with no SER). Or am I missing something?

Also DXCV seems to standardize AI upscaling and denoising as well as neural texture compression.
OMM can run in software, while Lovelace and Blackwell have hardware acceleration, so all RT GPUs should, in theory, support OMM in some fashion.
 
I'm always pushing for multiple options.
But I do wonder why MS doesn't move DX to legacy status and just go all in on Vulkan.
Moving forward, what are they actually getting out of whatever work they put in?

I do wonder if they're having those conversations now, with the multi-platform push and everything.
 