Direct3D Mesh Shaders

“On PlayStation 5 we use primitive shaders for that path which is considerably faster than using the old pipeline we had before with vertex shaders.”
Yeah, there are a couple of discussions floating around here about PrSh and NGG, RDNA1/2 support, etc.
The NGG is a driver path that compiles the old 3D front end into PrSh. Then there is explicit coding directly for PrSh, which I think is only available on PS5. That one is harder to prove, but in my experience Sony documentation never gets out, so it's just safe to assume it does.
That being said, IIRC PrSh is a direct drop-in for the existing 3D front end, which is why NGG can exist. Mesh Shaders are a complete overhaul of the front end, which is why they always require explicit coding to take advantage of them. And I don't know which games have or have not implemented them yet.
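To make "explicit coding" a bit more concrete, here is a minimal, purely illustrative D3D12-side sketch (the names vbView, ibView and meshletCount are made-up placeholders, not from any particular engine): the legacy front end is fed through the input assembler into a vertex shader, while the mesh shader path has no IA stage at all and just dispatches threadgroups, which is why engines have to restructure their geometry submission to take advantage of it.

Code:
// Illustrative sketch only: legacy IA-driven draw vs. explicit mesh shader dispatch.
#include <d3d12.h>

// Old path: vertex/index buffers go through the IA stage into a vertex shader.
void DrawLegacyPath(ID3D12GraphicsCommandList* cmdList,
                    const D3D12_VERTEX_BUFFER_VIEW& vbView,
                    const D3D12_INDEX_BUFFER_VIEW& ibView,
                    UINT indexCount)
{
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    cmdList->IASetVertexBuffers(0, 1, &vbView);
    cmdList->IASetIndexBuffer(&ibView);
    cmdList->DrawIndexedInstanced(indexCount, 1, 0, 0, 0);
}

// Mesh shader path: no IA stage at all; the app dispatches threadgroups and the
// mesh shader fetches and expands geometry itself (e.g. from meshlet buffers).
void DrawMeshShaderPath(ID3D12GraphicsCommandList6* cmdList, UINT meshletCount)
{
    cmdList->DispatchMesh(meshletCount, 1, 1);
}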

I do think PS5 likely supports NGG as well for developers that don't want to write PrSh.
Sometimes I think this is why we see more consistent frame rates in titles on PS5 than on XSX. There are other parts of the pipeline that can get hung up, and those won't be fixed by lowering the resolution.
 
Fair enough
 
The NGG is a driver path that compiles the old 3D front end into PrSh.
There’s an interesting blog post detailing how NGG shaders work on RDNA1/2 from a driver developer working for Valve here:
 
Really good insight here. And quite recent. Only a month old
 
I assume there must be significant differences in the NGG between RDNA1 and RDNA2, otherwise RDNA1 would support Mesh Shaders too.

While shader culling can also work on RDNA1, we don’t enable it by default because we haven’t yet found a game that noticeably benefits from it. On RDNA1, it seems that the old and new pipelines have similar performance.
Navi 10 and 12 lack some features such as per-primitive outputs, which makes it impossible to implement mesh shaders on these GPUs. We don’t use NGG on Navi 14 (RX 5500 series) because it doesn’t work.
Yup, just as I thought. Now I wonder: does PS5 support per-primitive outputs with its Geometry Engine?
 
Wouldn't be of much benefit in the absence of full FL12_2 support though.

DirectX 12 Ultimate features are orthogonal to each other in terms of functionality. There's no meaningful interaction between Mesh shaders and the other features that inherently makes it better with higher feature levels ...
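As a rough illustration of that orthogonality at the API level (assuming an already created ID3D12Device; variable names are made up), each DX12 Ultimate feature has its own cap that can be queried independently of whatever feature level the device reports:

Code:
#include <d3d12.h>

void QueryDX12UltimateCaps(ID3D12Device* device)
{
    // Raytracing cap lives in OPTIONS5.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    bool hasDXR = opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    // Variable rate shading cap lives in OPTIONS6.
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6));
    bool hasVRS2 = opts6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;

    // Mesh shader and sampler feedback caps live in OPTIONS7.
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7));
    bool hasMeshShaders     = opts7.MeshShaderTier       >= D3D12_MESH_SHADER_TIER_1;
    bool hasSamplerFeedback = opts7.SamplingFeedbackTier >= D3D12_SAMPLER_FEEDBACK_TIER_0_9;

    // None of these checks depends on the others; a device can expose any subset.
    (void)hasDXR; (void)hasVRS2; (void)hasMeshShaders; (void)hasSamplerFeedback;
}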
 
There is a general expectation that comes with a feature level. A GPU at FL12_1 that supports just one feature from 12_2 would very likely be treated by most games as an FL12_1 GPU.
 
Not really, games are gated by feature usage in practice rather than feature levels, if VKD3D-Proton is anything to go by. Virtually no games in existence use ROVs, but that's not going to stop hardware/driver implementations from running content that theoretically requires a higher feature level than what the system is capable of ...

Games just don't crash if a driver or a translation layer falsely reports a higher feature level than what the system can actually support ...
 
Not really, games are gated by feature usage in practice rather than feature levels
Not sure what you mean by that, but I think it would be fairly unlikely that anyone would create a query for MS support specifically to support the feature on RDNA1 cards, because on other cards it's enough to query the FL supported by the h/w and drivers. So basically such a feature would probably be left unused, and thus AMD's decision not to invest in supporting it makes sense. Especially since we're at the end of the 4th year of MS availability with exactly zero shipped games using it.
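For reference, a minimal sketch of what that FL query looks like (device assumed to be created already; D3D_FEATURE_LEVEL_12_2 requires a reasonably recent SDK):

Code:
#include <windows.h>
#include <d3d12.h>

bool SupportsFeatureLevel12_2(ID3D12Device* device)
{
    // Ask the driver whether it can report FL 12_2 for this device.
    const D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_12_2 };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = 1;
    fl.pFeatureLevelsRequested = requested;
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl))))
        return false; // none of the requested levels is supported
    return fl.MaxSupportedFeatureLevel >= D3D_FEATURE_LEVEL_12_2;
}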
 
Detecting feature levels alone isn't going to block individual feature usage. Many games require resource binding tier 2 but still run on hardware that isn't FL_12 compliant. Unless the actual API features are used, feature levels don't matter; developers are going to design their software around required features rather than feature levels, because the latter aren't a useful indicator of what they want to achieve technically ...

Again, if a feature that's part of a feature level isn't supported on the system, that's not going to stop the system from running the content itself if the feature is unused. Feature levels only become a hard requirement if an application actually attempts to use all of them ...
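A sketch of gating on the actual cap instead of the feature level, which is the approach described above (again assuming a created device; useMeshletPath is a hypothetical flag):

Code:
#include <windows.h>
#include <d3d12.h>

bool SupportsMeshShaders(ID3D12Device* device)
{
    // The mesh shader cap is reported independently of the feature level.
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opts7, sizeof(opts7))))
        return false;
    return opts7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}

// A title that only needs mesh shaders can pick its geometry path from this cap
// and ignore whether the device also reports FL12_2:
//   bool useMeshletPath = SupportsMeshShaders(device);   // hypothetical flag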
 
Not sure what you mean by that, but I think it would be fairly unlikely that anyone would create a query for MS support specifically to support the feature on RDNA1 cards, because on other cards it's enough to query the FL supported by the h/w and drivers. So basically such a feature would probably be left unused, and thus AMD's decision not to invest in supporting it makes sense. Especially since we're at the end of the 4th year of MS availability with exactly zero shipped games using it.
Oh, you must mean Turing's sampler feedback "support" then.
 
I assume there must be significant differences in the NGG between RDNA1 and RDNA2, otherwise RDNA1 would support Mesh Shaders too.



Yup, just as I thought. Now I wonder: does PS5 support per-primitive outputs with its Geometry Engine?
RDNA1 does support mesh shaders in Metal 3, so I assume it is definitely doable, but maybe AMD does not think it's worth doing.
 
I haven't looked at Metal 3, but I expect there are subtle differences between its and Direct3D's definitions of Mesh Shaders.
 
Timur has posted a follow-up blog post detailing “how the sausage is made” w.r.t. adding mesh/task shader support to the Mesa-based open-source AMD and Intel drivers:
 
And interestingly, based on some recent AMDVLK LLPC commits, it looks like RDNA3 / GFX11 has improved mesh shader support. I think they removed the RDNA2 limitation mentioned by Timur that each thread can only export one vertex/primitive. If I’m correct, then RDNA3’s implementation of mesh shaders is now a lot closer to NV’s:
 