ML is going to be very important for AI going forward; RT can arguably aid AI calculations too, but of the two I think ML will play the much bigger role overall. To that end, I wonder how much of MS's custom support for extended low-precision ML math took design cues from CDNA, but that's almost impossible to speculate on since, IIRC, there are no schematics, diagrams or even whitepapers available for CDNA.
From what I've been reading, PS5's ML support extends to FP16 and INT16, which makes sense considering PS4 Pro already supported FP16. So MS would essentially have added support for INT8 and INT4, and of course they have more CUs, so they can dedicate more of them in parallel to grouped ML calculations. I'm still curious whether they've done anything else on the GPU side specifically for DirectML; if their general tech (ish?) event hasn't already happened this month, then hopefully more details emerge around that time.
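As a rough illustration of why those lower-precision formats matter for ML inference, here's a minimal sketch of symmetric INT8 quantization applied to a dot product. All the values and scale choices are invented for the example; this isn't how any actual DirectML or console implementation works, just the general idea that INT8 math can approximate FP32 results while letting hardware do more operations per cycle:

```python
import numpy as np

# Made-up example data; real ML workloads would use trained weights.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(8).astype(np.float32)
inputs_fp32 = rng.standard_normal(8).astype(np.float32)

# Symmetric quantization: map [-max_abs, max_abs] onto [-127, 127].
w_scale = np.abs(weights_fp32).max() / 127.0
x_scale = np.abs(inputs_fp32).max() / 127.0
w_q = np.round(weights_fp32 / w_scale).astype(np.int8)
x_q = np.round(inputs_fp32 / x_scale).astype(np.int8)

# INT8 dot product (real hardware accumulates in INT32), then dequantize.
acc = int(np.dot(w_q.astype(np.int32), x_q.astype(np.int32)))
approx = acc * w_scale * x_scale

exact = float(np.dot(weights_fp32, inputs_fp32))
print(f"FP32: {exact:.4f}  INT8 approx: {approx:.4f}")
```

The payoff on GPUs with packed-math support is that four INT8 values fit where one FP32 value would, so peak throughput scales up accordingly, at the cost of the small approximation error you can see in the output.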
Some of the mesh shader test results have been... massive, to say the least: 1000% increases on some GPUs. In terms of actual practicality running real games, though, that will probably come way down. Still, even if you cut those headline gains by an order of magnitude and ended up with something like 2x FPS in practice, that could stack with other things, like what MS are already doing with their FPS enhancement tech (and if that works on older BC games, could it in theory be applied to newer games too, even if it only nudges FPS up rather than delivering 2x/4x multipliers?). It all starts to add up.
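To put rough numbers on why a huge geometry speedup comes "way down" in real games, here's a back-of-the-envelope Amdahl's-law-style calculation. The frame budget and the 30% geometry share are made-up assumptions for illustration, not measurements from any game or console:

```python
# A big geometry speedup only shrinks the geometry portion of the frame;
# the rest of the frame time is untouched. All numbers are assumptions.

frame_ms = 16.7          # ~60 fps frame budget
geometry_share = 0.30    # assume 30% of the frame is geometry-bound
speedup = 10.0           # benchmark-style 10x mesh shader gain

geometry_ms = frame_ms * geometry_share
other_ms = frame_ms - geometry_ms
new_frame_ms = other_ms + geometry_ms / speedup

print(f"old: {frame_ms:.1f} ms -> new: {new_frame_ms:.1f} ms "
      f"({frame_ms / new_frame_ms:.2f}x fps)")
```

With these assumed numbers, a 10x geometry speedup yields well under 1.5x FPS overall; you'd only approach 2x if geometry were eating more than half the frame to begin with.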
I'm sure that ML is going to be mega important going forwards, but because I know so little about the mechanics of how you make it work, I'm trying not to get myself too hyped too early. Like you, I've not seen anything for XSX beyond the reduced precision data formats.
With mesh shaders, those super impressive benchmark results are probably worst-case scenarios built to stress this particular area of performance (for benching purposes, naturally!). In a real game you'd probably be looking at much smaller gains, since you're so frequently bottlenecked in other areas. Googling around, I found this Nvidia page on mesh shaders (I didn't read it all), but there's this paragraph near the start:
https://developer.nvidia.com/blog/using-mesh-shaders-for-professional-graphics/
"Keep in mind that mesh shaders are deliberately designed to expand existing capabilities and allow you to optimize certain use cases. They are not meant to replace the existing geometry pipeline completely. Due to the existing optimizations in the traditional VTG (vertex-, tessellation-, and geometry-shader) pipeline, do not expect mesh shader pipelines to always provide substantial wins in performance."
Anyway, hopefully PS5 has something similar. Primitive shaders on their own don't seem to me to be able to do everything described for mesh shaders, but that's most definitely my "not a pro" opinion. And even if that's true, PS5 might go beyond anything AMD has publicly described.
I guess it could depend on what kind of shader-driven geometry you can push into the hardware rasteriser...?