NVIDIA made the latest SDK open source too, so in theory you could run it on AMD GPUs as well.
Not sure about that, because on GitHub i can only find a memory manager for CUDA, but no actual CUDA kernels or graphics API compute shaders.
https://github.com/NVIDIAGameWorks/PhysX/tree/4.1/physx
Maybe the GPU stuff is now all part of APEX (https://www.nvidia.com/object/apex.html)? APEX has features that are interesting for GPU, but it seems to be more of a framework to make integration easier, so i don't think so.
So either i'm just unable to find the GPU sources, or they are still closed source and provided only as a DLL which uses CUDA (this is the origin of my confusion here).
But ignoring those politics, personally i think the question 'Why no more PhysX hardware acceleration?' has to be asked in light of some points that did not exist years ago:
1. With low level APIs and async compute you want to distribute compute work alongside rendering, at times where there is little contention and best performance. (E.g. the usual proposal: do async compute while rendering shadow maps, pair ALU-heavy with bandwidth-heavy workloads, etc.)
As long as PhysX runs on CUDA, this option is not available, because graphics APIs and CUDA have different mechanisms to dispatch and schedule work on the GPU. (Personal assumption! I have never used CUDA.)
To fix this, NV would need to port PhysX to multiple APIs, at least DX12 and VK. And the user would need to know a bit more about PhysX internals, which makes it harder to use.
Maybe NV plans to do this next - i don't think they still profit much from keeping GPU PhysX vendor locked, and if the users are mainly Unity and Epic, the extra engine work would pay off.
2. The physics effects interesting for GPU (particles, cloth, soft bodies, foliage) are easy to implement in comparison to constrained rigid body dynamics. Also, the geometry is already on the GPU for rendering and could eventually be reused, though in a non-standard format across various engines.
So if you are a big AAA developer and have the manpower, you likely get the best solution for those things with custom tech. This solution is then 'hardware accelerated' too (no matter how we define this term, there never was dedicated physics hardware on NV GPUs).
This point also applies to destruction debris which might not need accurate collision detection and solver.
3. Most people agree GPU physics is not worth it in the average case. It's surely nice to enhance PC games, but it's mostly eye candy, so optional. And with the GPU usually being the bottleneck, it's not that attractive.
(I'm curious how next gen might change this. Fluid is really expensive, but smoke combined with some volumetric lighting could drop some jaws at acceptable costs.)
Now with the fine grained scheduling on GPU introduced with raytracing, i guess that running all physics on GPU could be done much more efficiently.
Before that, the GPU was attractive to accelerate the solver, but collision detection for polyhedra and analytical shapes seemed too complex and divergent.
So in theory NV could make a more powerful PhysX for RTX cards i guess, utilizing options that are not exposed through graphics APIs, including the BVH hardware. (pure speculation)
But even then (and ignoring the vendor lock), all those CPU cores need some work too, and they are well suited for physics. So i think physics will remain mostly a CPU task for good reasons.