PhysX and Flow GPU Source Code Now Available

Since the release of PhysX SDK 4.0 in December 2018, NVIDIA PhysX has been available as open source under the BSD-3 license—with one key exception: the GPU simulation kernel source code was not included.

That changes today.

We’re excited to share that the latest update to the PhysX SDK now includes all the GPU source code, fully licensed under BSD-3!

With over 500 CUDA kernels powering features such as rigid body dynamics, fluid simulation, and deformable objects, GPU PhysX represents one of the most advanced real-time simulation use cases of CUDA and GPU programming. We hope this release will be a valuable resource for learning, experimentation, and development across the community.
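
For anyone curious what that looks like in practice, here is a deliberately toy example (not from the SDK, purely illustrative) of the shape a CUDA simulation kernel takes: one explicit integration step, one thread per particle.

```cuda
// Illustrative toy only -- not PhysX source. A typical simulation kernel
// maps one GPU thread to one simulated element and advances it by dt.
__global__ void integrateParticles(float4* pos, float4* vel,
                                   int count, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;          // guard the tail of the last block
    vel[i].y += -9.81f * dt;         // apply gravity to velocity
    pos[i].x += vel[i].x * dt;       // advance position by velocity
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}
```

The real kernels in the release are of course far more involved (contact generation, constraint solving, and so on), but the execution model is the same.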

In addition, we’re also open-sourcing the full GPU compute shader implementation of the Flow SDK, our real-time, sparse grid–based fluid simulation library.

We can’t wait to see what you build with it. Explore, experiment—and feel free to post issues or feedback right here on GitHub!

 
Good on them! Now maybe modders could fix the issues the 50 series has with PhysX in those older games that are no longer supported.

I think it's unlikely; the RTX 50 series dropped 32-bit CUDA support. What could happen is someone writing a mod to change old games to use 64-bit PhysX.
 
I think it's unlikely; the RTX 50 series dropped 32-bit CUDA support. What could happen is someone writing a mod to change old games to use 64-bit PhysX.
Maybe there's some misunderstanding, but that's what I meant. Obviously they're not going to be fixing the drivers, lol.
 
I think it's unlikely; the RTX 50 series dropped 32-bit CUDA support. What could happen is someone writing a mod to change old games to use 64-bit PhysX.
A 32-bit application cannot use 64-bit libraries. You would need to get the source code of the game, replace 32-bit PhysX with 64-bit, and recompile it (among other steps). Doing this with code derived from decompilation instead of the original source code might theoretically be possible but I don't expect that to happen. The best you can hope for is that the developers themselves do it.
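
To make the mismatch concrete: on Windows the failure happens at load time, before any game code runs. A minimal sketch (plain Win32; the DLL name is hypothetical):

```cpp
// Built as a 32-bit (x86) executable, this cannot load a 64-bit DLL:
// LoadLibrary fails with ERROR_BAD_EXE_FORMAT (error 193).
#include <windows.h>
#include <cstdio>

int main() {
    // Hypothetical name -- point it at any 64-bit DLL from an x86 build.
    HMODULE h = LoadLibraryA("PhysX_64.dll");
    if (!h) {
        DWORD err = GetLastError();
        if (err == ERROR_BAD_EXE_FORMAT)
            printf("error %lu: DLL bitness does not match the process\n", err);
        else
            printf("LoadLibrary failed with error %lu\n", err);
        return 1;
    }
    FreeLibrary(h);
    return 0;
}
```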
 
A 32-bit application cannot use 64-bit libraries. You would need to get the source code of the game, replace 32-bit PhysX with 64-bit, and recompile it (among other steps). Doing this with code derived from decompilation instead of the original source code might theoretically be possible but I don't expect that to happen. The best you can hope for is that the developers themselves do it.

Or you could port the PhysX code to, say, DirectCompute to make it run in 32-bit mode. That way you could build a 32-bit-compatible PhysX DLL, and the games should be able to run. It's a lot of work, though.
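
The host side of that idea is at least viable; D3D11 DirectCompute works fine from a 32-bit process. A rough sketch of a minimal dispatch (toy kernel and buffer sizes, error checking omitted, nothing PhysX-specific):

```cpp
// Minimal DirectCompute dispatch; compiles for x86 or x64 unchanged.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstring>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d3dcompiler.lib")

// Toy HLSL kernel: one crude gravity step per particle.
static const char* kCS = R"(
RWStructuredBuffer<float4> pos : register(u0); // xyz = position, w = speed
[numthreads(64, 1, 1)]
void main(uint3 id : SV_DispatchThreadID) {
    pos[id.x].y -= pos[id.x].w * 0.016f;
})";

int main() {
    ID3D11Device* dev = nullptr; ID3D11DeviceContext* ctx = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &dev, nullptr, &ctx);

    ID3DBlob* blob = nullptr;
    D3DCompile(kCS, strlen(kCS), nullptr, nullptr, nullptr,
               "main", "cs_5_0", 0, 0, &blob, nullptr);
    ID3D11ComputeShader* cs = nullptr;
    dev->CreateComputeShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                             nullptr, &cs);

    // 64 particles in a structured buffer, bound through a UAV.
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = 64 * sizeof(float) * 4;
    bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(float) * 4;
    ID3D11Buffer* buf = nullptr;
    dev->CreateBuffer(&bd, nullptr, &buf);
    ID3D11UnorderedAccessView* uav = nullptr;
    dev->CreateUnorderedAccessView(buf, nullptr, &uav);

    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch(1, 1, 1); // 1 group x 64 threads = 64 particles
    puts("dispatched");
    return 0;
}
```

The hard part is everything above this layer: rewriting hundreds of CUDA kernels in HLSL while keeping the results compatible enough for old games.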
 
Or you could port the PhysX code to, say, DirectCompute to make it run in 32-bit mode. That way you could build a 32-bit-compatible PhysX DLL, and the games should be able to run. It's a lot of work, though.

How similar is DirectCompute to CUDA? I’m guessing PhysX uses CUDA instructions or fast paths that may not be present in DirectCompute.
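
As a concrete example of the kind of fast path I mean (illustrative, not taken from PhysX): CUDA warp shuffles let threads exchange registers directly, with no shared-memory round-trip. Classic DirectCompute (cs_5_0) has nothing equivalent; the closest match is the wave intrinsics that only arrived with Shader Model 6.

```cuda
// Warp-level sum reduction, a staple of GPU solvers. Assumes blockDim.x is
// a multiple of 32 and that *out was zeroed before launch.
__global__ void reduceImpulses(const float* in, float* out, int n) {
    float v = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        v += in[i];                        // grid-stride accumulation
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset); // register exchange
    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, v);                 // one atomic per warp, not per thread
}
```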
 
It’s still crazy to me that no OpenCL or DirectCompute physics libraries have emerged. I recall heated debates in the past about the proprietary nature of PhysX and how we would be so much better off if it was open to all vendors. Sadly that never happened and the proprietary stuff also fizzled out.
 
It’s still crazy to me that no OpenCL or DirectCompute physics libraries have emerged. I recall heated debates in the past about the proprietary nature of PhysX and how we would be so much better off if it was open to all vendors. Sadly that never happened and the proprietary stuff also fizzled out.
While there have been some improvements over the years for standard HLSL compute shaders, the most popular shading language still doesn't come close to expressing the richer feature set of GPU compute kernel languages like CUDA, and in some cases even OpenCL C. Also, this release doesn't really change the outcome for older applications whose features are gated behind the lack of a 32-bit CUDA driver ...
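
One concrete example of the gap (an illustrative sketch, not PhysX code): CUDA and OpenCL C have raw pointers, so pointer-chasing structures and device-side recursion are expressible directly, while standard HLSL has no pointers and forces everything to be flattened into index-based buffer access.

```cuda
// Expressible in CUDA, not in standard HLSL: nodes that point at each other.
struct BvhNode {
    float3   lo, hi;     // axis-aligned bounds
    BvhNode* left;       // raw device pointers -- no HLSL equivalent
    BvhNode* right;
    int      primIndex;  // >= 0 marks a leaf
};

__device__ int countLeaves(const BvhNode* n) {
    if (n->primIndex >= 0) return 1;                       // leaf node
    return countLeaves(n->left) + countLeaves(n->right);   // device recursion
}
```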

Even if we could create vendor-agnostic GPU physics solvers, they still wouldn't be "universal enough" (you'd have to write Direct3D/Metal/Vulkan backends) for the biggest middleware providers, so developing your physics library in standard C++ still has the least friction with regard to multiplatform development ...
 
If there's a common interface (something like "DirectPhysics"), vendors might be interested in implementing their own optimized versions, while game developers might be interested in actually using them.
Unfortunately, I think one of the problems of a GPU physics engine is that unless it's only used for cosmetic effects (such as particle effects, hair, or fabrics), it's generally desirable for the CPU to have access to the simulation results. Without an efficient way to do so, the application will be rather limited. With today's tendency of games to over-utilize the GPU while keeping the CPU underutilized, I can understand why this is still not a thing.
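
To put the readback cost in code (a hedged CUDA sketch with made-up buffer names): any frame where gameplay logic needs the results, you pay a device-to-host copy plus a synchronization point before the CPU can touch the data.

```cpp
// Sketch of the round-trip: solver output must cross PCIe and the CPU
// must wait on it before gameplay code can read a single contact.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 4096;                       // made-up contact count
    float *dContacts = nullptr, *hContacts = nullptr;
    cudaMalloc(&dContacts, n * sizeof(float));
    cudaMemset(dContacts, 0, n * sizeof(float));
    cudaMallocHost(&hContacts, n * sizeof(float)); // pinned, for async copy

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // ... solver kernels would run here on `stream` ...

    cudaMemcpyAsync(hContacts, dContacts, n * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);            // the stall gameplay code feels

    printf("first contact: %f\n", hContacts[0]);
    cudaFreeHost(hContacts);
    cudaFree(dContacts);
    return 0;
}
```

Purely cosmetic effects can skip the copy and the sync entirely, which is exactly why they're the easy case.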
 
For games specifically, is there anything an OpenCL or DirectCompute physics library can do that you can't do with regular Vulkan/DX12 compute shaders? Considering that many GPU physics effects require access to the game's depth buffer or g-buffer, it might actually be easier to use compute shaders.
 
If there's a common interface (something like "DirectPhysics"), vendors might be interested in implementing their own optimized versions, while game developers might be interested in actually using them.

I was thinking more along the lines of Epic rolling their own GPU-accelerated physics library. If they can support multiple platforms for graphics, they can also do it for physics.
 
I was thinking more along the lines of Epic rolling their own GPU-accelerated physics library. If they can support multiple platforms for graphics, they can also do it for physics.

This is also a possibility; there is something like Havok, after all. I remember reading somewhere about some parts of Havok being GPU-accelerated, but I don't know much about it.
 
I was thinking more along the lines of Epic rolling their own GPU-accelerated physics library. If they can support multiple platforms for graphics, they can also do it for physics.
I wonder what Epic Games are supposed to do next when they encounter driver bugs on a 'supported' older Android device that won't get any updates ...

The graphics/renderer teams behind Unreal/Unity have a culture of working around/tolerating vendor politics, but I'm not so sure their physics/simulation teams are as keen on the idea (especially when they've just finished moving away from PhysX).
 
How is this any different to the status quo? CPUs and operating systems have bugs too.
If a CPU has a hardware design bug (and those are already few and far between), it's your job to fix the compiler (Clang/LLVM is supported on many platforms), find an alternate algorithm, or otherwise refuse to support it; and operating systems generally don't break user-space processes/applications like games ...

If a GPU has a driver bug (too many to list, given the number of vendor/API combinations), are you sure you want to develop your multiplatform physics library beyond just Windows/consoles and include 'undesirable' platforms like Apple or Android systems, especially when you can't fix their broken driver compiler or buggy API implementation?
 
If a CPU has a hardware design bug (and those are already few and far between), it's your job to fix the compiler (Clang/LLVM is supported on many platforms), find an alternate algorithm, or otherwise refuse to support it; and operating systems generally don't break user-space processes/applications like games ...

If a GPU has a driver bug (too many to list, given the number of vendor/API combinations), are you sure you want to develop your multiplatform physics library beyond just Windows/consoles and include 'undesirable' platforms like Apple or Android systems, especially when you can't fix their broken driver compiler or buggy API implementation?

How likely is it that GPU bugs will affect lower-level APIs like CUDA versus all of the app-specific hackery that goes on in graphics? We're not talking about game-ready drivers here. Either way, it's a cop-out. The APIs are available; the problem is a lack of incentive.
 
How likely is it that GPU bugs will affect lower-level APIs like CUDA versus all of the app-specific hackery that goes on in graphics? We're not talking about game-ready drivers here. Either way, it's a cop-out. The APIs are available; the problem is a lack of incentive.
I wouldn't really compare publicly available modern gfx APIs (which are very much still designed around multiple hardware vendors) to a single-vendor, compute-centric API like CUDA (which doesn't really expose fixed-function HW either) in terms of potential surface area for buggy implementations ...

When gameplay code and physics code are heavily intertwined, any driver bug can become a much more invasive showstopper for shipping your game on more device configurations. A driver bug won't stop at taking down your renderer; it'll take down your whole game with it ...

If the choice comes down to writing your library multiple times (Direct3D/Metal/Vulkan/consoles) with no guarantee that it'll actually run everywhere you want it to, versus writing it once (portable C++) and running everywhere (any modern x86/ARM CPU), it's really not that hard to see which is the more sensible choice ...
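
For contrast, the "write once" option really is this boring (a trivial sketch, obviously nowhere near a real solver): plain C++, no driver in the loop, compiles unchanged for any x86/ARM target.

```cpp
// Semi-implicit Euler step over an array of bodies -- portable C++ with
// no GPU, driver, or shading-language dependency to work around.
#include <vector>

struct Body { float px, py, pz, vx, vy, vz; };

void step(std::vector<Body>& bodies, float dt) {
    const float g = -9.81f;        // gravity, m/s^2
    for (Body& b : bodies) {
        b.vy += g * dt;            // integrate velocity first...
        b.px += b.vx * dt;         // ...then position (semi-implicit Euler)
        b.py += b.vy * dt;
        b.pz += b.vz * dt;
    }
}
```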
 
A 32-bit application cannot use 64-bit libraries. You would need to get the source code of the game, replace 32-bit PhysX with 64-bit, and recompile it (among other steps). Doing this with code derived from decompilation instead of the original source code might theoretically be possible but I don't expect that to happen. The best you can hope for is that the developers themselves do it.

Maybe with a compatibility layer, like Proton. There's dxvk-nvapi, for example, a layer that provides Proton with the ability to use NVIDIA technologies like DLSS, PhysX, Reflex, etc.

 