Silent_Buddha
Think about fluid simulation, for example. If the recursive depth is high enough, you'd basically have every simulated particle potentially influencing every other particle, which would make it quite a serial problem to solve, wouldn't it?
Now, nobody would do such a thing outside of scientific simulations and tech demos. But the larger your "kernel" (or whatever the influence diameter is called in technically correct terms) becomes, the less parallel your physics becomes. Talk about the butterfly effect.
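To make the "kernel" point a bit more concrete, here's a minimal C++ sketch of the density pass of an SPH-style fluid step (my own toy code, not taken from any actual engine): each particle only sums contributions from neighbours inside the smoothing radius h, so the per-particle work stays local and the outer loop parallelizes nicely. Grow h towards the size of the whole domain and every particle starts depending on every other one, which is exactly the everything-influences-everything case above.

[code]
#include <cstddef>
#include <vector>

struct Particle { float x, y, z, density; };

// Toy SPH-style density accumulation. A real solver would use spatial
// hashing to find neighbours instead of scanning every particle.
void accumulate_density(std::vector<Particle>& ps, float h)
{
    const float h2 = h * h;
    // The outer loop is trivially parallel: particle i only reads its
    // neighbours and writes its own density.
    for (std::size_t i = 0; i < ps.size(); ++i) {
        float sum = 0.0f;
        for (std::size_t j = 0; j < ps.size(); ++j) {
            const float dx = ps[i].x - ps[j].x;
            const float dy = ps[i].y - ps[j].y;
            const float dz = ps[i].z - ps[j].z;
            const float r2 = dx * dx + dy * dy + dz * dz;
            if (r2 < h2)  // only neighbours inside the kernel radius contribute
                sum += (h2 - r2) * (h2 - r2) * (h2 - r2);  // unnormalised poly6-style weight
        }
        ps[i].density = sum;
    }
}
[/code]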
Sure - you can make it scale almost perfectly, like in 3DMark Vantage's physics test, where no single system can influence the other systems. You can also parallelize things like a particle system for smoke, a system for cloth sim, and some rigid body simulation - okay. But the more realistic it becomes, the more serial it gets.
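For what it's worth, the "independent systems" case really is embarrassingly parallel. A rough sketch of what I mean (the System type and step() here are made up for illustration, not anybody's actual API): if smoke, cloth and the rigid bodies never exchange forces, each system can be stepped on its own thread, and you scale with the number of systems. The moment they have to interact, that clean split is gone.

[code]
#include <thread>
#include <vector>

struct System {
    // particles, cloth nodes, rigid bodies, ...
    void step(float /*dt*/) { /* advance this system by one tick */ }
};

void step_all(std::vector<System>& systems, float dt)
{
    std::vector<std::thread> workers;
    for (auto& s : systems)
        workers.emplace_back([&s, dt] { s.step(dt); });  // no shared state between systems
    for (auto& w : workers)
        w.join();
}
[/code]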
The more serial it becomes, the worse it will run on a GPU compared to a CPU. The main advantage of a GPU for physics modeling is precisely how incredibly parallel it is.
Also, as someone noted elsewhere, PhysX isn't even taking advantage of things like SSE3/4. I'd have to go looking, but I believe it doesn't use anything more advanced than MMX on the CPU side. Which means it's even more crippled on the CPU and wasting even more cycles.
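Just to illustrate what that leaves on the table (generic SSE, not a claim about PhysX internals): x87/MMX-era float code handles one value per instruction, while a single SSE instruction handles four, so even dead-simple vector math like the velocity integration below can drop to roughly a quarter of the instructions on the CPU.

[code]
#include <xmmintrin.h>  // SSE1 intrinsics

// Scalar version: one float per operation.
void integrate_scalar(float* pos, const float* vel, float dt, int n)
{
    for (int i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE version: four floats per operation (n assumed to be a multiple
// of 4 to keep the sketch short).
void integrate_sse(float* pos, const float* vel, float dt, int n)
{
    const __m128 d = _mm_set1_ps(dt);
    for (int i = 0; i < n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, d)));
    }
}
[/code]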
The advantage of an open source or, more practically, a non-GPU-specific solution is that it would theoretically also be optimized to use the CPU fairly efficiently. A GPU should still be faster in most situations since it's highly parallel, while a CPU would naturally excel at things that are much more serial in nature.
As it is, PhysX is optimized for the PPU and GPU, and appears to be deliberately castrated with regard to the CPU. Or, if not deliberately castrated, then deliberately ignored and left unoptimized, since a well-optimized CPU path would make the PPU/GPU look less attractive.
In that sense, yes, I do hope PhysX dies horribly fast after OpenCL is released.
Regards,
SB