Being faster with 0% load is not nearly enough to be faster in the real world for those of us with quad cores.
Let's take the cause célèbre, Batman: if the CPU implementation of PhysX ran three times faster (in all likelihood it would run quite a bit faster than that when properly optimized), would it still be outperformed by the GPU implementation on a GTX 285?
The funny thing is that most of the comparisons revolve around the interface and ease of use. Not a single mention of performance differences (on the CPU).
Hmmm, I can't think of an example of a game doing stuff on the CPU where PhysX would require a GPU to do the same thing. When you say "PhysX" you're referring to the whole platform and not just the GPU-accelerated bits, right?
Nah, maybe it WILL mean something if the dynamic changes in the future and there's real competition. Do you have word of this happening sooner rather than later?
My argument is all about this: "If your requirements are such that PhysX CPU is sufficient for your needs..."
I don't think it's unreasonable to expect that there are many cases where you don't need the state-of-the-art and decent is good enough.
As for the question of whether the GPU is only faster because the CPU version is crippled, I don't buy it: Bullet physics is faster on a GPU than a CPU. Why would PhysX not be?
Given the huge variability in performance for PC gaming, they probably didn't intend to push the load into double-digit percentages to begin with ... so speed was secondary to ease of use. It's only when it starts heavily impacting framerates that speed becomes important, and it took NVIDIA sponsorship for developers to start doing that.
Richard said: I have no word, but I'm hinting exactly what you said. People (that want PhysX in OpenCL) must understand the only way for nV to see a benefit in doing this is if there is an alternative. NV will face the music if their decision (NOW - without competition) turns out to be wrong, but you can't fault their reasoning.
How does that contradict what I said?
See that doesn't mesh with what I'm reading though. Nvidia is definitely pushing GPU PhysX very hard in order to sell hardware. But it looks like PhysX was pretty popular and highly regarded by developers long before Nvidia got involved and that still seems to be the case today.
Did Nvidia lower the performance of PhysX when they acquired Ageia? Reading around a bit, it seems that PhysX was widely regarded as being very fast, if not the fastest of the available libraries, even back in 2007. A quick google turns up quite a few discussions among devs that are diametrically opposite to the assumptions being made in this thread.
In the googling that you reference, is it clear whether it was widely regarded as fast on the PC (where AGEIA were working on their own hardware implementation which they were trying to sell), or just on consoles (where there was no option for proprietary hardware and no potential conflict of interest)?
As to it being a fast implementation on PC CPUs at the moment - it's hard to see how a highly parallel physics solver can be regarded as fast if it's only using one of the available CPU cores on a multi-core machine...
http://techreport.com/articles.x/17618/13
Unless the solver takes the basic step of parallelising across cores, debating the optimization (or lack thereof) of the lower levels of the implementation (effective use of SSE, for example) would hardly seem to matter.
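To put a rough picture on what "parallelising across cores" means here, a minimal sketch in plain C++ with std::thread - this is not PhysX or Bullet code, and the Body struct, integrate_range and the even chunking are made up for illustration. It only covers an embarrassingly parallel stage like integration; the contact solver has dependencies between constraints that make it harder to split than this.

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Toy rigid-body state; stands in for whatever the real engine tracks.
struct Body {
    float pos[3];
    float vel[3];
};

// Integrate one contiguous range of bodies: the per-core work item.
static void integrate_range(std::vector<Body>& bodies, std::size_t begin,
                            std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        for (int axis = 0; axis < 3; ++axis)
            bodies[i].pos[axis] += bodies[i].vel[axis] * dt;
}

// Split the integration step evenly across the available hardware threads.
void integrate_parallel(std::vector<Body>& bodies, float dt) {
    const std::size_t workers =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back(integrate_range, std::ref(bodies), begin, end, dt);
    }
    for (std::thread& t : pool) t.join();
}

Even something this naive keeps all the cores of a quad core busy for the stage it covers, which is the point: until the work is spread across cores at all, arguing about SSE at the bottom is moot.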
Well sure, but I have a problem with doing that evaluation in a vacuum. IMO there are no absolutes in a competitive environment. The only thing that matters is whether you're faster than the other guy, not whether you're "fast" in an absolute sense based on an arbitrary set of criteria.
Or does the current implementation of CPU PhysX look lackluster in comparison to the other options?
Does the fact that PhysX's CPU implementation seems quite poor in the scheme of things mean that developers can't seriously consider a full PhysX implementation, since they would want consistent performance for everyone, not just those with a dedicated Nvidia GPU for the job? Perhaps if the CPU implementation were better, developers would use more effects, and the actual usefulness of having an Nvidia GPU for the task vs. say a better CPU would be highlighted?
Essentially, are Nvidia shooting their GPU/PPU implementation in the foot by having weak CPU performance? Also, this can't be good PR, as it looks to me and many others like a deliberate attempt to weaken the CPU runtime to make the GPU runtime look better.
Absolutely not. The only reason for the existence of PhysX (currently) is to sell graphics cards; if they allow the CPU implementation to get close to the GPU implementation (in a direct comparison), then they've lost.
I expect Nvidia to completely drop PhysX as soon as they don't think it's moving video cards anymore or has the potential to move video cards.
Regards,
SB
PS. As I said before, there really is no physics engine left with the same kind of reach which ATI could buy, optimize for themselves and give away for free ... Bullet effectively can't be made proprietary, and Intel already got Havok. All the other players fell by the wayside long ago. Even if Havok were available for purchase, in the end it would be a pretty piss-poor situation for the rest of us to have two proprietary physics engines (with token support for the competition).