No PhysX on OpenCL soon

> So yes, PhysX is not bad on consoles performance-wise, and it would not be bad on the PC either if it took advantage of multi-core CPUs instead of concentrating only on nVidia GPUs.

Why do you think PhysX performance is bad on PCs? It's slow when trying to do stuff that GPUs can do, but so is every other CPU-based physics engine.

> But I understand where this decision is coming from, and therefore I think this API will fail sooner or later.

I highly doubt that running on a single core will doom an API. I'm sure many games out there only allocate one thread for physics calcs, with other threads used for other things - AI, scene construction, etc. The toolset, feature set, ease of use, extensibility, etc. are all more important than how many cores it runs on.
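For illustration, a minimal sketch of that one-physics-thread layout; stepPhysics and runAiAndSceneConstruction are hypothetical stand-ins here, not any real engine's API:

[code]
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

// Hypothetical engine hooks - stand-ins for whatever the game actually calls.
void stepPhysics(float dt) { /* rigid bodies, collisions, ... */ }
void runAiAndSceneConstruction() { /* everything else */ }

int main() {
    // One dedicated physics thread with crude fixed 60 Hz pacing.
    std::thread physics([] {
        const float dt = 1.0f / 60.0f;
        while (running.load()) {
            stepPhysics(dt);
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    });

    // Stand-in for the game loop running on the main thread.
    for (int frame = 0; frame < 600; ++frame)
        runAiAndSceneConstruction();

    running.store(false);
    physics.join();
}
[/code]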
 
> Why do you think PhysX performance is bad on PCs? It's slow when trying to do stuff that GPUs can do, but so is every other CPU-based physics engine.

> I highly doubt that running on a single core will doom an API. I'm sure many games out there only allocate one thread for physics calcs, with other threads used for other things - AI, scene construction, etc. The toolset, feature set, ease of use, extensibility, etc. are all more important than how many cores it runs on.

Running on a single core in Pentium MMX compatibility mode will. As you know, physics tasks are parallel in nature and can get a very good speed boost from vectorized code.
I wouldn't mind buying a GeForce as my PhysX accelerator if that were possible (with another company's card as my main 3D accelerator), if it gave me tangible and honest reasons for it, but having the option to run everything on the CPU in an optimal fashion is still a must for a good multiplatform API.
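To make the vectorization point concrete, here is a toy sketch (my own example, not PhysX code) of the same position-integration step written once in scalar form and once with SSE intrinsics, which process four particles per iteration:

[code]
#include <xmmintrin.h>  // SSE intrinsics

// pos += vel * dt over a structure-of-arrays layout.
// Scalar version: one float per iteration.
void integrateScalar(float* pos, const float* vel, float dt, int n) {
    for (int i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE version: four floats per iteration. Assumes n is a multiple of 4
// and that the arrays are 16-byte aligned.
void integrateSse(float* pos, const float* vel, float dt, int n) {
    const __m128 vdt = _mm_set1_ps(dt);
    for (int i = 0; i < n; i += 4) {
        __m128 p = _mm_load_ps(pos + i);
        __m128 v = _mm_load_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));
        _mm_store_ps(pos + i, p);
    }
}
[/code]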

If I can bring a somewhat similar analogy here: we have Battleforge with SSAO in DX10 and DX11. Both will produce exactly the same picture, but the DX10 implementation is slower due to hardware/API limitations. Someone with a DX11 card will get better performance thanks to CS5.0, yet a person with an older DX10 card can still get the same effects, only slower. If he wants better performance, he can get a faster DX10 card (a 2nd card, a 3rd card) and still improve the software's performance, or jump to DX11. The SSAO implementation is not crippled to run on only half the shaders of your GPU or anything like that.
With PhysX you can't add more cores to improve your experience; on the PC you can only get a GeForce... This is :cry:
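A minimal Direct3D 11 sketch (my own illustration, not Battleforge's actual code) of how an application can ask for DX11 first, fall back to a DX10 feature level, and choose the SSAO path accordingly:

[code]
#include <d3d11.h>

// Ask for feature level 11_0 first, fall back to 10_x, then pick the SSAO
// path from whatever the hardware actually gave us.
bool wantsComputeShaderSsao() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_0,
                                         D3D_FEATURE_LEVEL_10_1,
                                         D3D_FEATURE_LEVEL_10_0 };
    D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   0, wanted, 3, D3D11_SDK_VERSION,
                                   &device, &got, &context);

    // true  -> CS5.0 SSAO path; false -> slower DX10 pixel-shader path,
    // same final image either way.
    bool cs = SUCCEEDED(hr) && got >= D3D_FEATURE_LEVEL_11_0;
    if (context) context->Release();
    if (device) device->Release();
    return cs;
}
[/code]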

On the other hand, with OpenCL Bullet Physics you will be able to utilize your GeForce through its OpenCL driver, or a Radeon, or simply fall back to a CPU path with multicore support.
If only PhysX would move to OpenCL, all my objections would be obsolete. ;)
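That fallback maps directly onto OpenCL's device model. A minimal sketch (single-platform assumption, error handling omitted; pickDevice is a hypothetical helper) of preferring a GPU device and falling back to the CPU:

[code]
#include <CL/cl.h>

// Prefer a GPU device (GeForce or Radeon, whatever the driver exposes),
// otherwise fall back to a CPU device running the same kernels.
cl_device_id pickDevice() {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    cl_device_id device;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr)
            == CL_SUCCESS)
        return device;  // run the kernels on the GPU

    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, nullptr);
    return device;      // same kernels, CPU driver
}
[/code]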
 
> I wouldn't mind buying a GeForce as my PhysX accelerator if that were possible (with another company's card as my main 3D accelerator), if it gave me tangible and honest reasons for it, but having the option to run everything on the CPU in an optimal fashion is still a must for a good multiplatform API.

I really don't follow your reasoning. Are you saying that because PhysX on the CPU can't keep up with PhysX on the GPU, it's worse off than other CPU-only APIs? That doesn't really make any sense to me.

Bullet Physics does present an interesting alternative option for hardware-accelerated physics, but we'll have to wait until it matures a bit (and for IHV OpenCL support to mature) before we can make a comparison.
 
> I really don't follow your reasoning. Are you saying that because PhysX on the CPU can't keep up with PhysX on the GPU, it's worse off than other CPU-only APIs? That doesn't really make any sense to me.

> Bullet Physics does present an interesting alternative option for hardware-accelerated physics, but we'll have to wait until it matures a bit (and for IHV OpenCL support to mature) before we can make a comparison.

This is only true because they choose to do so, not because GPUs are so much better at it.

I agree with the Bullet point, though.
 
Well, we're back to square one. You're stating something with no supporting evidence, which is what started this whole discussion. Until you can point to another physics engine that has better performance than PhysX, those assertions don't hold any water.
 
> Well, we're back to square one. You're stating something with no supporting evidence, which is what started this whole discussion. Until you can point to another physics engine that has better performance than PhysX, those assertions don't hold any water.

Just because something is relatively good, doesn't make it good.
 
When it comes to technology, it does. The merits of any technology are by definition relative - that's why everything has to be "better" than the previous solution, or else it's crap.

Was R600 bad, or just bad relative to G80? Are SSDs good, or just good relative to spinning platters? Etc, etc.
 
> When it comes to technology, it does. The merits of any technology are by definition relative - that's why everything has to be "better" than the previous solution, or else it's crap.

> Was R600 bad, or just bad relative to G80? Are SSDs good, or just good relative to spinning platters? Etc, etc.

But the difference here was R600 was ATi's best shot (at that moment in time). The CPU implementation in PhysX is clearly not Nvidia's best shot.
 
Maybe not but why does that matter? Depending on the resources and efficiency of a given firm they could expend less effort than needed to match or beat the competition's products. Does that make their products any worse because they didn't put their best foot forward? I must admit, you guys come up with some interesting angles to try to beat down this stuff :)

Effort is meaningless. It's only the result that matters. Case in point is Intel's less than aggressive clockspeeds on their parts. They could have clocked them higher but for what reason? They were still fast enough at the lower speeds and better than the competition.
 
> Depending on the resources and efficiency of a given firm they could expend less effort than needed to match or beat the competition's products. Does that make their products any worse because they didn't put their best foot forward?

But again, that's not the case here. Nvidia purposely "cripples" the CPU implementation because they want their GPU implementation to look that much better. It is not because they lack the resources to do so. And it is certainly not because they lack competition considering most games do not use PhysX (I would argue PhysX is actually "losing" in this regard).

And I personally don't have a problem with Nvidia doing this. It's a company; they are just trying to "pay the bills". Having said that, as a community we should be striving/pushing for something (a little bit) better than PhysX.
 
> But again, that's not the case here. Nvidia purposely "cripples" the CPU implementation because they want their GPU implementation to look that much better. It is not because they lack the resources to do so. And it is certainly not because they lack competition considering most games do not use PhysX (I would argue PhysX is actually "losing" in this regard).

But what I'm saying is that it doesn't matter if it's crippled or not. That's something most consumers would never know or care about. The only thing that matters is performance and I haven't seen any evidence of Havok for example being faster than PhysX.

> And I personally don't have a problem with Nvidia doing this. It's a company; they are just trying to "pay the bills". Having said that, as a community we should be striving/pushing for something (a little bit) better than PhysX.

Yeah but why target PhysX specifically for improvement when all other physics engines are in the same boat feature/perf wise?
 
> But what I'm saying is that it doesn't matter if it's crippled or not. That's something most consumers would never know or care about. The only thing that matters is performance and I haven't seen any evidence of Havok for example being faster than PhysX.

Ah the ole "but most people won't care/notice" card. I don't see how that's relevant to the discussion. I already said I don't have a problem with Nvidia doing it (and as you pointed out, the masses don't appear to notice). But as an (educated) community we should desire more (I don't assert this community is in any way the majority).

> Yeah but why target PhysX specifically for improvement when all other physics engines are in the same boat feature/perf wise?

Who says PhysX is being singled out? I'm merely stating that Nvidia has the power to make the CPU implementation of PhysX (a lot) better. The same could be applied to other physics APIs.
 
> Well, we're back to square one. You're stating something with no supporting evidence, which is what started this whole discussion. Until you can point to another physics engine that has better performance than PhysX, those assertions don't hold any water.

What metrics do you propose to use?

Anyway, this is a moot point. A game dev who has used more than one SDK would need to step in and explain the pros and cons of each implementation.
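For what it's worth, one crude metric is easy to state: wall-clock time to step an identical scene a fixed number of times on the same machine. A sketch of such a harness, where stepScene is a hypothetical wrapper each SDK under test would have to provide:

[code]
#include <chrono>
#include <functional>

// Time 1000 fixed-timestep updates of whatever scene the wrapper drives.
// Lower is better; run the same scene through each SDK's wrapper.
double secondsPerThousandSteps(const std::function<void(float)>& stepScene) {
    using clock = std::chrono::steady_clock;
    const float dt = 1.0f / 60.0f;
    auto t0 = clock::now();
    for (int i = 0; i < 1000; ++i)
        stepScene(dt);
    std::chrono::duration<double> elapsed = clock::now() - t0;
    return elapsed.count();
}
[/code]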
 
> I don't see how that's relevant to the discussion.

The discussion in this thread is how PhysX fares in a comparison to other physics libraries. We weren't discussing whether or not Nvidia is trying as hard as they could or whether customers care that they aren't.

@Lightman, yes agreed.
 
> The discussion in this thread is how PhysX fares in a comparison to other physics libraries.

Actually, I believe the discussion was why PhysX is used more efficiently on consoles than the (CPU implementation on the) PC. Your answer was that the premise of the question itself was incorrect (i.e. your point was that, comparatively speaking, PhysX is plenty efficient on the PC).

I disagree with your answer. It is fair to say the CPU implementation of PhysX is rather inefficient. It's designed to be that way. You may accept this effort by Nvidia as reasonable (and that's perfectly fine). But surely you can see why others are not happy with the current environment.
 
> I disagree with your answer. It is fair to say the CPU implementation of PhysX is rather inefficient.

Based on which metric, though? Inefficiency would imply that PhysX uses more resources to accomplish the same task compared to other similar physics solutions. Or, alternatively, that it uses the same resources but accomplishes less. Is there evidence of either scenario out there?
 
> Based on which metric, though? Inefficiency would imply that PhysX uses more resources to accomplish the same task compared to other similar physics solutions. Or, alternatively, that it uses the same resources but accomplishes less. Is there evidence of either scenario out there?

Based on the fact that they are fully capable of making it multi-threaded (if console rumors are anything to go by) and fully capable of doing at least minimal per-platform optimisations (again, if console rumors are anything to go by), and yet are plainly opposed to doing anything of the sort for the PC platform, because it would erode the marketing impact of GPU <-> CPU PhysX comparisons.

Regards,
SB
 
What people think they can do is sorta irrelevant if their current product is competitive - "fast enough but not as fast as we think you can be" is not a valid criticism.
 
> What people think they can do is sorta irrelevant if their current product is competitive - "fast enough but not as fast as we think you can be" is not a valid criticism.

"Fast enough" is just as much guesswork as "slower than the competition but free". And as Havok is paid and isn't exactly cheap, I still hold that PhysX is likely to be noticeably slower than Havok... Either way, no one will ever know for sure, as I'm sure there are masses of NDAs signed to make sure performance numbers are never released.

Regards,
SB
 