It's a proprietary API that anyone can adopt if they so desire. Just like CUDA.
...or another way of playing with terms for semantics. Which IHV is likely to adopt CUDA for its architecture? None.
> It's a proprietary API that anyone can adopt if they so desire. Just like CUDA.
Since when is Direct3D a proprietary API for just one IHV?
> It's a proprietary API that anyone can adopt if they so desire. Just like CUDA.
I fervently hope that PhysX does not become the established Physics API; I don't want to see good quality Physics support limited to one hardware vendor.
> ...or another way of playing with terms for semantics. Which IHV is likely to adopt CUDA for its architecture? None.
> Or let me put it this way. Is it any harder for AMD/ATI to adopt CUDA than it is for them to adopt OpenCL? Nope.

But with a proprietary solution like CUDA, developed by a competitor which has an interest in your cards being slower, you risk getting screwed. Just like nVidia is screwing people with quad-core CPUs which can't run PhysX fast enough, as it can only use one core. However, if there's a third party developing the spec, and neither nV nor ATI has any big weight over the development process, it's safe for everyone.
> Or let me put it this way. Is it any harder for AMD/ATI to adopt CUDA than it is for them to adopt OpenCL? Nope.

This may be slightly tangential, but that is akin to Apple adopting DirectX.
> Just like nVidia is screwing people with quad-core CPUs which can't run PhysX fast enough as they can only use one core.
Is there evidence for this?
Jawed
> But with a proprietary solution like CUDA, developed by a competitor which has an interest in your cards being slower, you risk getting screwed.
While it could be done, it makes ZERO sense from a competitor's perspective.
> Is there evidence for this?

I remember the software PhysX mode has been single-threaded since the first tryout of the CellFactor "promo" game. The Mirror's Edge implementation is no different by my account, as is evident from the CPU load graphs.
http://forums.techarena.in/web-news-trends/1027042.htm
Larrabee of course brings us the 2013 feature-set in 2010.
I'm quite sure about this too, but I'll check with Mirror's Edge soon to see how many cores it utilizes with PhysX on.
> Just how parallel is physics in itself? (I know, the question sounds a bit ridiculous - but give it a second's worth of thinking)

Yes, it does sound ridiculous. Obviously, physics through PhysX is parallel enough to run well on an array of parallel stream processors. I suppose it wouldn't make much sense to run physics on a GPU if it was a single-threaded task, would it?
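The "how parallel is physics" question has a concrete flavour for effects physics: each shard or particle can be integrated independently of every other one. A minimal sketch, with made-up numbers and no claim about how PhysX itself is structured internally (a real engine would use native threads, SIMD, or GPU warps rather than Python's GIL-bound threads):

```python
# Toy sketch of why effects physics parallelises well: every debris particle
# integrates independently, so the work splits into chunks with no shared state.
from concurrent.futures import ThreadPoolExecutor

DT = 1.0 / 60.0      # fixed 60 Hz timestep
GRAVITY = -9.81      # m/s^2

def step_particle(p):
    """Integrate one particle; depends only on its own state."""
    x, y, vx, vy = p
    vy += GRAVITY * DT
    return (x + vx * DT, y + vy * DT, vx, vy)

def step_chunk(chunk):
    # Each worker owns its slice outright: no locks, no synchronisation.
    return [step_particle(p) for p in chunk]

def step_world(particles, workers=4):
    size = max(1, len(particles) // workers)
    chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [p for chunk in pool.map(step_chunk, chunks) for p in chunk]

debris = step_world([(0.0, 10.0, 1.0, 0.0)] * 10_000)
```

The point is the shape of the problem, not the language: because no particle reads another's state within a step, the same loop maps equally well onto four CPU cores or a few hundred stream processors.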
> If you also use a quad-core instead of a dual-core processor at the same clock speed, you also get about 20 percent more performance.
> But the more realistic it becomes, the more serial it gets.

What has any of this got to do with effects physics in games like Mirror's Edge, effects that are trivially parallelisable, not taking full advantage of available CPU when running on the CPU?
> What has any of this got to do with effects physics in games like Mirror's Edge, effects that are trivially parallelisable, not taking full advantage of available CPU when running on the CPU?

Just switch to conspiracy mode!
Mirror's Edge is a console port scaling with more than two cores on PC. It even scales quite a bit with a dedicated PhysX processor, which means there aren't too many unused cores left in the system. Now, of course you could fill those remaining cores up with physics calculations, but how do you control the amount of processing time? If there's a window breaking into n pieces, then you have to have them all processed - even more so if a game adds physics not only for effects but for real gameplay reasons.

http://www.pcgameshardware.com/aid,...hysx-effects-benchmark-review/Reviews/?page=2
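One hedged answer to "how to control the amount of processing time" is a per-frame budget: mandatory gameplay physics always runs, and optional effects work is cut off once its time slice is spent. A sketch with invented names and an assumed budget figure, not any real engine's scheduler:

```python
import time

EFFECTS_BUDGET_MS = 4.0   # assumed slice of a 16.6 ms frame for optional effects

def simulate_frame(gameplay_tasks, effects_tasks):
    """Run all gameplay physics, then as much effects physics as the budget allows."""
    for task in gameplay_tasks:          # must run: results affect game state
        task()
    start = time.perf_counter()
    completed = 0
    for task in effects_tasks:           # cosmetic only: safe to drop when over budget
        if (time.perf_counter() - start) * 1000.0 > EFFECTS_BUDGET_MS:
            break
        task()
        completed += 1
    return completed
```

This only works for effects that may silently be skipped; as the post notes, once a window has shattered its pieces exist and must be processed, so a real engine would have to coarsen or despawn debris rather than simply stop updating it - which is exactly the difficulty raised above.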
Looking at the CPU graphs on that page it appears that PhysX is indeed wasting huge amounts of available CPU capability.
Jawed
> What has any of this got to do with effects physics in games like Mirror's Edge, effects that are trivially parallelisable, not taking full advantage of available CPU when running on the CPU?

See above. I have difficulties imagining how to effectively control the FLOPS used for physics. The obvious solution would be a slider for the amount of pieces some stuff would break up into, but AFAIK every instance of this slider would have its own set of pre-tessellated geometry to work on, i.e. pre-defined breaking points.
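The slider idea could be sketched like this: breakable objects ship with a few pre-fractured variants (the pre-defined breaking points mentioned above), and a detail setting or a crude hardware check picks one at load time. All names and piece counts here are invented for illustration:

```python
# Piece counts authored offline; each variant is a separate pre-fractured mesh
# with its own baked-in breaking points.
FRACTURE_VARIANTS = {"low": 8, "medium": 32, "high": 128}

def physics_detail(core_count, gpu_physx=False):
    """Crude, illustrative heuristic: more hardware, finer fracture."""
    if gpu_physx:
        return "high"
    if core_count >= 4:
        return "medium"
    return "low"

def pieces_for(core_count, gpu_physx=False):
    return FRACTURE_VARIANTS[physics_detail(core_count, gpu_physx)]
```

Note this scales memory and authoring cost, not just FLOPS: every extra slider notch is another baked mesh, which is why the granularity cannot be continuous.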
Jawed
> You simply cannot sell a PC game running decently only on quad- or octa-core machines.

Depends how you look at it. In PhysX games, you don't have to enable the advanced effects (and if you do, a quad-core is useless anyway).