I did not say it was really that important. Just an example that it "can" scale better than a properly threaded benchmark that was initially created to test multi-core scaling. It is one of the very few examples we have of a properly threaded PhysX environment.
Developers trying to get their games extra marketing and better promotion don't seem to mind it very much. I see some consumer distaste for PhysX, but I see just as many people who like it. People are of course entitled to their opinions, but I just don't see a lot of it from the developer side of things, despite all the complaints I see from certain consumers. I still see PhysX titles coming out at a pretty steady rate, and I have yet to see a title where it cannot be toggled off.
You're missing my point. PhysX is not an integral part of a game, but an optional one, because it will not run for 100% of a developer's target market. If you want software to be used by game developers, they need to know it will not limit their addressable market.
That's one reason folks like PGI exist even though Intel has its own compilers that could produce great code for AMD: Intel deliberately prevents ICC from emitting optimal code paths for AMD CPUs. That's fine and perfectly legit (as is tying PhysX to NV hardware or CUDA or whatever), but it limits market uptake.
I also don't think comparing a "software" library to hardware standards is all that apt a comparison. And Intel is hardly what I'd call "open standard" friendly with its hardware and interfaces. But that's another argument and I don't really care to go there.
It's a software library tied to specific hardware. For all intents and purposes, it is a feature of that hardware.
Compare it to Havok, or at the very least to OpenCL/DirectCompute, which have the potential to be competing solutions. When it comes to Havok, it's far less of a moral high ground to stand on, because it has a lot of the same inherent limitations of being company-driven, licensed software where royalties are paid.
Havok though, runs on any CPU - whether it's attached to an ATI GPU, an Intel one or an NV one. So a developer can count on it working. It may not always use the GPU, but it will function.
There is the possibility of a non-licensed, free, entirely open-standard OpenCL/DirectCompute physics library. But I haven't seen anything like that yet.
That's when physics on the GPU will start to matter.
Yes, in the consumer market GPU Compute hasn't hit a home run. I highly doubt Nvidia expected it to, at least not at the very start of things. There are some niche applications out there using it, though, that some consumers might be interested in.
It was marketed aggressively and didn't live up to expectations. I'm calling a spade a spade here.
PhysX is also a way Nvidia has been able to take GPU Compute and make it useful in some tangible way to the consumer market. Like it or hate it, PhysX now is 100x more usable than it was when Ageia owned it.
I totally agree with you here. Buying new hardware was crazy talk, at least NV can re-use existing hardware for PhysX.
There is zero advantage or incentive for Nvidia to try and provide a PhysX environment that runs solely on its competitors' CPU hardware ((AMD/Intel)). Havok may end up being the physics solution that solves all these problems ((I doubt it)), but currently GPU "Havok" physics is even less proven than PhysX is right now. So we're all waiting for it to prove itself, for better or for worse.
If NV was smart, they'd get PhysX to run on ATI cards. That would substantially drive adoption, since almost all gamers have ATI or NV cards. That would enable developers to rely on GPU acceleration of physics. As it is, they can't rely on that.
I was never trying to argue the GPU is superior to the CPU at all tasks. It isn't. But there are definitely tasks where the GPU excels compared to a threaded CPU environment. I don't know why you took my post as some kind of "extinction of CPUs" post. Or perhaps I just misunderstood you; it wasn't meant as that either way. Nvidia is aiming to take some of the CPU's lucrative market share in heavy math computing areas.
HPC isn't that lucrative, but it's a good strategy for now.
It's also using PhysX to promote its GPU Compute in the consumer gaming market. It's obviously working for them a bit, as they are making money off of it.
GPU compute for consumers is irrelevant and I doubt it's made them any money worth talking about. Who says "I'm buying an NV GPU because of PhysX"? Nobody sane. If NV has a better card, they'll buy NV; if ATI does, they'll buy ATI. I doubt PhysX makes a significant difference.
My point is that if NV was smart, they would try to get PhysX implemented as widely as possible, on all types of hardware. That would get developers to actually use it as an integral element of their tool box. As it is, it's not.
David