Yeah, but you don't need an SLI motherboard to plug in an NVidia GPU, whereas you do need an Intel-socket motherboard to plug in an Intel CPU. And along that path, the people using SLI are, at least in my opinion, a minority (and a relatively small one at that), so it should be NVidia's incentive to stimulate SLI adoption: as it stands now, the majority of people won't need the SLI feature, let alone shell out a price premium to cover the SLI fee. BUT if it came as a feature of the board at the same price, then someone with a GeForce could add a second card some time in the future, resulting in a sale for NV.
You don't HAVE to buy an SLI motherboard you know. Consumer choice and all that.
Was it already mentioned?
John Carmack says no to PhysX
http://www.guru3d.com/news/john-carmack-says-no-to-physx-/
PhysX is still an idiotic idea from a SW development perspective, since it is tied to a specific IHV. It looks a little more promising since at least NV has a pretty big presence, but you still cannot integrate any fundamental features that rely on PhysX, as they wouldn't run on a lot of systems.
There is a difference between a game using SW PhysX only and a game running HW PhysX via the CPU. I think in the latter case the CPU gets highly unoptimized code, which isn't even multi-threaded (only one core is used, regardless of the CPU configuration).

Of course it runs on the CPU as well. You don't need a PhysX add-on card or a GeForce display card for PhysX to work; you just need them for it to run faster. So in a sense you can use PhysX like, say, Havok, because it runs fine on the CPU alone.
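To picture what "not even multi-threaded" means in practice: a particle/rigid-body integration step is embarrassingly parallel and splits cleanly across cores. This is a minimal sketch of that idea (my own illustration, not PhysX's or Havok's actual code), using plain pthreads:

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_THREADS 4  /* hypothetical core count for the sketch */

struct slice {
    float *pos;
    const float *vel;
    size_t begin, end;
    float dt;
};

/* Each worker integrates its own contiguous slice of the particle array. */
static void *integrate_slice(void *arg)
{
    struct slice *s = arg;
    for (size_t i = s->begin; i < s->end; ++i)
        s->pos[i] += s->vel[i] * s->dt;
    return NULL;
}

/* Split one Euler integration step across NUM_THREADS threads. */
void integrate_parallel(float *pos, const float *vel, size_t n, float dt)
{
    pthread_t tid[NUM_THREADS];
    struct slice s[NUM_THREADS];
    size_t chunk = (n + NUM_THREADS - 1) / NUM_THREADS;

    for (int t = 0; t < NUM_THREADS; ++t) {
        size_t b = (size_t)t * chunk;
        size_t e = b + chunk < n ? b + chunk : n;
        if (b > n) b = n;
        s[t] = (struct slice){ pos, vel, b, e, dt };
        pthread_create(&tid[t], NULL, integrate_slice, &s[t]);
    }
    for (int t = 0; t < NUM_THREADS; ++t)
        pthread_join(tid[t], NULL);
}
```

The point of the sketch is just that there's no data dependency between particles, so leaving a physics step on a single core is a choice, not a necessity.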
I'd argue they have. Havok's CPU based physics seem quicker and more robust than PhysX's CPU implementation.
Didn't someone disassemble some PhysX CPU code before? The code used when HW-accelerated PhysX falls back to the CPU, that is. And found that it only uses up to MMX and doesn't even take advantage of any form of SSE? I could swear I read that in these forums somewhere about a year or so ago.
Regards,
SB
nVidia lost 4 PhysX "AAA" titles: GRAW, GRAW 2, Terminator: Salvation, Bionic Commando

How did nVidia "lose" those titles just because the studio went bankrupt? The games are still there. Also, several of their developers have already set up a new studio, and their devrel connections surely won't disappear overnight when they get new projects. Even if future Ghost Recon games on the PC were to be farmed out to different developers, there could still be PhysX, depending on what Ubisoft wants.