bit-tech Richard Huddy Interview - Good Read

If NV actually wanted to show how much better GPU physics is than CPU physics, instead of putting out a hobbled version that runs x87 code instead of SSE/SSE2/SSE3, they would put out something that is decently optimized for the CPU and scales well.
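To make the x87-versus-SSE point concrete, here is a hypothetical sketch (my own illustration, not PhysX source) of the same particle integration step written two ways: a plain scalar loop, roughly what x87-era codegen amounts to, and a 4-wide SSE version using intrinsics.

```cpp
// Illustrative only: neither function is taken from PhysX.
#include <xmmintrin.h>  // SSE intrinsics

// Scalar version: one float per operation, comparable to x87-style code.
void integrate_scalar(float* pos, const float* vel, float dt, int n) {
    for (int i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SSE version: processes four floats per instruction.
void integrate_sse(float* pos, const float* vel, float dt, int n) {
    __m128 vdt = _mm_set1_ps(dt);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
    for (; i < n; ++i)  // scalar tail for n not divisible by 4
        pos[i] += vel[i] * dt;
}
```

Both produce the same results; the SSE path simply does four lanes of work per instruction, which is the kind of headroom the post claims PhysX leaves on the table.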

Yeah, optimizing for your competitor's products is a sure fire business strategy. Entertaining, as always Charlie :D

So the other (less used) physics engines for games are just as bad on CPU?

That's precisely my question. I see a lot of judgements being made about PhysX but they seem to be pulled out of thin air. Where are the other CPU based physics engines that do all these things that PhysX supposedly falls short on? Examples of more computationally intense workloads or higher performance on the same workload are welcome.
 
Sorry, can you be a little more specific? Where's the evidence that the physics effects in those games are more computationally taxing or that the engines are more efficient/scalable/faster etc?

Edit: Oh, were you referring to the Ghostbusters video? I see a few simple rigid-body effects where most of the "debris" simply disappears after half a second or so without interacting with the environment. Nowhere near a physics showcase.
 
Absolute performance isn't proven, but scalability is, in the sense that those engines can scale at all beyond a single thread (which PhysX cannot do in any meaningful way, beyond what NVIDIA says).
 
That's not particularly relevant at all unless you can point to another engine that scales with multiple cores and produces effects that aren't possible or are slow with PhysX. Most of the angst over PhysX has been over comparisons to GPU PhysX, which is silly to me since that's not its competition.
 
That's not particularly relevant at all unless you can point to another engine that scales with multiple cores and produces effects that aren't possible or are slow with PhysX. Most of the angst over PhysX has been over comparisons to GPU PhysX, which is silly to me since that's not its competition.

The main problem is that other games scale with CPU cores while PhysX games like Batman do not. Batman hardly cares whether I have a quad core or a dual core.

So as gamers we are buying faster CPUs and GPUs to get better performance, but NVIDIA, through controlling PhysX, is forcing their hardware on us and rendering other hardware useless.

That's why I'm done buying PhysX games.
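The scaling complaint is easy to picture: per-particle updates have no cross-particle dependencies, so they partition trivially across cores. A toy sketch (my own illustration, not how any shipping engine is structured):

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Integrate positions in parallel: each worker owns a disjoint slice of the
// arrays, so no locking is needed.
void integrate_parallel(std::vector<float>& pos, const std::vector<float>& vel,
                        float dt, unsigned nthreads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (pos.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(pos.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&pos, &vel, dt, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                pos[i] += vel[i] * dt;
        });
    }
    for (auto& w : workers) w.join();
}
```

An engine structured this way speeds up on a quad core; one pinned to a single thread, as the Batman example suggests, cannot.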
 
That's not particularly relevant at all unless you can point to another engine that scales with multiple cores and produces effects that aren't possible or are slow with PhysX. Most of the angst over PhysX has been over comparisons to GPU PhysX, which is silly to me since that's not its competition.

The easiest physics possible: point particles, the Valve particle benchmark. First hit on Google for "valve particle benchmark", from December 2008:
http://www.overclock.net/nvidia/314125-official-valve-particle-benchmark-results-thread.html

From that thread:
Code:
     Username        Score   CPU @ Speed
 1.  Emmett           125    Q6600 @ 4GHz
 2.  grunion          123    Q6600 @ 3.7GHz
 3.  rolandooo        120    Q6600 @ 3.7GHz
 4.  kbrescher        118    Q9450 @ 3.5GHz
 5.  Tricky           116    Q6600 @ 3.6GHz
 6.  Criswell         115    Q6600 @ 3.84GHz
 7.  Havegooda        108    Q6600 @ 3.46GHz
 8.  h33b             106    Q6600 @ 3.42GHz
 9.  Hailscott        105    Q6600 @ 3.15GHz
10.  spazbob          104    Q6600 @ 3.30GHz
11.  PGT96AJT         100    Q6600 @ 3.2GHz
12.  spazzbob          93    Q6600 @ 3.15GHz
13.  l0ckd0wn          90    Q6600 @ 3.2GHz
14.  lordikon          73    E8400 @ 3.81GHz
15.  RoadRashed        71    E8400 @ 3.6GHz
16.  SiPex             63    E6750 @ 3.6GHz
17.  j_canna           57    E6400 @ 3.2GHz
18.  Vegnagun666       38    X2 4000+ @ 2.81GHz
19.  Quicks            34    Opteron 170 @ 2.81GHz
Recently I saw some Cryostasis shots with spark particles. How well does that scale on CPU PhysX with quad cores?

Nebula has some experience with Crysis spark-particle orgies; maybe he can give us some configs so we can try the timedemos with millions of particles on different dual- and quad-core CPUs at the lowest graphics settings?

I guess Steam-powered games circa 2008 and Crysis are the popular comparison against PhysX?
 
That's not particularly relevant at all unless you can point to another engine that scales with multiple cores and produces effects that aren't possible or are slow with PhysX. Most of the angst over PhysX has been over comparisons to GPU PhysX, which is silly to me since that's not its competition.

There are multiple engines that scale with multiple cores. They can produce all the effects that NVIDIA's solution can, so I think you need to drop this constant stonewall response. There is NOTHING PhysX can do that other physics engines cannot, and all the rest actually have multi-core support. Ergo, the issue is PhysX and its limitations.
 
Of course it is, if it's supposed to influence your next graphics card buying decision.

PhysX's CPU performance doesn't change depending on what graphics card you have. If you want GPU PhysX then that's a whole other story isn't it? What people are complaining about is PhysX's deficiency on the CPU and I'm still waiting for evidence of such.

The easiest physics possible: point particles, the Valve particle benchmark.

If I'm not mistaken that's a particle system with no physical simulation.
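For clarity on the distinction: a particle system in that sense only advects points and expires them; nothing collides or constrains anything. A hypothetical sketch:

```cpp
#include <algorithm>
#include <vector>

// A "particle system": pure advection plus a lifetime. No collision
// detection, no constraint solving; not a physics test in any real sense.
struct P { float x, y, vx, vy, life; };

void update(std::vector<P>& ps, float dt) {
    for (auto& p : ps) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.life -= dt;
    }
    // Expired particles simply vanish, like the disappearing debris
    // described earlier in the thread.
    ps.erase(std::remove_if(ps.begin(), ps.end(),
                            [](const P& p) { return p.life <= 0.0f; }),
             ps.end());
}
```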

There are multiple engines that scale with multiple cores. They can produce all the effects that NVIDIA's solution can, so I think you need to drop this constant stonewall response. There is NOTHING PhysX can do that other physics engines cannot, and all the rest actually have multi-core support. Ergo, the issue is PhysX and its limitations.

Where was it stated that other engines can't do what PhysX does? You guys are the ones constantly harping that PhysX has a poor CPU solution, so it should be easy enough to find examples of other physics engines that prove that point. Interestingly, such evidence isn't forthcoming. Can't really stonewall against nothing.

PhysX on the CPU tanks miserably when tasked with any sort of fluid or cloth simulation, but since no other CPU implementation even attempts those things, where are the comparisons coming from? Even rigid-body stuff is pretty rudimentary across the board.
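Cloth is expensive because each step solves many coupled distance constraints, usually by repeated projection passes. A minimal sketch of one such constraint in the position-based style (my own illustration, not PhysX code):

```cpp
#include <cmath>

struct Particle { float x, y; };

// Project two point masses back toward their rest distance, splitting the
// correction evenly between them. Real cloth solvers iterate this over
// thousands of springs, several times per frame, which is why CPUs struggle.
void satisfy_distance(Particle& a, Particle& b, float rest) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6f) return;  // avoid dividing by ~zero
    float corr = 0.5f * (len - rest) / len;
    a.x += dx * corr; a.y += dy * corr;
    b.x -= dx * corr; b.y -= dy * corr;
}
```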
 
Yeah, DMM looks very nice (both visually and from a technology standpoint). The first general-purpose physics library to integrate something like that would have a nice advantage. I didn't see any of the heavy stuff that usually brings CPUs to their knees though, i.e. the fluid and cloth simulations mentioned above. The Havok bits were just the usual crate kinematics.

I couldn't tell from the video but does DMM debris interact with the environment? It was clear that the big wood chunks did but the smaller objects like glass shards seemed to disappear in mid-air.
 
Alright, I think this off-topic discussion has run its course. If the respective parties haven't "seen the light" at this point, it's unlikely they ever will. I will delete posts relating to "PhysX CPU efficiency" in this thread from now on. Start a new thread if you feel the discussion must go on.
 
The bottom line is we have enough; we don't let triple-A or double-A titles go through the net and in fact it's extremely rare for a Saboteur situation to come up. It's very rare for titles as a whole to go past us without seeing our QA and testing labs along the way. Our team is big enough and actually exceedingly good at some of the things they do.

Going back to this again, it seems Mass Effect 2 isn't classed as a triple-A title, since it's seemingly been released with no way to force AA and no CrossFire profile, unless this week's drivers miraculously fix both issues, which I doubt.

Both issues fixable on Nvidia cards.
 
Which is nice, but they have a bug that prevents monitors from going into suspend, so now we have to choose between proper game support and power saving. Not a good trade-off, to be honest.

Does your monitor lack a power button? If it's really that big a deal, you could turn it off.

-FUDie
 