bit-tech Richard Huddy Interview - Good Read

Well, presumably when Nvidia is finally forced to port PhysX to OpenCL it'll automatically support multiple CPU threads, unless they code it specifically to disallow this.

So it's only a matter of time until they are forced to better support multiple CPU cores; either that, or they drop PhysX once that situation presents itself.

I'll be surprised if they don't eventually port PhysX to OpenCL, as the same code would be able to run on both CPU and GPU. Oh wait, that might be a problem, as then the gap between GPU and CPU wouldn't be as great. So maybe they will stick with CUDA forever.

Regards,
SB
 
There is no excuse for underutilized cores in a physics API meant to run on a stream processor. Whatever parallel loads you feed to CUDA can be fed to a very simple job queue running code from a CUDA-to-CPU compiler. The latter probably already exists.
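
To illustrate the point, here is a minimal sketch of such a job queue in C++ (all names are hypothetical; this is not the PhysX or CUDA API). Each job would be one independent batch of work, e.g. a simulation island, the same kind of parallel chunk a GPU chews through in thread blocks:

// Minimal job-queue sketch: worker threads drain a queue of independent
// physics jobs, the same kind of batches a GPU would process in parallel.
// All names are hypothetical; this is not the PhysX or CUDA API.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobQueue {
public:
    explicit JobQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { workerLoop(); });
    }

    ~JobQueue() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }

    void push(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void workerLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // e.g. integrate one island/batch of rigid bodies
        }
    }

    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};

Push one job per island or per batch of rigid bodies and every core in the machine gets used; nothing about the workload itself forces it onto a single thread.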

Nobody is saying NVidia is breaking the law by doing this. They're just saying NVidia is being a corporate douche for intentionally crippling CPU performance.

I would love to see some Havok vs. PhysX CPU benchmarks for similar physics simulations. It wouldn't surprise me if there was an order of magnitude difference.

I can repeat:
I have never seen an AGEIA (or even Novodex) demo/game with full CPU-multicore support.
I would still like to see Huddy back up his claims, as they are in contrast to my own experience.
 
Specifically which claims, and what is your experience? (The probably mistaken assumption that the old library was multithreaded for x86 before NVIDIA got it was not from the interview nor from Mr. Huddy ... although I'm pretty sure the console versions are, and it shouldn't take that much work to port that work to x86.)
 
I would love to see some Havok vs. PhysX CPU benchmarks for similar physics simulations. It wouldn't surprise me if there was an order of magnitude difference.

I would love to see some Havok vs. PhysX GPU benchmarks for similar physics simulations. The magnitude of difference should be off the charts.
 
Specifically which claims, and what is your experience? (The probably mistaken assumption that the old library was multithreaded for x86 before NVIDIA got it was not from the interview nor from Mr. Huddy ... although I'm pretty sure the console versions are, and it shouldn't take that much work to port that work to x86.)

That PhysX was multithreaded on the CPU before NVIDIA acquired AGEIA?
All the demos (from Carwash to PhysX Rocket to RealityMark) I have run myself always had Core1 maxed out and virtually nothing happening on Core2.
As this link shows:
http://forums.guru3d.com/showthread.php?p=1974275

I cannot remember any PC games with multicore PhysX usage, and that is why I call this quote bad PR spin:
bit-tech: So we'll see AMD GPU physics in 2010?

RH: Bullet should be available certainly in 2010, yes. At the very least for ISVs to work with to get stuff ready.

The other thing is that all these CPU cores we have are underutilised and I'm going to take another pop at Nvidia here. When they bought Ageia, they had a fairly respectable multicore implementation of PhysX. If you look at it now it basically runs predominantly on one, or at most, two cores

I don't have a PPU (and AGEIA's old drivers) in any systems anymore, so I cannot disprove his claims myself...but I am damned sure about my memories...and I cannot remember the "situation" he describes.

AFAIR even the AGEIA control panel demos ran the physics on a single thread.
 
If the GPU isn't needed in the first place then so what?

Nvidia prefers you spend money on a GPU rather than on a quad core CPU. Intel and AMD have another perspective on this. All of which makes business sense.

Somehow, one side gets called a corporate douche. The other side is commended for sticking to open standards, which in this case mainly translates to a failure to innovate and/or secure market share.

Like you say - so what? None of this matters.
 
Meh, googled a bit ... the old Novodex was multithreaded, they made a big song and dance routine about it, but if you looked a bit closer it wasn't really done in a very useful way (not in a way which would show up in demos, or in a way which would be terribly practical in games).
 
Nvidia prefers you spend money on a GPU rather than on a quad core CPU. Intel and AMD have another perspective on this. All of which makes business sense.

Somehow, one side gets called a corporate douche. The other side is commended for sticking to open standards, which in this case mainly translates to a failure to innovate and/or secure market share.

Like you say - so what? None of this matters.
It matters to me as a gamer. Taking my money and bribing developers to run algorithms inefficiently runs directly counter to my interests. When companies compete with open standards, it gives me better hardware; when they compete on who can find the more obnoxious way of locking in customers, what exactly does that get me?
 
Meh, googled a bit ... the old Novodex was multithreaded, they made a big song and dance routine about it, but if you looked a bit closer it wasn't really done in a very useful way (not in a way which would show up in demos, or in a way which would be terribly practical in games).

Yup, and people forget that physics scales very badly with the number of CPU cores too...hence I don't know why he would say what he did...unless it's bad PR spin. :?:

It matters to me as a gamer. Taking my money and bribing developers to run algorithms inefficiently runs directly counter to my interests. When companies compete with open standards, it gives me better hardware; when they compete on who can find the more obnoxious way of locking in customers, what exactly does that get me?

AFAIK only NVIDIA (which basically means AGEIA) has delivered.
Back when I got my PPU in 2006, both ATi and NVIDIA hurried to claim they would do GPU physics real soon.

The rest is history...NVIDIA acquired AGEIA...and AMD (read: ATi) is still all talk with nothing to show...going on ~4 years now.
Sorry, but I would rather have something proprietary...than nothing "open to all"...
 
It matters to me as a gamer. Taking my money and bribing developers to run algorithms inefficiently runs directly counter to my interests.

I disagree with your characterisation of what's happening.

When companies compete with open standards, it gives me better hardware; when they compete on who can find the more obnoxious way of locking in customers, what exactly does that get me?

Havok is not exactly an open standard either, just the one the CPU makers happen to be aligning behind because right now it fits their purposes.

As we see so often, the outrage is quite selective. You know what? Just vote with your wallet.
 
PhysX = proprietary NVIDIA
Havok = proprietary Intel (and why AMD in the end dropped it...after much PR (again) and nothing to show)

Then we have Bullet Physics, which from what I gather has only been used in "indie games"...and is now the hope of AMD...(until they pick something else).
 
Havok is proprietary, but it runs on x86, which is not quite as proprietary. Bullet is not only used in indie games; it's predominantly used by Sony (since they employ the main developer).

Apart from those though, not all game engine developers are as ready to sacrifice their physics performance for expedience as Tim Sweeney. CryEngine and Infernal Engine, for instance, have pretty well-developed internal physics engines. The Infernal Engine physics demo video is purty and has a nice quote:

I’m pretty sure we’ve never seen GPU-enabled physics to be able to do anything on a scale like this ... this frees up the graphics card to do the real work that it needs to do, and allows the CPU to really shine.
 
Havok is proprietary, but it runs on x86, which is not quite as proprietary. Bullet is not only used in indie games; it's predominantly used by Sony (since they employ the main developer).


Apart from those though, not all game engine developers are as ready to sacrifice their physics performance for expedience as Tim Sweeney. CryEngine and Infernal Engine, for instance, have pretty well-developed internal physics engines. The Infernal Engine physics demo video is purty and has a nice quote:

That quote is "disproven" by their simple rigid-body collisions that disappear after ~10 seconds.
It was done better in 2006 (via hardware physics), so it only means that CPU physics is still way behind and not really something to aim for.
 
PhysX = proprietary NVIDIA
Havok = proprietary Intel (and why AMD in the end dropped it...after much PR (again) and nothing to show)

Then we have Bullet Physics, which from what I gather has only been used in "indie games"...and is now the hope of AMD...(until they pick something else).

When did AMD drop Havok? Last I heard, that was still going quite strong.

-Charlie
 
As we evaluated PhysX for BattleForge, I have some experience with the SDK. From the SDK point of view, using another CPU core for the physics simulation is pretty easy. But it only works as it should if you can update and start the simulation some time before you need the result. If you need it immediately, the fetch will block. Unfortunately there is still much multicore-unfriendly engine code out there that cannot handle this latency. If you want to run on the GPU you have to face these latency problems, too. This can make it complicated to integrate.
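
Roughly, the pattern described here looks like the sketch below. PhysicsScene and the helper functions are hypothetical stand-ins, similar in spirit to the SDK's simulate()/fetchResults() calls but not the actual PhysX API; the point is only to show why the engine has to tolerate the latency between starting the step and reading the results:

// Hypothetical sketch of the "start the step early, fetch the results later"
// pattern. None of these names are the real PhysX API.
struct PhysicsScene {
    void simulate(float dt);        // kicks off the step asynchronously (other core or GPU)
    void fetchResults(bool block);  // blocks until the step has finished
};

void updateAI();
void updateAnimation();
void buildRenderCommands();
void applyNewTransforms();

void gameFrame(PhysicsScene& scene, float dt) {
    scene.simulate(dt);        // start the physics step as early as possible

    updateAI();                // engine work that does not depend on the new poses
    updateAnimation();
    buildRenderCommands();

    scene.fetchResults(true);  // returns immediately if the step is already done;
                               // call it too early and the whole frame stalls here

    applyNewTransforms();      // only now is it safe to read the simulation results
}

An engine that needs the new transforms immediately after kicking off the update effectively serializes the whole thing, which is the multicore-unfriendly code mentioned above.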

On the other hand, locking PhysX to the GPU is quite simple. There are some functions that only work if you use a GPU context. They are simply not implemented in the CPU version. If someone makes use of these functions, the simulation will not run on the CPU until you remove them.
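
In other words, something like the following is enough to tie content to the GPU path (entirely hypothetical names; the real GPU-only functions in the SDK are different):

// Entirely hypothetical illustration of the lock-in described above: a feature
// that only exists in the GPU build leaves the CPU path with nothing to run.
struct PhysicsSDK { bool hasHardwareContext(); };
struct Scene      { void addGpuOnlyEffect(); };  // no CPU implementation exists

bool setupEffects(PhysicsSDK& sdk, Scene& scene) {
    if (sdk.hasHardwareContext()) {
        scene.addGpuOnlyEffect();  // fine on the GPU
        return true;
    }
    // On the CPU this function is simply not implemented, so the content has
    // to be stripped out before the simulation will run at all.
    return false;
}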

Got a good explanation for why PhysX doesn't even max out a single CPU?

-Charlie
 