PhysX Performance Update: GPU vs. PPU vs. CPU

Heh, GPU+PPU is slower than GPU + PhysX-enabled drivers. Oh, how worthless that PPU was, is, and continues to be. Ugh.
 
> Heh, GPU+PPU is slower than GPU + PhysX-enabled drivers. Oh, how worthless that PPU was, is, and continues to be. Ugh.

What's worse, look at the Radeon results.
The 4850 and 4870 score about the same; neither can even keep up with the 8800GT, because they're limited by the PPU and can't enable GPU physics.
AMD had better pray that PhysX doesn't take off, because otherwise it would be pointless to make faster GPUs; they'd be limited by the CPU or PPU no matter how fast AMD makes them.
 
Well, AMD had better get off their collective asses and add PhysX support; apparently nV has no objection to this...
 
> Well, AMD had better get off their collective asses and add PhysX support; apparently nV has no objection to this...

nV apparently has no objections, true, but if AMD got Havok working on Radeons in current titles too, it would be quite bad for nV; at the least, Havok has a far superior list of titles behind it.
 
As far as I know, Havok uses standard Direct3D or OpenGL shaders to perform physics.
Which means that it will work just fine on nVidia cards.
The one thing nVidia needs to watch out for is Havok with Larrabee optimizations.
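The reason effect physics maps onto standard graphics shaders at all is that it's an independent per-element update: each particle's new state is a pure function of its own old state, exactly the shape of a fragment shader writing one texel of a position/velocity texture. A minimal sketch of that update step in plain Python (the names and the simple Euler integrator are illustrative, not Havok's actual API):

```python
# Sketch: per-particle Euler step with no cross-particle dependencies.
# Each particle is "one pixel"; a fragment shader could evaluate the
# same pure function for every texel of a position/velocity texture.

GRAVITY = (0.0, -9.81, 0.0)

def step_particle(pos, vel, dt):
    """Pure per-particle update: new state depends only on old state."""
    new_vel = tuple(v + g * dt for v, g in zip(vel, GRAVITY))
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel

def step_all(particles, dt):
    # An embarrassingly parallel map -- the part that fits GPU shaders.
    return [step_particle(pos, vel, dt) for pos, vel in particles]

particles = [((0.0, 10.0, 0.0), (1.0, 0.0, 0.0))]
particles = step_all(particles, 0.1)
```

Because no particle reads any other particle's state, the whole batch can be evaluated in any order, or all at once, which is exactly what a shader pass does.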
 
> As far as I know, Havok uses standard Direct3D or OpenGL shaders to perform physics.
> Which means that it will work just fine on nVidia cards.
> The one thing nVidia needs to watch out for is Havok with Larrabee optimizations.

Havok? Using shaders to perform physics? I think you're mixing it up with something else; Havok currently runs only on the CPU. There was a Havok FX project earlier that was meant to be GPU-accelerated, but Intel put a bullet in that one when they bought Havok.
 
> Havok? Using shaders to perform physics? I think you're mixing it up with something else; Havok currently runs only on the CPU. There was a Havok FX project earlier that was meant to be GPU-accelerated, but Intel put a bullet in that one when they bought Havok.

No, I'm talking about exactly that.
Intel and AMD have announced that they're working together on trying to leverage the processing power of the GPU for physics.
Not much has been revealed about this, but so far it sounds remarkably like the original plan for HavokFX: process particle effects and such on GPU shaders, and handle the core gameplay physics on the CPU.
Intel could set up AMD against nVidia this way, while Intel will have a Larrabee-optimized implementation for itself.
 
> No, I'm talking about exactly that.
> Intel and AMD have announced that they're working together on trying to leverage the processing power of the GPU for physics.
> Not much has been revealed about this, but so far it sounds remarkably like the original plan for HavokFX: process particle effects and such on GPU shaders, and handle the core gameplay physics on the CPU.
> Intel could set up AMD against nVidia this way, while Intel will have a Larrabee-optimized implementation for itself.

I'm quite sure that it won't be "using Direct3D and OpenGL shaders"; shader units will do the calculations, of course, but that's it. There's no way Intel would let nVidia support it (for free, anyway) before PhysX goes down, and nVidia is unlikely to want to support it, since that would essentially mean the death of PhysX.
And from all I've read on the subject, there's nothing suggesting it would be just "HavokFX"; rather, it's the "normal Havok" running on the GPU (including the Larrabee GPU/CPU thingy), in other words a straight competitor for PhysX with quite a bit more force behind it.
 
> I'm quite sure that it won't be "using Direct3D and OpenGL shaders"; shader units will do the calculations, of course, but that's it. There's no way Intel would let nVidia support it (for free, anyway) before PhysX goes down, and nVidia is unlikely to want to support it, since that would essentially mean the death of PhysX.

Aren't you contradicting yourself?
Also, why would they allow AMD to support it if they won't allow nVidia? AMD is as dangerous to Intel as nVidia is, because AMD's GPUs can put quad-core CPUs out of the physics business just as easily as nVidia's can. Intel would be really stupid to hand AMD a fully optimized PhysX competitor and put its own hardware out of the physics business in the process.

Seems to me like this is exactly what Intel would want. If Havok can run on GeForce cards, that's one less reason for developers to use PhysX, the biggest reason obviously being that over 50% of all discrete video cards sold are GeForces.
This is the lesser of two evils.
The alternative would be a direct Havok/PhysX war, which Intel cannot win at this point. Their CPUs aren't powerful enough, and Larrabee is far from ready... and its first generation would most probably still not be powerful enough. By that time, PhysX might have already taken over the market.
By getting Havok out there now, Intel can control the market and make sure nVidia's cards won't be TOO fast. Then they can strike with Larrabee.

> And from all I've read on the subject, there's nothing suggesting it would be just "HavokFX"; rather, it's the "normal Havok" running on the GPU (including the Larrabee GPU/CPU thingy), in other words a straight competitor for PhysX with quite a bit more force behind it.

They specifically mentioned that they will be primarily focused on optimizing for multi-core CPUs, and using the GPU on the side, in some kind of hybrid solution.
This hybrid solution would also mean that people will still need fast CPUs, which is exactly what Intel wants. nVidia is currently doing whatever it can to convince people that you don't need fast CPUs; the GPU is where it's at.
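The hybrid split being described can be sketched like this: physics whose results feed back into game logic stays on the CPU, while fire-and-forget effect particles are batched off to the GPU. A toy illustration in Python; all names are hypothetical (a real engine would dispatch actual GPU kernels, not a list comprehension):

```python
# Toy sketch of a CPU/GPU hybrid physics split (hypothetical names).

def cpu_gameplay_step(bodies, dt):
    """Gameplay physics: results feed back into game logic, so it is
    kept on the CPU where the game code can read them immediately."""
    for body in bodies:
        body["y"] += body["vy"] * dt
        if body["y"] < 0.0:          # trivial ground collision response
            body["y"], body["vy"] = 0.0, 0.0
    return bodies

def gpu_effects_step(particles, dt):
    """Effect particles: purely visual, no feedback into gameplay, so
    the whole batch can be offloaded to shader units."""
    return [(x + vx * dt, y + vy * dt, vx, vy)
            for x, y, vx, vy in particles]

bodies = [{"y": 0.05, "vy": -1.0}]       # one gameplay-relevant body
particles = [(0.0, 0.0, 2.0, 3.0)]       # one cosmetic particle
bodies = cpu_gameplay_step(bodies, 0.1)
particles = gpu_effects_step(particles, 0.1)
```

The design point is the feedback loop: the gameplay body's collision result changes what the game does next frame, so it must be readable on the CPU without a round trip, while the particle batch never comes back.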
 
> Aren't you contradicting yourself?
How? The shader units do the CUDA calculations, Brook+ / CAL and whatnot, but those certainly aren't "standard Direct3D / OpenGL" shader calculations.
> Also, why would they allow AMD to support it if they won't allow nVidia? AMD is as dangerous to Intel as nVidia is, because AMD's GPUs can put quad-core CPUs out of the physics business just as easily as nVidia's can. Intel would be really stupid to hand AMD a fully optimized PhysX competitor and put its own hardware out of the physics business in the process.
Because AMD doesn't have a competing product on that front (PhysX), and having 2 against 1 is always better.
Of course, it might just be a case of nVidia not even wanting to license Havok from Intel (like the part of your post which I cut off made good points about).
> They specifically mentioned that they will be primarily focused on optimizing for multi-core CPUs, and using the GPU on the side, in some kind of hybrid solution.
> This hybrid solution would also mean that people will still need fast CPUs, which is exactly what Intel wants. nVidia is currently doing whatever it can to convince people that you don't need fast CPUs; the GPU is where it's at.
Got links to that? IIRC it was said that AMD would put it all on Radeons :???:
 
> How? The shader units do the CUDA calculations, Brook+ / CAL and whatnot, but those certainly aren't "standard Direct3D / OpenGL" shader calculations.

I was talking about the sentence after that.
You say Intel doesn't want nVidia to support it, then you say that if nVidia supported it, it would be the end of PhysX. But Intel wants the end of PhysX; therefore Intel wants nVidia to support it.
By using D3D/OGL shaders they can 'force' it on nVidia while letting AMD do the hard work for them.

> Because AMD doesn't have a competing product on that front (PhysX), and having 2 against 1 is always better.

You're missing the point that Intel cannot compete in GPU physics at this point.
It won't be 2 against 1. It'd be AMD vs. nVidia, with Intel on the sidelines.
> Got links to that? IIRC it was said that AMD would put it all on Radeons :???:

I don't have a link, you'll have to excuse me. I've read a lot of news articles on this subject over time, and fragments have been said here and there, but I can't recall what was said where exactly. I suggest you just google for articles on the Intel/AMD cooperation on Havok.
It's quite clear that the primary focus is on getting multicore CPU physics into the limelight. They haven't said much more than something as vague as that they'll be 'investigating' opportunities for offloading certain effects to the GPU.
Even the press-release on AMD's site reads that way:
http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543_15434~126548,00.html
AMD said:
As part of the collaboration, Havok and AMD plan to further optimize the full range of Havok technologies on AMD x86 superscalar processors. The two companies will also investigate the use of AMD’s massively parallel ATI Radeon GPUs to manage appropriate aspects of physical world simulation in the future.
 