AMD: "[Developers use PhysX only] because they’re paid to do it"

I wish they'd leave the GPU to do graphics and graphics-related compute effects, and let the quad cores that are starting to be everywhere get some more use by doing the physics. This hardware-accelerated physics thing started as a way to create a need for a new product, and I'm still not convinced years later. No game has done anything noteworthy with it, IMO. It just slows the GPU down further, rendering lame, barely interactive extras, while we have 2-3 idle CPU cores.

Some gamers need to try to think for themselves instead of letting NVIDIA do it for them. It's plainly obvious what NV sees PhysX's role as when they pull shit like the Batman con jobs.

Here's a small problem with that, though. I have been playing Dark Void (got it for free for being a part of the SLI Club) with a pair of GTX 260 216s. I don't have them in SLI, and since they are eVGAs I was able to monitor the PhysX usage on one of them with the eVGA tool. It averaged right around 15% through the game. My shaders are clocked at 1512, so I'd say a rough estimate is 850 GFLOPS, and at 15% that puts the usage at about 130 GFLOPS. Please tell me of ANY CPU not in a huge farm or server cluster that has that kind of computational power. I seriously doubt ANY CPU out today could deliver the kind of visuals I was seeing in the game with PhysX set to high and still maintain 45+ FPS at all times.
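For reference, here's the back-of-the-envelope math behind that estimate (a minimal sketch; both the flops-per-clock figure and the assumption that a utilization percentage scales peak throughput linearly are simplifications):

```python
# Back-of-the-envelope GPU throughput estimate. Illustrative only: real
# utilization counters don't map linearly onto achieved FLOPS.

def peak_gflops(shader_cores, shader_clock_mhz, flops_per_clock):
    """Theoretical peak = cores * clock * flops issued per core per clock."""
    return shader_cores * (shader_clock_mhz / 1000.0) * flops_per_clock

# GTX 260 Core 216: 216 shader cores at 1512 MHz; flops_per_clock=3
# assumes the MAD+MUL dual issue NVIDIA quoted for this generation.
peak = peak_gflops(216, 1512.0, 3)   # ~980 GFLOPS theoretical ceiling
print(f"peak: ~{peak:.0f} GFLOPS, at 15%: ~{0.15 * peak:.0f} GFLOPS")
# The post's rougher 850 GFLOPS figure gives 0.15 * 850 ~= 130 GFLOPS.
```

As replies below point out, though, a GPU utilization percentage doesn't translate into a specific FLOP count, so these are ceiling figures at best.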

What Batman con job?
 
It may not take the place of PhysX, but in Batman, when enabling PhysX, my quad core only uses one core... What are the other three doing? Going to a day spa?

It would be nice if PhysX took advantage of all four cores and then used the GPU for anything that didn't work on it. But then the NVIDIA cards wouldn't look so great.
 
It may not take the place of PhysX, but in Batman, when enabling PhysX, my quad core only uses one core... What are the other three doing? Going to a day spa?

It would be nice if PhysX took advantage of all four cores and then used the GPU for anything that didn't work on it. But then the NVIDIA cards wouldn't look so great.

Again, the problem being: if the physics code/algorithm is so complex that you need, say, 100 GFLOPS of power to get playable frame rates, having four cores won't matter, as it would still be hindering your performance. Even a 16-core server wouldn't be enough: four chips at around 10 GFLOPS each (12.5 with HT) still leaves you at around half of what you'd need.

The problem with the other solutions I've seen is that while they're cool, it's all done as if on rails. Why bother doing it if you aren't going to go all out with the physics to begin with? Sure, it looks nice, but after a while it becomes boring to look at.
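To put numbers on that claim (a quick sketch using the post's own per-chip figures; treating Hyper-Threading as a flat 25% FLOPS bump is itself a generous assumption):

```python
# Aggregate CPU throughput under the post's assumptions.
target_gflops = 100.0     # hypothetical requirement for playable physics
per_chip_gflops = 12.5    # the post's generous per-chip figure "with HT"
chips = 4                 # a 16-core server: four quad-core chips

total = chips * per_chip_gflops
print(f"{total:.0f} GFLOPS is {total / target_gflops:.0%} of the target")
# 50 GFLOPS is 50% of the target, i.e. "around half of what you'd need"
```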
 
We don't know how complex what they're doing really is, because they gimp the CPU version, and who really knows if the GPU version of the code is efficient? Or if GPUs are able to run physics simulations efficiently in general? And what kind of cost is there when the GPU is doing both graphics and CUDA/PhysX/compute code at the same time (context-switching-style stuff)? The mega-huge FLOPS numbers are best-case scenarios that don't happen outside of shaders written specifically to utilize the hardware perfectly.

And maybe more importantly, what exactly needs extreme horsepower to compute that will actually be interactive and worthwhile in a game? They haven't done much that is compelling! The most "amazing" thing I've seen from PhysX personally is some ground fog in Mirror's Edge, and it brought my 8800 GTX to its knees. And it was kinda lumpy and fakey, being made out of those PhysX particles or whatever. Ridiculous, maybe?
 
There are pure graphical effects that are even more ridiculous in terms of minimal IQ impact for the performance toll they take, so calling for the banishment of physics from the GPU isn't really a defensible position at this point.
 
Again, the problem being: if the physics code/algorithm is so complex that you need, say, 100 GFLOPS of power to get playable frame rates, having four cores won't matter, as it would still be hindering your performance. Even a 16-core server wouldn't be enough: four chips at around 10 GFLOPS each (12.5 with HT) still leaves you at around half of what you'd need.
As has been alluded to, percentage of GPU utilization does not equate to a specific "flop" count. Further, the work being done on the GPU may lend itself to far more optimization on a CPU core due to memory access patterns and the like.

Thus, comparing 15% utilization on your GPU core and assigning it some random operational value and then declaring that a CPU could 'never do it' is a strawman argument at best.
 
There are pure graphical effects that are even more ridiculous in terms of minimal IQ impact for the performance toll they take, so calling for the banishment of physics from the GPU isn't really a defensible position at this point.
Fancy graphics effects being processed on a Graphics Processing Unit.

And then there are pieces of paper blowing in the wind that I barely notice, or glass breaking not much differently than in other games, or funky particle-based water/fog/etc made of little "balls", bringing a GPU to its knees for whatever reason while there are 2-3 idle CPU cores. Astonishing is not the word that has come to my mind while seeing this stuff.

Why do we own these multicore CPUs again? I don't buy that they don't have the grunt for interactive physics effects, since like uh we had 486s doing convincing flight simulators and stuff.
 
Fancy graphics effects being processed on a Graphics Processing Unit.

And then there are pieces of paper blowing in the wind that I barely notice, or glass breaking not much differently than in other games, or funky particle-based water/fog/etc made of little "balls", bringing a GPU to its knees for whatever reason while there are 2-3 idle CPU cores.

Yeah, Graphics (paper) Processing (physics) Unit. Why should the CPU do the job when a GPU is a lot faster? Do you want to go back to the days when the CPU was processing things like T&L and vertex shaders? :LOL:
 
Yeah, Graphics (paper) Processing (physics) Unit. Why should the CPU do the job when a GPU is a lot faster? Do you want to go back to the days when the CPU was processing things like T&L and vertex shaders? :LOL:

Because the GPU's primary job is graphics output. Vertex transforms, vertex shaders, and lighting shaders are all very specifically graphics-pipeline concepts; that's why they don't belong on a CPU.

Because quite a bit more than 90% of the population doesn't have two video cards. Even fewer have two (or more) NVIDIA cards.

Because with a single video card, physics + a game's worth of rasterization = epic failure for framerates. See also: Batman: AA on a single GTX 275 with and without PhysX.

Because systems shipping with quad cores outsell systems shipping with multiple video cards by several orders of magnitude. See also: machines selling at Best Buy, Circuit City, on Dell's website, Asus's website, Gateway's website, Sony's website, HP's website, you name it.

I think we've entirely answered your questions as to why. However, I'm sure someone can come up with a few more strawman arguments as to why it's still worth talking about...
 
Yeah, Graphics (paper) Processing (physics) Unit. Why should the CPU do the job when a GPU is a lot faster? Do you want to go back to the days when the CPU was processing things like T&L and vertex shaders? :LOL:

I believe his point was that the GPU is being brought to its knees while already pushing massive amounts of graphics, whereas most of the CPU sits idle. The problem is that no one can really compare them, because NVIDIA won't ever work on a true multi-core version of PhysX, will they?
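For what it's worth, spreading a physics step across cores isn't exotic. Here's a minimal sketch of fanning a particle integration step out over four CPU workers (hypothetical illustration code, nothing to do with PhysX's actual internals):

```python
# Minimal sketch of a multi-core particle update step, the sort of
# "true multi-core" CPU path being asked for here. Hypothetical code;
# it illustrates the idea, not any real PhysX internals.
from concurrent.futures import ProcessPoolExecutor

GRAVITY = -9.81
DT = 1.0 / 60.0  # one 60 Hz frame

def integrate_chunk(chunk):
    """Semi-implicit Euler on a list of (height, velocity) particles."""
    out = []
    for pos, vel in chunk:
        vel += GRAVITY * DT
        pos += vel * DT
        out.append((pos, vel))
    return out

def step(particles, workers=4):
    # Split the particle list into one chunk per core, update in parallel.
    size = (len(particles) + workers - 1) // workers
    chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(integrate_chunk, chunks)
    return [p for chunk in results for p in chunk]

if __name__ == "__main__":
    particles = [(10.0, 0.0)] * 100_000
    particles = step(particles)  # all four cores get work, no day spa
    print(particles[0])
```

In a real engine, bodies interact through collisions and constraints, so the split can't be this clean; engines typically partition the scene into independent islands instead. But it shows the idle cores can at least be handed the work.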
 
As has been alluded to, percentage of GPU utilization does not equate to a specific "flop" count. Further, the work being done on the GPU may lend itself to far more optimization on a CPU core due to memory access patterns and the like.

Thus, comparing 15% utilization on your GPU core and assigning it some random operational value and then declaring that a CPU could 'never do it' is a strawman argument at best.

But have you seen the CPU-based physics used in games that use Havok, Bullet, or whatever other option there is? Crytek, who do their own in-house physics and ignore the others, bring damn near any machine to its knees, and at least they do interactive, believable, almost truly real physics. Whereas games like BF:BC2 become nauseating after an hour or so of watching the same thing over and over again, with nothing really changing in respect to how something does what it does. Call me a freak, but I believe if the dev is going to do physics, do it right or don't bother. At least with GPU physics it behaves more realistically than the CPU options other games are using.
 
Yeah, Graphics (paper) Processing (physics) Unit. Why should the CPU do the job when a GPU is a lot faster? Do you want to go back to the days when the CPU was processing things like T&L and vertex shaders? :LOL:
Strawman argument. While processing graphics in a game such as Arkham Asylum, just how much physics can be handled by the GPU without impacting overall performance much? If CPU physics processing were optimized and took advantage of multiple cores, maybe CPU physics would give playable results. NVIDIA has no vested interest in making CPU physics look good, hence they will never optimize PhysX for the CPU. That doesn't mean that CPU physics is "dead"; there are plenty of games, even ones using PhysX(!), that do physics on the CPU.

The GPU being faster doesn't imply that the CPU isn't fast enough. But we'll likely never know, since NVIDIA probably won't fix PhysX.

-FUDie
 
At least with GPU physics it behaves more realistically than the CPU options other games are using.
I don't see where you're going with this line of thought. You gave an 'awesome' physics example (I paraphrased your words) with Crysis, and then gave a 'terrible' physics example with BF.

First, how do you know that these physics effects aren't underpinned by the bigger picture of the game engine itself? Second, and more generally, the programmer's choice of physics computational algorithm and the 'realness' of it isn't decided by the computation platform (CPU versus GPU).

This is why I don't understand what you're trying to say.
 
Strawman argument. While processing graphics in a game such as Arkham Asylum, just how much physics can be handled by the GPU without impacting overall performance much? If CPU physics processing were optimized and took advantage of multiple cores, maybe CPU physics would give playable results. NVIDIA has no vested interest in making CPU physics look good, hence they will never optimize PhysX for the CPU. That doesn't mean that CPU physics is "dead"; there are plenty of games, even ones using PhysX(!), that do physics on the CPU.

You forget that you want sound, AI calculations, networking, and graphics calculations all at the same time. One or two cores are 25%/50% of the processing power of a quad-core CPU. You think it would be clever to use all the cores in a game for physics at the same time? Maybe you don't like music. :LOL:

The GPU being faster doesn't imply that the CPU isn't fast enough. But we'll likely never know, since NVIDIA probably won't fix PhysX.

-FUDie
Let's wait for the new FluidMark. :smile:
 
You forget that you want sound, AI calculations, networking, and graphics calculations all at the same time. One or two cores are 25%/50% of the processing power of a quad-core CPU. You think it would be clever to use all the cores in a game for physics at the same time? Maybe you don't like music. :LOL:
So the argument has now been spun into "we don't want to make the CPU too busy"? Wait, so completely hammering the video card into borderline-unplayable framerates for physics is OK, but putting a quad-core CPU above 50% utilization is suddenly scary?

These little PR spins go both ways...
 
So the argument has now been spun into "we don't want to make the CPU too busy"? Wait, so completely hammering the video card into borderline-unplayable framerates for physics is OK, but putting a quad-core CPU above 50% utilization is suddenly scary?

These little PR spins go both ways...

No, but you can't say "my GPU can't do all this stuff" only to then go to the CPU. In the end it will always be the CPU that limits performance with physics. You need one or two free cores for all the other calculations. So even if you use 2-3 cores for physics, it will be a lot slower than the GPU.
 
You forget that you want sound, AI calculations, networking, and graphics calculations all at the same time. One or two cores are 25%/50% of the processing power of a quad-core CPU. You think it would be clever to use all the cores in a game for physics at the same time? Maybe you don't like music. :LOL:

Let's wait for the new FluidMark. :smile:

I have a sound card for sound. I have a NIC for the network. I have a video card for graphics.

I have a quad core, and in the majority of games it acts like a single- or dual-core CPU. Later this year you're going to be able to buy 8- and 12-core machines, maybe even 16-core machines.

Are we still going to have only one or two of these cores active during games?

I get that not everyone will have quad cores or 8 cores or what have you, but a far bigger group will have quad cores or better than will have dual video cards.

Even if they can't do all the physics on a quad-core or greater machine, those cores should still be used first, because more people will be able to enjoy those physics effects; then whatever can't be handled on the CPU can be moved to the graphics card.
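A crude sketch of that "use the CPU cores first, spill the rest to the GPU" idea (entirely hypothetical scheduling logic; no shipping engine is claimed to work this way):

```python
# Hypothetical "fill the CPU first, spill the rest to the GPU" scheduler
# for optional physics effects. Illustrative only.

def assign_effects(effects, cpu_budget_ms, gpu_available):
    """effects: list of (name, per-frame CPU cost in ms) pairs."""
    cpu_plan, gpu_plan, used = [], [], 0.0
    for name, cost in sorted(effects, key=lambda e: e[1]):
        if used + cost <= cpu_budget_ms:
            cpu_plan.append(name)      # fits in the idle CPU cores
            used += cost
        elif gpu_available:
            gpu_plan.append(name)      # overflow goes to the GPU
        # else: the effect is simply skipped on low-end machines
    return cpu_plan, gpu_plan

effects = [("cloth", 2.0), ("debris", 1.5), ("fog particles", 6.0)]
print(assign_effects(effects, cpu_budget_ms=4.0, gpu_available=True))
# (['debris', 'cloth'], ['fog particles'])
```

The design point is just that the baseline effects run on hardware most people actually own, and the GPU path becomes a bonus rather than a requirement.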
 
I have a sound card for sound. I have a NIC for the network. I have a video card for graphics.

"I have a physics card for physics". :LOL:

Even if they can't do all the physics on a quad-core or greater machine, those cores should still be used first, because more people will be able to enjoy those physics effects; then whatever can't be handled on the CPU can be moved to the graphics card.

GPU PhysX != GPU physics. We are in the first years of this evolution. And PhysX shows that GPUs are a lot faster in specific situations. Maybe you won't need more than four cores in the future.
 
No, but you can't say "my GPU can't do all this stuff" only to then go to the CPU. In the end it will always be the CPU that limits performance with physics. You need one or two free cores for all the other calculations. So even if you use 2-3 cores for physics, it will be a lot slower than the GPU.

What you say isn't reflected by reality. As evidence, I present the Steam hardware survey.

Quad cores represent 21.9% of the populace; multi-video-card systems represent a paltry 1.9%. The most popular NVIDIA video cards are the G80/G92 series, with almost 30% of the market share. Now, find me a PhysX game on a 9800GT that doesn't completely faceplant the benchmark scores...

NVIDIA's own recommendation is a second whole 9800GT card to keep the scores from faceplanting, but multi-GPU rigs sit at VERY low single-digit system penetration. The only way you can tell me the GPU is guaranteed to be faster than the CPU is if you can show me where GTX 260s and above start to rule supreme over quad cores.

And that isn't happening.
 