AMD: "[Developers use PhysX only] because they’re paid to do it"

Fancy graphics effects being processed on a Graphics Processing Unit.

Can you explain how "breaking glass I hardly notice" is any less of a graphical effect than "shadow filtering I hardly notice"? Seems like you're arbitrarily deciding what's graphics and what isn't to suit yourself. As far as I'm concerned anything that can improve what I see on the screen can run on whichever processor is most suited to the task.

We're long past the stage of pigeon-holing graphics hardware into some arbitrarily restricted set of tasks. The whole point of programmability is to break those chains.
 
Can you explain how "breaking glass I hardly notice" is any less of a graphical effect than "shadow filtering I hardly notice"? Seems like you're arbitrarily deciding what's graphics and what isn't to suit yourself. As far as I'm concerned anything that can improve what I see on the screen can run on whichever processor is most suited to the task.

We're long past the stage of pigeon-holing graphics hardware into some arbitrarily restricted set of tasks. The whole point of programmability is to break those chains.

Generally speaking, I agree with all of this. But in this mindset, I want to see all of my hardware doing its part. If my CPU is sitting there twiddling its thumbs and my graphics card is way beyond hammered, then I want my CPU to pick up some slack.

Arguing about which is faster, which is 'better', which is 'more suited' is still worthless if I've got devices that are entirely capable of doing work and yet are doing nothing for no other reason than some ISV told me that it's better this way...
 
Albuquerque said:
NVIDIA recommends that you need a whole second 9800GT card to not faceplant the scores, but multi-GPU rigs are at VERY low single-digit system penetration. The only time you can tell me that the GPU is guaranteed to be faster than the CPU is if you could show me where GTX260's and above start to rule supreme over quadcores.
That is unlikely to ever happen, due to the rather harsh context switch penalty on those GPUs. Fermi is supposed to be better... It would be interesting to see benchmarks in this regard (comparing the % loss from enabling PhysX on G92 vs GT200 vs Fermi).
 
That is unlikely to ever happen, due to the rather harsh context switch penalty on those GPUs. Fermi is supposed to be better... It would be interesting to see benchmarks in this regard (comparing the % loss from enabling PhysX on G92 vs GT200 vs Fermi).

Quite true, my little challenge was entirely rhetorical :) And I also agree with you, or at least share your interest in how Fermi performs with all the new technology it will be wielding.

But then you run into the same basic premise as before -- which will be more abundant? Quadcores, or Fermies / Fermis / Fermii? That question is obviously rhetorical too :D
 
No, but you can't say "my GPU can't do all this stuff" and then just fall back to the CPU. In the end it will always be the CPU that limits performance with physics. You need one or two free cores for all the other calculations. So even if you use 2-3 cores for physics, it will be a lot slower than the GPU.

But wouldn't that depend entirely on the organisation of the engine? Many devs, judging from what I've read on forums and Twitter, seem to be going with a system of tiny jobs, inspired by SPUs, that are given to a scheduler with a certain priority (and kill time) and are then processed on any of the available cores.
And with this system, can't one possibly build a physics system which will place the different physics jobs on the GPU if it is not overutilised and the job can run there, or else run them on the CPU? Granted, there will be some overhead to have code that can run on both CPU and GPU, but hopefully OpenCL can help with that.
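A minimal sketch of that kind of hybrid scheduler, assuming hypothetical names (PhysicsJob, HybridScheduler) and a deliberately crude "is the GPU over-utilised?" heuristic; this is just the shape of the idea, not any shipping engine's scheduler:

```cpp
// Tiny physics "jobs" are handed to a scheduler with a priority; the scheduler
// places each job on the GPU queue only while that queue is short, otherwise
// an idle CPU worker picks up the slack.
#include <cstddef>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct PhysicsJob {
    int priority = 0;                    // higher runs first
    bool gpu_capable = false;            // job also has a GPU (e.g. OpenCL) path
    std::function<void()> run;           // the actual work
    bool operator<(const PhysicsJob& o) const { return priority < o.priority; }
};

class HybridScheduler {
public:
    explicit HybridScheduler(std::size_t gpu_queue_limit = 8)
        : gpu_queue_limit_(gpu_queue_limit) {}

    void submit(PhysicsJob job) {
        std::lock_guard<std::mutex> lock(mutex_);
        // Use the GPU while it is not over-utilised, else fall back to the CPU pool.
        if (job.gpu_capable && gpu_queue_.size() < gpu_queue_limit_)
            gpu_queue_.push(std::move(job));
        else
            cpu_queue_.push(std::move(job));
    }

    void run_frame(unsigned cpu_workers) {
        // One thread stands in for the GPU command stream...
        std::thread gpu([this] { drain(gpu_queue_, "GPU"); });
        // ...and the remaining free cores chew through the CPU queue.
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < cpu_workers; ++i)
            pool.emplace_back([this] { drain(cpu_queue_, "CPU"); });
        gpu.join();
        for (auto& t : pool) t.join();
    }

private:
    void drain(std::priority_queue<PhysicsJob>& q, const char* where) {
        for (;;) {
            PhysicsJob job;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                if (q.empty()) return;
                job = q.top();
                q.pop();
            }
            job.run();
            std::lock_guard<std::mutex> lock(mutex_);
            std::cout << "priority " << job.priority << " job ran on " << where << '\n';
        }
    }

    std::mutex mutex_;
    std::priority_queue<PhysicsJob> gpu_queue_, cpu_queue_;
    std::size_t gpu_queue_limit_;
};

int main() {
    HybridScheduler sched;
    for (int i = 0; i < 20; ++i)
        sched.submit({/*priority=*/i % 3, /*gpu_capable=*/(i % 2 == 0),
                      [] { /* integrate a particle batch, solve a cloth patch, ... */ }});
    sched.run_frame(3);  // e.g. three free CPU cores this frame
}
```

In a real engine the "GPU" thread would be feeding an OpenCL or CUDA command queue rather than running the job on another core, and the kill time mentioned above would demote stale jobs, but the dispatch decision itself can stay this simple.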
 
So, Nvidia provides a modern and easy-to-use physics API, development support and plenty of cross-marketing opportunities for free to the developers that decide to pick their solution. And AMD's, well Huddy's in this case, alternative to this offer is to get on their high horse and imply that developers who choose to go with that interesting deal are sell-outs?

Yeah, sounds about right, when you don't think about it.

Dev houses are businesses, just like AMD, NV or Havok (Intel), so they make their decisions based on what's best for their interests. There's nothing fundamentally wrong with that.

There's no such thing as OpenPhysics/DirectPhysics on the horizon at the moment, therefore standardisation is not the topic at hand here, so I don't understand why people argue that point in this thread. To get GPU acceleration, IHVs have to get the physics ISVs on board. And some of the IHVs went the extra mile: Intel bought Havok and Nvidia bought Ageia. And what has AMD to show for themselves? Crying foul in the media about a simple business tactic (offer free and performant software to sell HW that supports it)?

AMD needs to push for an alternative to PhysX and that won't come for free. They'll have to invest in a compelling alternative, be it their own API or push heavily for a standard (via MS or the Khronos group). Well, it's either that or bet that GPU accelerated physics will turn to be a dead end.

And don't even think about calling me a ThizzX apologist. I call BS when I see games turning off basic effects like particle emitters when not running on NV hardware (see my Batman AA thread). But that's not reason enough for me to question the motives of developers. AMD has to offer or work on better alternatives.

Statements like this usually say more about the poster's perspective than anything else
Statements like this usually say more about the poster's perspective than anything else. :cool:
 
I simply don't agree with the stance that the CPU is somehow incapable of doing "physics" simulations. Just because NV has some special extras that run on GPUs, extras that they built specifically to sell GPUs, doesn't imply that a ~2.5 GHz quad core can't do the same job. Judging CPUs based on NV's obviously biased efforts is not smart.

Let me put it another way for everyone: why does NV intentionally gimp CPU Physx on the PC? The console edition isn't. Obviously if they want to sell their physics library on consoles with GPUs that have little compute-oriented abilities the CPU edition needs to be crazy fast or they won't be selling anything. Their priorities in PC land are clearly very different.
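To make the "a commodity quad core is not helpless here" point concrete, here is a minimal sketch, entirely unrelated to PhysX's actual internals, that spreads a naive particle integration step across four worker threads; the particle counts are made up for illustration:

```cpp
// Naive Euler integration over a big particle array, split across four threads.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

void integrate_range(std::vector<Particle>& p, std::size_t begin, std::size_t end, float dt) {
    const float g = -9.81f;                 // gravity on the y axis
    for (std::size_t i = begin; i < end; ++i) {
        p[i].vy += g * dt;                  // accelerate
        p[i].x  += p[i].vx * dt;            // advect
        p[i].y  += p[i].vy * dt;
        p[i].z  += p[i].vz * dt;
    }
}

void integrate_parallel(std::vector<Particle>& particles, float dt, unsigned threads = 4) {
    std::vector<std::thread> pool;
    const std::size_t chunk = particles.size() / threads;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == threads) ? particles.size() : begin + chunk;
        pool.emplace_back(integrate_range, std::ref(particles), begin, end, dt);
    }
    for (auto& th : pool) th.join();
}

int main() {
    std::vector<Particle> particles(200000, Particle{0, 10, 0, 1, 0, 0});
    for (int frame = 0; frame < 600; ++frame)   // ten seconds at 60 fps
        integrate_parallel(particles, 1.0f / 60.0f);
}
```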
Generally speaking, I agree with all of this. But in this mindset, I want to see all of my hardware doing its part. If my CPU is sitting there twiddling its thumbs and my graphics card is way beyond hammered, then I want my CPU to pick up some slack.

Arguing about which is faster, which is 'better', which is 'more suited' is still worthless if I've got devices that are entirely capable of doing work and yet are doing nothing for no other reason than some ISV told me that it's better this way...

That's where I stand too but I can't express myself adequately lol.

There's no such thing as OpenPhysics/DirectPhysics on the horizon at the moment, therefore standardisation is not the topic at hand here, so I don't understand why people argue that point in this thread. To get GPU acceleration, IHVs have to get the physics ISVs on board. And some of the IHVs went the extra mile: Intel bought Havok and Nvidia bought Ageia. And what has AMD to show for themselves? Crying foul in the media about a simple business tactic (offer free and performant software to sell HW that supports it)?
This too. AMD has dropped little physics PR tidbits since the X1900 era. Talk talk. Luckily for them NV Physx hasn't been a massive success story or consumer interest in Radeons could vanish.
 
Albuquerque said:
But then you run into the same basic premise as before -- which will be more abundant? Quadcores, or Fermies / Fermis / Fermii? That question is obviously rhetorical too
Well, it may have been rhetorical, but it isn't a bad question.

Consider the available FLOPs from an i7 today, and let's say X amount of time later they have achieved 2x performance density (so double the performance in the same size die). Now, consider the FLOPs available in Fermi or RV870 today, and where AMD/NV are likely to be in X amount of time. The theoretical performance density advantage is over a full order of magnitude. This is why things like GPGPU and Fusion are so interesting to me, I think they really have the potential to redefine our everyday experiences with these devices in a meaningful way.
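For a rough sense of the gap being described, a back-of-the-envelope peak single-precision FLOPS comparison using approximate launch-era specs (clocks rounded, SKUs vary, and peak numbers say nothing about how usable those FLOPS are, which is exactly where the thread goes next):

```cpp
#include <cstdio>

int main() {
    // Core i7 (Nehalem): 4 cores x 4-wide SSE x (1 add + 1 mul) per cycle.
    const double i7_ghz = 3.2, i7_gflops = 4 * 4 * 2 * i7_ghz;          // ~102
    // Radeon HD 5870 (RV870/Cypress): 1600 ALUs x 2 FLOPs (MAD) x 0.85 GHz.
    const double rv870_gflops = 1600 * 2 * 0.85;                        // 2720
    // GeForce GTX 480 (Fermi): 480 ALUs x 2 FLOPs (FMA) x ~1.4 GHz shader clock.
    const double fermi_gflops = 480 * 2 * 1.4;                          // ~1344

    std::printf("i7    ~%5.0f GFLOPS\n", i7_gflops);
    std::printf("RV870 ~%5.0f GFLOPS (%4.1fx the i7)\n", rv870_gflops, rv870_gflops / i7_gflops);
    std::printf("Fermi ~%5.0f GFLOPS (%4.1fx the i7)\n", fermi_gflops, fermi_gflops / i7_gflops);
}
```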
 
The catch is that GPUs are much less flexible than CPUs. You can't do as much with those GPU FLOPS. :D CPUs are not inherently weaker at everything, quite the opposite. With a CPU you get maximum programmability. With a GPU you get simpler computational resources that can do what they are intended for very fast.

Folding at Home has chatted about this occasionally. Some work units are not even close to well suited for the GPUs.
 
And I think everyone understands that... But GPUs are improving (C++ support in Fermi), and there are many workloads where they are quite viable.
 
The thing is I would bet that the more flexibility they add to the GPUs, the less efficient they become per transistor. You can see this in some cases already, where an old Radeon 9800 Pro with ~125 million transistors can dominate a Radeon 4350 with more than twice as many. And Fermi sounds like it's not going to be all that much faster than the much smaller RV870.

The more programmable they get the more CPU-like they become in performance per transistor. Or something like that. And it's not like CPUs haven't added various mechanisms to add parallelism too. :) Not all code works for SSEx, either.
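Two toy loops illustrating the "not all code works for SSEx" point: the first is a flat structure-of-arrays pass a compiler can vectorise, the second chases pointers so every iteration waits on the previous load and there is nothing to pack into a vector register. Both are generic examples, not taken from any real engine:

```cpp
#include <cstddef>
#include <vector>

// SIMD-friendly: independent iterations over contiguous arrays.
void scale_add(std::vector<float>& y, const std::vector<float>& x, float a) {
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] += a * x[i];                 // vectorises to several floats per SSE op
}

// SIMD-hostile: a serial dependency chain through scattered memory.
struct Node { float value; Node* next; };

float accumulate_chain(Node* head, float a) {
    float sum = 0.0f;
    for (Node* n = head; n != nullptr; n = n->next)
        sum = sum * a + n->value;         // each step needs the previous one
    return sum;
}
```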
 
Not all FLOPS are created equally. Some are more equal than others. :cool:
 
Judging CPUs based on NV's obviously biased efforts is not smart.

That pretty much sums up my confusion in this physics debate. All the people who think CPUs can do much more: why aren't you judging that based on what people are doing with those CPUs? Nvidia does not have a stranglehold on the physics middleware market, and therefore what they do with PhysX has no bearing on what CPUs are, or are not, capable of. If all these great things are possible on CPUs, why aren't they being done?
 
Consider the available FLOPs from an i7 today, and let's say X amount of time later they have achieved 2x performance density (so double the performance in the same size die). Now, consider the FLOPs available in Fermi or RV870 today, and where AMD/NV are likely to be in X amount of time. The theoretical performance density advantage is over a full order of magnitude. This is why things like GPGPU and Fusion are so interesting to me, I think they really have the potential to redefine our everyday experiences with these devices in a meaningful way.
As has already been discussed, FLOPS != FLOPS :D Besides that obvious point, how about the question that we don't know yet: of the available "Fermetical FLOP Powah", how much will go down the toilet due to a context switch? Obviously none of us know yet; we're told that it will be better (perhaps FAR better) than current implementations, but FAR better than the complete suckage we deal with now may still be a less-than-good result.

trinibwoy said:
If all these great things are possible on CPUs why aren't they being done?
I don't see myself as part of this particular bandwagon, but I'll still answer: I think they are being done. Choosing between a GPU and a CPU as your computational medium should only affect the overall quantity of effects OR speed at which those effects are computed. Otherwise, the medium should not dictate whether particles have a limited lifespan or not, or their interaction level with other effects, or the like. And once the particles are at rest, I'm having a hard time believing they're any more computationally expensive on one medium versus another.
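The at-rest case is usually handled with a sleep flag, as in the generic technique sketched below (hypothetical names, not tied to any particular middleware): once a particle's speed stays under a threshold for long enough it is skipped entirely, which costs next to nothing on either a CPU or a GPU:

```cpp
#include <cmath>
#include <vector>

struct Debris {
    float x, y, z, vx, vy, vz;
    int   calm_frames = 0;     // consecutive frames below the speed threshold
    bool  asleep = false;
};

void step(std::vector<Debris>& debris, float dt) {
    const float sleep_speed  = 0.05f;    // metres/second, tuning value
    const int   sleep_frames = 30;       // half a second at 60 fps
    for (Debris& d : debris) {
        if (d.asleep) continue;          // at rest: costs a branch, nothing more

        d.vy -= 9.81f * dt;
        d.x += d.vx * dt; d.y += d.vy * dt; d.z += d.vz * dt;
        if (d.y < 0.0f) { d.y = 0.0f; d.vx *= 0.5f; d.vy = 0.0f; d.vz *= 0.5f; }  // crude floor

        const float speed = std::sqrt(d.vx * d.vx + d.vy * d.vy + d.vz * d.vz);
        d.calm_frames = (speed < sleep_speed) ? d.calm_frames + 1 : 0;
        if (d.calm_frames >= sleep_frames) d.asleep = true;   // wake-up logic omitted
    }
}
```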
 
I don't see myself as part of this particular bandwagon, but I'll still answer: I think they are being done. Choosing between a GPU and a CPU as your computational medium should only affect the overall quantity of effects OR speed at which those effects are computed. Otherwise, the medium should not dictate whether particles have a limited lifespan or not, or their interaction level with other effects, or the like. And once the particles are at rest, I'm having a hard time believing they're any more computationally expensive on one medium versus another.

Theoretically yes but performance is a practical consideration. A developer isn't going to just toss stuff at a CPU, performance be damned. But the Metro 2033 devs claim they're going to make use of whatever CPU resources you have and that there will be no graphical difference between CPU and GPU PhysX. Hopefully this provides a useful benchmark to compare Gulftown vs Fermi etc.

There are no visible differences as they both operate on ordinary IEEE floating point. The GPU only allows more compute-heavy stuff to be simulated because GPUs are an order of magnitude faster at data-parallel algorithms. As for Metro 2033 - the game always calculates rigid-body physics on the CPU, but cloth physics, soft-body physics, fluid physics and particle physics on whatever the users have (multiple CPU cores or GPU). Users will be able to enable more compute-intensive stuff via an in-game option regardless of what hardware they have.

http://www.pcgameshardware.com/aid,706182/Exclusive-tech-interview-on-Metro-2033/News/
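A hypothetical sketch of the split described in that interview, with invented type names and effect budgets rather than anything from the actual 4A engine: rigid bodies stay on the CPU, while the optional cloth/fluid/particle work runs on whichever backend is present, with the effect budget scaled to match:

```cpp
#include <cstdio>

enum class Backend { CpuCores, Gpu };

struct EffectBudget {
    int cloth_patches;
    int fluid_particles;
    int debris_particles;
};

EffectBudget pick_budget(Backend b, bool user_enabled_heavy_effects) {
    if (b == Backend::Gpu && user_enabled_heavy_effects)
        return {256, 250000, 100000};     // GPU has FLOPS to burn on the extras
    if (user_enabled_heavy_effects)
        return {64, 30000, 20000};        // same features, scaled for spare CPU cores
    return {16, 5000, 5000};              // baseline
}

int main() {
    Backend backend = Backend::CpuCores;  // e.g. no compute-capable GPU detected
    EffectBudget budget = pick_budget(backend, /*user_enabled_heavy_effects=*/true);

    // Rigid-body simulation would always run on the CPU here, regardless of
    // backend; only cloth/fluid/particle work moves between CPU and GPU.
    std::printf("cloth=%d fluid=%d debris=%d\n",
                budget.cloth_patches, budget.fluid_particles, budget.debris_particles);
}
```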
 
Try Red Faction: Guerilla.

-FUDie

Have, ad nauseam. Same shit physics every time: waste enough heavy ammo rounds on something and it'll do the same thing it does with a rocket launcher. HALF-ASSED PHYSICS. Why did they bother?
 
I don't see where you're going with this line of thought. You gave an 'awesome' physics example (I paraphrased your words) with Crysis, and then gave a 'terrible' physics example with BF.

First, how do you know that these physics effects aren't underpinned by the bigger picture of the game engine itself? Second, and more generally, the programmer's choice of physics computational algorithm and the 'realness' of it isn't decided by the computation platform (CPU versus GPU).

This is why I don't understand what you're trying to say.

BF:BC2 is half-assed physics; sure, it makes the game a bit more fun to play, but after a while it gets really damn boring watching the umpteenth building fall apart in the exact same fashion as all the rest.

Generally speaking, I agree with all of this. But in this mindset, I want to see all of my hardware doing its part. If my CPU is sitting there twiddling its thumbs and my graphics card is way beyond hammered, then I want my CPU to pick up some slack.

Arguing about which is faster, which is 'better', which is 'more suited' is still worthless if I've got devices that are entirely capable of doing work and yet are doing nothing for no other reason than some ISV told me that it's better this way...

Fine, then you convince the devs to make all game engines from this day forward multi-CPU friendly. While you're at it, have them do as the guys behind Metro 2033 have done: make the CPU do the menial physics tasks while offloading the much heavier stuff to the GPU. Good luck getting that to fly. We've had multi-core CPUs for how long, and we're ONLY NOW starting to get a few game engines to use two.
 
I simply don't agree with the stance that the CPU is somehow incapable of doing "physics" simulations. Just because NV has some special extras that run on GPUs, extras that they built specifically to sell GPUs, doesn't imply that a ~2.5 GHz quad core can't do the same job. Judging CPUs based on NV's obviously biased efforts is not smart.

Let me put it another way for everyone: why does NV intentionally gimp CPU Physx on the PC? The console edition isn't. Obviously if they want to sell their physics library on consoles with GPUs that have little compute-oriented abilities the CPU edition needs to be crazy fast or they won't be selling anything. Their priorities in PC land are clearly very different.


That's where I stand too but I can't express myself adequately lol.


This too. AMD has dropped little physics PR tidbits since the X1900 era. Talk talk. Luckily for them NV Physx hasn't been a massive success story or consumer interest in Radeons could vanish.

I don't believe they have, based on the link I provided earlier. I believe it falls on the devs who don't want to, or simply can't be bothered to, code for multi-core setups. Metro 2033, according to the dev, is going to have massive multi-core friendliness: the CPU will do the menial physics while offloading the heavier stuff to the GPU, where a GPU is present to do such a task. When a GPU is not present, it will scale back the physics calcs and simplify them for the CPU. They won't be exactly the same as if a GPU does them, but the end user won't lose much in the details.
 
BF:BC2 is half-assed physics; sure, it makes the game a bit more fun to play, but after a while it gets really damn boring watching the umpteenth building fall apart in the exact same fashion as all the rest.



Fine, then you convince the devs to make all game engines from this day forward multi-CPU friendly. While you're at it, have them do as the guys behind Metro 2033 have done: make the CPU do the menial physics tasks while offloading the much heavier stuff to the GPU. Good luck getting that to fly. We've had multi-core CPUs for how long, and we're ONLY NOW starting to get a few game engines to use two.

Yes, 'cause God forbid non-Nvidia users should get their money's worth from the game.

I need to buy a CPU; I don't need to buy a second GPU. I want to see my hardware taken advantage of.

I'm hoping Intel and AMD start pushing OpenCL implementations together so that our CPUs are used along with our GPUs.
 