AMD: "[Developers use PhysX only] because they’re paid to do it"

Statements like this usually say more about the poster's perspective than anything else.

Yes, pointing out the blindingly obvious (that suggesting only one side of the fanboy faction wars would act in such a manner is itself being one-sided) certainly must speak volumes about my own biases.

Yeah, sounds about right, when you don't think about it. :cool:
 
Sigh... as usual, people assume the worst and take things out of context... it gets really old.

People also throw around the term FLOPS as if there's no other consideration to be had when dealing with GPUs and CPUs. More than once in this very thread someone has thrown "GPU FLOPS" out as some sort of metric and then directly compared them to "CPU FLOPS" -- including yourself. The "FLOP density" of an i7 has little bearing on anything compared to a GPU unless we're doing very specific types of math, in very specific ways, using very specific patterns of memory access, and with very specific sizes of computation kernels.

Thus, I felt the FLOPS != FLOPS piece was still worth mentioning (with a smiley, no less, as I wasn't trying to club you over the head with it). I still agree with you that GPUs have lots of potential workloads where they have proven themselves far more suited to the task than the CPUs of today or tomorrow.
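To put rough numbers on the FLOPS != FLOPS point, here's a back-of-the-envelope sketch. The peak figures below are the usual theoretical maximums for a Nehalem-era quad core and a GTX 285 (my own assumptions, not anyone's benchmark), and real workloads land far below both peaks, which is exactly the point:

```cpp
#include <cstdio>

// Theoretical peak single-precision FLOPS: cores * clock * FLOPs-per-cycle.
// These peaks assume every unit is busy every cycle -- something a physics
// workload with scattered memory access will never achieve.
int main() {
    // Nehalem-era quad core: 4 cores, ~3.0 GHz, 4-wide SSE mul + add = 8 FLOPs/cycle.
    double cpu_peak = 4 * 3.0e9 * 8;        // ~96 GFLOPS
    // GTX 285: 240 shader cores, 1.476 GHz, MAD + MUL = 3 FLOPs/cycle.
    double gpu_peak = 240 * 1.476e9 * 3;    // ~1063 GFLOPS
    std::printf("CPU peak: ~%.0f GFLOPS\n", cpu_peak / 1e9);
    std::printf("GPU peak: ~%.0f GFLOPS\n", gpu_peak / 1e9);
    // The ~10x paper gap only materializes for math that fits the GPU's
    // execution and memory model; a branchy, pointer-chasing kernel can
    // easily run slower on the GPU than on the CPU.
    return 0;
}
```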

My argument is with this:
trinibwoy said:
A developer isn't going to just toss stuff at a CPU, performance be damned.
Why not? NVIDIA is having us do just that, except they're tossing stuff at a GPU, performance be damned. 30% of the video card market belongs to NVIDIA and is of the G80 to G92 architecture type. Care to find a video game where I can play with PhysX and not utterly destroy performance on a 9800?

I've already addressed this entire issue; the vast majority of video cards out today do not have the capacity to do this kind of workload and retain playable framerates. The only answer is seemingly to buy a GTX 275 or better, or to buy a second NVIDIA video card. And since multi-GPU platforms account for less than 2% of the installed base, I think that's a pretty moot point.

Quad cores have that beat by an entire order of magnitude (~22% of installed base), so are we going to go with theory here or real life? Theory says we can do this NVIDIA's way, on a platform that somewhere between 2% and 4% of the computing world actually owns; or we can focus on performance and features via a CPU-centric path that maybe 25% of the computing world has access to.

Metro 2033 has it right at least; they're doing it for both. Kudos to them; I applaud the effort. As for PhysX? My argument isn't specifically with that API; my argument is with NVIDIA shoving it onto the GPU, performance be damned (using your words).
 
And as I said to you before:
Fine, then you convince the devs to make all game engines from this day forward multi-CPU friendly. While you're at it, have them do as the guys behind Metro 2033 have done: make the CPU do the menial physics tasks while offloading the much heavier stuff to the GPU. Good luck getting that to fly. We've had multi-core CPUs for how long, and we're ONLY NOW starting to get a few game engines to use two.

And you are also using the Steam survey the wrong way, as it only accounts for SLI/CrossFire configurations. IT DOES NOT take into consideration people who own, say, a 9800 GTX and are using, say, an 8600 GT (not the best option, but doable) to offload physics to. That is a completely different scenario, and Steam doesn't have a category for it.
 
And you are also using the Steam survey the wrong way, as it only accounts for SLI/CrossFire configurations. IT DOES NOT take into consideration people who own, say, a 9800 GTX and are using, say, an 8600 GT (not the best option, but doable) to offload physics to. That is a completely different scenario, and Steam doesn't have a category for it.
Given the entire gamut of options available, this possibility isn't going to make up for an order of magnitude's worth of difference -- not by a long shot.

And rather than being a facetious ass about it, why are you so up-in-arms against utilization of quad cores? Every time this topic comes up, you get ridiculously snarky about the whole thing. A quick browse of your posting history suggests that NV is going to have a hard time doing wrong in your eyes; are your glasses really so green-tinted that you can't accept any better solutions?
 
Here's a question: PhysX is multithreaded on the consoles, so how does it perform there?

From my understanding it does quite well, although with reduced effects. Shame the same devs who port the games to PC don't keep that threading in place for multi-core setups.

1. Given the entire gamut of options available, this possibility isn't going to make up for an order of magnitude's worth of difference -- not by a long shot.

2. And rather than being a facetious ass about it, why are you so up-in-arms against utilization of quad cores? Every time this topic comes up, you get ridiculously snarky about the whole thing. A quick browse of your posting history suggests that NV is going to have a hard time doing wrong in your eyes; are your glasses really so green-tinted that you can't accept any better solutions?

1. How would you know that? I sure don't, but I do know that Steam doesn't count systems with more than one card if those cards are different cores, i.e. a G92 and a G94 in one box. That isn't considered SLI or CrossFire. And before Nvidia killed PhysX through drivers, many people were retiring their old NV cards to PhysX duty alongside their ATI cards; some stayed with it and used the hack. Many Nvidia users do this as well.

2. I'm not being an ass, simply pointing out the weakness of your argument for using multi-core setups for physics duties. Even in games where PhysX is not the physics engine of choice, the game devs are only coding for the use of a single core, or if you're lucky, partial use of a second. This surely isn't Nvidia's fault. Hell, even PhysX is multi-core friendly, but for some reason when the game dev ports a game from the PS3 or 360 to PC, the threading it used on the console stops being used. Not all PhysX-enabled games require the use of a GPU. But the way you act, this is all some evil doing of Nvidia, when the fact remains it's not them; it's the devs who truly can't seem to be bothered to make use of the power available to the end user.

Red Faction and BF:BC2 are two I can attest to being physics users that still don't fully utilize the CPU, and sadly the physics in those games shows it: it all looks like preprogrammed crap. It's as if they have the physics coded so that after X amount of damage, this canned result is applied to x, y, z. Sure, it looks good the first few times, but after a while I get sick of seeing the same rendering over and over again regardless of what was used to trigger it. The point of physics is to bring a sense of realism to the game. But if all the dev does is preprogram a rendered scene and apply it to everything, it isn't very real, and IMHO it's a great waste of time and coding effort as it's half-assed.

And I don't wear super-green-tinted glasses. It's just that, thus far, the best physics I've seen in games has been GPU-based, and the only solution going for that right now is PhysX. Hell, I'd love to see OpenCL GPU-based physics, but Nvidia isn't going to push that, as it isn't their baby even if they are supporting it. That's ATI's thing, and quite frankly they ain't shown shit in terms of actually wanting to develop it for use on GPUs, and I believe that's because Nvidia also supports it more than they do. Yet they will cry foul every time a game dev uses GPU-based PhysX in its game.

ATI: either shut the hell up or step up and get OpenCL GPU-based physics moving; you've been dragging your damn feet on it long enough. Time to put up or shut up.

There is a reason why Spawn, the comic book written by Todd McFarlane, doesn't carry the Comics Code seal. His firm belief is that if you can draw a man sitting on a toilet taking a shit, drinking a beer, and reading the sports page (in case anyone is wondering, drawing anything remotely close to this in a comic that carries the Code's seal gets the publisher a big fat fucking fine), do it; don't fucking half-ass it, as it will end up looking like shit. The same rule should apply to game devs when they are coding physics into their games.
 
1. How would you know that?
Similarly, how would you? I can use anecdotal evidence the same way you can. But let's use some common sense: the vast majority of systems in people's houses today are something they bought as an entire, complete unit. You can buy a monumental number of fully assembled machines with quad-core processors compared to the number of fully assembled machines with multiple graphics cards installed. That fact is basically indisputable, and as such, whatever little extra you can dredge up for a few NV cards being wedged into machines still isn't going to make up another 20% of market penetration. I'd be genuinely surprised if you could show me data saying even five percent of the market has a second video card, whether for SLI/CF or PhysX.

2. I'm not being an ass, simply pointing out the weakness of your argument for using multi-core setups for physics duties.
What weakness in my argument? That there are more quadcore-equipped systems on this planet than systems with multiple video cards? That in the grand scheme of things, we're having a discussion about massively parallel architecture (a GPU) somehow not translating well to parallel CPU cores?

Even in games where PhysX is not the physics engine of choice, the game devs are only coding for the use of a single core, or if you're lucky, partial use of a second.
There's more than sufficient data around the net to debunk this claim. Now, if you had said that devs are only coding for the use of a dual core, and if-we're-lucky maybe part of a third, then you'd be far closer to the truth.

This surely isn't Nvidia's fault. Hell, even PhysX is multi-core friendly, but for some reason when the game dev ports a game from the PS3 or 360 to PC, the threading it used on the console stops being used. (I don't think it's some invisible reason.) Not all PhysX-enabled games require the use of a GPU. But the way you act, this is all some evil doing of Nvidia, when the fact remains it's not them; it's the devs who truly can't seem to be bothered to make use of the power available to the end user.
I don't believe you, and you have no proof of any of your claims. The libraries don't just STOP magically working when you move them from another platform... It's painfully obvious that the PhysX libraries for PS3 and 360 have native multicore CPU support built into them, and there's no logical or defensible reason why they do not have this support on the PC platform.

The only plausible explanation is that they want to sell video cards to process that data. And given the hardware details we had to address yet again at the beginning of this reply, we can reasonably conclude that NV doesn't want quad cores to be competitive -- because they have no CPUs to sell.

GPUs accelerating PhysX or CPUs accelerating PhysX should absolutely not matter, unless they're doing even more underhanded tricks in what they do or don't allow their "CPU-only" libraries to compute. Thus, any differences in your selected games that demonstrated awesome physics should be negligible on a CPU, so long as they're playing fair.

Chances are, they're not.
 
XMAN, do you seriously think those combinations are going to take it from 1.8% to 22%?

I think the main problem with your argument is this.

1) New games will tax these cards more, and new games are what will have PhysX, so many people will retire their pre-200-series Nvidia cards. Some may go ATI for cheaper DX11; some may go to faster Nvidia cards. But not all of them will keep two cards for PhysX, and for those who do, newer PhysX titles may stress the older GPUs enough to render them moot.

2) Moving forward, dual-core CPUs will be phased out just like single-core CPUs were. While that happens, the quad-core share of the market will start to skyrocket as the newer 6-core, 8-core, and greater CPUs settle in. I'm not sure what ATI's next move is, nor Intel's; I'm pretty sure it will be 6 cores though. Six-core chips will then take the place quad-core chips held in 2008 and 2009.


The adoption rate of quad-core and greater CPUs will be much faster than that of dual GPUs, especially since a large percentage of the GPU market (ATI users) is left out in the cold for PhysX GPUs.

I also think we are finally starting to leave the DX9 ghetto, and devs are opening up to programming DX10-only paths, or creating DX10 games and down-porting them to DX9 instead of the current trend of creating DX9 games and up-porting them to DX10.
 
Similarly, how would you? I can use anecdotal evidence the same way you can. But let's use some common sense: the vast majority of systems in people's houses today are something they bought as an entire, complete unit. You can buy a monumental number of fully assembled machines with quad-core processors compared to the number of fully assembled machines with multiple graphics cards installed. That fact is basically indisputable, and as such, whatever little extra you can dredge up for a few NV cards being wedged into machines still isn't going to make up another 20% of market penetration. I'd be genuinely surprised if you could show me data saying even five percent of the market has a second video card, whether for SLI/CF or PhysX.

The only fact here is that neither of us knows the exact number of people out there who have two cards in their system, be it for SLI/CrossFire or a second card being used for PhysX. That is the only true fact. But to try and claim SLI/CrossFire numbers as a basis for how many machines are capable of doing GPU physics without totally killing frame rates is pure bullshit.


What weakness in my argument? That there are more quadcore-equipped systems on this planet than systems with multiple video cards? That in the grand scheme of things, we're having a discussion about massively parallel architecture (a GPU) somehow not translating well to parallel CPU cores?

Where did I ever say there wasn't a massive number of multi-core CPUs in the world, let alone those with 4 or more?


There's more than sufficient data around the net to debunk this claim. Now, if you had said that devs are only coding for the use of a dual core, and if-we're-lucky maybe part of a third, then you'd be far closer to the truth.


Really? Have you checked CPU utilization in BC2 or RF:GW? At best they'll hit around 40% CPU use on my Q6600 setup (which works out to barely more than a core and a half of a quad actually busy). Yeah, those two game devs went above and beyond for CPU use with in-game physics. You sure you wanna keep going down this road of game devs employing proper use of multi-core CPU setups for physics?

I don't believe you, and you have no proof of any of your claims. The libraries don't just STOP magically working when you move them from another platform... It's painfully obvious that the PhysX libraries for PS3 and 360 have native multicore CPU support built into them, and there's no logical or defensible reason why they do not have this support on the PC platform.

The only plausible explanation is that they want to sell video cards to process that data. And given the hardware details we had to address yet again at the beginning of this reply, we can reasonably conclude that NV doesn't want quad cores to be competitive -- because they have no CPUs to sell.

GPUs accelerating PhysX or CPUs accelerating PhysX should absolutely not matter, unless they're doing even more underhanded tricks in what they do or don't allow their "CPU-only" libraries to compute. Thus, any differences in your selected games that demonstrated awesome physics should be negligible on a CPU, so long as they're playing fair.

Chances are, they're not.

There are several games that use PhysX but don't employ the GPU-based code for it. Most PhysX-enabled games are not GPU-based physics games but CPU ones. There is one fairly popular game that got ported that supports PhysX but DOES NOT use GPU-based physics and, strangely enough, on the PC doesn't use more than one CPU core. Now, either the game dev got lazy, figured one core was enough in modern-day PCs, or Nvidia paid them not to use multithreaded code on the PC. You can straw-man the Nvidia line all day if you want; I'll stick with option 1. And as to the second sentence of the first paragraph in the above quote: last I heard and have read, fewer and fewer game devs were porting to PC from console because of what a true PITA the process is. Now, if devs doing cross-platform games are starting to stop porting to PC from console because it's a PITA to recode, why do you insist it's as easy as pie and that they wouldn't shortcut or cut things out to save time?

XMAN, do you seriously think those combinations are going to take it from 1.8% to 22%?

I think the main problem with your argument is this.

1) New games will tax these cards more, and new games are what will have PhysX, so many people will retire their pre-200-series Nvidia cards. Some may go ATI for cheaper DX11; some may go to faster Nvidia cards. But not all of them will keep two cards for PhysX, and for those who do, newer PhysX titles may stress the older GPUs enough to render them moot.

2) Moving forward, dual-core CPUs will be phased out just like single-core CPUs were. While that happens, the quad-core share of the market will start to skyrocket as the newer 6-core, 8-core, and greater CPUs settle in. I'm not sure what ATI's next move is, nor Intel's; I'm pretty sure it will be 6 cores though. Six-core chips will then take the place quad-core chips held in 2008 and 2009.


3) The adoption rate of quad-core and greater CPUs will be much faster than that of dual GPUs, especially since a large percentage of the GPU market (ATI users) is left out in the cold for PhysX GPUs.

4) I also think we are finally starting to leave the DX9 ghetto, and devs are opening up to programming DX10-only paths, or creating DX10 games and down-porting them to DX9 instead of the current trend of creating DX9 games and up-porting them to DX10.

1. Of that I have no doubt. But if those heavy physics situations are going to tax even a GTX 260, how can anyone think an 8-core CPU will be any better at handling the same workload?

2. Of that I have no doubt.

3. As to the ATI issue and PhysX with Nvidia GPUs: I'm in agreement with gamers that Nvidia should never have killed that. They lost sales, and they also irked damn near the whole gaming community.

4. That would be nice as it may allow for a true DX10 game to perform better than the DX9 version.
 
Hell, even PhysX is multi-core friendly, but for some reason when the game dev ports a game from the PS3 or 360 to PC, the threading it used on the console stops being used.

Because PhysX on a PC CPU is single-threaded; it's only multithreaded on console CPUs.
 
Why not? NVIDIA is having us do just that, except they're tossing stuff at a GPU, performance be damned. 30% of the video card market belongs to NVIDIA and is of the G80 to G92 architecture type. Care to find a video game where I can play with PhysX and not utterly destroy performance on a 9800?

That's in no way comparable. You can't simply toss multiple CPUs into your system to get higher performance. However, you can drop a 9800 GT or GT 240 in there and get useful performance out of PhysX. It's accessible.

If your argument is that PhysX should be fast on all cards, well, that's complete nonsense. There are games that have nothing to do with PhysX that utterly destroy performance on anything short of an SLI/Xfire setup. Why the double standard?
 
If your argument is that PhysX should be fast on all cards, well, that's complete nonsense. There are games that have nothing to do with PhysX that utterly destroy performance on anything short of an SLI/Xfire setup. Why the double standard?

Aside from Crysis, I'm drawing a blank. What games were you thinking of when you wrote the above?
 
Aside from Crysis, I'm drawing a blank. What were you thinking of when you wrote the above?

I'm thinking of my GTX 285 struggling on every game I play at 2560x1600 max settings. And Crysis isn't even included. My question is: why is there an arbitrary stipulation that PhysX must be free or cheap for it to matter? In general I see no reason for advanced physics options to be treated any differently from any IQ setting, including resolution.
 
I'm thinking of my GTX 285 struggling on every game I play at 2560x1600 max settings.

Heh, my 285 handles every game I play at that res with aplomb. 4x AA and 16x AF are standard, in-game settings maxed. But I'm not into shooters much these days, so we're quite likely playing different titles. It handles Dragon Age, LOTRO, Napoleon: TW, Mass Effect 2, Batman, etc., very handily.

My question is: why is there an arbitrary stipulation that PhysX must be free or cheap for it to matter? In general I see no reason for advanced physics options to be treated any differently from any IQ setting, including resolution.

Like my above paragraph, it's all subjective. I ended up turning off PhysX for Sacred 2 and my 2nd playthrough of Batman; what it added to each title just wasn't worth the nasty performance impact, IMO.

Edit: People shouldn't expect it to be free, I agree with you on that, but I think it's more the attitude that whatever it adds to each game should be worth the performance loss. And, again, that's just a subjective call. For me, having experienced it once, I wanted the smoother frame rate for my 2nd time through Batman.
 
Heh, my 285 handles every game I play at that res with aplomb. 4x AA and 16x AF are standard, in-game settings maxed. But I'm not into shooters much these days, so we're quite likely playing different titles. It handles Dragon Age, LOTRO, Napoleon: TW, Mass Effect 2, Batman, etc., very handily.

Bad Company 2. Lumber yard. SSAO, 4x/16x. 21 fps. Aplomb, eh? :)

Like my above paragraph, it's all subjective. I ended up turning off PhysX for Sacred 2 and my 2nd playthrough of Batman; what it added to each title just wasn't worth the nasty performance impact, IMO.

Well, yeah, that's my point. I see no difference between buying a second GPU to achieve good performance at XYZ settings and buying a second GPU to achieve good performance in PhysX. Whether or not it's worth it is up to the person spending the cash. If somebody tried to lecture you that a 30" monitor wasn't worth the performance hit or hardware expense over a 20" one, you would laugh at them all the way home.
 
The only fact here is that neither of us knows the exact number of people out there who have two cards in their system, be it for SLI/CrossFire or a second card being used for PhysX.
Sure, but we both know that quad cores outnumber multi-card configs by a massive amount. That is still indisputable; the only question is "by how many?"

Really? Have you checked CPU utilization in BC2 or RF:GW? At best they'll hit around 40% CPU use on my Q6600 setup (which works out to barely more than a core and a half of a quad actually busy). Yeah, those two game devs went above and beyond for CPU use with in-game physics. You sure you wanna keep going down this road of game devs employing proper use of multi-core CPU setups for physics?
I didn't say they were, and you can quote me where you think I did. I said that they should be. Further, as was pointed out, PhysX's current PC implementation forces all calculations onto a single core -- meaning your own examples are worthless. That's at least part of the discussion at hand: NV artificially crippling performance on the PC platform for no other reason than that they don't sell CPUs and thus don't make any money by making it better (even though they've already done the CPU threading work for the other platforms, e.g. the PS3 and 360).

There is one fairly popular game that got ported that supports physx but ODES NOT use GPU based physics and strangely enough, on the PC doesn't use more than 1 cpu core. Now either the game dev got lazy, figured one cpu was enough in modern day PCs or Nvidia payed them not to use mulyithread code on teh PC
The PhysX libraries on the PC platform do not use any more than one thread. This isn't a developer issue; it's a PhysX library issue on the PC platform.
 
My argument is with this:

Why not? NVIDIA is having us do just that, except they're tossing stuff at a GPU, performance be damned. 30% of the video card market belongs to NVIDIA and is of the G80 to G92 architecture type. Care to find a video game where I can play with PhysX and not utterly destroy performance on a 9800?

I've already addressed this entire issue; the vast majority of video cards out today do not have the capacity to do this kind of workload and retain playable framerates. The only answer is seemingly to buy a GTX 275 or better, or to buy a second NVIDIA video card. And since multi-GPU platforms account for less than 2% of the installed base, I think that's a pretty moot point.

I have been trying to follow both sides of this argument - and I'm afraid this is the stance I understand the least. So I just wanted to clarify. While I know automotive analogies are frowned upon here, one is convenient for showing my confusion.

Assume for an instant that Ford decides to put an engine with X amount of horsepower in their vehicle. They have a choice between one made by BMW and one made by Cummins. BMW says their new model of engine will not only provide the necessary horsepower but will also handle changing gears automatically. Because this would be difficult or expensive to implement, and BMW wants people to use their engine, they offer to implement it for Ford if Ford chooses their engine.

Now, it seems to me that your argument is that it is dishonest or lazy for Ford and BMW to make such an agreement, because people with last year's model or people with Cummins engines can't use it. Instead, you are arguing that BMW should spend a lot of money to implement the automatic transmission in a different part of the car, one they do not manufacture, so that everyone can use it. I am not sure that makes sense.

PhysX is supposed to be a value-added feature. From NVidia's standpoint, it is supposed to add value to the game for people who buy dual cards or who buy their top-of-the-line cards. Yes, their newer cards do it better. Yes, multiple cards do it better. However, that is the value-added part. I have yet to see a game where owners of a 9800 can't just turn it off and still play the game. Will it be the exact experience of someone who owns a GTX 285 and plays with PhysX enabled? No - but it was never meant to be.

In this case, it seems the correct thing for AMD to do is not to complain about NVidia's implementation or method of getting the feature used. If AMD doesn't believe it is that much of a value-added feature, and thinks it will eventually die out once NVidia stops subsidizing it, then they should be happy NVidia is wasting money on it. If AMD believes it is that much of a value-added feature, then they need to come up with their own solution.

Either way, I guess I would just like you to be a little clearer about your argument. Is your issue really that NVidia is trying to make having multiple cards and/or their newer cards worth more to the consumer, or did I miss something?
 
NV artificially crippling performance on the PC platform for no other reason than that they don't sell CPUs and thus don't make any money by making it better (even though they've already done the CPU threading work for the other platforms, e.g. the PS3 and 360).


The PhysX libraries on the PC platform do not use any more than one thread. This isn't a developer issue; it's a PhysX library issue on the PC platform.

Considering the Metro 2033 devs are using the PhysX libraries and have stated THEY DO and WILL use more than one core, that really flies in the face of your stance that Nvidia is crippling the PhysX libraries on multi-core, multithread-capable systems. If they can take the time to make it work and implement it, then so can every other dev out there. The fact that PhysX in other games isn't using more than one core points to the devs not taking the time to implement multithreaded PhysX, not Nvidia preventing it from happening.
 
Or could it be that it is a single-threaded library, with several threads each having their own instance and each handling a set of physics tasks?
If so, that would put a lot more work on the devs, requiring them to handle load balancing of the physics tasks explicitly, which an ideal library would take care of, IMO.
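As a sketch of that pattern (a hypothetical `PhysicsScene` type standing in for one instance of a single-threaded physics library; this is not the actual PhysX API):

```cpp
#include <thread>
#include <vector>

// Hypothetical stand-in for one instance of a single-threaded physics library.
struct PhysicsScene {
    void addBody(int bodyId) { (void)bodyId; /* register a rigid body with this instance */ }
    void step(float dt)      { (void)dt;     /* advance this instance's simulation */ }
};

int main() {
    const int kWorkers = 4;
    std::vector<PhysicsScene> scenes(kWorkers);

    // The developer must partition bodies across instances up front, and
    // keeping the partitions evenly loaded is entirely the developer's problem.
    for (int body = 0; body < 1000; ++body)
        scenes[body % kWorkers].addBody(body);

    // Each frame, step every instance on its own thread.
    std::vector<std::thread> workers;
    for (auto& scene : scenes)
        workers.emplace_back([&scene] { scene.step(1.0f / 60.0f); });
    for (auto& t : workers)
        t.join();
    return 0;
}
```

Note that bodies living in different instances can never collide with each other, so the partitioning isn't just a load-balancing headache; it limits what the simulation can express, which is another reason a library that threads internally would be far preferable.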
 
I don't believe they have, based on the link I provided earlier. I believe it falls on the devs, who don't want to or simply can't be bothered to code for multi-core setups. Metro 2033, according to the dev, is going to have massive multi-core friendliness, where the CPU will do the menial physics while offloading the heavier stuff to the GPU when a GPU is present for the task. When a GPU is not present, it will scale back the physics calcs and simplify them for the CPU. They won't be exactly the same as if a GPU did them, but the end user won't lose much in the details.

According to the Metro 2033 guys, the GPU will only do double what the CPU can do. Take that however you want. I'm assuming they are talking about quad-core CPUs when they say that.
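To make the scale-back idea concrete, here's a minimal sketch of how such hardware-dependent physics detail could be selected (all names are hypothetical; nothing here is from the 4A engine):

```cpp
#include <cstdio>

// Hypothetical sketch of hardware-dependent physics detail, in the spirit of
// what the Metro 2033 devs describe.
struct PhysicsConfig {
    int  debrisParticles;  // how many debris chunks get simulated
    bool fullClothSim;     // full cloth simulation vs. simplified animation
};

PhysicsConfig pickPhysicsConfig(bool gpuPhysicsAvailable) {
    if (gpuPhysicsAvailable)
        return {4000, true};  // heavy effects offloaded to the GPU
    // No capable GPU: simplify the calcs for the CPU instead of dropping
    // the feature entirely, roughly the "GPU only does double" ratio.
    return {2000, false};
}

int main() {
    PhysicsConfig cfg = pickPhysicsConfig(false);
    std::printf("debris: %d, cloth: %s\n", cfg.debrisParticles,
                cfg.fullClothSim ? "full" : "simplified");
    return 0;
}
```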

Regards,
SB
 