CrossFireX & SLI scaling today

Is that so?
As far as I know, UE3.0 always uses GPU/PPU physics acceleration if there is support.
It just doesn't matter much for the standard UT3 maps, because the physics load is so low that there's generally little or no performance difference between the CPU and the GPU/PPU.

The Ageia maps simply bump up the physics content, giving the GPU/PPU something to do, and giving the CPU more than it can handle. I don't think they're anything other than just maps, so they don't change the engine code in any way. PhysX is handled the same as it always is.

Have you read the anandtech piece about it?
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=11

HARDWARE PhysX is only enabled in those three maps. Right now there's software PhysX, PPU PhysX and GPU PhysX, and nVidia makes sure that no one is really sure which one they're actually using.
 
Have you read the anandtech piece about it?
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=11

HARDWARE PhysX is only enabled in those three maps. Right now there's software PhysX, PPU PhysX and GPU PhysX, and nVidia makes sure that no one is really sure which one they're actually using.

I don't think Anandtech is right there.

In fact, they aren't even right in saying that nVidia made that pack available a while ago.
It was released way earlier than that, by Ageia.
nVidia just re-packaged it for its PowerPack.
Indeed, Anandtech reviewed the Ageia pack back in 2007: http://www.anandtech.com/showdoc.aspx?i=3171
Their own conclusions show that even the stock maps were accelerated with the PhysX PPU: http://www.anandtech.com/showdoc.aspx?i=3171&p=4
 
That's PPU acceleration, not GPU acceleration!

That's what I meant by the confusing part: you ASSUME your GPU is accelerating UT3 because it used to be accelerated with the PPU as well.
 
That's PPU acceleration, not GPU acceleration!

That's what I meant by the confusing part: you ASSUME your GPU is accelerating UT3 because it used to be accelerated with the PPU as well.

I'm quite sure that in the case of UT3, PPU and GPU don't matter.
This is the same situation as with 3DMark Vantage: because the code defaulted to PPU-acceleration anyway, it will now silently use the GPU instead. That is how the PhysX libraries normally work.

But if you want to prove otherwise, go ahead. It would mean that the UT3 code has somehow been changed to only enable hardware acceleration on the mod-pack maps (while this was not the case in the earlier Anandtech benchmark)... or that the PhysX libraries somehow not only know that you're playing UT3, but actually know which map you're playing, and silently disable acceleration... which I doubt the library is capable of in the first place.
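To illustrate what I mean by "how the PhysX libraries normally work", here's a toy sketch. This is NOT the real PhysX API, just my assumption of the selection logic: the game asks for hardware simulation once, and the library silently routes it to whatever accelerator is installed.

```python
# Toy model of backend selection (NOT the real PhysX API):
# the application asks for hardware simulation; the library
# silently picks whatever accelerator happens to be installed.

def pick_backend(want_hardware, gpu_present, ppu_present):
    """Return the solver the library would silently use."""
    if want_hardware:
        if gpu_present:   # newer drivers route "hardware" to the GPU
            return "GPU"
        if ppu_present:   # older setups used the Ageia PPU
            return "PPU"
    return "CPU"          # the software solver is always available

# Same application code, different installed hardware:
print(pick_backend(True, gpu_present=True,  ppu_present=False))  # GPU
print(pick_backend(True, gpu_present=False, ppu_present=True))   # PPU
print(pick_backend(True, gpu_present=False, ppu_present=False))  # CPU
```

The point being: under this model nothing in UT3 itself would need to change for the GPU to quietly take over the role the PPU used to fill.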
 
I'm quite sure that in the case of UT3, PPU and GPU don't matter.
This is the same situation as with 3DMark Vantage: because the code defaulted to PPU-acceleration anyway, it will now silently use the GPU instead. That is how the PhysX libraries normally work.

But if you want to prove otherwise, go ahead. It would mean that the UT3 code has somehow been changed to only enable hardware acceleration on the mod-pack maps (while this was not the case in the earlier Anandtech benchmark)... or that the PhysX libraries somehow not only know that you're playing UT3, but actually know which map you're playing, and silently disable acceleration... which I doubt the library is capable of in the first place.

I'm happy to oblige :D

http://techgage.com/article/nvidias_physx_performance_and_status_report/3

[image: utiii_results_2.png]

At high resolutions there is no performance improvement, whether with CPU-only PhysX or PhysX on a 9800GTX. At lower resolutions the PPU outperforms the GPU by 10-15%.
The Vantage scores on the page before that one indicate that Vantage PhysX relies solely on GPU PhysX as PPU physics can't hold a candle to what the 9800GTX does there.

There are some functions not available in GPU PhysX. An older (PPU-based development) PhysX title (UT3) will show the PPU outperform the GPU-based part all the time. A newer (GPU-based development) PhysX product (Vantage) will basically kill off any reason to have a PPU. I can't make it any more apparent than that.
 
I'm happy to oblige :D

http://techgage.com/article/nvidias_physx_performance_and_status_report/3


At high resolutions there is no performance improvement (even a deficit on average), whether with CPU-only PhysX or physics on a 9800GTX.
The Vantage scores on the page before that one indicate that Vantage PhysX relies solely on GPU PhysX, as PPU physics can't hold a candle to what the 9800GTX does there.

I fail to see your logic.
I thought you were trying to prove that UT3 is not using the GPU for physics in any maps other than the Ageia mod pack. Now you are using the Ageia mod pack to try and prove that?
I think you mistakenly interpret "GPU acceleration" as actually getting better performance. GPU acceleration simply means that processing is offloaded from the CPU to the GPU.

There are some functions not available on GPU PhysX.

Got any info to back that up?

An older (PPU-based) PhysX approach will show the PPU outperform the GPU-based part all the time. A newer (GPU-based) PhysX product (Vantage) will basically kill off any reason to have a PPU. I can't make it any more apparent than that.

Nonsense.
Firstly, 3DMark Vantage was released long before nVidia made PhysX work on GPUs. So Vantage couldn't have been aimed at GPU-based PhysX in the first place.
Secondly, the real reason why Vantage is so much faster with a GPU is that it has very simple graphics, allocating nearly the entire GPU processing capability for physics.
UT3 has a far heavier graphics workload (you said it yourself: at high resolutions), so only a relatively small portion of the GPU's processing power can be allocated to physics. If you look at benchmarks with two GPUs, one dedicated to PhysX and the other to graphics only, you'll see that the GPU PhysX setup outperforms a PPU-based setup in UT3 as well.
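The budget argument in numbers (all figures invented purely for illustration, not measured):

```python
# Back-of-envelope sketch (all numbers made up) of why a GPU that is
# also rendering heavy graphics has little capacity left for physics,
# while a PPU (or a second, dedicated GPU) gives physics a fixed budget.

def physics_budget(gpu_total, graphics_share):
    """GPU capacity left for physics after rendering takes its share."""
    return gpu_total * (1.0 - graphics_share)

PPU_BUDGET = 20.0  # arbitrary units, fixed regardless of graphics load

# Vantage-like case: simple graphics, most of the GPU is free for physics
print(physics_budget(100.0, 0.25))   # 75.0 -> GPU PhysX wins easily

# UT3-at-high-resolution case: rendering eats nearly all of the GPU
print(physics_budget(100.0, 0.875))  # 12.5 -> the PPU's fixed 20.0 wins
```

Same GPU, same solver; only the share left over for physics changes, which is enough to flip which accelerator comes out on top.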
 
I fail to see your logic.
I thought you were trying to prove that UT3 is not using the GPU for physics in any maps other than the Ageia mod pack. Now you are using the Ageia mod pack to try and prove that?
I think you mistakenly interpret "GPU acceleration" as actually getting better performance. GPU acceleration simply means that processing is offloaded from the CPU to the GPU.

Yeah... I couldn't get any further than Anandtech's quotes on PPU acceleration:
Looking at all of our results then, is the PhysX PPU improving performance under UT3’s stock maps? Probably. We can’t rule out other possibilities with the data we have, but our best explanation is that given a big enough map with enough players and vehicles, and enough of a computer to not be held down elsewhere, the PhysX PPU is giving us a measurable performance improvement of 10-20%. However we also have to keep in mind that with the frame rates we were already getting and the kinds of maps we believe this benefit is most pronounced on, that it’s not making a significant difference.
And with the GPU performing worse than the PPU version, I can only assume the performance improvements are nil.


Got any info to back that up?

Does Cell Factor run PhysX on the GPU yet? Manju Hegde said their goal was to support all PPU functions in the GPU solver, but as far as I know that's not yet the case. He later admitted that some functions (rigid bodies) will probably never be GPU accelerated.
 
Yeah... I couldn't get any further than Anandtech's quotes on PPU acceleration.
And with the GPU performing worse than the PPU version, I can only assume the performance improvements are nil.

But that was not what you said.
You said: "HARDWARE physx is only enabled in those three maps."
Even if there aren't any performance improvements, that doesn't mean hardware physx isn't used.

Does Cell Factor run PhysX on the GPU yet? Manju Hegde said their goal was to support all PPU functions in the GPU solver, but as far as I know that's not yet the case. He later admitted that some functions (rigid bodies) will probably never be GPU accelerated.

Do we know why Cell Factor doesn't run on the GPU? Since it's essentially a marketing tool for Ageia, it could just have some extra checks to make sure it is running with a real Ageia PPU installed. When I tried to run it, it simply refused to switch to hardware mode. No indication that it was broken or anything.

Also, even if GPUs don't support all features, why wouldn't nVidia just add CPU code to remain compatible? I see no reason why they shouldn't, especially since they already have a full CPU solver anyway. And I can see many reasons why they should... It would mean that developers could seamlessly enable GPU acceleration, rather than only enabling it when there are only simplified physics in the game. Some acceleration is better than no acceleration, you'd get the best of both worlds.
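A sketch of the seamless per-feature fallback I'm describing (the feature names and the supported set here are hypothetical, not the real PhysX feature list):

```python
# Hypothetical per-feature dispatch: features the GPU solver supports
# run there, everything else silently falls back to the CPU solver,
# so enabling GPU acceleration never breaks the game.

GPU_SUPPORTED = {"fluids", "cloth", "soft_bodies"}  # assumed, for illustration

def dispatch(features):
    """Split a scene's physics work between GPU and CPU solvers."""
    gpu = [f for f in features if f in GPU_SUPPORTED]
    cpu = [f for f in features if f not in GPU_SUPPORTED]
    return gpu, cpu

gpu, cpu = dispatch(["fluids", "rigid_bodies", "cloth"])
print(gpu)  # ['fluids', 'cloth']
print(cpu)  # ['rigid_bodies'] -- unsupported work still runs, just on the CPU
```

With that kind of split, a missing GPU feature would cost some performance, never correctness, which is why it can't explain a game refusing to enter hardware mode.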

I find all this rather circumstantial.
Besides, your article doesn't say "will probably never be GPU accelerated". It says: "however, the port of rigid bodies has not yet been made and this feature may not, for the moment at least, be accelerated by the GPU."
"For the moment" is something quite different from "probably never".
This topic has more info on it:
http://developer.nvidia.com/forums/index.php?showtopic=2649

It seems that only one feature was missing at the time of writing ("D6 joints only support angular drives. They do not support linear drives or SLERP drives").
PhysX SDK 3.0 is supposed to have full support.
 
This is the same situation as with 3DMark Vantage: because the code defaulted to PPU-acceleration anyway, it will now silently use the GPU instead. That is how the PhysX libraries normally work.

IIRC Vantage is only GPU accelerated after the PhysX driver installer has modified your Vantage installation by replacing the PhysX DLL... (heck, shouldn't be hard to make a good CPU score that way ;)
 
I find all this rather circumstantial.
Besides, your article doesn't say "will probably never be GPU accelerated". It says: "however, the port of rigid bodies has not yet been made and this feature may not, for the moment at least, be accelerated by the GPU."
"For the moment" is something quite different from "probably never".
This topic has more info on it:
http://developer.nvidia.com/forums/index.php?showtopic=2649

It seems that only one feature was missing at the time of writing ("D6 joints only support angular drives. They do not support linear drives or SLERP drives").
PhysX SDK 3.0 is supposed to have full support.

Indeed, it is supposed to be there in SDK 3.0, but the release has been postponed... and postponed. And even then it's not confirmed yet.
This one is more graphic: http://developer.nvidia.com/forums/index.php?showtopic=2836

So, as it currently stands, GPUs can't do rigid body simulations.
 
IIRC Vantage is only GPU accelerated after the PhysX driver installer has modified your Vantage installation by replacing the PhysX DLL... (heck, shouldn't be hard to make a good CPU score that way ;)

Well obviously.... the DLL bundled with Vantage would have been pre-GPU-support.
But that's no different from updating your DirectX or your videocard driver. It doesn't change the application itself, only one of the third-party API libraries it uses.
Heck, that DLL is updated anyway if you install a newer PhysX version, regardless of whether you use GPU, PPU or CPU for physics.
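The same point as a toy sketch: the game binds to an API, not to a particular implementation, so swapping the library underneath changes the backend without touching the application. (The class names here are invented, this isn't real PhysX code.)

```python
# Toy illustration: the application only calls the API; which "DLL"
# it gets is decided by what's installed, not by the app itself.

class OldPhysicsLib:        # stands in for the DLL bundled with Vantage
    def simulate(self):
        return "software solver"

class NewPhysicsLib:        # stands in for the updated PhysX runtime
    def simulate(self):
        return "GPU solver"

def application(physics_lib):
    """The 'game': calls the API, never asks which library it got."""
    return physics_lib.simulate()

print(application(OldPhysicsLib()))  # software solver
print(application(NewPhysicsLib()))  # GPU solver -- same app, new library
```

That's exactly why replacing the DLL is no more suspicious than updating DirectX or a video driver.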
 
So, as it currently stands, GPUs can't do rigid body simulations.

But there is a fallback for CPU, so you don't notice it isn't there. Hence it can't be the reason why something like CellFactor doesn't work.
And certainly your statement "will probably never be supported" is complete nonsense.
You even seem to agree that it will come in SDK 3.0, and act like you already knew that before. So why did you make that claim then?
 
Honestly, I thought AMD would have a greater advantage in pure, unadjusted FPS scaling. Just reading the article: they hit 99C on the middle HD6970 in Tri-CrossFire? WTF?
 
Honestly, I thought AMD would have a greater advantage in pure, unadjusted FPS scaling. Just reading the article: they hit 99C on the middle HD6970 in Tri-CrossFire? WTF?

The motherboard they used left no room for the middle card to draw in air. After they wedged some paper between the cards to give it a millimeter or two of room, it dropped to 90C.
 
Besides, that's poor design for the cooling shrouds: are there actually mainboards where you don't have to choke at least one of the cards?
Just reading: 1200 watts for a system with 3x GTX 580. Insane.
 
Besides, that's poor design for the cooling shrouds: are there actually mainboards where you don't have to choke at least one of the cards?
Just reading: 1200 watts for a system with 3x GTX 580. Insane.

Not ATX mobos I think, but isn't EATX bigger in both dimensions? :p

Anyway, there are apparently rubber studs on the HD6990's backside to prevent this from happening when you put two cards too close together.
 
Let's bump this oldie, this time Anandtech is at it:
http://www.anandtech.com/show/4254/triplegpu-performance-multigpu-scaling-part1/1

I took the liberty of putting his numbers into a graph and adding averages at the end:
[image: scaling.png]


(Note: These are scaling numbers, meaning Radeons (on red) are compared to single Radeon, and GeForces (on green) are compared to single GeForce)

Amazing. This way a card that starts at 20 fps in a single-GPU config and reaches 35 fps in a dual config can be painted in a much more glorious way than another card that runs at 70 fps single and 90 fps dual (due to a CPU bottleneck).
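The point in numbers, using the hypothetical figures above: relative scaling flatters the slower card even though it is far behind in absolute FPS.

```python
# Relative scaling vs. absolute FPS (numbers from the example above).

def scaling(single_fps, dual_fps):
    """Dual-GPU frame rate relative to a single GPU."""
    return dual_fps / single_fps

slow_card = scaling(20, 35)  # 1.75x  -> "great scaling"
fast_card = scaling(70, 90)  # ~1.29x -> "poor scaling" (CPU-bottlenecked)

assert slow_card > fast_card  # the slow card "scales" better...
assert 90 > 35                # ...while actually rendering far more frames

print(round(slow_card, 2), round(fast_card, 2))  # 1.75 1.29
```

Which is exactly why scaling-only graphs need the absolute numbers next to them.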
 
Amazing. This way a card that starts at 20 fps in a single-GPU config and reaches 35 fps in a dual config can be painted in a much more glorious way than another card that runs at 70 fps single and 90 fps dual (due to a CPU bottleneck).

Except that those numbers don't really happen at all

I ran the actual FPS numbers too (I excluded the 3-way CFX number from Wolfenstein in the average, as one setup scoring nothing there skews the whole average and gives a misleading picture):
[image: scaling-perf.png]
 