Multi-threaded PhysX benchmarks - bye bye GPU PhysX.

Discussion in 'PC Gaming' started by brain_stew, Mar 18, 2010.

  1. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    114
    Location:
    New Zealand
    If they are indeed limiting performance, as this seems to indicate, then doesn't this make their efforts insubstantial? Why buy a GPU for physics when you know the wind is blowing in the opposite direction? Even for developers, if they are seeing this kind of behaviour, wouldn't it make sense, from the perspective of >50% of their users, to use a physics implementation which works well for everyone? I.e. use Havok, because that way everyone gets a fair deal, even if it costs money.
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    Yes, that's the factor that is constantly ignored in these proposed schemes. If Nvidia is in fact hobbling the CPU intentionally then developers or competitors can simply roll better solutions. But that's not happening either.
     
  3. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    Well, from what I've read, when PhysX is used, Nvidia often does much of the coding for the developer. For many of these smaller studios, that means not only a free physics SDK, but also free development time. And if Nvidia is doing the GPU-accelerated coding for them, they may not be entirely aware of what is being done, just that it's completely free to them and that they get free marketing from Nvidia if it's additionally a TWIMTBP title.

    Take Batman: AA for example. There was CPU and GPU power to spare in that title, and even basic versions of some of the PhysX effects could have been added, but since Nvidia did all of the coding for it, there was no attempt to put in even basic non-interactive fog, from my understanding.

    Havok costs money. In a world of shrinking profits in the PC space, and thus probably fewer opportunities for smaller studios to secure big-budget funding, PhysX does provide an attractive situation where you can not only get your physics package for free, but perhaps also get physics effects coding for free.

    Looking at the list of titles using PhysX, there are only a few AAA devs on there; by and large, if a studio can afford to use Havok, they will use Havok if they don't wish to use an in-house engine.

    It'll be interesting to see whether DMM2 and/or Bullet start getting used by studios. But I'd imagine efforts using those won't be coming out for at least a year or two.

    Regards,
    SB
     
  4. Sxotty

    Legend

    Joined:
    Dec 11, 2002
    Messages:
    5,496
    Likes Received:
    866
    Location:
    PA USA
    I don't get why this gets people upset. It seems that optimization helps performance. It does for all configurations. The relative results are the same. Nothing appears surprising about it.
     
  5. brain_stew

    Regular

    Joined:
    Jun 4, 2006
    Messages:
    556
    Likes Received:
    0
    No they're not. In all previous "hardware PhysX" games, using the CPU was much slower, to the point that it was completely unplayable in every game that used it, no matter how fast your CPU was. However, in both this test and Metro 2033 (where multi-threaded PhysX is used), using a CPU is actually faster than using a GPU for anyone with just one GPU (i.e. the vast majority of users). That's a huge change.
     
  6. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Actually it doesn't; PhysX on the CPU could be much better.
     
  7. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
    #27 Arnold Beckenbauer, May 22, 2010
    Last edited by a moderator: May 22, 2010
  8. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    Looks like there are still some bugs with the GPU rendering, according to the few comments that are up at the second link. The CPU works correctly with a higher number of emitters, but the GPU doesn't.

    Wonder if anyone has gotten this to work with ATI + Nvidia GPU.

    Regards,
    SB
     
  9. Sxotty

    Legend

    Joined:
    Dec 11, 2002
    Messages:
    5,496
    Likes Received:
    866
    Location:
    PA USA
Hello GPU PhysX

    The results in Arnold's post are completely different from the prior results. They make GPU physics look way better than CPU physics. I guess brain_stew's jumping to conclusions was completely off base.

     
  10. Arnold Beckenbauer

    Veteran Subscriber

    Joined:
    Oct 11, 2006
    Messages:
    1,756
    Likes Received:
    722
    Location:
    Germany
    http://www.forum-3dcenter.org/vbulletin/showpost.php?p=8040131&postcount=73
    My results:
     
    #30 Arnold Beckenbauer, May 22, 2010
    Last edited by a moderator: May 22, 2010
  11. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    Nvidia Removes Hybrid PhysX Blockage

    Nvidia has removed the hybrid PhysX blockage in the recent 257.15 GeForce drivers. However, it is still a mystery to us whether it was done intentionally or is perhaps just a bug. "I'm seriously amazed. I've tested it myself on Windows XP and Windows 7 x64 - it really works out of the box with 257.15. And even more surprising - the timebomb issue is gone too."

    Timebomb issue ?
     
  12. DieH@rd

    Legend

    Joined:
    Sep 20, 2006
    Messages:
    6,387
    Likes Received:
    2,411
    Phenom II X2 550 @ default 3.1GHz
    Radeon 4670
    3GB RAM


    60k particles, cpu mode, 3 emitters = score 34
    60k particles, cpu mode, 31 emitters = score 59
     
  13. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    After some testing, a 3GHz Penryn is almost as fast as an 8800GT... when using async mode.

    The bottleneck seems to be an interop penalty as I went from 7fps/7sps to 1fps/45sps.

    There's only one explanation for this: the rendering is really heavy on the host side (CPU/bus).

    I'll test with an i7 soon, but I expect better performance than a GTX480 as a PPU.
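    The fps/sps split in these numbers is easier to follow with a toy model (my own sketch, not anything from the benchmark itself): in sync mode the renderer waits for each simulation step, so the two rates are locked together, while async mode lets each run at its own per-step cost. All the timing values below are made-up illustrations.

    ```python
    # Toy model of sync vs. async stepping. Times are seconds per
    # render frame (t_render) and per simulation step (t_sim).

    def sync_rates(t_render, t_sim):
        # One simulation step per rendered frame: fps and sps are identical.
        rate = 1.0 / (t_render + t_sim)
        return rate, rate  # (fps, sps)

    def async_rates(t_render, t_sim):
        # Simulation runs on its own thread; each rate is limited only
        # by its own per-step cost.
        return 1.0 / t_render, 1.0 / t_sim  # (fps, sps)

    print(sync_rates(0.125, 0.125))    # (4.0, 4.0)  - locked together
    print(async_rates(0.125, 0.0625))  # (8.0, 16.0) - decoupled
    ```

    The model also shows why a result like 1fps/45sps isn't paradoxical: once decoupled, the simulation rate can climb even while the render thread is starved.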
     
  14. Sxotty

    Legend

    Joined:
    Dec 11, 2002
    Messages:
    5,496
    Likes Received:
    866
    Location:
    PA USA
    I hope so. That will be great news for physics calculations in general if there are a variety of ways to get fast calculations.
     
  15. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
    Nvidia has reversed the blocking of physics with ATI cards from previous driver versions.

    I guess we'll see in the next couple of versions whether this omission remains.
     
  16. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    Unless the cause is really bad efficiency from the start, which the async result suggests.

    If I'm not too stupid, even a 1000x faster "PPU" would bring next to zero speedup if most of the performance hit is due to really heavy interop.

    45 -> 7sps with 1 -> 7fps means the time taken to transfer data to and from the GPU is about equal to the CPU time required to simulate the ~60k particles, so even an infinite amount of computing power would never translate to more than twice the simulation speed (perhaps 4x, as I only have PCI-E 1.1 at the moment).
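    That ceiling can be written down directly. As a quick sketch of the Amdahl-style bound described above (my own illustration, with made-up times): however fast the "PPU" simulates, the per-step interop cost stays.

    ```python
    def max_speedup(t_sim, t_interop):
        """Upper bound on the speedup from offloading simulation, when the
        per-step interop (transfer) cost t_interop cannot be reduced.
        Even an infinitely fast GPU still pays t_interop every step."""
        return (t_sim + t_interop) / t_interop

    # Interop time roughly equal to CPU simulation time -> at most 2x:
    print(max_speedup(1.0, 1.0))  # 2.0

    # Halving the interop cost (e.g. a faster bus) only lifts the
    # ceiling to 3x:
    print(max_speedup(1.0, 0.5))  # 3.0
    ```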


    Now, what if they used Compute Shaders in place of CUDA? No interop penalty, leading to better performance, and available to both vendors. I clearly see no future for PhysX if it doesn't improve.
     
  17. PSU-failure

    Newcomer

    Joined:
    May 3, 2007
    Messages:
    249
    Likes Received:
    0
    Some more results, with 10k particles so that the warm-up isn't too long and in 800x600:

    Code:
    Emitters      Sync          Async
                  fps    sps    fps    sps
    1             21     21     237    20
    2             41     41     35     58
    4             52     52     2      261
    8             63     63     1      760
    16            71     71     1      820
    
    Remember, that's a stock E8400, and the simulation is WAY faster with 8-16 emitters, or in other words, when communication with the GPU becomes difficult due to many threads stealing CPU time from the graphics driver.

    The simulation is OK, so that's not a bug, just proof that PhysX has an insane interop penalty.
     
  18. Kej

    Kej
    Newcomer

    Joined:
    Oct 27, 2007
    Messages:
    14
    Likes Received:
    0
    Location:
    Sweden
    It's a bug, not a feature!

    Source

     
  19. Fox5

    Veteran

    Joined:
    Mar 22, 2002
    Messages:
    3,674
    Likes Received:
    5
    Nvidia needs to make physx into a technical solution, not just marketing.

    They should bundle it under some kind of SLI-lite branding, at least for any cards less capable than the Fermi series.
    It should have full multithreading support whether or not an Nvidia GPU is used.
    It should allow a secondary card to be dedicated to PhysX even with an ATI card in place. A CrossFire system running a secondary Nvidia card is better than no Nvidia card at all.

    If PhysX were more of a value-add rather than an "Nvidia proprietary platform", Nvidia might have had success with it. It would be back to what Nvidia does best: offering slightly higher framerates in the same software its competitors can run.
     
  20. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    While Nvidia keeps pulling unpardonable crap like this (plus locking common features like AA to their own cards, etc) I won't be throwing any more money their way.

    I've paid for my NV boards, and thus have the right to run their shitty physx junk on 'em, regardless of what GPU actually displays the graphics. There's no excuse.

    Nvidia's just shooting themselves in the foot. They've been insufferable liars and bullies ever since they became the leading 3D vendor, and I for one am not putting up with it any more.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.