No PhysX on OpenCL soon

Discussion in 'Graphics and Semiconductor Industry' started by neliz, Dec 4, 2009.

  1. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    Why do you think PhysX performance is bad on PCs? It's slow when trying to do stuff that GPUs can do, but so is every other CPU-based physics engine.

    I highly doubt that running on a single core will doom an API. I'm sure many games out there only allocate one thread for physics calcs, with other threads used for other things: AI, scene construction, etc. The toolset, feature set, ease of use, extensibility, etc. are all more important than how many cores it runs on.
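
    To make that arrangement concrete, here is a minimal sketch (hypothetical names, synchronization omitted for brevity) of a game loop that keeps physics on one dedicated worker thread while the main thread handles everything else:

    ```cpp
    // One dedicated physics thread; AI, scene construction, etc. stay on the main thread.
    // World, stepPhysics and updateAIAndScene are illustrative stand-ins, not a real engine API.
    #include <atomic>
    #include <thread>

    struct World { /* bodies, contacts, ... */ };

    void stepPhysics(World&, float /*dt*/) { /* integrate, solve contacts */ }
    void updateAIAndScene(World&, float /*dt*/) { /* everything that isn't physics */ }

    int main() {
        World world;
        std::atomic<bool> running{true};
        const float dt = 1.0f / 60.0f;

        // Physics gets exactly one thread, as described above.
        // NOTE: a real engine would synchronize access to world; omitted here for brevity.
        std::thread physics([&] {
            while (running.load()) stepPhysics(world, dt);
        });

        for (int frame = 0; frame < 600; ++frame)   // roughly ten seconds of frames
            updateAIAndScene(world, dt);

        running.store(false);
        physics.join();
    }
    ```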
     
  2. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,802
    Likes Received:
    473
    Location:
    Torquay, UK
    Running on a single core in Pentium MMX compatibility mode will. As you know, physics tasks are parallel in nature and can get a very good speed boost from vectorized code.
    I wouldn't mind buying a GeForce as my PhysX accelerator if that were possible (with another company's card as my main 3D accelerator), and if it gave me tangible and honest reasons to do so, but still having the option to run everything on the CPU in an optimal fashion is a must for a good multiplatform API.
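
    To illustrate the vectorization point, here is a minimal sketch (illustrative names only) of the same particle integration written once as plain scalar code and once with SSE intrinsics, which process four floats per instruction:

    ```cpp
    // Scalar vs. SSE integration of particle positions; names are illustrative.
    #include <immintrin.h>
    #include <cstddef>

    void integrate_scalar(float* pos, const float* vel, std::size_t n, float dt) {
        for (std::size_t i = 0; i < n; ++i)
            pos[i] += vel[i] * dt;                     // one element per iteration
    }

    void integrate_sse(float* pos, const float* vel, std::size_t n, float dt) {
        const __m128 vdt = _mm_set1_ps(dt);
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {                   // four elements per iteration
            __m128 p = _mm_loadu_ps(pos + i);
            __m128 v = _mm_loadu_ps(vel + i);
            _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
        }
        for (; i < n; ++i)                             // scalar tail for leftovers
            pos[i] += vel[i] * dt;
    }
    ```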

    If I can bring a somewhat similar analogy here: we have Battleforge with SSAO in DX10 and DX11. Both produce exactly the same picture, but the DX10 implementation is slower due to hardware/API limitations. Someone with a DX11 card will get better performance thanks to CS5.0, yet a person with an older DX10 card can still get the same effect, only slower. If he wants better performance he can choose a faster DX10 card (a 2nd card, a 3rd card) and still improve the software's performance, or jump to DX11. The SSAO implementation is not crippled to run on only half the shaders of your GPU or anything like that.
    With PhysX you can't get more cores to improve your experience; you can only get a GeForce on PC... This is :cry:

    On the other hand, with OpenCL Bullet Physics you will be able to utilize your GeForces with an OpenCL driver, or Radeons, or simply fall back to a CPU path with multicore support.
    If only PhysX would move to OpenCL, all my objections would be obsolete. :wink:
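
    For what that GPU-or-CPU fallback could look like in practice, here is a minimal sketch using the standard OpenCL C API (error handling trimmed, nothing Bullet-specific): ask for a GPU device first, then settle for the CPU device.

    ```cpp
    // GPU-first, CPU-fallback device selection via the OpenCL C API.
    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_platform_id platform;
        cl_uint count = 0;
        if (clGetPlatformIDs(1, &platform, &count) != CL_SUCCESS || count == 0) {
            std::fprintf(stderr, "no OpenCL platform found\n");
            return 1;
        }

        cl_device_id device;
        // Prefer a GPU (a GeForce or Radeon exposing an OpenCL driver)...
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
            // ...otherwise fall back to the multicore CPU device.
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, nullptr) != CL_SUCCESS) {
                std::fprintf(stderr, "no usable OpenCL device\n");
                return 1;
            }
        }

        char name[256] = {};
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
        std::printf("physics would run on: %s\n", name);
        return 0;
    }
    ```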
     
  3. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    I really don't follow your reasoning. Are you saying that because PhysX on the CPU can't keep up with PhysX on the GPU, it's worse off than other CPU-only APIs? That doesn't really make any sense to me.

    Bullet Physics does present an interesting alternative option for hardware accelerated physics but we'll have to wait until it matures a bit (and for IHV OpenCL support to mature) before we can make a comparison.
     
  4. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,802
    Likes Received:
    473
    Location:
    Torquay, UK
    This is only true because they choose to do so, not because GPUs are so much better at doing it.

    I agree with the Bullet point, though.
     
  5. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    Well, we're back to square one. You're stating something with no supporting evidence, which is what started this whole discussion. Until you can point to another physics engine that has better performance than PhysX, those assertions don't hold any water.
     
  6. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Just because something is relatively good doesn't make it good.
     
  7. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    When it comes to technology it does. The merits of any technology by definition are relative - that's why everything has to be "better" than the previous solution else it's crap.

    Was R600 bad, or just bad relative to G80? Are SSDs good, or just good relative to spinning platters? Etc., etc.
     
  8. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    But the difference here was that R600 was ATi's best shot (at that moment in time). The CPU implementation in PhysX is clearly not Nvidia's best shot.
     
  9. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    Maybe not, but why does that matter? Depending on the resources and efficiency of a given firm, they could expend less effort than needed to match or beat the competition's products. Does that make their products any worse because they didn't put their best foot forward? I must admit, you guys come up with some interesting angles to try to beat this stuff down :)

    Effort is meaningless; it's only the result that matters. Case in point: Intel's less-than-aggressive clockspeeds on their parts. They could have clocked them higher, but for what reason? They were still fast enough at the lower speeds and better than the competition.
     
  10. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    But again, that's not the case here. Nvidia purposely "cripples" the CPU implementation because they want their GPU implementation to look that much better. It is not because they lack the resources to do so. And it is certainly not because they lack competition, considering most games do not use PhysX (I would argue PhysX is actually "losing" in this regard).

    And I personally don't have a problem with Nvidia doing this. It's a company; they are just trying to "pay the bills". Having said that, as a community we should be striving/pushing for something (a little bit) better than PhysX.
     
  11. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    But what I'm saying is that it doesn't matter whether it's crippled or not. That's something most consumers would never know or care about. The only thing that matters is performance, and I haven't seen any evidence of Havok, for example, being faster than PhysX.

    Yeah, but why target PhysX specifically for improvement when all other physics engines are in the same boat feature/perf-wise?
     
  12. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Ah the ole "but most people won't care/notice" card. I don't see how that's relevant to the discussion. I already said I don't have a problem with Nvidia doing it (and as you pointed out, the masses don't appear to notice). But as an (educated) community we should desire more (I don't assert this community is in any way the majority).

    Who says PhysX is being singled out? I'm merely stating that Nvidia has the power to make the CPU implementation of PhysX (a lot) better. The same could be applied to other physics APIs.
     
  13. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,802
    Likes Received:
    473
    Location:
    Torquay, UK
    What metrics do you propose to use?

    Anyway, this is a moot point. A game dev who has used more than one SDK would need to step in and explain the pros and cons of each implementation.
     
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    The discussion in this thread is how PhysX fares in comparison with other physics libraries. We weren't discussing whether or not Nvidia is trying as hard as they could, or whether customers care that they aren't.

    @Lightman, yes agreed.
     
  15. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Actually, I believe the discussion was why PhysX is being used more efficiently on consoles than on the (CPU implementation on the) PC. Your answer to this question was that the premise of the question itself was incorrect (i.e. your point was that, comparatively speaking, PhysX is plenty efficient on the PC).

    I disagree with your answer. It is fair to say the CPU implementation of PhysX is rather inefficient. It's designed to be that way. You may accept this effort by Nvidia as reasonable (and that's perfectly fine). But surely you can see why others are not happy with the current environment.
     
  16. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    Based on which metric, though? Inefficiency would imply that PhysX uses more resources to accomplish the same task compared to other similar physics solutions, or alternatively that it uses the same resources but accomplishes less. Is there evidence of either scenario out there?
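
    The kind of evidence that would settle it is a like-for-like measurement: the same scene stepped on the same hardware by each engine, with wall-clock time per step recorded. A minimal sketch of such a harness, where stepEngineA/stepEngineB are hypothetical stand-ins for the real SDK calls:

    ```cpp
    // Apples-to-apples timing of two physics engines on the same workload.
    // The step functions are stubs; a real harness would call into each SDK.
    #include <chrono>
    #include <cstdio>

    void stepEngineA(float dt) { (void)dt; /* stand-in: engine A's world step */ }
    void stepEngineB(float dt) { (void)dt; /* stand-in: engine B's world step */ }

    template <typename StepFn>
    double msPerStep(StepFn step, int steps, float dt) {
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < steps; ++i) step(dt);
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count() / steps;
    }

    int main() {
        const float dt = 1.0f / 60.0f;
        std::printf("engine A: %.3f ms/step\n", msPerStep(stepEngineA, 1000, dt));
        std::printf("engine B: %.3f ms/step\n", msPerStep(stepEngineB, 1000, dt));
    }
    ```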
     
  17. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,051
    Likes Received:
    5,003
    Based on the fact that they are fully capable of making it multi-threaded (if console rumors are anything to go by) and fully capable of doing at least minimal optimisations per platform (again, if console rumors are anything to go by), and yet are plainly opposed to doing anything of the sort for the PC platform, because it would erode the marketing impact of GPU <-> CPU PhysX comparisons.

    Regards,
    SB
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,428
    Likes Received:
    426
    Location:
    New York
    What people think they can do is sorta irrelevant if their current product is competitive - "fast enough but not as fast as we think you can be" is not a valid criticism.
     
  19. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,051
    Likes Received:
    5,003
    "Fast enough" is just as much guesswork as "slower than the competition but free". And as Havok is paid and isn't exactly cheap, I still hold that PhysX is likely to be noticeably slower than Havok... Either way, no one will ever know for sure, as I'm sure there are masses of NDAs signed to make sure performance numbers are never released.

    Regards,
    SB
     
    #39 Silent_Buddha, Dec 8, 2009
    Last edited by a moderator: Dec 8, 2009
  20. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26