Havok Q&A @ GameSpot: GPUs for physics

Discussion in 'GPGPU Technology & Programming' started by SlmDnk, Oct 27, 2005.

  1. Fodder

    Fodder Stealth Nerd
    Veteran

    Joined:
    Jul 12, 2003
    Messages:
    1,112
    Likes Received:
    9
    Location:
    Sunny Melbourne
    If physics takes off, the same will apply. You're still better off with a chip that can do something else constructive when physics processing isn't required.
     
  2. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    9
    Location:
    new jersey
The problem is that these cards use a lot of power and put out a lot of heat.

    Sure, I might be able to get some power out of an older card. But depending on people's buying habits, that card may be two or three generations old, or may already have been sold to help pay for the new upgrade.

    I would much rather have a new card in my PC that is built for physics and, hopefully, is passively cooled.


    I doubt anyone will want to keep a second X1800 XT 512 MB card in their PC just to do physics, alongside another, even more powerful card that will most likely produce tons of heat and draw tons of power.
     
  3. pc999

    Veteran

    Joined:
    Mar 13, 2004
    Messages:
    3,628
    Likes Received:
    31
    Location:
    Portugal
Heat would be a problem, but I still think someone would be better off with an X1800 plus an X1300/600 for physics and GPGPU than with only a physics card. To be fair, some benchmarks or something like that would be useful. In any case, I think a multipurpose card (since it could also do sound, graphics, AI (according to some) or whatever) would be my preference.
     
  4. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    9
    Location:
    new jersey
I don't see why you think that.

    Current graphics cards have tons of hardware that won't be useful for physics, like the ROPs, texture units, video acceleration and other hardwired units.

    I would expect a physics chip to be on par with the current high-end video cards, but I guess we'll have to wait and see.
     
  5. IgnorancePersonified

    Regular

    Joined:
    Apr 12, 2004
    Messages:
    778
    Likes Received:
    18
    Location:
    Sunny Canberra
Would a PPU-style card be useful for AI or video transcoding? That'd be interesting.

    Fodder, the same argument applies to multicore chips, PPUs and GPUs alike. For most of the time my system is on, the 3D parts of the graphics chip are idle; so are most of the execution units in a dual-core rig (let alone quad-core), so would a PPU be, and so are the sound card (if I had one, which I don't at the moment), the FireWire ports, the USB ports, 512 MB+ of RAM, etc. Most of those are currently integrated onto the motherboard, but there's nothing stopping a PPU from going the same way. I actually thought Nvidia was going somewhere with the APU on the south bridge of the nForce 1 and 2, and I was hoping they would expand on that concept; "Platform Processor" was the term in the marketing PDFs, I think. I bet they will now. Putting a PPU-style unit into a south bridge actually makes a lot of sense and, at least on the AMD side, gives chipset makers the ability to add extra features beyond the usual stuff. It's at least as useful as a disabled integrated video option.

    I would rather buy a PPU that is used for the small amount of time I game (small as a percentage of hours per day, about 3-4 hours a day), particularly if it is indeed a coprocessor like the AMD forward-looking statements indicate, or sits in the south bridge, or is an add-in card. Certainly if, and we don't have any benchmarks, the performance of a dedicated unit were far above that of a hack job done on either a general-purpose unit or something that takes resources from a video card. Let's face it: SLI or CrossFire or whatever is never going to hit as high a penetration rate as people are making out, plus it needs an SM3+ card from what I read. Is this going to be another subject for dynamic branching arguments? God, I hope not. If it is, then maybe the first generation of SM3 cards won't be that useful at all. My new system only has one 16x PCI-E slot, and so will my next one. While it's interesting in principle, if it detracts from my frame rate I'm not going there, and there had better be an option to turn that shit off in the "Havok panel".

    My "old" video cards go out to other people. Not everyone is like me, but why use a small fraction of a card's capability when it can be used to its fullest?
     
  6. pc999

    Veteran

    Joined:
    Mar 13, 2004
    Messages:
    3,628
    Likes Received:
    31
    Location:
    Portugal
I agree, without benchmarks I think it's hard to form definitive opinions. But I think we should also take into consideration what they are doing with the next-gen PPU (the answer is there, IMO): whether or not it is programmable (it probably will be, I guess). If it is, then a "GPPPU" is possible, and I really think that would create a big problem for the industry, or at least for users. Anyway, I always think that having the option to do more things is good, but without more data on both the PPU and the real GPGPU ability of these chips (whether it really makes a difference) it's hard to say more.

    But I guess future GPUs will be better for physics. As far as I remember, JC said that ATI had told him that graphics are "done", so I expect them to move to designs more suitable for physics, GPGPU, etc. Once more, time will tell.
     
  7. soylent

    Newcomer

    Joined:
    May 4, 2005
    Messages:
    165
    Likes Received:
    8
    Location:
    Sweden
    One issue I never see mentioned is physics quality. Look at HL2, how often does physics get b0rked when you do more than push around a few objects at a time?

Just by adding a few constraints, things tend to oscillate and drift away from what should be an equilibrium, and so on. Put a small object on top of another and then push the lower object upwards: you expect the object on top never to sink through the lower one unless the lower object is wafer thin and the velocities are high, but that happens all the time in HL2, even with a foot-thick door and not overly fast movement. It's really fugly, and it's obvious that the number of iterations used to resolve forces and collisions could be much improved.

For single player, all that would happen is that you would retain the same level of physics quality we have today, and with a PPU you would experience less funky stuff. In a multiplayer environment you often DON'T have physics on the client at all, except for things like ragdolls. Physics is resolved entirely on the server, which is what makes playing HL2DM so painful. With a PPU you could afford to do physics prediction on the client and then receive periodic corrections from the server, as is done with player positions when the server and client get too far out of sync with each other. Online play is already inconsistent; improving it where possible, for those who have the hardware or are willing to take a framerate hit and do it on the CPU, is no more unfair than playing any current game on a higher-end system than others.
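    Roughly, what I have in mind on the client side is something like this (completely made-up types and names, not Source or Havok code): predict locally every frame, then blend toward the server's authoritative state whenever a snapshot arrives instead of snapping to it.

    ```cpp
    // Hypothetical sketch of client-side physics prediction with server reconciliation.
    struct Vec3 {
        float x = 0, y = 0, z = 0;
    };

    Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    struct RigidBodyState {
        Vec3 position;
        Vec3 velocity;
    };

    // Advance the local (predicted) simulation one step. A real engine would run
    // its full solver here; this only integrates gravity and velocity.
    void predictStep(RigidBodyState& body, const Vec3& gravity, float dt) {
        body.velocity.x += gravity.x * dt;
        body.velocity.y += gravity.y * dt;
        body.velocity.z += gravity.z * dt;
        body.position.x += body.velocity.x * dt;
        body.position.y += body.velocity.y * dt;
        body.position.z += body.velocity.z * dt;
    }

    // When an authoritative snapshot arrives, pull the predicted state toward it
    // rather than snapping, so small divergence is corrected smoothly.
    void applyServerCorrection(RigidBodyState& predicted,
                               const RigidBodyState& serverState,
                               float blendFactor /* 0..1 per snapshot */) {
        predicted.position = lerp(predicted.position, serverState.position, blendFactor);
        predicted.velocity = lerp(predicted.velocity, serverState.velocity, blendFactor);
    }
    ```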

    PPUs could also be very good for game servers themselves, as they have to deal with far more physics than the clients ever will: the server has to handle all the physics on the entire map, while a client would only ever deal with what can be seen.

    With a PPU you could have very nice quality physics at a better framerate, versus lower-quality CPU physics at higher CPU usage. CPU usage for physics tends to be very sporadic: a huge bomb goes off and suddenly one frame takes 100 ms as a bunch of stuff is subjected to very high forces that need a lot of iterations, etc. I see a lot of potential for PPUs to reduce that kind of sporadic CPU usage.
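    For what it's worth, the usual way to stop that kind of spike from stalling a whole frame on the CPU is a fixed physics timestep with a cap on substeps per frame; a generic sketch (not how Havok or any PPU driver actually does it):

    ```cpp
    // Generic fixed-timestep loop with a substep cap, so a physics spike degrades
    // simulation accuracy for a moment instead of blowing the frame out to 100 ms.
    void stepPhysics(float frameDt, void (*simulate)(float dt)) {
        static float accumulator = 0.0f;
        const float fixedDt     = 1.0f / 120.0f;  // physics tick
        const int   maxSubsteps = 8;              // hard cap on physics work per frame

        accumulator += frameDt;
        int substeps = 0;
        while (accumulator >= fixedDt && substeps < maxSubsteps) {
            simulate(fixedDt);
            accumulator -= fixedDt;
            ++substeps;
        }
        // If the cap was hit, drop the leftover time: the simulation slows down
        // briefly rather than the whole frame hitching.
        if (substeps == maxSubsteps)
            accumulator = 0.0f;
    }
    ```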
     
  8. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
    That is an assumption that I don't believe has any merit.

    First of all, Ageia is not pushing higher-quality physics as an application for their processor, they are pushing more physics. I have yet to see a demo where they purport to solve the problems of multiple interacting physics objects.

Secondly, the problem that you describe is much more of a software development issue than a computational issue. That is to say, it is simply very challenging to simulate objects in contact with one another. It's very easy to simulate objects in contact with rigid surfaces, and also very easy to deal with objects bouncing off of one another. It only becomes very difficult when simulated objects are resting, for example, on top of one another.

    One way to look at the problem is this. Consider that a physics engine is built from the ground-up to support colliding objects. It is meant to do a good job of simulating objects bouncing off of rigid surfaces and one another. Now you place two of these objects in contact. If they are in contact, then they are continually colliding: the engine wants to calculate a new collision every frame, but (for example), gravity keeps pushing the object on top back down. So you get the object on top vibrating.

    In the meantime, it is unlikely, due to numerical issues, that this object is vibrating straight up and down: it's probably got some very small horizontal momentum. So, it will start to move off of the object on bottom, and the object on bottom will start to be pushed in the opposite direction.

    What you really need to deal with this is some sort of data structure system that can efficiently deal with creation and destruction of objects: you want to be able to combine two objects into one single object when they come into prolonged contact (one object is destroyed). Then, you want to have a good measurement of when this compound object should be broken apart.

    This doesn't necessarily take much of any more resources to calculate, but it does mean that you have to have a much more sophisticated data structure for your physics objects, if you want to be efficient about memory accesses.
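    In very rough, hypothetical pseudocode (not from any shipping engine), the bookkeeping might look like this: track persistent low-velocity contacts, merge the bodies into one resting group after enough frames, and dissolve the group when a strong impulse comes in.

    ```cpp
    // Hypothetical bookkeeping for merging resting bodies into compound groups.
    // Real engines use contact caching, simulation islands and sleeping; this only
    // sketches the idea.
    #include <vector>

    struct Body {
        int   groupId = -1;   // -1 means the body is simulated on its own
        float speed   = 0.0f; // velocity magnitude, updated by the solver
    };

    struct ContactPair {
        int bodyA, bodyB;
        int framesInContact = 0;
    };

    constexpr int   kMergeFrames = 30;     // "prolonged contact" threshold
    constexpr float kRestSpeed   = 0.05f;  // slow enough to count as resting
    constexpr float kWakeImpulse = 1.0f;   // impulse that breaks a group apart

    // Called once per frame for each persistent contact the broad phase reports.
    void updateContact(ContactPair& c, std::vector<Body>& bodies, int& nextGroupId) {
        Body& a = bodies[c.bodyA];
        Body& b = bodies[c.bodyB];

        if (a.speed < kRestSpeed && b.speed < kRestSpeed)
            ++c.framesInContact;
        else
            c.framesInContact = 0;

        if (c.framesInContact >= kMergeFrames && a.groupId == -1 && b.groupId == -1) {
            // Merge: treat the pair as one resting compound from now on, so the
            // solver stops generating a fresh collision between them every frame.
            a.groupId = b.groupId = nextGroupId++;
        }
    }

    // Called when an external impulse hits a body; a strong hit dissolves the group.
    void applyImpulse(Body& body, float impulseMagnitude, std::vector<Body>& bodies) {
        if (impulseMagnitude > kWakeImpulse && body.groupId != -1) {
            int g = body.groupId;
            for (Body& other : bodies)
                if (other.groupId == g)
                    other.groupId = -1;   // break the compound apart, resume normal simulation
        }
    }
    ```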

    Edit:
    And then, once you've built a data structure that solves the problem of physics objects coming to a stop in contact with one another, you have the other problem of dealing with interacting objects moving slowly but still in contact. You definitely don't want vibrations in this situation, either (imagine a ball rolling on a triangle-shaped block of wood that is on a slick surface).
     
  9. soylent

    Newcomer

    Joined:
    May 4, 2005
    Messages:
    165
    Likes Received:
    8
    Location:
    Sweden
http://www.ode.org/ode-0.039-userguide.html#ref70 is a good guide to the issues the ODE physics engine suffers from (sections 11 and 12), the options it has, and the things to avoid. I find it likely that Havok and Novodex at the very least support using smaller time steps (preferably per object or per cluster of objects, so that an explosion and the objects affected by it can receive many more iterations per frame than a stack of crates tipping over), and possibly also varying levels of contact-force approximation and so on. If you are only simulating a small number of objects and using a PPU, it makes sense for the developer to crank the quality up very far if possible.
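    Purely as illustration (I have no idea how Havok or Novodex actually expose this), per-cluster sub-stepping could look something like the following, with the substep count picked from how violently each cluster is moving:

    ```cpp
    // Illustrative per-cluster sub-stepping: clusters under violent motion get more,
    // smaller steps than clusters that are barely moving. Hypothetical API, not
    // Havok or Novodex code.
    #include <algorithm>
    #include <vector>

    struct Cluster {
        float maxSpeed = 0.0f;   // fastest body in the cluster this frame
        // ... bodies, constraints, contacts ...
    };

    // Placeholder for the real per-cluster solver step.
    void simulateCluster(Cluster& c, float dt) { (void)c; (void)dt; }

    void stepClusters(std::vector<Cluster>& clusters, float frameDt) {
        for (Cluster& c : clusters) {
            // Heuristic: the faster the cluster, the more substeps it gets, clamped.
            int substeps = std::clamp(static_cast<int>(c.maxSpeed / 5.0f), 1, 16);
            float dt = frameDt / static_cast<float>(substeps);
            for (int i = 0; i < substeps; ++i)
                simulateCluster(c, dt);
        }
    }
    ```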
     
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
Just throwing more processing power at it won't really help things much. You need fundamentally new ways of doing the physics calculations for objects in continual contact. I glanced through it pretty quickly, but I didn't see anything there that deals with objects in continual contact (that weren't originally designed by the artist to be so).
     
  11. Lux_

    Newcomer

    Joined:
    Sep 22, 2005
    Messages:
    206
    Likes Received:
    1
    IMHO plain old friction fits right in.
     
  12. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
    Sure, but that's a fundamentally different way of doing things than collision-based calculations. And it's not an easy thing to actually do on a computer.
     
  13. soylent

    Newcomer

    Joined:
    May 4, 2005
    Messages:
    165
    Likes Received:
    8
    Location:
    Sweden
Interpenetration issues clearly have a lot to do with time step length. Issues with things starting to oscillate when a lot of constraints are put on them (such as springs, rigid ropes and axes) ought to benefit greatly from a reduced timestep and more accurate solving.

    These are the main quality issues I have experienced in HL2, and fixing them is much more important than the mostly cosmetic jittering of surfaces in continuous contact.
     
  14. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
Note that the "jittering" is a best-case scenario. Objects in constant contact can also be numerically unstable: the more collisions there are, the more numerically unstable things become. This scenario really needs smarter programming, not more processing power.
     
  15. Fred

    Newcomer

    Joined:
    Feb 18, 2002
    Messages:
    210
    Likes Received:
    15
I've said it before, but physics is not something readily helped by parallelization; you can think of a million places where a hardware solution is worse than a software one.

    Rigid-body, elastic scattering/collisions is the one area where it might work, but even then it's not clear how much it buys you, as you will need a tremendous number of fallbacks to make things look acceptable.

    It's just a bad idea in general and will lead to lazy programming and bad physics engines.
     
  16. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    9
    Location:
    new jersey
Not for nothing, but aren't the physics engines already bad?
     
  17. wireframe

    Veteran

    Joined:
    Jul 14, 2004
    Messages:
    1,347
    Likes Received:
    33
Isn't part of the problem that current physics implementations have to limit themselves, due to CPU processing power, by creating lots of exceptions? These exceptions must play havok (I had to...) with the stability of the model in certain cases. With more processing power you should be able to use a better and more universal model, and that has to count for something in terms of stability.
     
  18. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
    Sure it is. You just break up the simulated area into multiple separate areas that are non-interacting (an easy way to do this is, for example, to use existing portal-based systems for the separation). Add in some conditions for overlapping regions, and you're set.
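    As a rough sketch (hypothetical types, and glossing over the overlapping-region conditions), each region then becomes an independent simulation job that could run on its own thread or processor:

    ```cpp
    // Sketch of region-level parallelism: bodies are bucketed into non-interacting
    // regions (e.g. via an existing portal system) and each region is solved
    // independently. Hypothetical types throughout.
    #include <thread>
    #include <vector>

    struct Body { int regionId = 0; /* state ... */ };

    struct Region {
        std::vector<Body*> bodies;
    };

    // Placeholder for an ordinary single-region solver step.
    void solveRegion(Region& region, float dt) { (void)region; (void)dt; }

    void stepWorld(std::vector<Region>& regions, float dt) {
        std::vector<std::thread> workers;
        workers.reserve(regions.size());
        for (Region& r : regions)
            workers.emplace_back([&r, dt] { solveRegion(r, dt); });
        for (std::thread& t : workers)
            t.join();
        // Bodies that crossed a portal this step would be handed off between
        // regions here; objects straddling a boundary need the special-case
        // overlap handling mentioned above.
    }
    ```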

    What's bad in general?
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,921
    Likes Received:
    221
    Location:
    Seattle, WA
    Er, you make use of the exceptions to improve the stability of an incomplete universal model. This isn't really going to change, because there are always going to be precision issues due to the approximations used in the physics processing.
     
  20. wireframe

    Veteran

    Joined:
    Jul 14, 2004
    Messages:
    1,347
    Likes Received:
    33
OK, but what I meant was that surely physics in games today is very limited, and limited to points of interest in the game ("this explosion happens and this debris is supposed to fall in this area"). When you "play" with this and break it, you get unexpected results. Or, in the case of the two sheets of metal example, is it possible that they were never "taught" how to interact with each other because that was outside the scope of the exercise, and the devs found the compromise acceptable in order to achieve a certain level of performance?

    When you have massive power dedicated to the task you should be able to build a more unified and complete model, without exceptions per area or per item, and that should improve stability and make development more manageable. (Note that I don't just mean the calculations and their precision, but the "big picture": being able to think of everything as an object that acts in physical space instead of "oh, this exception happens here, here and here, but not here.") It should be more predictable.
     