GeForce PhysX for all GF8/9 with FW175.16

Discussion in '3D Hardware, Software & Output Devices' started by AnarchX, Jun 25, 2008.

  1. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,670
    I guess the question now is whether Nvidia can release drivers that allow for higher fps when using a second GPU to do PhysX. At the moment it does seem a bit low, but if it can be improved, then all the better, I'd say.

    US
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    That's hardly unique to physics. The same could be said for DX9, DX10, HDR, etc.

    Listening to some of you guys, it seems a lot of you think physics in games shouldn't improve beyond what we have today... a strange perspective, to say the least.
     
  3. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    I am having trouble following some posters.

    If you feel that GPU physics is a detriment, well, you don't have to use it. Or you could use a second GPU for physics only -- great value for older products that would otherwise be collecting dust. A simple way to look at it: it's like any other feature -- if the game has enough performance with it on, I'll use it for the added dynamic gameplay. And as with any feature, there are ways of gaining more performance through future upgrades, SLI, etc. I personally think it is impressive that a GPU can offer 3D and physics at the same time for end-users -- it adds value and a feature.

    What's not to like? Really?
     
  4. fbomber

    Newcomer

    Joined:
    Jun 9, 2004
    Messages:
    156
    Likes Received:
    17
    I'm not following either. Many people are posting about how physics on the GPU is useless, or how it adds just debris to the graphics.

    But from what I've seen in the many videos of the games already accelerated, those same people either didn't play a game with GPU PhysX enabled, or didn't bother to watch the videos and look at the immersion.

    Again, their opinion could stay the same after they do either of the two. But, FOR ME, the extra immersion is incredible, and I think it's worth it, FOR ME, to buy a 9800GTX just for PhysX. If someone's 8800GTS doesn't have enough juice to spare for new effects, it's really not my problem. Games are generally maxed out for those who have the high end. Those with lower-end cards have to lower their settings; it's really good to have the option to choose between lowering graphics or physics, adapting the game to the user's taste.

    Some people in this thread are overgeneralizing their opinion, trying to make it everyone's; and worse, seeing that they don't have the hardware to handle the extra physics, they don't want those who do have it to enjoy it. Some even have a PhysX-capable GPU but don't think the added effects are worth the performance drop. It's really simple: if you're happy with the CPU physics you have now, stay with the settings you play at, because either your computer will not be able to run the game adequately (the CPU can't keep up with the added PhysX workload), or you don't like the fps drop that comes with your PC config when enabling extra physics on your GPU (because you don't have a dedicated GPU for PhysX, although it stays playable).

    It would be really cool if PhysX takes off and gets supported by many more games. Those with the hardware necessary to max out the "graphics" and physics settings (an SLI G200b setup plus a 9800GTX for PhysX, for example) will enjoy the full experience. The others would have a choice between lowering graphics or physics, to fit the game to their view of what's important, FOR THEM.

    So those of you who don't find the new effects interesting will just have to lower the PhysX settings (to fit your quad core) and max out your "graphics" settings.

    P.S.: I put the graphics settings in quotes because I think it's impossible to max out the graphics without maxing out the physics. They're linked. It's more like: max out AA, AF, and the in-game graphics settings.
     
  5. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    This is the point that you are missing:
    You can have your cake and eat it too.
    GPUs are so much better at physics that it's better to reserve a bit of GPU power for physics calculations than to have the CPU do all the physics.
    With the *current* level of physics, CPUs can cope, because games were designed that way (which leads to ridiculously oversimplified physics, such as in that GRAW video, where you shoot the wooden fence and *boom* only 2 or 3 pieces are left -- hardly convincing).
    However, as soon as you want to bump up the physics detail, you run into the same problem you're describing: CPU horsepower is limited, not infinite. This can be seen nicely in the UT3 video, where the game starts to stutter when a wall is blown to pieces.

    This is where GPU physics comes in: the GPU takes a far smaller frame-rate hit processing the same physics as the CPU does, giving a better, smoother overall gaming experience.
    It also scales better, so developers could choose to put more physics detail into future titles. On the CPU that's not possible, because the framerates drop below playable levels.

    That's not going to work. If you look at the results of 3DMark Vantage (the closest thing to a synthetic physics test we have currently), GPUs are 10 to 20 times faster than the fastest quad-core on the market (and that's while still doing some graphics workload on the side as well).
    If your GPU is THAT much faster, then obviously it's pointless to debate the use of a CPU. If your GPU can do physics 10-20 times faster, it's better to dedicate 5-10% of the GPU to physics than 100% of the CPU (since you don't have that 100% to begin with anyway).

    As for the PPU being dead... well, there was a big problem with having to spend a few hundred bucks on an extra card that was supported in only one or two games. I never bought one because of that.
    nVidia has solved the first part by giving accelerated physics to everyone with an 8-series card or newer, so I don't have to buy extra hardware. They are now set on solving the second part: getting game developers involved in the world of PhysX and releasing titles that make this technology worthwhile.
    As I already said, what I've seen from GRAW looks fine to me... The framerate drops a bit, but it was too high to start with (my monitor is only 75 Hz -- why would I want 100+ fps anyway?). In return, you get more realistic physics, making you feel more immersed in the game world. If nVidia can get more titles with extra physics like this, I think it's nice added value, and I'd have it enabled.
    If someone gave me a PPU for free, I'd be using it as well.
     
    #145 Scali, Aug 11, 2008
    Last edited by a moderator: Aug 11, 2008
  6. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,707
    Likes Received:
    345
    Location:
    The colonies
    As opposed to the great marketing prop that is the multi-core CPU, which you and others happen to subscribe to? Does this make you feel like the more discerning consumer?

    There are specific workloads where GPUs happen to be very efficient, and physics might be one of them. Like, say, Folding@home, where the inexpensive 8800GT easily does twice the points per day of any quad-core CPU.
     
  7. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    In which case they're only kidding themselves, because the gains from quad-cores in games are even less impressive than the gains from GPU/PPU PhysX so far.
    If you think there are revolutionary multicore game engines around the corner, you're living in a dream world, I suppose.
    Aside from a few exceptions (Supreme Commander, for example), quad-cores haven't added significant performance in any of the multicore-optimized games released in the past few years. In some cases, games actually run slower on quad-cores than on dual-cores with otherwise identical specs.
    Especially in the FPS genre, there just isn't a whole lot of opportunity for multicore optimization. Physics is one opportunity, but all the major physics libraries have been multithreaded for years. There's little more to gain there. Certainly not the factor of 10-20 that GPUs have already demonstrated.
     
  8. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Well, Intel is working on Havok.
    However, as far as I can tell, there currently isn't a large gap between PhysX and Havok when running on the CPU, and I believe Havok is pretty much state-of-the-art in terms of optimization.
    I really don't think any amount of optimization can get CPUs anywhere near the level of performance that GPUs have demonstrated so far. Something on the order of 20-40% better performance, I'd believe... but what they need is closer to 1000%, and it's just unrealistic to think that some optimization will get us there. It's simply not possible for a CPU to deliver that amount of computational power, not even in theory.
     
  9. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I agree, and I will go a step further. I am going to make a somewhat provocative statement. If you ask me, CPUs are a dead end.

    Yes, a dead end.

    Why?

    2006 - Kentsfield comes out. Theoretical max = ~3 GHz * 4 cores * 4-wide SSE ~ 50 GFLOPS

    2009 - Nehalem will come out. Theoretical max = ~3 GHz * 4 cores * 4-wide SSE ~ 50 GFLOPS

    So three years of growth, and Intel (co-founded by Moore) gives us zero (or near-zero) growth. GPUs, on the other hand, are scaling relentlessly because they are unconstrained by running legacy/serial code. Old shaders can be passed through new compilers for new cards to run faster. They aren't frozen into binaries (at least with GLSL; I don't know about HLSL).
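    The peak-FLOPS figures above come from a simple product: clock * cores * SIMD width. A minimal Python sketch of that back-of-the-envelope math (assuming one 4-wide single-precision SSE op per core per cycle, and ignoring things like separate add/mul pipes or turbo):

```python
# Rough theoretical peak single-precision GFLOPS for a CPU:
# clock (GHz) * number of cores * SIMD lanes per op.
# Assumes one 4-wide SSE op per core per cycle (illustrative only).
def peak_gflops(clock_ghz, cores, simd_width):
    return clock_ghz * cores * simd_width

kentsfield_2006 = peak_gflops(3.0, 4, 4)  # 3 * 4 * 4 = 48, i.e. ~50 GFLOPS
nehalem_2009    = peak_gflops(3.0, 4, 4)  # same clock, cores, SIMD width

# Three years apart, yet the theoretical peak is essentially flat.
print(kentsfield_2006, nehalem_2009)
```

    The point of the sketch is just that none of the three factors moved between the two chips, so the product could not move either.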

    That's the bad news. The worse part is that GPUs can unlock even more growth than this, because the graphics pipeline looks well set to become fully programmable. The die area currently used for fixed-function blocks will be converted to ALUs. So they have the potential to show super-Moore growth. I agree that a fully programmable graphics pipeline will come slowly, so this super-Moore growth may be hard to spot.

    You may ask: what has this got to do with nvidia/gpu physics/physx?

    Well, my point is that physics is an important part of gameplay today in maybe a few games. But I am sure you will agree that realistic physics has the potential to improve gameplay by a fair amount. CPUs have their constraints. GPUs rock at physics. You may not want to turn on GPU physics today. But as devs start baking better physics into gameplay, assuming wide availability of fast physics solvers, better physics will become increasingly important.

    So maybe GPU PhysX is a little ahead of its time today. But I have no doubt that GPU physics will become important in the time ahead.

    But yes, nvidia absolutely needs to get devs to write more and more games with PhysX support. Lack of software can kill any hardware idea, no matter how great.
     
    #149 rpg.314, Aug 11, 2008
    Last edited by a moderator: Aug 11, 2008
  10. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    In the case of physics you could be right. Just as CPUs proved to be a dead end in terms of 3d rendering. I recall back in the early days of the Voodoo card, when people would say "Oh, but CPUs have MMX, and if that's fully optimized, they'll be just fine with rendering, games just need to be more optimized".
    Then we had the same with T&L... "Oh, but CPUs have SSE, so they'll be faster when the game is fully optimized" (in this case there was actually some truth to that statement initially... but obviously it was only a matter of time until GPUs became powerful enough).

    Now it's "Oh, but we have quadcores, we'll just need some optimizations".

    Since Intel has now acquired Havok and is working on Larrabee, I think it's safe to assume that even Intel thinks that (GP)GPU is the way forward for physics (even though they may currently pretend that they are interested in optimizing for Skulltrail and such).
     
  11. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    I am arguing that they are not good enough for almost any compute-bound code. In fact, I don't know of any code (that is not a contrived example) that is compute-bound but hard to parallelize. And if you are parallelizing it anyway, why not port it to GPUs instead of CPUs?

    a) CPU thread creation and switching is costly (relative to GPUs).

    b) CPU SIMD means writing in assembly for the most part; compiler intrinsics mean the same thing with some syntactic sugar.
     
  12. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,884
    Likes Received:
    5,334
    It was Nick, on page 2 of this thread.
     
  13. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,516
    Likes Received:
    24,424
    Thank you. This is exactly the post I had in mind. It seems so long ago.

     
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    If CPUs are so great at physics, why haven't we seen anything previously that's comparable to what has been shown to be possible with hardware acceleration? And physics on the CPU has had a hell of a head start, to boot. Notions of "cheating" are just silly scare tactics.
     
  15. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    A head start? Graphics cards have been around just as long, but that's entirely beside the point. In both cases it's about the availability of processing resources. As multi-core processors have emerged, we've seen more use of them for physics; as GPUs have greatly increased in power, using them for applications beyond graphics has become reasonable. Crysis, for example, is still the single most impressive game physics-wise, and it runs its physics entirely on the CPU. Crysis doesn't even scale that well onto quad-core processors.

    To me it's not about the GPU being better at physics than the CPU; it's about which of the two is more available to do the processing. My utilization comment wasn't about task manager numbers, but about basically no game using what is available on cores that are going entirely unused. Doing the calculations on the GPU, on the other hand -- a unit that is already essentially fully utilized -- puts overhead on it and directly takes processing away from the graphics tasks.

    So, what would you rather see in the immediate future? Games making use of those two extra cores on a quad-core processor for physics, or games putting overhead on the graphics card? I'd personally rather pay $250 for a quad-core processor and $250 for a graphics card than $180 for a processor and $500 on graphics cards, but maybe it's just me. Just looking at the advances games have made using the CPU alone for physics, shown nicely by Crysis, very much indicates to me that there is a lot of untapped power and potential. I'm not against work and effort on improving physics on the GPU, but as of right now the results have been incredibly sub-optimal.

    Some of you seem to think that adding physics processing to the GPU doesn't hurt the GPU's graphics processing, and that's entirely not the case. Results have shown this numerous times, and as the load increases, so will the amount of processing taken away from graphics, and the framerate will suffer. Being impressed by a new technology sometimes seems to overshadow the real results, which at the end of the day are the absolute most important issue.
     
  16. SirPauly

    Regular

    Joined:
    Feb 16, 2002
    Messages:
    491
    Likes Received:
    14
    Your point is like adding AA or not: adding AA usually incurs a hit compared to no AA... but adding AA offers more immersion to many end-users. Likewise with PhysX -- sure, there is going to be some performance hit compared to no PhysX at this time... but it offers more immersion for many end-users.

    You're also not allowing GPUs any time to improve yet: driver optimizations, improved coding from developers, SLI and multi-GPU flexibility -- in other words, this is in its infancy and has only just been offered to end-users. Give it some time! It's only been months!
     
    #156 SirPauly, Aug 12, 2008
    Last edited by a moderator: Aug 12, 2008
  17. Skrying

    Skrying S K R Y I N G
    Veteran

    Joined:
    Jul 8, 2005
    Messages:
    4,815
    Likes Received:
    61
    But it's not like AA. AA can only be done on the GPU; it can't be offloaded to some other part of the system that is not being fully used.

    I can see driver optimizations and coding improvements helping the situation, but multi-GPU solutions are just throwing more money at it. The cost of another GPU is pretty steep if you're going to need a mid-range card for the acceleration to be reasonable. I'm not entirely against the technology -- I do think it'll make sense at some point in the future to have a processing unit better suited to the task handle it -- but not right now, and not for at least another generation or two of GPUs.
     
  18. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    That post is nonsense, though.
    As I said, even if we assume that all physics libraries are poorly optimized, it's still not realistic to think CPUs can perform physics 10-20 times faster than they do today, and that's what they would need to do to match GPU physics.
    So even if we assume they're cheating, CPUs wouldn't stand a chance without the cheating either.
     
  19. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    And that's where you go wrong.
    A CPU 'saturates' much more quickly than a GPU does.
    The thing is that you'd need to throw more than 100% of the CPU at physics to get anywhere near what a GPU can do, which means the framerate will drop off much more quickly with a CPU than with a GPU.
    The only cases where a CPU can win are when the physics is so simplistic that the game doesn't get CPU-limited. But those are exactly the cases that aren't interesting, because that is where games are today. We want to move forward.

    It's like this.
    Let's assume that X is the maximum amount of physics that a CPU can handle without limiting the framerate (which is pretty much how modern games are tuned... in fact, in some cases the framerates drop already, as can be seen in the UT3 video; Crysis also gets CPU-limited in the CPU benchmarks supplied with the game, for example).
    With X amount of physics, the CPU will cope okay.
    Running X on the GPU might give it a small performance hit (but probably no more than 5-10%, since the GPU is a factor of 10-20 faster at this processing than the CPU... even so, you could easily opt to keep running the physics on the CPU in this case and lose nothing).
    Now we want to increase the physics to 2X.
    The CPU now has to do twice the workload, so the framerate drops in half, as the game is completely CPU-limited.
    The GPU, however, only takes a 10-20% hit at this point, which means the framerates are still quite acceptable.
    So the more physics you pile on, the better GPUs will look, because the CPU will bring the game to a screeching halt, while the framerate on the GPU only degrades gradually.
    (And this is of course the worst case for GPU physics: using a single GPU. With two GPUs the results are even better, even if that second GPU is a simple 9600GT or such.)

    Conclusion: I'd rather see GPU physics, because CPUs don't have the headroom to make any significant advances in physics.
    Aside from that, the next generation of GPUs will probably have so much extra performance over the current generation that the hit taken from physics becomes insignificant in the greater scheme of things. This is something that is guaranteed NOT to happen with CPUs; Nehalem will only be a modest improvement over the current quad-cores.
    Just think about it... if a GPU today takes a 10-20% hit for something, tomorrow's GPU, which is twice as fast, will only take a 5-10% hit for the same thing.
    Either that, or you can increase the physics load even more, making the level of physics even MORE unattainable for any CPU.
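    The X vs. 2X argument above can be sketched as a toy frame-time model in Python. The numbers (a 60 fps rendering baseline and a 10x GPU-vs-CPU physics speed factor) are illustrative assumptions taken from the discussion, not measurements:

```python
# Toy model: frame time = rendering time plus physics time on whichever
# processor runs the physics. All constants are illustrative assumptions.

GRAPHICS_MS = 16.0     # ms per frame for rendering alone (~60 fps)
CPU_PHYSICS_MS = 16.0  # ms the CPU needs for physics workload X
GPU_SPEEDUP = 10.0     # assumed GPU-vs-CPU physics speed factor

def fps(physics_scale, on_gpu):
    """Frames per second when running physics_scale * X worth of physics."""
    physics_ms = CPU_PHYSICS_MS * physics_scale
    if on_gpu:
        # GPU physics is 10x faster, but steals time from rendering.
        frame_ms = GRAPHICS_MS + physics_ms / GPU_SPEEDUP
    else:
        # CPU physics overlaps GPU rendering; the slower side limits fps.
        frame_ms = max(GRAPHICS_MS, physics_ms)
    return 1000.0 / frame_ms

for scale in (1, 2, 4):
    print(scale, round(fps(scale, on_gpu=False), 1),
                 round(fps(scale, on_gpu=True), 1))
```

    Under these assumptions, doubling the physics halves the CPU-limited framerate, while the GPU path only loses a small percentage per step -- which is the shape of the argument, whatever the exact constants turn out to be.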
     
    #159 Scali, Aug 12, 2008
    Last edited by a moderator: Aug 12, 2008
  20. suryad

    Veteran

    Joined:
    Aug 20, 2004
    Messages:
    2,479
    Likes Received:
    16
    I do not care whether the physics workload is handled by the CPU or the GPU, as long as the games are still very playable. I think with the GPU there is a big chance of that not happening, as Skrying suggested, simply because new games are pushing existing GPUs to their limits, and there is not much headroom left for the GPU to throw in fancy physics effects. Especially considering a game like Crysis: most GPUs struggle to even play that game on high at 1920x1200 -- just imagine what GPU physics would do to it!

    I am a bit on the fence on this one. I see what you are saying, Scali, but I am not entirely sold yet. I guess I am either close-minded or short-sighted and cannot see what you see. Right now, though it sounds cool, I see this as a bit of a gimmick. What are we really gaining in terms of gameplay with a fence breaking in a particular way, as shown in the GRAW video? It looks cool, yeah, but I wonder if it is doing anything to enhance the gameplay... i.e., the character that could have been hiding behind the fence... I'm just not sure, I guess, until I try it out. Hopefully driver/code optimizations and newer hardware will make this feature really awesome without sacrificing much-needed fps at 2560x1600 max-settings resolutions, but I don't think even the next generation of video cards will allow for that.
     