NVIDIA GF100 & Friends speculation

Bill Dally sez:
"PCGH: Would it make sense to use double precision in the future? Maybe for physics calculations? Is there a need to have higher definition?

Bill Dally: I don't see a pressing need for that. Now, game physics and physics that are done for industrial and research applications are different in a big way in the sense that for game physics it only needs to look good, it doesn't actually have to be right. Whereas if you're designing an aircraft, it has to be right. And for that reason, the people who do game physics are usually pretty satisfied with single precision."

http://www.pcgameshardware.com/aid,...chnology-DirectX-11-and-Intels-Larrabee/News/
 
Isn't the "continuously vibrating object" issue a non-issue?

Unless you continuously simulate physics for each object, which is somewhat counter-productive, you don't have to worry about that: you could simply store the physical properties of the object and only use them when a parent object is physically "excited". It's the kind of simplification done in the classical acceleration structures used in RT, where a dummy object contains the "real" objects.
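Something like this, as a totally hypothetical sketch (the names and threshold are made up, this isn't any real engine's API):

```python
SLEEP_THRESHOLD = 1e-3   # speed below which a body counts as "at rest"

class Body:
    def __init__(self, position, velocity=0.0):
        self.position = position
        self.velocity = velocity
        self.sleeping = True   # starts parked: state is stored, not simulated

def excite(body):
    # Called when a parent object is hit, pushed, etc.
    body.sleeping = False

def step(bodies, dt):
    for b in bodies:
        if b.sleeping:
            continue                  # no integration -> no error drift
        b.velocity *= 0.98            # toy damping
        b.position += b.velocity * dt
        if abs(b.velocity) < SLEEP_THRESHOLD:
            b.velocity = 0.0          # snap to rest and park it again
            b.sleeping = True
```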
 
Given how relatively rare such errors are in current games, I'd be willing to bet that a switch to double precision would cause them to all but disappear.

True story: at one point Pixar was getting odd errors, tracked them down to precision, and then went double. Never looked back.
 
Yes, basically it's a problem that can only reasonably arise in the case of an object that is allowed to oscillate in place for a while. A chain attached to a wall or ceiling, for instance. My suspicion with the Crysis "magic board" is that they may simulate some oscillation modes across the board's surface that allow the board to bend, and if those vibrations get perturbed enough, you could easily get strange-looking behavior as it bounces off of things.

In any case, if nVidia chooses to allow support for double precision in PhysX, any problems related to numerical errors that exist in the current implementations should simply disappear. This isn't to say that double precision is completely devoid of numerical errors, but rather that it's enough of an improvement that the few situations where they occur with single precision are likely to disappear entirely with the move to double.

Now, since the situations where physics simulations break down are relatively few, devs may choose to just ignore the potential of using double precision due to the 2x performance hit. I don't know.

A cheaper method might be to just add more friction. It only needs to look right in games, not be right.
 
Isn't the "continuously vibrating object" issue a non-issue?

Unless you continuously simulate physics for each object, which is somewhat counter-productive, you don't have to worry about that: you could simply store the physical properties of the object and only use them when a parent object is physically "excited". It's the kind of simplification done in the classical acceleration structures used in RT, where a dummy object contains the "real" objects.
It sounds good, but it doesn't always work in reality. First, you want your physics-simulated objects to keep moving after a little bit of perturbation. For example, if I shoot a chain, it should swing and keep swinging, for at least a little while.

The problem arises when numerical errors add energy to the system. So I shoot the chain, and it starts swinging, but due to numerical errors in the joints of the chain, it starts swinging faster and faster, with motion that becomes more and more erratic, until the chain is just hopping around haphazardly.
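A toy example of what I mean (nothing to do with PhysX internals): a stiff spring stepped with semi-implicit Euler. In exact arithmetic that integrator exactly conserves a slightly modified energy, so any drift in that quantity over a run is pure floating-point rounding error, and it's orders of magnitude worse in single precision:

```python
import numpy as np

W2, H = 1e4, 1e-3   # omega^2 = k/m for a stiff joint; fixed time step

def modified_energy(x, v):
    # Semi-implicit Euler conserves this quantity exactly in exact
    # arithmetic (it differs from the true energy by an O(H) term),
    # so any drift in it over a run is purely rounding error.
    return 0.5 * (v * v + W2 * x * x - W2 * H * x * v)

def rounding_drift(dtype, steps=200_000):
    w2, h = dtype(W2), dtype(H)
    x, v = dtype(1.0), dtype(0.0)
    e0 = modified_energy(float(x), float(v))
    for _ in range(steps):
        v = dtype(v - w2 * x * h)   # symplectic (semi-implicit) Euler
        x = dtype(x + v * h)
    return abs(modified_energy(float(x), float(v)) - e0) / e0

for dtype in (np.float32, np.float64):
    print(dtype.__name__, "relative energy drift:", rounding_drift(dtype))
```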

One way to deal with this is to add friction to the system. This is what the link silent_guy was talking about:
http://www.newtondynamics.com/wiki/index.php5?title=NewtonJointSetStiffness

The basic idea here is that they simulate friction by artificially reducing the force. The problem is that there's a tension between getting the chain to swing in a way that looks realistic and dealing with numerical errors. The chain as it appears in the game may look like it should have very stiff joints, for example, but stiff joints are particularly susceptible to numerical errors adding energy to the system. You could reduce the "stiffness", but then the chain stops swinging far too quickly, and it just doesn't look right to the player.
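As a hypothetical sketch of that idea (not Newton's actual source), the solver computes the correction that would fully cancel the joint error, then deliberately applies only a fraction of it:

```python
def joint_corrective_velocity(error, error_rate, dt, stiffness=1.0):
    # Hypothetical sketch, not Newton's actual code. 'error' is how far
    # the joint has drifted from its constraint, 'error_rate' how fast
    # it is drifting. The full correction would cancel the error over
    # one step; applying only a fraction of it bleeds energy out:
    #   stiffness = 1.0 -> rigid joint, prone to picking up energy
    #                      from accumulated rounding errors
    #   stiffness < 1.0 -> softer joint; the unapplied fraction acts
    #                      like friction, but the chain also stops
    #                      swinging sooner than it visually should
    full_correction = -(error / dt + error_rate)
    return stiffness * full_correction
```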

So it becomes a balancing act between getting in-game objects to behave in a way that looks good and preventing numerical errors from creeping in and ruining the look entirely. If game devs switched to double precision, the problem would all but disappear.
 
From which they learned nothing if the reported memory frequencies are correct.
Perhaps it's simply not necessary, as GF100 has quite a low max throughput, so even with twice Cypress' efficiency it'll have plenty of bandwidth already at ~1GHz.

With that and the power issue in mind, it would make sense to use LV GDDR5.


Chalnoth > Thanks for the welcome precision about this precise precision issue; I was still referring to the "brickwall" scenario described earlier :oops:
 
Bill Dally sez:
"PCGH: Would it make sense to use double precision in the future? Maybe for physics calculations? Is there a need to have higher definition?

Bill Dally: I don't see a pressing need for that. Now, game physics and physics that are done for industrial and research applications are different in a big way in the sense that for game physics it only needs to look good, it doesn't actually have to be right. Whereas if you're designing an aircraft, it has to be right. And for that reason, the people who do game physics are usually pretty satisfied with single precision."

http://www.pcgameshardware.com/aid,...chnology-DirectX-11-and-Intels-Larrabee/News/

And I agree completely (not that my opinion matters :LOL:).
 
And I agree completely (not that my opinion matters :LOL:).
Obviously I rather disagree. Granted, this is the conclusion that game devs have come to for some time, that double precision simply isn't worth it: remember that for most of the time since physics has made its way into games, it's been done by the CPU, where double precision has always been available.

But GPUs are vastly faster at performing these calculations, so with only a ~50% performance hit for physics calculations in games, it may become worth it to switch to double precision, at least for some objects in games (specifically objects that are prone to such numerical errors).
 
It sounds good, but it doesn't always work in reality. First, you want your physics-simulated objects to keep moving after a little bit of perturbation. For example, if I shoot a chain, it should swing and keep swinging, for at least a little while.

The problem arises when numerical errors add energy to the system. So I shoot the chain, and it starts swinging, but due to numerical errors in the joints of the chain, it starts swinging faster and faster, with motion that becomes more and more erratic, until the chain is just hopping around haphazardly.

One way to deal with this is to add friction to the system. This is what the link silent_guy was talking about:
http://www.newtondynamics.com/wiki/index.php5?title=NewtonJointSetStiffness

The basic idea here is that they simulate friction by artificially reducing the force. The problem is that there's a tension between getting the chain to swing in a way that looks realistic and dealing with numerical errors. The chain as it appears in the game may look like it should have very stiff joints, for example, but stiff joints are particularly susceptible to numerical errors adding energy to the system. You could reduce the "stiffness", but then the chain stops swinging far too quickly, and it just doesn't look right to the player.

So it becomes a balancing act between getting in-game objects to behave in a way that looks good and preventing numerical errors from creeping in and ruining the look entirely. If game devs switched to double precision, the problem would all but disappear.

I would like to know why exactly increasing the precision solves the problem in this case.

If you are all about results, then you are happy if it works. However, given the nature of these algorithms, there is still no proof that these errors won't happen under certain parameters, and a reconsideration of the model may be necessary.

But as previously mentioned, in the context of games it may be enough to see it working OK and that's that.
 
No. GTX 470 = 448 SPs, GTX 480 = 512 SPs

Clock frequencies and TDP are still up in the air, although in terms of TDP, the more reasonable 250 W is probably what should be expected, given the PSU recommendation.

If an Ultra exists, it will be a tweaked chip in the future, or simply cherry-picked cores that can be highly overclocked, although with the supply constraints I don't think that's feasible for a while.

And you are making this very assertive claim based on... what exactly?
 
Obviously I rather disagree. Granted, this is the conclusion that game devs have come to for some time, that double precision simply isn't worth it: remember that for most of the time since physics has made its way into games, it's been done by the CPU, where double precision has always been available.

But GPUs are vastly faster at performing these calculations, so with only a ~50% performance hit for physics calculations in games, it may become worth it to switch to double precision, at least for some objects in games (specifically objects that are prone to such numerical errors).

Well, I would at least explore the effect of preconditioning before throwing away that 2x (or 4x on older GPUs) performance.

People go through a lot of trouble just to get 10% more performance or slightly better scaling on multiple cores, so why should they just abandon the possibility of running so much faster when the extra precision is not proven to be necessary?
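As a sketch of the kind of thing that's possible, here's mixed-precision iterative refinement; it's not the same thing as preconditioning, but it's aimed at the same trade-off: do the heavy solve in single precision and correct the residual in double (numpy, illustration only):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=3):
    # Do the expensive solve in float32, then polish the answer with
    # float64 residuals. (A real implementation would factor A32 once;
    # np.linalg.solve refactors on every call. Illustration only.)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                           # residual in double
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 100 * np.eye(200)  # well-conditioned
x_true = rng.standard_normal(200)
b = A @ x_true
print(np.max(np.abs(mixed_precision_solve(A, b) - x_true)))  # near double accuracy
```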
 
I would like to know why exactly increasing the precision solves the problem in this case.

If you are all about results, then you are happy if it works. However, given the nature of these algorithms, there is still no proof that these errors won't happen under certain parameters, and a reconsideration of the model may be necessary.

But as previously mentioned, in the context of games it may be enough to see it working OK and that's that.
Single precision has an accuracy of up to about 7 decimal places. Double precision has an accuracy of up to about 16 decimal places. In a situation where you lose 2 decimals of precision in single precision, you'll end up also losing 2 decimals of precision in double precision. So while you have 5 decimal places left in single precision, you'll have 14 decimal places remaining in double precision.

Since we don't really care about getting things slightly wrong with physics calculations, it's okay until your errors are of order one: you have to lose all 7 decimal places of precision for anybody to really care. In that case, you still have 9 decimal places of precision left with doubles: the error would quite literally have to be a billion times larger than one that already breaks single precision before double precision fails.
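For concreteness, those digit counts fall straight out of the machine epsilon of each format:

```python
import numpy as np

# One operation's relative rounding error in each format:
print(np.finfo(np.float32).eps)   # ~1.19e-07 -> about 7 decimal digits
print(np.finfo(np.float64).eps)   # ~2.22e-16 -> about 16 decimal digits

# Losing ~2 digits to cancellation costs both formats the same:
a, b = 1.01, 1.0
print(np.float32(a) - np.float32(b))   # ~0.00999999, ~5 good digits left
print(np.float64(a) - np.float64(b))   # ~0.010000000000000009, ~14 left
```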
 
Well, I would at least explore the effect of preconditioning before throwing away that 2x (or 4x on older GPUs) performance.

People go through a lot of trouble just to get 10% more performance or slightly better scaling on multiple cores, so why should they just abandon the possibility of running so much faster when the extra precision is not proven to be necessary?
For the exact same reason that games have gone from 16-bit framebuffers to 32-bit framebuffers: it makes games look better. Bear in mind that we wouldn't be talking about a blanket 50% drop in performance; since this is a choice made on the game developer's end, it would instead be a conscious decision to have fewer physics objects. But what's more, since the types of objects that have these sorts of problems are quite specific (objects with rigid joints, objects that can vibrate), double precision doesn't have to be used for every physics calculation, just the ones where it's important.
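Purely as an illustration of that last point, assuming a hypothetical engine where each body carries an error_prone flag (none of these names come from PhysX or any real engine), the dispatch could look like:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Body:
    position: float
    velocity: float
    force: float
    mass: float
    error_prone: bool = False   # set for chains, cloth, stiff joints, ...

def step_world(bodies, dt):
    # Only the bodies flagged as error-prone pay for the double-precision
    # path; free-flying debris stays on the fast single-precision path.
    for b in bodies:
        f = np.float64 if b.error_prone else np.float32
        v = f(b.velocity) + f(b.force) / f(b.mass) * f(dt)
        b.position = float(f(b.position) + v * f(dt))
        b.velocity = float(v)
```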
 
Single precision has an accuracy of up to about 7 decimal places. Double precision has an accuracy of up to about 16 decimal places. In a situation where you lose 2 decimals of precision in single precision, you'll end up also losing 2 decimals of precision in double precision. So while you have 5 decimal places left in single precision, you'll have 14 decimal places remaining in double precision.

Since we don't really care about getting things slightly wrong with physics calculations, it's okay until your errors are of order one: you have to lose all 7 decimal places of precision for anybody to really care. In that case, you still have 9 decimal places of precision left with doubles: the error would quite literally have to be a billion times larger than one that already breaks single precision before double precision fails.

To put what I have in mind briefly:

- Going DP may appear to solve the problem (with no guarantee, absent further insight into the problem).
- Going DP may not be the only solution.
- If a solution other than going SP -> DP is found, performance will be at least 2x and as much as 4x higher.

Edit: spell/grammar.
 
To put what I have in mind briefly:

- Going DP may appear to solve the problem (with no guarantee, absent further insight into the problem).
- Going DP may not be the only solution.
- If a solution other than going SP -> DP is found, performance will be at least 2x and as much as 4x higher.

Edit: spell/grammar.
These aren't exactly new problems I'm talking about. They've been known about in gaming physics simulations from the very beginning (certain types of physics just don't work at all in games without taking numerical errors into account). There are well-known solutions, but they aren't free. Typically they change the behavior of the system (by adding friction), which may in some cases lead to very unrealistic-looking behavior.

If you want to fix the problem without changing the behavior of the system, you're going to have to add processing steps to the calculation, obviously for a performance hit.

There are no easy answers, in other words. Obviously game physics, as it is today, usually works and looks good most of the time. But it seems that it almost always finds a way to break down somewhere in games. Switching to double precision would get rid of that almost entirely.
 
Fermi on Adobe Mercury Playback Engine

Don't game, work smart. Or something. ;) I do find it quite interesting that Adobe is already talking about support for Fermi, whereas AMD doesn't seem to get as much love (there is merely a link to AMD about a beta plug-in dated June 2009).

http://www.adobe.com/products/creativesuite/production/performance/

Adobe is planning on supporting additional cards as they become available, including some of the new NVIDIA solutions based on the upcoming Fermi parallel computing architecture.

Gamers don't care, but for those doing video editing in Adobe Premiere Pro it sure is interesting.
 
Don't game, work smart. Or something. ;) I do find it quite interesting that Adobe is already talking about support for Fermi, whereas AMD doesn't seem to get as much love (there is merely a link to AMD about a beta plug-in dated June 2009).

http://www.adobe.com/products/creativesuite/production/performance/

Gamers don't care, but for those doing video editing in Adobe Premiere Pro it sure is interesting.

Even though it's not listed there, there's an ATI Stream plug-in for Adobe Premiere Pro too, though I think it's still in beta at the moment.
 