Numerical stability

K.I.L.E.R

Is it true that if computers had larger bit sizes (64-bit -> 256-bit, for instance), numerical stability wouldn't be a problem in non-scientific computing?

This is a very general question and I won't be reading that part of my book until I read another 600 pages first.
 
I don't think so. Some numerically unstable algorithms accumulate errors so fast that 256 bits won't help you much.
 
Although, is it possible that if you had enough bits (anything short of infinity) you could avoid numerical instability?
 
I don't think data accuracy is usually the problem. The lack of stability arises from the algorithms used. Check Wikipedia for a nice example.

What exactly do you mean by "non-scientific computing"? Games and such still use similar, if not more unstable, numerical methods to solve things.
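
A quick Python sketch of the kind of thing being described (a classic textbook case, not necessarily the Wikipedia example Mintmaster means):

```python
import math

# I_n = integral from 0 to 1 of x^n * e^(x-1) dx. The true values are positive
# and shrink toward zero, but the forward recurrence
#     I_n = 1 - n * I_(n-1)
# multiplies whatever error is already in I_(n-1) by n at every step, so the
# initial rounding error grows roughly like n!. A few hundred extra bits only
# delays the blow-up; it doesn't prevent it.
I = 1.0 - 1.0 / math.e              # I_0, already rounded to double precision
for n in range(1, 26):
    I = 1.0 - n * I
    print(f"I_{n:2d} ~ {I: .6e}")   # soon explodes in size and flips sign
```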
 
Only because we have to use approximations due to the real thing being more expensive.
I was thinking accuracy could solve the problem.


Mintmaster said:
I don't think data accuracy is usually the problem. The lack of stability arises from the algorithms used. Check Wikipedia for a nice example.

What exactly do you mean by "non-scientific computing"? Games and such still use similar, if not more unstable, numerical methods to solve things.
 
You can't have the real thing on a digital computer, unless all you need are integers (or equivalent, such as rationals or algebraic numbers). You can't express real numbers in exact form on a digital computer.
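
A quick Python illustration of that point (the values here are just the usual suspects, nothing from the thread):

```python
from decimal import Decimal

# Even trivial decimal values have no exact binary floating-point form,
# so they are stored as the nearest representable approximation.
a = 0.1 + 0.2
print(a)             # 0.30000000000000004
print(a == 0.3)      # False: both sides are approximations, and they differ

# What the literal 0.1 actually stores, written out exactly:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```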
 
K.I.L.E.R said:
Only because we have to use approximations due to the real thing being more expensive.
I was thinking accuracy could solve the problem.

What do you mean by "accuracy"?
 
Say in your calculations you need only 3 decimal place accuracy.
Say you have only 2 decimal place accuracy. This can lead to:

On an n-bit machine:
2.999 + 1.999 = 4.998

On an (n-1)-bit machine:
2.1 + 1.1 = 3.2


nutball said:
What do you mean by "accuracy"?
 
K.I.L.E.R said:
Only because we have to use approximations due to the real thing being more expensive.
No, that's not right at all. Have you done differential equations? Read up a bit on that and this example will make more sense. Then you'll know what we're talking about in terms of algorithms.

For example, imagine simulating the motion of three bodies in space. You know the force at any instant on a body due to the other two, but you don't have an exact expression for the path they follow. So what you do is take a little time step, assume the force doesn't change during that time step, and by translating that into a constant acceleration, you know what the body's velocity and position will be after that time step. Then you can repeat. (There are better ways of approximating than assuming a constant, but don't worry about that for now.)

The problem is that the force does change during that time step, as not only is that body moving, but the others are as well. Even if you had infinite computational accuracy, you just don't know the exact forces and positions throughout that step. If your time step was too big, it could do some strange things.

Computational accuracy just isn't the limiting factor most of the time.
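
A rough Python sketch of that stepping idea, with one body around a fixed central mass instead of three bodies (a made-up toy setup in units where GM = 1, just to show the step-size effect):

```python
import math

def euler_orbit(dt, steps):
    # One body orbiting a fixed central mass, started on a circular orbit of
    # radius 1 and advanced with plain (explicit) Euler steps.
    x, y = 1.0, 0.0         # position
    vx, vy = 0.0, 1.0       # velocity for a circular orbit
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -x / r**3, -y / r**3       # acceleration, assumed constant over dt
        x, y = x + vx * dt, y + vy * dt     # move using the old velocity
        vx, vy = vx + ax * dt, vy + ay * dt
    return math.hypot(x, y)                 # stays exactly 1 in the true solution

# The error is governed by the step size, not by how many bits you carry:
for dt in (0.1, 0.01, 0.001):
    steps = int(round(2 * math.pi / dt)) * 10      # roughly ten orbits
    print(f"dt = {dt}: radius after ~10 orbits = {euler_orbit(dt, steps):.4f}")
```

With the big step the orbit visibly spirals outward; shrinking the step (not adding bits) is what pulls it back toward the true circle.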
 
Of course I've done ODEs.
I've already done Laplace transforms and other stuff too.
When we get back we are starting Fourier analysis.

I see what you're saying though.
It's going to be a little while until I get to the computational numerical stuff in my book.


Mintmaster said:
No, that's not right at all. Have you done differential equations? Read up a bit on that and this example will make more sense. Then you'll know what we're talking about in terms of algorithms.

For example, imagine simulating the motion of three bodies in space. You know the force at any instant on a body due to the other two, but you don't have an exact expression for the path they follow. So what you do is take a little time step, assume the force doesn't change during that time step, and by translating that into a constant acceleration, you know what the body's velocity and position will be after that time step. Then you can repeat. (There are better ways of approximating than assuming a constant, but don't worry about that for now.)

The problem is that the force does change during that time step, as not only is that body moving, but the others are as well. Even if you had infinite computational accuracy, you just don't know the exact forces and positions throughout that step. If your time step was too big, it could do some strange things.

Computational accuracy just isn't the limiting factor most of the time.
 
K.I.L.E.R said:
Only because we have to use approximations due to the real thing being more expensive.
Irrespective of the algorithm, we are always using approximations, because we can only store a finite number of bits. The question is whether the algorithm causes those errors to snowball, and how quickly that can happen. And it's entirely possible for faster approximations to outdo exhaustive solutions here, simply by having their positive and negative errors cancel each other out.

If a particular method allows errors to accumulate quickly, no number of bits of precision will help you, because the error will still be there and it will still build up. In theory it might take more steps to get there, but depending on the problem, even that may not be true. Mintmaster's Euler stepping is one such case: the method itself inherently diverges from the correct solution, and the only way to improve the results without switching to some other method is to shrink your step size. More bits mean you can theoretically use an even smaller step size than you could with less precision, but at the same step size they do nothing for you.

Now if a method accumulates errors very slowly and they tend to stay in the bottommost bits, or if errors only build up in the bottommost bits during some stages but then carry comparatively significant weight in later stages, then more precision does help you. Since each digit of precision has an exponential impact, the same relative error does far less damage when there are more decimal places further down for it to live in. For instance, if error builds up in the 7th decimal place and affects the final result in the 3rd decimal place (which is very possible if you're dealing with square roots anywhere), then having that error build up only in the 12th decimal place means it will only affect the final result in the 6th decimal place. Of course, the latter case is only possible if you can represent 12 significant digits of precision.
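
A small Python sketch of that square-root case (the numbers are picked just to show the effect):

```python
import math
from decimal import Decimal, getcontext

x = 100_000_000

# Subtracting two nearly equal square roots cancels the leading digits and
# promotes the rounding error that was hiding in the trailing ones.
naive = math.sqrt(x + 1) - math.sqrt(x)   # double precision: ~16 digits to work with
print(naive)                              # roughly the back half of these digits is noise

# Same algorithm with more digits: the error still builds up, but far below
# anything the final result cares about.
getcontext().prec = 50
d = Decimal(x)
print((d + 1).sqrt() - d.sqrt())

# And re-examining the algorithm fixes it even in double precision:
print(1.0 / (math.sqrt(x + 1) + math.sqrt(x)))   # algebraically equal, no cancellation
```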

K.I.L.E.R said:
Say in your calculations you need only 3 decimal place accuracy.
Say you have only 2 decimal place accuracy. This can lead to:

On an n-bit machine:
2.999 + 1.999 = 4.998

On an (n-1)-bit machine:
2.1 + 1.1 = 3.2
Ummmm... explain how 2.999 gets rounded to 2.1, or are you just bringing up two different examples? BTW, the word you're looking for is not "accuracy", it's "precision." Accuracy has to do with how close you are to the solution, whereas precision is about how exact you can be with your values.
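
A quick way to see that distinction (again just an illustrative Python snippet):

```python
import math

# Ten terms of the Leibniz series for pi: the float carries ~16 digits of
# *precision*, but the sum is only *accurate* to about one digit.
approx_pi = 4 * sum((-1) ** k / (2 * k + 1) for k in range(10))
print(approx_pi)   # plenty of stored digits...
print(math.pi)     # ...but only the leading one or two agree with pi
```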
 
While numerical stability seems to be a pretty frequent issue in computing, it's usually the case that re-examining your algorithms can fix the issues before going to higher-precision storage formats.

Unfortunately, some problems just cannot be computed without tremendous amounts of numerical precision, and as such there are libraries that provide effectively arbitrary precision at a significant cost in performance. One obvious example is the computation of pi to arbitrary precision.
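
For instance (mpmath is just one such library, picked here as an example; none is named in the thread):

```python
from mpmath import mp

mp.dps = 50      # ask for 50 significant decimal digits of working precision
print(+mp.pi)    # unary + evaluates the lazy pi constant at the current precision

mp.dps = 1000    # a thousand digits -- at a corresponding cost in time and memory
print(+mp.pi)
```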
 