Cell: Do vertex and physics calculations need single or double precision?

So, what is the DP module included in the SPE intended for? What could it be used for that couldn't be achieved with SP? - I mean in the PS3, not in a server -
 
Love_In_Rio said:
So, what is the DP module included in the SPE intended for? What could it be used for that couldn't be achieved with SP? - I mean in the PS3, not in a server -
CELL is targeted at different applications; e.g., scientific calculations often need DP.
 
Love_In_Rio said:
So, what is the DP module included in the SPE intended for? What could it be used for that couldn't be achieved with SP?

Remember, CELL isn't just for PS3... it has other uses in computing, e.g. IBM-SONY CELL workstations, the super-computing arena, etc. Double precision is used for higher accuracy in engineering simulations, scientific calculations, off-line rendering, and visual simulations where realtime is not a priority. Anyway, I advise you to read the above link as it's been discussed already...

EDIT: nAo beat me to it! :p
 
Tacitblue said:
The PPE has a DP element in it as well, identical to the 8 others on the SPEs.

No. The one in the PPE has a throughput of one DP FMADD every cycle, for a total of 8 GFLOPS.

The DP unit in the SPEs can issue a 2-wide vector FMADD every 7 (or 6?!) cycles, for a total of ~18 GFLOPS across all 8 SPEs, or 26 GFLOPS in total for the entire CELL processor.
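To put rough numbers on that (assuming the ~4 GHz clock that was being talked about for CELL at the time - my assumption, not something stated above), a quick back-of-the-envelope check:

Code:
// Rough sanity check of the peak figures above, assuming a 4 GHz clock (assumption).
#include <cstdio>

int main() {
    const double ghz = 4.0;                      // assumed clock rate
    const double ppe = 2.0 * ghz;                // 1 DP FMADD/cycle = 2 flops/cycle -> 8 GFLOPS
    const double spe = (2.0 * 2.0 / 7.0) * ghz;  // 2-wide FMADD (4 flops) every 7 cycles -> ~2.3 GFLOPS
    printf("PPE %.1f + 8 SPEs %.1f = %.1f GFLOPS\n", ppe, 8.0 * spe, ppe + 8.0 * spe);
    return 0;
}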

Cheers
Gubbi
 
Also, as far as the topic is concerned... double precision is definitely helpful for physics. However, the reality of the matter is that if single-precision affords you 10x the performance, you're not ever going to use double precision no matter the benefits.

Most current physics engines are limited to mass ranges of 5 orders of magnitude (i.e. if you have a root mass of x, the heaviest object cannot be more massive than 100x and the lightest cannot be lighter than 0.01x). Beyond that they get horribly unstable and start blowing up. It would actually be slightly less of a problem if not for the use of iterative refinement methods.

Also, using vector ops decreases the effective precision a fair bit: modern scalar FPUs operate internally at DP, so SP ops get the benefit of full accuracy over all their mantissa bits (unless you use a specifically not-fully-accurate op) -- which doesn't mean you're free of single-precision instabilities, since bits get cut off on write anyway. With packed SIMD data formats, though, you can't really do that short of building 4 separate FPUs that partition the vector data.
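To make the write-back point concrete, here's a toy sketch (my own illustration, not an engine's actual code): the same explicit Euler step run in float and in double for an object sitting far from the origin. The float version rounds away low-order bits on every store, so it drifts away from the double result.

Code:
// Illustration only: single-precision drift from iterative updates on a
// large-magnitude position.
#include <cstdio>

template <typename T>
T integrate(int steps) {
    T pos = T(100000);          // large magnitude: few mantissa bits left for small deltas
    T vel = T(0);
    const T dt = T(1) / T(60);
    const T g  = T(-9.81);
    for (int i = 0; i < steps; ++i) {  // iterative updates accumulate the rounding error
        vel += g * dt;
        pos += vel * dt;
    }
    return pos;
}

int main() {
    printf("float : %.4f\n", (double)integrate<float>(60 * 60));  // one simulated minute at 60 Hz
    printf("double: %.4f\n", integrate<double>(60 * 60));
    return 0;
}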
 
ShootMyMonkey said:
Also, as far as the topic is concerned... double precision is definitely helpful for physics. However, the reality of the matter is that if single-precision affords you 10x the performance, you're not ever going to use double precision no matter the benefits.

Most current physics engines are limited to mass ranges of 5 orders of magnitude (i.e. if you have a root mass of x, the heaviest object cannot be more massive than 100x and the lightest cannot be lighter than 0.01x). Beyond that they get horribly unstable and start blowing up. It would actually be slightly less of a problem if not for the use of iterative refinement methods.

Also, using vector ops decreases the effective precision a fair bit: modern scalar FPUs operate internally at DP, so SP ops get the benefit of full accuracy over all their mantissa bits (unless you use a specifically not-fully-accurate op) -- which doesn't mean you're free of single-precision instabilities, since bits get cut off on write anyway. With packed SIMD data formats, though, you can't really do that short of building 4 separate FPUs that partition the vector data.


A friend of mine has said that if you find yourself requiring double precision for a real-time physics simulation, you should go back and redo your numeric analysis, because you're doing it wrong.

For the most part I agree. Double precision has its places, and it can certainly make some things easier; however, more often than not it's used to fix a precision problem that should have been eliminated by taking a different approach to the problem.
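One common flavour of that "different approach" (purely illustrative on my part, not an example from this post): keep positions relative to a local origin and rebase when needed, so single-precision values stay small, instead of switching the whole simulation to doubles.

Code:
// Sketch of origin rebasing to keep float coordinates small (illustrative only).
#include <vector>

struct Vec3 { float x, y, z; };

struct Region {
    double origin[3] = {0.0, 0.0, 0.0};   // only the origin needs the wide range

    // Move the region origin and shift every body's local coordinates to match,
    // so |local| stays small and float precision stays high.
    void rebase(std::vector<Vec3>& locals, const double new_origin[3]) {
        const float dx = float(origin[0] - new_origin[0]);
        const float dy = float(origin[1] - new_origin[1]);
        const float dz = float(origin[2] - new_origin[2]);
        for (Vec3& p : locals) { p.x += dx; p.y += dy; p.z += dz; }
        for (int i = 0; i < 3; ++i) origin[i] = new_origin[i];
    }
};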

Programmers just don't do numeric analysis anymore.
 
ERP said:
Programmers just don't do numeric analysis anymore
That's why it's cool to have some background in physics ;)
I had a lot of numerical analysis/error propagation lectures in my university days.
 
A friend of mine has said that if you find yourself requiring double precision for a real-time physics simulation, you should go back and redo your numeric analysis, because you're doing it wrong.
Probably true, but there's a difference between realtime and fast enough, especially when you're developing something that has to run on PC AND console. Something like Havok is actually surprisingly stable even at a 15 Hz simulation rate, whereas, say, NovodeX doesn't even get stable until near 50 Hz. Note that the physics simulation rate is separate from the rendering framerate. However, Havok is probably stable because it has all these extra hooks and covers as many corner cases as possible -- and hence Havok is probably not even fast enough to run at a 50 Hz simulation rate.
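For anyone unfamiliar with the distinction, here's a minimal sketch of what "simulation rate separate from rendering framerate" looks like in a game loop (generic C++, not Havok's or NovodeX's actual API): physics always advances in fixed steps, e.g. 1/15 s or 1/50 s, however fast or slow the renderer runs.

Code:
// Fixed-timestep physics decoupled from the render rate (illustrative sketch).
#include <functional>

void run_frame(double frame_dt,        // wall-clock time since the last frame
               double sim_dt,          // fixed physics step, e.g. 1.0/15 or 1.0/50
               double& accumulator,    // leftover time carried between frames
               const std::function<void(double)>& step_physics,
               const std::function<void()>& render) {
    accumulator += frame_dt;
    while (accumulator >= sim_dt) {    // run however many fixed steps fit
        step_physics(sim_dt);
        accumulator -= sim_dt;
    }
    render();                          // render once per frame regardless
}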

Still, a lot of things are inaccurate because we all have to take the fast and inaccurate approach to make the PS2 happy.

If we had true IEEE single-precision ops even on packed vector data, that would make things nicer. But in the land of games, it's generally considered more important to take the fast approach (unless it's something that's done offline) and never, ever, in a million years give a damn about accuracy -- instead, you just modify the data to suit your limitations.
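"Modify the data to suit your limitations" in its crudest form looks something like this (illustrative only, not any particular engine's code): clamp every mass into the stable ratio range around a chosen root mass, instead of asking a single-precision solver to cope with extreme mass ratios.

Code:
// Clamp masses into a stable ratio range (illustrative only).
#include <algorithm>

float clamp_mass(float mass, float root_mass) {
    const float max_ratio = 100.0f;               // ~2 orders of magnitude either way,
    const float lo = root_mass / max_ratio;       // matching the range mentioned earlier
    const float hi = root_mass * max_ratio;
    return std::clamp(mass, lo, hi);
}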

Programmers just don't do numeric analysis anymore.
Well, anyone who works on physics is pretty well aware of all the numerical analysis issues. It's just that simulating something like a single element in realtime (regardless of complexity) is completely different from simulating 25-30 separate elements and also integrating these simulations into gameplay... killing and re-enabling certain specific features for specific objects on a time basis... and then messing with the properties of an object on-demand... It's always enough of a mess that any number of nanoseconds you can shave off of computation is not considered wasted time.

But it is certainly true that the amount of numerical analysis taught in school these days is dwindling. When I did my undergrad, I was enrolled in what was then the "old" program, in which I was required to take 3 numerical analysis courses; people in the then "new" program were required to take 1. Worse yet are the gaming trade schools (at least from what I've seen, FWIW). In a lot of programmer interviews with people coming out of "game programming" curricula, I see all too much straight memorized practice and no theoretical foundation to back it up. In short, knowing a whole lot about the "how" and basically nothing about the "why." I hope it was just a matter of bad luck of the draw, but if not, that points to a lot of crap programmers trying to get into the industry.
 