Deano Calver
Newcomer
As multi-processor is the hot topic, I thought I'd mention Amdahl's Law...
A clever man once said
"Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor"
Sounds very relevant to Cell, no? That man was Gene Amdahl, and he wrote that in 1967, nearly 40 years ago!
Amdahl's Law
Code:
S =      N
    -----------
    (B*N)+(1-B)
where N is the number of processors, B is the strictly serial fraction of the algorithm, and S is the speed-up factor. B determines how well a parallel architecture will benefit an algorithm. To see why Amdahl's Law is so important, try B=0.5 (50%) with N=2:
Code:
S =        2
    ---------------
    (0.5*2)+(1-0.5)

S = 1.33333

Now try it with N=8:
Code:
S =        8
    ---------------
    (0.5*8)+(1-0.5)

S = 1.77777

Which means 2 processors are only 1.33 times faster, and 8 processors only make the algorithm 1.78 times faster.
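If you want to play with the numbers yourself, the formula is a one-liner in code. A quick sketch (the function name amdahl_speedup is my own, nothing standard):

```python
def amdahl_speedup(n, b):
    # n = number of processors, b = strictly serial fraction (0..1)
    return n / (b * n + (1 - b))

print(amdahl_speedup(2, 0.5))  # -> 1.333...
print(amdahl_speedup(8, 0.5))  # -> 1.777...
```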
Fundamentally, the problem of multi-processor programming is to find algorithms with as low a B as possible. This is extremely hard for most problems; luckily for us, though, some problems are naturally parallel. The obvious one is 3D rendering, where the strictly serial percentage is tiny and you get a near-linear speed-up with processor count. Let's say B=10% for rendering with 8 processors:
Code:
S =        8
    ---------------
    (0.1*8)+(1-0.1)

S = 4.7

Better than S=1.78, but still lots of reason to reduce B even further. GPU shaders are essentially hardware restrictions keeping B as low as possible.
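The same little sketch also shows why squeezing B down matters so much: no matter how many processors you throw at it, the speed-up can never exceed 1/B (10x when B=0.1):

```python
def amdahl_speedup(n, b):
    # n = number of processors, b = strictly serial fraction (0..1)
    return n / (b * n + (1 - b))

# With B=0.1 the speed-up creeps towards, but never reaches, 1/B = 10x.
for n in (8, 64, 1024):
    print(n, amdahl_speedup(n, 0.1))
```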
Check out the link below for more details:
http://home.wlu.edu/~whaleyt/classes/parallel/topics/amdahl.html