Amdahl's law


Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours on a single processor core, and a particular 1-hour portion of it cannot be parallelized while the remaining 19 hours (95%) can be, then regardless of how many processors are devoted to the parallelized execution of the program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.
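The 20× limit in this example can be checked numerically. The sketch below is illustrative (the function name amdahl_speedup is not from the original text); it applies Amdahl's formula 1/((1 − P) + P/S) with a parallel fraction of 0.95, showing that even an enormous speedup of the parallel portion cannot push the overall speedup past 20.

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# 95% of the 20-hour job is parallelizable; the serial 1 hour remains,
# so the overall speedup approaches (but never reaches) 20x.
print(amdahl_speedup(0.95, 10))         # modest processor count
print(amdahl_speedup(0.95, 1_000_000))  # near the 20x ceiling
```

Doubling the speedup of the parallel part keeps paying less and less: the serial hour dominates more of the total as the parallel part shrinks.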



Amdahl's law models the expected speedup of a parallelized implementation of an algorithm relative to the serial algorithm, under the assumption that the problem size remains the same when parallelized. For example, if for a given problem size a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily quickly (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is 1/(1 − 0.12) = 1.136 times as fast as the non-parallelized implementation.

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be

    1 / ((1 − P) + P/S)

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the time the unimproved fraction takes (which is 1 − P), plus the time the improved fraction takes. The time for the improved part of the computation is the improved part's former running time divided by the speedup, making the time for the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
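The derivation above translates directly into code. This is a minimal sketch (the function name is illustrative): it builds the new running time from its two pieces exactly as the derivation describes, then divides the old time by it.

```python
def speedup(p, s):
    """Amdahl's law: overall speedup when fraction p is improved by factor s."""
    # New running time: unimproved fraction (1 - p) plus improved fraction p / s.
    new_time = (1.0 - p) + p / s
    # Old running time is 1 by choice of units, so speedup = 1 / new_time.
    return 1.0 / new_time

# The example from the text: 30% of the computation made twice as fast.
print(speedup(0.3, 2.0))  # 1 / (0.7 + 0.15), a bit under 1.18
```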

Here's another example. We are given a task which is split up into four parts: P1 = 11%, P2 = 18%, P3 = 23%, P4 = 48%, which add up to 100%. Then we say P1 is not sped up, so S1 = 1 or 100%, P2 is sped up 5×, so S2 = 500%, P3 is sped up 20×, so S3 = 2000%, and P4 is sped up 1.6×, so S4 = 160%. By using the formula P1/S1 + P2/S2 + P3/S3 + P4/S4, we find the new running time is

    0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6 = 0.11 + 0.036 + 0.0115 + 0.3 = 0.4575

or a little less than ½ the original running time, which we know is 1. Therefore the overall speed boost is 1 / 0.4575 = 2.186, or a little more than double the original speed, using the formula (P1/S1 + P2/S2 + P3/S3 + P4/S4)−1. Notice how the 20× and 5× speedups don't have much effect on the overall speed boost and running time when 11% of the task is not sped up at all and 48% is sped up by only 1.6×.
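The four-part calculation generalizes to any number of parts. The sketch below (variable names are illustrative, not from the original text) reproduces the numbers in the example: the new running time is the sum of each part's fraction divided by its speedup.

```python
fractions = [0.11, 0.18, 0.23, 0.48]   # P1..P4, summing to 1
speedups  = [1.0, 5.0, 20.0, 1.6]      # S1..S4

# New running time: each part's share of the work divided by its speedup.
new_time = sum(p / s for p, s in zip(fractions, speedups))
overall = 1.0 / new_time

print(new_time)  # about 0.4575
print(overall)   # about 2.186
```

Because the S4 = 1.6 term contributes 0.3 of the 0.4575 total, the large speedups on the smaller parts barely move the result, which is exactly the point the example makes.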

