Parallelism, Amdahl’s Law and Super-Linear Speedups

Pabasara Surasinghe
Nov 17, 2021 · 5 min read



Hello there! Today I am going to discuss Parallelism, Speedup, and Amdahl's Law. Before getting started, let's go over some basic concepts.

Serial Computing

Suppose we have a computational problem to be solved. Traditionally, the problem is broken into a series of instructions that are executed sequentially, one after the other, on a single processor. Only one instruction can be executed at a time, so execution takes a long time and makes poor use of the hardware. Because of this, serial computing does not scale to many modern problems. Think about a problem that requires processing billions of transactions: doing it serially could take decades. That's why we need a more efficient approach.

Parallel Computing

And here is the solution to that issue. Unlike serial computing, parallel computing uses multiple processors. The problem is broken into multiple parts, each with its own stream of instructions, and each part is executed on a different processor. With coordination among the processors, the problem can be solved in much less time. This is a more efficient approach and suits modern workloads such as big data analysis, genetics and bioinformatics, graphics, and simulations. Before going further, we need a few terms.

Speedup

Suppose the execution time using a single processor is Ts (serial execution time) and the execution time using N processors is Tp (parallel execution time). Then the speedup is defined as the ratio of Ts to Tp:

Speedup = Ts / Tp

Efficiency

We have used N processors for the parallel execution. The efficiency is then defined as the ratio of speedup to N:

Efficiency = Speedup / N

Therefore, to achieve maximum efficiency we need a high speedup with as few processors as possible.
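To make these two definitions concrete, here is a minimal Python sketch (the timings in the example are made up purely for illustration):

```python
def speedup(ts, tp):
    """Speedup = Ts / Tp (serial execution time over parallel execution time)."""
    return ts / tp

def efficiency(ts, tp, n):
    """Efficiency = Speedup / N, where N is the number of processors."""
    return speedup(ts, tp) / n

# Hypothetical example: a job takes 100 s on one processor and 30 s on 4 processors.
print(speedup(100, 30))        # 3.33... -> a 3.3x speedup
print(efficiency(100, 30, 4))  # 0.83... -> about 83% efficiency
```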

Now we are getting closer to Amdahl's Law. Let's see how speedup and the number of processors are related. For the moment, assume the program is 100% parallelisable, the serial execution time is 1, the parallel execution time is T, and the number of processors is N.

Serial Processing with 1 Processor:
N=1 then T = 1
Speedup = 1

Parallel Processing with 2 Processors:
N=2 then T = 1/2 = 0.5
Speedup = 1/0.5 = 2

Parallel Processing with N Processors:
N=N then T = 1/N
But Speedup = 1/(1/N) = N?

It seems the speedup is equal to the number of processors. But is that always correct? If we use 10 processors, will the speedup be 10? It would be, if the problem could be parallelised 100%. In practice that is never the case, because there is always a serial portion in the program. This means increasing the number of processors does not increase speedup linearly, as the calculation above might suggest.

Amdahl’s Law

Suppose there is a problem to be solved in parallel. Let's say the parallel portion of the program is P, the serial portion is S, and the number of processors is N. Here we treat the whole program, S plus P, as 1, so S = (1-P) and P = (1-S). The parallel execution time (Tp) is,

Tp = S + P/N

Also, it can be written as,

Tp = (1-P) + P/N
Tp = S + (1-S)/N

As we are dealing with ratios, let's take the serial time as 1. The maximum speedup that can be achieved by this program is then

Speedup ≤ 1 / (S + P/N)

In other words, the speedup is at most 1 / (S + P/N), and this bound is Amdahl's Law. Let's take a look at an example for a better understanding.

A program can be 70% parallelised. What is the maximum speedup using 5 processors?

N = 5
P = 70% = 0.7
S = 30% = 0.3

Maximum speedup = 1 / (0.3 + (0.7/5)) = 2.272
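Amdahl's Law is easy to check with a few lines of code. Here is a minimal Python sketch (the function name amdahl_max_speedup is my own, chosen just for illustration):

```python
def amdahl_max_speedup(p, n):
    """Upper bound on speedup when a fraction p of the program is
    parallelisable and it runs on n processors: 1 / ((1 - p) + p / n)."""
    s = 1.0 - p              # serial fraction
    return 1.0 / (s + p / n)

print(amdahl_max_speedup(0.7, 5))  # ~2.27, matching the hand calculation above
```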

Similarly, let's calculate the maximum speedup for N = 2, 4, 6, 8, and 10, keeping the parallel portion at 70%. The results are as follows.

N = 2, Maximum speedup = 1.538
N = 4, Maximum speedup = 2.105
N = 6, Maximum speedup = 2.400
N = 8, Maximum speedup = 2.581
N = 10, Maximum speedup = 2.703
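The same helper from the sketch above reproduces the whole table (values rounded to three decimals):

```python
for n in (2, 4, 6, 8, 10):
    print(f"N = {n}, maximum speedup = {amdahl_max_speedup(0.7, n):.3f}")
# N = 2, maximum speedup = 1.538
# N = 4, maximum speedup = 2.105
# N = 6, maximum speedup = 2.400
# N = 8, maximum speedup = 2.581
# N = 10, maximum speedup = 2.703
```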

If we plot the above results, we can clearly see that the maximum speedup (y-axis) follows a curve, not a straight line, with respect to the number of processors (x-axis).

Maximum Speedup curve of a 70% parallelised program

Alright, that's all about Amdahl's Law. We have seen that in most cases a parallel program's maximum speedup will be less than the number of processors.

Super-Linear Speedups

What if we could achieve a speedup greater than the number of processors? Well, that's also possible. Such speedups are called super-linear speedups. A popular problem used to explain this anomaly is Depth First Search (DFS).

Here the goal is to find a given search element in a tree, counting time in node visits. Assume the search starts at the root node, exploring the left side first and then moving to the right side, and that the target sits just inside the right subtree. A single processor then visits 5 nodes before it reaches the target. With 2 processors, the left branch is assigned to one processor and the right branch to the other, so the processor on the right reaches the target after only 2 node visits.

Speedup using 2 processors = 5/2 = 2.5

The speedup (2.5) is greater than the number of processors used (2), so this is a super-linear speedup: the processor searching the right branch never has to wait for the left subtree to be explored first.
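To make this concrete, here is a toy Python sketch (the tree shape and the helper name are made up so that the numbers match the example above; wall-clock time is modelled simply as the number of node visits):

```python
# The tree is shaped so the target sits just inside the right subtree.
tree = {
    "root": ["L", "R"],
    "L": ["L1"],
    "R": ["target", "R1"],
    "L1": [], "target": [], "R1": [],
}

def visits_until_found(start, goal):
    """Count node visits of a left-to-right depth-first search until goal is found."""
    visits, stack = 0, [start]
    while stack:
        node = stack.pop()
        visits += 1
        if node == goal:
            return visits
        stack.extend(reversed(tree[node]))
    return visits  # goal not in this subtree; every node was visited

serial_time = visits_until_found("root", "target")  # 5 visits: root, L, L1, R, target
parallel_time = visits_until_found("R", "target")   # 2 visits: the right-branch processor wins
print(serial_time / parallel_time)                  # 2.5 > 2 processors: super-linear
```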

That’s it. Let’s do a quick recap. Today we’ve learned why parallel computing is better than serial computing, basic concepts like speedup and efficiency, Amdahl’s Law, and finally super-linear speedups.

Thanks for reading, and I hope you enjoyed it. If you found this explanation useful, please give it a clap!
