Insertion sort


Insertion sort is a simple sorting algorithm: a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:

  • Simple implementation
  • Efficient for (quite) small data sets
  • Adaptive, i.e. efficient for data sets that are already substantially sorted: the time complexity is O(n + d), where d is the number of inversions
  • More efficient in practice than most other simple quadratic (i.e. O(n²)) algorithms such as selection sort or bubble sort; the best case (nearly sorted input) is O(n)
  • Stable, i.e. does not change the relative order of elements with equal keys
  • In-place, i.e. only requires a constant amount O(1) of additional memory space
  • Online, i.e. can sort a list as it receives it
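The adaptivity claim above can be made concrete by counting shifts: each shift removes exactly one inversion, so the work beyond the linear scan equals d, the number of inversions. A minimal Python sketch (the function name is illustrative):

```python
def insertion_sort_counting(a):
    """Sort `a` in place and return the number of element shifts performed.

    Each shift removes exactly one inversion, so the count equals d, the
    number of inversions in the input, matching the O(n + d) bound.
    """
    shifts = 0
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]   # shift a larger element one slot right
            shifts += 1
            j -= 1
        a[j + 1] = value
    return shifts
```

A nearly sorted input such as [2, 1, 3, 5, 4] has 2 inversions and needs only 2 shifts, while the reversed input [5, 4, 3, 2, 1] has 10 inversions and needs all 10.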

Most humans when sorting—ordering a deck of cards, for example—use a method that is similar to insertion sort.[1]



Every repetition of insertion sort removes an element from the input data and inserts it into the correct position in the already-sorted list, until no input elements remain. The choice of which element to remove from the input is arbitrary; in practice, the first remaining element of the input is taken.

Sorting is typically done in-place. After k iterations, the resulting array has the property that its first k + 1 entries are sorted. In each iteration the first remaining entry of the input is removed and inserted into the result at the correct position, thus extending the result:

[Diagram: array prior to the insertion of x]

[Diagram: array after the insertion of x]

with each element greater than x copied to the right as it is compared against x.

The most common variant of insertion sort operates on arrays: starting with the second element, each element is compared against the sorted prefix to its left and inserted at its correct position, shifting larger elements one place to the right.

Pseudocode of the complete algorithm follows, where the array is zero-based and the for-loop includes both the top and bottom limits (as in Pascal):

insertionSort(array A)
{ This procedure sorts in ascending order. }
    for i := 1 to length[A]-1 do
    begin
        value := A[i];
        j := i - 1;
        done := false;
        repeat
            { To sort in descending order simply reverse
              the operator, i.e. A[j] < value }
            if A[j] > value then
            begin
                A[j + 1] := A[j];
                j := j - 1;
                if j < 0 then
                    done := true;
            end
            else
                done := true;
        until done;
        A[j + 1] := value;
    end;

Below is the pseudocode for insertion sort for a zero-based array (as in C):

for j ← 1 to length(A) - 1
    key ← A[j]
    { insert A[j] into the sorted sequence A[0 .. j-1] }
    i ← j - 1
    while i ≥ 0 and A[i] > key
        A[i + 1] ← A[i]
        i ← i - 1
    A[i + 1] ← key
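The zero-based pseudocode translates directly into a runnable language; a minimal Python sketch, keeping the pseudocode's variable names:

```python
def insertion_sort(A):
    """Sort list A in place, in ascending order, and return it."""
    for j in range(1, len(A)):
        key = A[j]
        # Insert A[j] into the sorted sequence A[0 .. j-1].
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    return A
```

Because the inner loop uses a strict comparison (A[i] > key), elements with equal keys are never moved past one another, which is what makes the sort stable.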

Best, worst, and average cases


The best case input is an array that is already sorted. In this case insertion sort has a linear running time (i.e., Θ(n)). During each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.

The worst case input is an array sorted in reverse order. In this case every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. For this case insertion sort has a quadratic running time (i.e., O(n²)).

The average case is also quadratic, which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quicksort; indeed, good quicksort implementations use insertion sort for arrays smaller than a certain threshold, including when such small arrays arise as subproblems. The exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.
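A hybrid of this kind can be sketched as follows; the threshold value and all names here are illustrative, not taken from any particular library:

```python
# Illustrative threshold; as the text notes, the real value is
# machine-dependent and must be determined experimentally.
THRESHOLD = 10

def insertion_sort_range(a, lo, hi):
    """Insertion-sort a[lo..hi] (inclusive bounds) in place."""
    for i in range(lo + 1, hi + 1):
        value = a[i]
        j = i - 1
        while j >= lo and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that hands subarrays below THRESHOLD to insertion sort."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo + 1 <= THRESHOLD:
            insertion_sort_range(a, lo, hi)
            return
        # Hoare-style partition around a middle pivot.
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        # Recurse on the smaller side, iterate on the larger,
        # which bounds the recursion depth at O(log n).
        if j - lo < hi - i:
            hybrid_quicksort(a, lo, j)
            lo = i
        else:
            hybrid_quicksort(a, i, hi)
            hi = j
```

The payoff is constant-factor only: quicksort's partitioning overhead dominates on tiny subarrays, where insertion sort's tight inner loop and cache-friendly access pattern win.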

Example: The following table shows the steps for sorting the sequence {5, 7, 0, 3, 4, 2, 6, 1}. For each iteration, the number of positions the inserted element has moved is shown in parentheses. Altogether this amounts to 17 steps.

5 7 0 3 4 2 6 1 (0)

5 7 0 3 4 2 6 1 (0)

0 5 7 3 4 2 6 1 (2)

0 3 5 7 4 2 6 1 (2)

0 3 4 5 7 2 6 1 (2)

0 2 3 4 5 7 6 1 (4)

0 2 3 4 5 6 7 1 (1)

0 1 2 3 4 5 6 7 (6)
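The move counts in the table can be reproduced programmatically; a small Python sketch (the helper name is illustrative) that records how far each inserted element travels:

```python
def trace_moves(data):
    """Insertion-sort a copy of `data`, returning (sorted_list, moves),
    where moves[k] is how many positions the (k+1)-th inserted element
    travelled to the left."""
    a = list(data)
    moves = []
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value
        moves.append(i - (j + 1))  # distance the element moved left
    return a, moves
```

For the sequence {5, 7, 0, 3, 4, 2, 6, 1} this yields the per-iteration counts 0, 2, 2, 2, 4, 1, 6, which sum to the 17 steps stated above.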

