How to implement Merge Sort in an easy way

In this article, let us discuss what the Merge Sort algorithm is and how to implement it, with an example and a complexity analysis.


Merge Sort is another Divide and Conquer algorithm, similar to Quick Sort, which efficiently sorts a given set of elements into the required order. The algorithm takes O(N log N) time in every case, which makes it more predictable than Quick Sort, whose worst case degrades to O(N²).

Implementing Merge Sort is similar to Quick Sort, except that Merge Sort divides the given input set into pieces as small as individual elements, and then recursively merges these partitioned pieces back together – placing the elements into proper order as it does so.

Merge Sort Algorithm Example Step by Step

The algorithm works in four simple steps:

  1. For a given input set A consisting of N elements, place two pointers left and right at the two ends of the set. Compute the midpoint mid such that mid = (left + right) / 2
  2. Partition the set into two subsets AL[left…mid] and AR[mid+1…right]
  3. Recursively apply Step 1 and Step 2 on these two subsets AL and AR, with (left, mid) and (mid+1, right) as the new pointers. This is repeated until the left and right pointers overlap (meaning the subsets now contain individual elements).
  4. Merge the newly created subsets AL and AR back into the actual set A such that the elements from AL and AR are placed in sorted order.

The implementation goes like this:

public void MergeSortFunction(int[] array, int left, int right)
{
    // for each call of the function,
    // divide the original range array[left...right]
    // into two subarrays around the midpoint
    // mid = (left + right) / 2,
    // call MergeSortFunction() on both the halves,
    // array[left...mid] and array[mid+1...right],
    // until the smallest set (a single element)
    // is reached (left == right);
    // once it is reached, call MergeFunction() to
    // join the sorted halves back together
    if (left >= right)
        return;

    var mid = (left + right) / 2;
    MergeSortFunction(array, left, mid);
    MergeSortFunction(array, mid + 1, right);
    MergeFunction(array, left, mid, right);
}

public void MergeFunction(int[] array, int left, int mid, int right)
{
    // copy the sub arrays under merging into temp arrays,
    // then iterate through them and place the elements
    // back into the original array in their proper order;
    // the first sub array runs over array[left...mid],
    // the second over array[mid+1...right]

    var n1 = (mid - left) + 1;
    var n2 = (right - mid);

    var leftArray = new int[n1];
    var rightArray = new int[n2];

    // copy elements from the original array
    // into the temp sub arrays

    for (int i = 0; i < n1; i++)
        // left subarray runs on array[left...mid]
        leftArray[i] = array[left + i];

    for (int i = 0; i < n2; i++)
        // right subarray runs on array[mid+1...right]
        rightArray[i] = array[(mid + 1) + i];

    // copy the sub array elements back into the original
    // array in sorted order; the original array is only
    // touched between left and right.
    // while both temp arrays have elements remaining,
    // push the smaller head element into the original
    // array; the <= comparison takes the left element
    // on ties, which is what makes the sort stable
    int p = 0, j = 0, k = left;
    while (p < n1 && j < n2)
    {
        if (leftArray[p] <= rightArray[j])
        {
            array[k] = leftArray[p];
            p++;
        }
        else
        {
            array[k] = rightArray[j];
            j++;
        }
        k++;
    }

    // there might still be some elements
    // left in either leftArray or rightArray;
    // copy them over as-is

    while (p < n1)
    {
        array[k] = leftArray[p];
        p++; k++;
    }

    while (j < n2)
    {
        array[k] = rightArray[j];
        j++; k++;
    }
}

The sort is invoked as below:

var arr = new int[] { 5, 9, 2, 4, 7, 8 };

// initially the algorithm takes the start and 
// end indexes (0, arr.Length-1) as the left and right
// for the sorting function
MergeSortFunction(arr, 0, arr.Length - 1);

Complexity Analysis – Merge sort vs Quicksort

Comparing this sort with Quick Sort, one can find that the core concept of the two algorithms is similar – both recursively divide the input set into smaller subsets and then combine them back together. The difference lies in where the actual sorting work happens on the way back up.

In Quick Sort, we have the Partition function, which works within a subset of the actual array and swaps the elements of that subset into their final positions. This process happens “in-place” and so doesn’t require any temporary arrays to hold the sub arrays.

If you look closely at the sequence of calls that are executed: in Quick Sort, the Partition function arranges the elements around a chosen element called the “pivot”, such that the elements to the left of the pivot are smaller than it and the elements to the right are larger. Following this, the algorithm moves into the smaller subarrays on either side of this pivot.
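For contrast, here is a minimal sketch of that partition step. This uses the Lomuto scheme with the last element as the pivot – one common choice among several; the exact pivot selection varies by implementation:

```csharp
using System;

public class QuickSortDemo
{
    // a minimal Lomuto-style partition sketch:
    // picks array[right] as the pivot, moves every
    // element smaller than the pivot to its left,
    // and returns the pivot's final index;
    // all swaps happen in place, no temp arrays
    public static int Partition(int[] array, int left, int right)
    {
        var pivot = array[right];
        var i = left - 1;
        for (int j = left; j < right; j++)
        {
            if (array[j] < pivot)
            {
                i++;
                (array[i], array[j]) = (array[j], array[i]);
            }
        }
        // drop the pivot into its final position
        (array[i + 1], array[right]) = (array[right], array[i + 1]);
        return i + 1;
    }

    public static void QuickSort(int[] array, int left, int right)
    {
        if (left >= right) return;
        var p = Partition(array, left, right);
        // recurse into the subarrays on either side of the pivot
        QuickSort(array, left, p - 1);
        QuickSort(array, p + 1, right);
    }
}
```

Note that, unlike the merge step above, the pivot lands in its final position before the recursive calls are made – the sorting work happens on the way down, not on the way back up.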

In Merge Sort, the division happens first, until the individual elements are reached, and then the “merge” happens from the leaves up until the original set is reconstructed. The merging uses two temporary arrays to hold the sub arrays.

Hence in the case of Merge Sort, the time taken for the work done is equal to:

T(n) = 2T(n/2) + n

which solves to O(N log N) time. But the space required for this algorithm is O(N), since we make use of temporary arrays to hold the subarrays.
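Expanding the recurrence level by level shows where the N log N comes from: the halving continues for about log₂(n) levels, and every level does a total of n units of merge work:

```
T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     ...
     = 2^k T(n/2^k) + k·n      // after k levels
     = n·T(1) + n·log₂(n)      // at k = log₂(n), since 2^k = n
     = O(n log n)
```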

Hence this algorithm is efficient time-wise, but when it needs to work on large-scale datasets that don’t fit in main memory, the implementation is backed by auxiliary storage such as tape drives or disks.


Merge Sort is the classic basis for “external sorting”: because the merge step reads its inputs sequentially, it works well even when the data lives in auxiliary storage rather than main memory. Though the extra O(N) space is a slight overhead, the time taken for the sorting is always O(N log N), which is faster than most of the other comparison sorts.

Merge Sort is also a Stable Sort algorithm – which means that in case of repeated elements, their original relative order is preserved. The divide-and-conquer technique used here splits the array into two equal halves at every step, so the recursion tree has about log₂(N) levels with O(N) merge work at each, resulting in its consistent O(N log N) behavior.
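To see the stability in action, here is a small sketch that sorts records by a key using the same <= merge rule as above. The Item record and its labels are illustrative, not part of the implementation earlier in the article:

```csharp
using System;
using System.Linq;

public class StableDemo
{
    // hypothetical record: Key is the sort key,
    // Label tracks the element's original position
    public record Item(int Key, string Label);

    // the same merge idea as MergeFunction above, on Items:
    // the <= on keys takes the left element on ties,
    // which preserves the original order of equal elements
    public static Item[] Merge(Item[] a, Item[] b)
    {
        var result = new Item[a.Length + b.Length];
        int i = 0, j = 0, k = 0;
        while (i < a.Length && j < b.Length)
            result[k++] = a[i].Key <= b[j].Key ? a[i++] : b[j++];
        while (i < a.Length) result[k++] = a[i++];
        while (j < b.Length) result[k++] = b[j++];
        return result;
    }

    public static Item[] MergeSort(Item[] items)
    {
        if (items.Length <= 1) return items;
        int mid = items.Length / 2;
        return Merge(MergeSort(items[..mid]), MergeSort(items[mid..]));
    }

    public static void Main()
    {
        // two elements share the key 4; "first" appears before "second"
        var input = new[] {
            new Item(4, "first"), new Item(2, "a"),
            new Item(4, "second"), new Item(1, "b")
        };
        var sorted = MergeSort(input);
        Console.WriteLine(string.Join(", ", sorted.Select(x => $"{x.Key}:{x.Label}")));
        // prints 1:b, 2:a, 4:first, 4:second
        // the two key-4 items keep their original relative order
    }
}
```

Swapping the <= for a plain < in Merge would still produce a sorted result, but equal keys would come out in reversed order and the sort would no longer be stable.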


I'm a full-stack developer and a software enthusiast who likes to play around with cloud and tech stack out of curiosity. You can connect with me on Medium, Twitter or LinkedIn.
