1. Classical sorting algorithms

Sorting algorithm   Average time    Best case       Worst case      Space       Stability
Bubble sort         O(n²)           O(n)            O(n²)           O(1)        Stable
Selection sort      O(n²)           O(n²)           O(n²)           O(1)        Unstable
Insertion sort      O(n²)           O(n)            O(n²)           O(1)        Stable
Shell sort          O(n㏒n)         O(n㏒²n)        O(n㏒²n)        O(1)        Unstable
Merge sort          O(n㏒n)         O(n㏒n)         O(n㏒n)         O(n)        Stable
Quick sort          O(n㏒n)         O(n㏒n)         O(n²)           O(㏒n)      Unstable
Counting sort       O(n + k)        O(n + k)        O(n + k)        O(k)        Stable
Heap sort           O(n㏒n)         O(n㏒n)         O(n㏒n)         O(1)        Unstable
Bucket sort         O(n + k)        O(n + k)        O(n²)           O(n + k)    Stable
Radix sort          O(n × k)        O(n × k)        O(n × k)        O(n + k)    Stable

Stability: Assume a=1, b=4, c=2, d=2, e=5. Sorting from smallest to largest yields two correct results:

  • a c d b e — c and d are equal; c comes before d in the original array and still comes before d after sorting, so the sort is stable.
  • a d c b e — c and d are equal; c comes before d in the original array but ends up after d once sorted, so the sort is unstable.
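This can be observed directly in Java: Arrays.sort on an object array uses a stable merge-based sort (TimSort), so equal keys keep their original relative order. A minimal sketch, where the "label:key" string encoding is purely illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StabilityDemo {
    // Sort "label:key" strings by their numeric key; Arrays.sort on objects is stable (TimSort)
    static String[] sortByKey(String[] items) {
        String[] copy = items.clone();
        Arrays.sort(copy, Comparator.comparingInt(s -> Integer.parseInt(s.split(":")[1])));
        return copy;
    }

    public static void main(String[] args) {
        // c and d share the key 2; a stable sort must keep c before d
        String[] sorted = sortByKey(new String[]{"a:1", "b:4", "c:2", "d:2", "e:5"});
        System.out.println(String.join(" ", sorted)); // a:1 c:2 d:2 b:4 e:5
    }
}
```

Because the sort is stable, the result matches the first (stable) ordering above: a c d b e.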

The following sorting algorithms are all demonstrated by sorting an array in ascending order.



2. Selection sort

Its running time is O(n²) no matter what the input looks like, so it is best suited to small data sets.

[Algorithm steps]

  1. Locate the smallest element in the unsorted sequence and store it at the beginning of the sorted sequence.
  2. Find the smallest element from the remaining unsorted elements and place it at the end of the sorted sequence.
  3. Repeat Step 2 until all elements are sorted.

[GIF Demo]

[Code implementation]

public class SelectionSort {
    public void sort(int[] sourceArray) {
        if (sourceArray.length <= 1) return;
        for (int i = 0; i < sourceArray.length-1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < sourceArray.length; j++) {
                // Records the index of the smallest element that can be found so far
                if (sourceArray[j] < sourceArray[minIndex]) minIndex = j;
            }

            if (i == minIndex) continue;
            // Swap the minimum found with the value at position i
            int tmp = sourceArray[i];
            sourceArray[i] = sourceArray[minIndex];
            sourceArray[minIndex] = tmp;
        }
    }
}

[Algorithm analysis]

Time complexity: The algorithm runs n-1 rounds in total; each round selects the minimum of the unsorted part and swaps it to the front, costing O(n), so the time complexity is O(n²).

Space complexity: The algorithm sorts in place and does not use any additional data structure, so the space complexity is O(1).

Stability: Unstable. When a sequence contains more than one element of equal value, selection sort can disrupt their original order. In the demo above, the green 5 originally comes before the orange 5, but because element 3 is swapped with the green 5 in the first round, the green 5 ends up after the orange 5.
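The instability is easy to reproduce with labeled duplicates. A minimal sketch, where the "key:label" encoding and the sample values echo the green/orange example and are purely illustrative:

```java
public class SelectionInstabilityDemo {
    // Selection sort over "key:label" strings, comparing only the numeric key
    static String[] selectionSort(String[] a) {
        String[] arr = a.clone();
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (key(arr[j]) < key(arr[minIndex])) minIndex = j;
            }
            String tmp = arr[i]; arr[i] = arr[minIndex]; arr[minIndex] = tmp;
        }
        return arr;
    }

    static int key(String s) { return Integer.parseInt(s.split(":")[0]); }

    public static void main(String[] args) {
        // In the first round, 3 is swapped with 5:green, pushing 5:green behind 5:orange
        String[] sorted = selectionSort(new String[]{"5:green", "8:x", "5:orange", "3:y", "9:z"});
        System.out.println(String.join(" ", sorted)); // 3:y 5:orange 5:green 8:x 9:z
    }
}
```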



3. Insertion sort

It works like sorting a hand of playing cards: it builds up an ordered sequence, and for each unsorted element it scans the sorted sequence from back to front to find the right position and inserts it there.

[Algorithm steps]

  1. The first element of the entire sequence to be sorted is regarded as an ordered sequence, and the second element to the last element is regarded as an unsorted sequence.
  2. Take the leftmost element of the unsorted sequence and keep a temporary copy of its value.
  3. Scan the sorted sequence from right to left: if the scanned element is greater than the temporary value, shift it one position to the right; otherwise stop and write the temporary value into the vacancy.
  4. Repeat steps 2 and 3 until all elements are sorted.

[GIF Demo]

[Code implementation]

public class InsertionSort {

    public void sort(int[] sourceArray) {
        if (sourceArray.length <= 1) return;
        // Start with the element with index 1 and work backwards
        for (int i = 1; i < sourceArray.length; i++) {
            // Save a copy temporarily to reduce the number of exchanges
            int tmp = sourceArray[i];
            int j;
            // Starting from the marked element, scan backwards
            for (j = i; j > 0 && sourceArray[j - 1] > tmp; j--) {
                // If the previous > temporary element, the current is overwritten with the previous data
                sourceArray[j] = sourceArray[j - 1];
            }
            if (i == j) continue;
            // Insert the temporary element into the appropriate position
            sourceArray[j] = tmp;
        }
    }
}

[Algorithm analysis]

Time complexity: The algorithm runs n-1 rounds in total, and each round of comparing and shifting costs O(n), so the time complexity is O(n²).

Space complexity: The algorithm sorts in place and does not use any additional data structure, so the space complexity is O(1).

Stability: Stable.



4. Merge sort

A very typical application of the classic divide-and-conquer idea.

  • Divide: recurse from the top down (note that excessive recursion depth can cause a stack overflow).
  • Conquer: merge from the bottom up.

[Algorithm steps]

  1. Allocate a space whose size is the sum of the two sorted sequences, to hold the merged sequence.
  2. Set two pointers, initially at the start of each sorted sequence.
  3. Compare the elements the two pointers point to, place the smaller one into the merge space, and advance its pointer.
  4. Repeat step 3 until one pointer reaches the end of its sequence.
  5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.

[GIF Demo]

[Code implementation]

import java.util.Arrays;

public class MergeSort {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        // Use recursion
        mergeSort(array, 0, length - 1);

        // Use the iterative method
        // for (int i = 1; i <= length; i += i) {
        //     for (int j = 0; j + i < length; j += i + i) {
        //         // Merge the ranges array[j, j+i-1] and array[j+i, j+i*2-1]
        //         merge(array, j, j + i - 1, Math.min(j + i + i - 1, length - 1));
        //     }
        // }
    }

    // Use recursion to merge array [l,r]
    private void mergeSort(int[] array, int l, int r) {
        if (l >= r) return;
        // Split into two small sets, recurse separately
        int mid = l + (r - l) / 2;
        mergeSort(array, l, mid);
        mergeSort(array, mid + 1, r);
        // If the maximum value of the set on the left does not exceed the minimum value of the set on the right, then the array is already sorted and no operation is required
        if (array[mid] <= array[mid + 1]) return;
        // Merge two ordered small sets into one large set
        merge(array, l, mid, r);
    }

    // Merge array [l,m] and array [m+1,r]
    private void merge(int[] array, int l, int m, int r) {
        // Create a temporary large set and initialize it as an unsorted array
        int[] tmpArray = Arrays.copyOfRange(array, l, r + 1);
        int p1 = l;     // The pointer points to the first element of the left range
        int p2 = m + 1; // The pointer points to the first element of the right-hand range
        // Iterate over [l,r], writing the smaller remaining element back into the original array each time
        for (int i = l; i <= r; i++) {
            if (p1 > m) {
                // if p1 is out of bounds, it means that all the elements on the left have been used up, so put the right set at the end of the large set
                array[i] = tmpArray[p2 - l];
                p2++;
            } else if (p2 > r) {
                // p2 is out of bounds, indicating that all elements on the right are used up, so put the left set at the end of the large set
                array[i] = tmpArray[p1 - l];
                p1++;
            } else if (tmpArray[p1 - l] < tmpArray[p2 - l]) {
                array[i] = tmpArray[p1 - l];
                p1++;
            } else {
                array[i] = tmpArray[p2 - l];
                p2++;
            }
        }
    }
}

[Algorithm analysis]

Time complexity: The recursion depth of the algorithm is ㏒n and the complexity of each round of sorting is O(n), so the time complexity is O(n㏒n).

Space complexity: Each round of merging uses an additional temporary structure of the same size as the range, so the space complexity is O(n).

Stability: Stable.



5. Quicksort

Another very typical application of the classic divide-and-conquer idea. Unlike merge sort, which always splits the range exactly in half, quicksort computes a suitable split point dynamically while partitioning.

[Algorithm steps]

  1. Pick a number from the sequence as the base element.
  2. Walk through the sequence, moving every number greater than the base to its right and every number less than or equal to the base to its left.
  3. Repeat steps 1 and 2 on the left and right intervals until every interval holds at most one element.

[Illustration]

Like bubble sort, the most basic operation of quicksort is the element swap:

private void swap(int[] array, int i1, int i2) {
    int tmp = array[i1];
    array[i1] = array[i2];
    array[i2] = tmp;
}

5.1. Lite version (no optimization)

[Code implementation]

public class QuickSort {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        quickSort(array, 0, array.length - 1);
    }

    // Quicksort the [l,r] range of the array
    private void quickSort(int[] array, int l, int r) {
        if (l >= r) return;
        // Find the split point in the middle
        int p = partition(array, l, r);
        // Quicksort the left and right sub-intervals recursively
        quickSort(array, l, p - 1);
        quickSort(array, p + 1, r);
    }

    // Find the base element's correct position within the [l,r] interval and partition around it
    private int partition(int[] array, int l, int r) {
        // Select the leftmost element as the base element
        int v = array[l];
        // j maintains the invariant: array[l+1, j] <= v and array[j+1, i-1] > v
        int j = l;
        // run through the [l,r] interval of the array
        for (int i = l + 1; i <= r; i++) {
            // Start with the second element on the left
            if (array[i] <= v) {
                // If the current value <= the base element
                // Swap the current value with the position after j, moving both the i and j pointers right
                swap(array, ++j, i);
            }
        }
        // Finally swap the base element with the element at j, which is its correct sorted position
        swap(array, l, j);
        // Returns the index value of the split point
        return j;
    }
}

[Algorithm analysis]

For completely random data, quicksort is faster than merge sort.

Random array, length: 1000000, max: 1000000

[MergeSort] 293.772 ms

[QuickSort] 180.077 ms

5.2. Basic version (random base element)

If the initial data is almost ordered (in the extreme case, already sorted), then because the leftmost element is always chosen as the base element, the left interval disappears after every partition and only a right interval remains. Each round of recursion places just one element, so the O(n㏒n) algorithm degenerates into an O(n²) one.

Random array, length: 1000000, max: 1000000, swapTimes: 10

[MergeSort] 75.397 ms

[QuickSort] 84.493 s (risks stack overflow; more than 1000× slower than merge sort)

[Solution]

Select the base element at random to avoid degenerating into O(n²).

[Code implementation]

public class QuickSort {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        quickSort(array, 0, array.length - 1);
    }

    // Quicksort the [l,r] range of the array
    private void quickSort(int[] array, int l, int r) {
        if (l >= r) return;

        // Find the split point in the middle
        int p = partition(array, l, r);
        // Quicksort the left and right sub-intervals recursively
        quickSort(array, l, p - 1);
        quickSort(array, p + 1, r);
    }

    // Find the base element's correct position within the [l,r] interval and partition around it
    private int partition(int[] array, int l, int r) {
        // 💡 add 2 lines of code. Select an element at random as the base element and swap places with the leftmost element
        int baseIndex = (int) (Math.random() * (r - l + 1) + l);
        swap(array, l, baseIndex);
        // Select the leftmost element as the base element
        final int v = array[l];
        // j maintains the invariant: array[l+1, j] <= v and array[j+1, i-1] > v
        int j = l;
        // run through the [l,r] interval of the array
        for (int i = l + 1; i <= r; i++) {
            // Start with the second element on the left
            if (array[i] <= v) {
                // If the current value <= the base element
                // Swap the current value with the position after j, moving both the i and j pointers right
                swap(array, ++j, i);
            }
        }
        // Finally swap the base element with the element at j, which is its correct sorted position
        swap(array, l, j);
        // Returns the index value of the split point
        return j;
    }
}

[Algorithm analysis]

This optimization performs much better on nearly ordered data. It is still slower than merge sort, though, because merge sort can skip merging entirely when the two intervals are already in order.

Random array, length: 1000000, max: 1000000, swapTimes: 10

[MergeSort] 54.817 ms

[QuickSort] 102.640 ms

5.3. Standard version (two-way quicksort)

If the initial data contains a large number of duplicates (in the extreme case, all elements are equal), then since every element equal to the base element is sent to one side (the problem is the same whether that side is the left or the right interval), the two intervals after partitioning are extremely unbalanced, and the O(n㏒n) algorithm again degenerates into an O(n²) one.

Random array, length: 1000000, max: 10

[MergeSort] 224.483 ms

[QuickSort] 39.529 s

[Solution]

Traverse the array from both ends toward the middle at the same time, spreading elements equal to the base element evenly across the two intervals. This prevents the left and right intervals from becoming unbalanced and avoids degenerating into O(n²).

[Key steps]

  1. The i pointer scans from front to back: while the current element < the base element, it keeps moving backward, stopping at the first element >= the base element.
  2. The j pointer scans from back to front: while the current element > the base element, it keeps moving forward, stopping at the first element <= the base element.
  3. If the two pointers have met, the traversal ends; otherwise swap the elements they point to and advance both pointers.
  4. Repeat steps 1, 2 and 3 until the loop ends.

[Code implementation]

public class QuickSort {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        quickSort(array, 0, array.length - 1);
    }

    // Quicksort the [l,r] range of the array
    private void quickSort(int[] array, int l, int r) {
        if (l >= r) return;

        // Find the split point in the middle
        int p = partition(array, l, r);
        // Quicksort the left and right sub-intervals recursively
        quickSort(array, l, p - 1);
        quickSort(array, p + 1, r);
    }

    // Find the base element's correct position within the [l,r] interval and partition around it
    private int partition(int[] array, int l, int r) {
        // Select an element at random as the base element and swap places with the leftmost element
        int baseIndex = (int) (Math.random() * (r - l + 1) + l);
        swap(array, l, baseIndex);
        // Select the leftmost element as the base element
        final int v = array[l];
        // i maintains the invariant: array[l+1, i-1] <= v
        int i = l + 1;  // Position l holds the base element, so start at l + 1
        // j maintains the invariant: array[j+1, r] >= v
        int j = r;

        while (true) {
            // The i pointer scans from front to back: skip elements < the base element, stopping at the first >= it
            while (i <= r && array[i] < v) i++;
            // The j pointer scans from back to front: skip elements > the base element, stopping at the first <= it
            while (j >= l + 1 && array[j] > v) j--;
            // If two Pointers meet, the traversal ends
            if (i >= j) break;
            // Swap the elements to which two Pointers point, moving the pointer at the same time
            swap(array, i++, j--);
        }
        // Finally swap the base element with the j pointer element, which is the correct position for the final sorting
        swap(array, l, j);
        // Returns the index value of the split point
        return j;
    }
}

[Algorithm analysis]

This optimization performs much better on heavily duplicated data, sometimes even beating merge sort.

Random array, length: 1000000, max: 10

[MergeSort] 217.552 ms

[QuickSort] 158.082 ms

5.4. Advanced version (three-way quicksort)

This is another optimization aimed at data with many duplicates, which would otherwise degrade the running time toward O(n²).

[Key steps]

  1. The i pointer scans from front to back, comparing each element with the base element:
    • If the current element = the base element, just move the i pointer right.
    • If the current element < the base element, swap the elements at lt+1 and i, then move both the i and lt pointers right.
    • If the current element > the base element, swap the elements at gt-1 and i, then move the gt pointer left.
  2. Repeat step 1 until the i and gt pointers meet, which ends the round of traversal.

[Code implementation]

public class QuickSort {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        quickSort3Ways(array, 0, array.length - 1);
    }

    // Quicksort the [l,r] range of the array
    private void quickSort3Ways(int[] array, int l, int r) {
        // Recursive termination condition
        if (l >= r) return;

        // Select an element at random as the base element and swap places with the leftmost element
        int baseIndex = (int) (Math.random() * (r - l + 1) + l);
        swap(array, l, baseIndex);

        // Select the leftmost element as the base element
        final int v = array[l];
        // lt maintains: array[l+1, lt] < v
        int lt = l;
        // gt maintains: array[gt, r] > v
        int gt = r + 1;
        // i maintains: array[lt+1, i-1] == v
        int i = l + 1;
        // Until I and gt meet, the loop ends
        while (i < gt) {
            if (array[i] < v) {
                // If the current element is < the base element, swap the elements to which the lt+1 and I Pointers point, and move the I and lt Pointers right
                swap(array, ++lt, i++);
            } else if (array[i] > v) {
                // If the current element > the base element, swap the elements pointed to by gT-1 and I, and move the GT pointer left
                swap(array, --gt, i);
            } else {
                // If the current element = the base element, just move the i pointer right
                i++;
            }
        }
        // Finally swap the base element with the last element of the left interval
        swap(array, l, lt);
        // Sort the other two intervals recursively
        quickSort3Ways(array, l, lt - 1);
        quickSort3Ways(array, gt, r);
    }
}

[Algorithm analysis]

Time complexity: the average time complexity is O(n㏒n). The worst case, where the base element is always the maximum or minimum of the range, degenerates to O(n²) like bubble sort; but because the base element is chosen at random, the worst case becomes extremely unlikely, so quicksort is regarded as O(n㏒n).

Space complexity: the algorithm sorts in place, which by itself would be O(1), but the recursive calls must save state on the stack, so the space complexity is O(㏒n).

Stability: unstable.

For large amounts of duplicate data, it outperforms both standard quicksort and merge sort.

Random array, length: 1000000, max: 10

[MergeSort] 172.234 ms

[QuickSort] 118.999 ms

[QuickSort3Ways] 39.054 ms

On completely random data, the three perform similarly.

Random array, length: 1000000, max: 1000000

[MergeSort] 255.119 ms

[QuickSort] 198.002 ms

[QuickSort3Ways] 250.287 ms

On nearly ordered data:

Random array, length: 1000000, max: 1000000, swapTimes: 100

[MergeSort] 267.889 ms

[QuickSort] 148.813 ms

[QuickSort3Ways] 164.961 ms

5.5. Iterative (non-recursive implementation)

Most recursive algorithms can be rewritten as iteration plus an explicit stack: recursion is itself just function stack frames being pushed and popped, so we convert the recursion into a stack that stores the parameters of each method call.

[Key steps]

  • Create a Stack object to simulate the recursive process and record the order of the calls.
  • Define a Frame class: each stack frame is represented by a Frame object that stores the argument list of one call.
  • Push the first, full-range call onto the stack.
  • Loop, popping frames from the stack, until the stack is empty.

[Code implementation]

import java.util.Stack;

public class QuickSortIteration {

    public void sort(int[] array) {
        final int length = array.length;
        if (length < 2) return;

        quickSortIteration(array, 0, array.length - 1);
    }

    // Quicksort the [l,r] range of the array
    // Use iterative implementation
    private void quickSortIteration(int[] array, int l, int r) {
        if (l >= r) return;

        // Use a Stack instead of a recursive function Stack
        Stack<Frame> quickSortStack = new Stack<>();
        // Use Frame to represent the stack Frame and make the first call to the stack
        Frame firstFrame = new Frame(l, r);
        quickSortStack.push(firstFrame);
        // The loop ends when the stack is empty
        while (!quickSortStack.isEmpty()) {
            // Fetch the top element of the stack
            Frame frame = quickSortStack.pop();
            // Get the method call parameter to find the location of the base element
            int p = partition2(array, frame.left, frame.right);
            // Push the left and right parts of the interval to the stack
            if (frame.left < p - 1) {
                quickSortStack.push(new Frame(frame.left, p - 1));
            }
            if (frame.right > p + 1) {
                quickSortStack.push(new Frame(p + 1, frame.right));
            }
        }
    }

    // Find the base element's correct position within the [l,r] interval and partition around it
    // Double pointer scanning
    private int partition2(int[] array, int l, int r) {
        // Select an element at random as the base element and swap places with the leftmost element
        int baseIndex = (int) (Math.random() * (r - l + 1) + l);
        swap(array, l, baseIndex);
        // Select the leftmost element as the base element
        final int v = array[l];
        // i maintains the invariant: array[l+1, i-1] <= v
        int i = l + 1;  // Position l holds the base element, so start at l + 1
        // j maintains the invariant: array[j+1, r] >= v
        int j = r;

        while (true) {
            // The i pointer scans from front to back: skip elements < the base element, stopping at the first >= it
            while (i <= r && array[i] < v) i++;
            // The j pointer scans from back to front: skip elements > the base element, stopping at the first <= it
            while (j >= l + 1 && array[j] > v) j--;
            // If two Pointers meet, the traversal ends
            if (i >= j) break;
            // Swap the elements to which two Pointers point, moving the pointer at the same time
            swap(array, i++, j--);
        }
        // Finally swap the base element with the j pointer element, which is the correct position for the final sorting
        swap(array, l, j);
        // Returns the index value of the split point
        return j;
    }

    // Emulates the stack frame to hold the argument list of the method call
    private static class Frame {
        public int left;
        public int right;

        public Frame(int left, int right) {
            this.left = left;
            this.right = right;
        }
    }
}

[Algorithm analysis]

There is little difference in performance between iterative and recursive implementations of quicksort.

Random array, length: 1000000, max: 1000000

[MergeSort] 255.983 ms

[QuickSort] 210.200 ms

[QuickSort3Ways] 183.901 ms

[QuickSortIteration] 257.451 ms



6. Heap sort

Heap sort is a sorting algorithm built on the heap data structure.

6.1. Binary heap concept

Comics: What is a binary heap?

Binary heap is a special kind of heap, which is a complete binary tree or nearly complete binary tree. There are two types of binary heap:

  • Maximum heap: parent >= children, and the top of the heap is the largest element in the whole heap.
  • Minimum heap: parent <= children, and the top of the heap is the smallest element in the entire heap.

[Applicability]

Implemented with an ordinary (unsorted) array or a sorted array, one of the two priority-queue operations costs O(n) in the worst case; a heap makes both enqueue and dequeue efficient.

Data structure    Enqueue     Dequeue
Ordinary array    O(1)        O(n)
Sorted array      O(n)        O(1)
Heap              O(㏒n)      O(㏒n)
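As a sketch of the table above, Java's built-in java.util.PriorityQueue is exactly such a heap (a binary min heap), so repeatedly dequeuing yields the elements in ascending order:

```java
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    // Enqueue all values, then drain the min heap; poll() always returns the smallest remaining element
    static String drainSorted(int[] values) {
        // PriorityQueue is a binary min heap: offer() and poll() both cost O(log n)
        PriorityQueue<Integer> queue = new PriorityQueue<>();
        for (int n : values) {
            queue.offer(n); // enqueue: the new element floats up
        }
        StringBuilder sb = new StringBuilder();
        while (!queue.isEmpty()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(queue.poll()); // dequeue: the replacement heap-top element sinks back into place
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(drainSorted(new int[]{5, 1, 4, 2, 3})); // 1 2 3 4 5
    }
}
```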

[Insert node]

  1. Insert the new node M at the last position of the complete binary tree.

  2. Compare M with its parent:

    • If M is smaller than its parent (min heap) or larger than its parent (max heap), swap them.
    • If they are equal, or M has no parent, the current position is M's final position.
  3. Repeat step 2 until node M finds its final position.

[Delete node]

  1. Remove the root node (heap top) of the complete binary tree and move the node M in the last position to the top of the heap.
  2. Compare node M with its children:
    • If M is smaller than (max heap) or larger than (min heap) one of its children, swap it with that child.
    • If both children qualify, swap with the larger child (max heap) or the smaller child (min heap).
    • If they are equal, or M has no children, the current position is M's final position.
  3. Repeat step 2 until node M finds its final position.

[Build binary heap]

Building a heap turns a completely unordered complete binary tree into a binary heap; in essence, every non-leaf node is sunk in turn.

  1. Start from the last non-leaf node.
  2. Compare this node with its children:
    • If it is smaller than (max heap) or larger than (min heap) one of its children, swap it with that child.
    • If both children qualify, swap with the larger child (max heap) or the smaller child (min heap).
    • If they are equal, or the node has no children, the current position is the node's temporary position.
  3. Repeat step 2 until all non-leaf nodes have been processed.

[Code implementation]

Although binary heap is a complete binary tree, its storage mode is not chain storage, but sequential storage, namely array.

So if a node's index is parent and the total number of nodes is count, then:

  • the index of its parent is (parent - 1) / 2
  • the index of its left child is 2 * parent + 1
  • the index of its right child is 2 * parent + 2
  • the index of the last non-leaf node is (count - 1) / 2
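These index formulas can be sanity-checked with a small sketch (the helper class and sample arrays are illustrative, not part of the heap implementation):

```java
public class HeapIndexDemo {
    // Verify the max heap property using the array index formulas
    static boolean isMaxHeap(int[] data, int count) {
        // Only nodes up to the last non-leaf node, index (count - 1) / 2, can have children
        for (int parent = 0; parent <= (count - 1) / 2; parent++) {
            int left = 2 * parent + 1;  // left child index
            int right = 2 * parent + 2; // right child index
            if (left < count && data[left] > data[parent]) return false;
            if (right < count && data[right] > data[parent]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isMaxHeap(new int[]{9, 7, 8, 2, 5, 6}, 6));  // true
        System.out.println(isMaxHeap(new int[]{9, 7, 8, 2, 5, 10}, 6)); // false: 10 > its parent 8
    }
}
```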
import java.util.Arrays;

public class MaxHeap {

    private final int[] data;   // Data storage structure
    private final int capacity; // Rated capacity
    private int count = 0;      // Real-time quantity

    public MaxHeap(int capacity) {
        this.capacity = capacity;
        this.data = new int[capacity];
    }

    public MaxHeap(int[] initData) {
        this.capacity = initData.length;
        this.data = Arrays.copyOf(initData, initData.length);
        this.count = initData.length;
        // the last non-leaf node :(count-1) / 2
        // Start sinking from the last non-leaf node
        for (int i = (this.count - 1) / 2; i >= 0; i--) {
            shiftDown(i);
        }
    }

    // Insert a new element
    public void insert(int element) {
        if (this.count >= this.capacity) {
            throw new IllegalStateException("The maxHeap is full");
        }

        this.count++;
        // Put the new element in the last place
        this.data[this.count - 1] = element;
        // Float up the last element
        shiftUp(this.count - 1);
    }

    // Pop up the top of the heap element
    public int popMax() {
        if (this.count == 0) {
            throw new IllegalStateException("The maxHeap is empty");
        }

        // Store the top element temporarily
        final int root = data[0];
        // Move the element in the last position to the top of the heap
        data[0] = data[--count];
        // Perform a sink operation on the current heap top element
        shiftDown(0);
        return root;
    }

    // Float the element
    private void shiftUp(int index) {
        // parent :(index-1) / 2
        // loop condition: the current node has a parent and the current node > the parent
        while (index > 0 && this.data[index] > this.data[(index - 1) / 2]) {
            // Switch places with the parent
            swap(this.data, (index - 1) / 2, index);
            // Update the new index of the current node
            index = (index - 1) / 2;
        }
    }

    // Element sink operation
    private void shiftDown(int index) {
        // Left child: 2 * index + 1
        // Right child: 2 * index + 2
        // Loop condition: the current node has a left child (in a complete binary tree, a node with no left child has no right child either)
        while (2 * index + 1 < this.count) {
            int j = 2 * index + 1;  // Switch to the left child by default
            if (2 * index + 2 < this.count && data[2 * index + 1] < data[2 * index + 2]) {
                // If there is a right child, and the right child > the left child
                j = 2 * index + 2;
            }
            if (data[index] >= data[j]) break;

            // The current node < the child to be swapped: swap them and update the index
            swap(data, index, j);
            index = j;
        }
    }

    private void swap(int[] array, int i1, int i2) {
        int tmp = array[i1];
        array[i1] = array[i2];
        array[i2] = tmp;
    }
}

6.2. Simple version (using maximum heap)

Each time the top of a maximum heap is popped, it is the largest element remaining, so the sequence of popped elements is naturally ordered.

[Code Implementation 1]

public class HeapSort1 {

    public void sort(int[] array) {
        MaxHeap maxHeap = new MaxHeap(array.length);
        // Add elements one by one to the maximum heap
        for (int j : array) {
            maxHeap.insert(j);
        }
        // Loop to fetch the top element from the maximum heap
        for (int i = array.length - 1; i >= 0; i--) {
            array[i] = maxHeap.popMax();
        }
    }
}

[Code Implementation 2]

public class HeapSort2 {

    public void sort(int[] array) {
        // Add all elements to the heap at once
        MaxHeap maxHeap = new MaxHeap(array);
        // Loop to fetch the top element from the maximum heap
        for (int i = array.length - 1; i >= 0; i--) {
            array[i] = maxHeap.popMax();
        }
    }
}

[Algorithm analysis]

  • Method 1: insert the elements into the maximum heap one at a time, letting the heap re-adjust after each insertion; this takes O(n㏒n).
  • Method 2: build the maximum heap from all elements at once (heapify); this takes O(n).
  • Either way, the overall time complexity of this heap sort is O(n㏒n); since the original array must be copied, the space complexity is O(n).

Random array, length: 1000000, max: 1000000

[MergeSort] 319.844 ms

[QuickSort] 217.748 ms

[QuickSort3Ways] 245.090 ms

[QuickSortIteration] 305.352 ms

[HeapSort1] 280.493 ms

[HeapSort2] 169.068 ms

6.3. Standard version (in-place heap sort)

By using the original array directly as the heap's storage structure, no copy of the array is needed, so the space complexity drops to O(1).

[Algorithm diagram]

[Code implementation]

public class HeapSort {

    public void sort(int[] array) {
        // Start sinking from the last non-leaf node
        for (int i = (array.length - 1) / 2; i >= 0; i--) {
            shiftDown(array, i, array.length);
        }
        // When finished, the raw data has been arranged in the largest heap
        // Loop out the top elements of the heap, each time one is removed, the sink is performed again
        for (int count = array.length - 1; count > 0; count--) {
            swap(array, 0, count);
            shiftDown(array, 0, count);
        }
    }

    // Element sink operation
    private void shiftDown(int[] array, int index, int count) {
        // Left child: 2 * index + 1
        // Right child: 2 * index + 2
        // Loop condition: the current node has a left child (in a complete binary tree, a node with no left child has no right child either)
        while (2 * index + 1 < count) {
            int j = 2 * index + 1;  // Switch to the left child by default
            if (j + 1 < count && array[j] < array[j + 1]) {
                // If there is a right child, and the right child > the left child
                j = j + 1;
            }
            if (array[index] >= array[j]) break;

            // The current node < the child to be swapped: swap them and update the index
            swap(array, index, j);
            index = j;
        }
    }

    private void swap(int[] array, int i1, int i2) {
        int tmp = array[i1];
        array[i1] = array[i2];
        array[i2] = tmp;
    }
}

[Algorithm analysis]

Time complexity: when building the binary heap from the unordered array, each sink adjustment costs at most the heap height, O(㏒n); repeatedly popping the heap top and sinking again costs O(n㏒n); together the two phases are O(n㏒n).

Space complexity: The algorithm sorts in place and does not use any additional data structure, so the space complexity is O(1).

Stability: unstable.

Random array, length: 1000000, max: 1000000

[MergeSort] 272.868 ms

[QuickSort] 237.082 ms

[QuickSort3Ways] 189.954 ms

[QuickSortIteration] 232.458 ms

[HeapSort] 208.720 ms