1. Bubble sort algorithm

1.1 Algorithm idea

The basic idea of Bubble Sort is:

On the i-th pass (i = 1, 2, …), walk through the first n - i + 1 elements of the sequence from the beginning, comparing each pair of adjacent elements. If the former is greater than the latter, swap them; otherwise leave them in place.

1.2 Algorithm Steps

  1. Compare the 1st element of the sequence with the 2nd; if the former is greater than the latter, swap them, otherwise leave them as they are.
  2. Then compare the 2nd element with the 3rd in the same way.
  3. Continue like this until the (n - 1)-th element is compared with the n-th. After this pass, the element with the largest value has been placed at the n-th position.
  4. Then perform the same pass over the first n - 1 elements, so that the largest of those ends up at the (n - 1)-th position.
  5. Then repeat the process over the first n - 2 elements, and so on, until a pass in which no elements change position; the sequence is then sorted.

1.3 Animation Demonstration

1.4 Algorithm Analysis

  • In the best case, the initial sequence is already ordered from smallest to largest (ascending): a single pass of n - 1 comparisons, with no element moves, finishes the sort. In this case, the time complexity of the algorithm is O(n).
  • The worst case is an initial sequence in reverse order, or one whose smallest element sits at the end: all n - 1 passes are needed, and the total number of comparisons is
    \sum_{i=2}^{n}(i - 1) = \frac{n(n-1)}{2}
    Therefore, the average time complexity of the bubble sort algorithm is O(n^2).
  • Bubble sort moves elements many times during sorting, so it is better suited to short sequences, especially ones whose initial state is already nearly ordered. For the general case it is among the least time-efficient sorting methods.
  • Since exchanges only occur between adjacent elements and never change the relative positions of elements with equal values, bubble sort is a stable sorting method.

1.5 Code Implementation

from typing import List

class Solution:
    def bubbleSort(self, arr):
        for i in range(len(arr)):
            # If the former is greater than the latter, the two swap places
            for j in range(len(arr) - i - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]

        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.bubbleSort(nums)
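The implementation above always runs the full set of passes, even when the array becomes sorted early. The O(n) best case described in 1.4 requires stopping once a pass makes no swaps; a sketch of that variant (the function name is ours, not from the original):

```python
def bubble_sort_early_exit(arr):
    """Bubble sort with an early-exit flag: stops as soon as a full
    pass makes no swaps, giving O(n) time on already-sorted input."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:          # no adjacent pair was out of order
            break
    return arr

print(bubble_sort_early_exit([5, 2, 3, 1, 4]))  # [1, 2, 3, 4, 5]
```

On an already-ascending input the inner loop runs once, makes no swaps, and the `break` fires immediately.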

2. Selection sort algorithm

2.1 Algorithm idea

Basic idea of Selection Sort:

On the i-th pass (i = 1, 2, …, n - 1), select the element with the smallest value among the last n - i + 1 elements and swap it with the first of those n - i + 1 elements, i.e., with the element at position i of the whole sequence. Repeat until i == n - 1, at which point the sort is finished.

It can be summarized as: in each sort, select the smallest element from the remaining unsorted elements and swap places with the first element of the unsorted elements.

2.2 Algorithm Steps

  1. The algorithm uses an integer variable i, which both counts the sorting passes and, during the i-th pass, marks the position of the 1st of the last n - i + 1 elements.
  2. An integer variable min_i records the position of the smallest-valued element among those n - i + 1 elements.
  3. Each pass starts with min_i = i (i.e., tentatively assume the i-th element of the sequence is the smallest; the true position of the minimum is settled by the comparisons that follow).
  4. When the comparisons of the i-th pass are finished, the smallest of the n - i + 1 elements is the one at index min_i. If at that point min_i == i, the smallest element is already the 1st of the n - i + 1 elements, meaning no swap is needed on this pass.

2.3 Animation Demonstration

2.4 Algorithm Analysis

Selection sort of a sequence of n elements requires n - 1 passes.

  • When the original sequence is already ascending, the number of element moves is minimal: 0. When the sequence starts in descending (reverse) order, the number of moves is maximal:
    3(n - 1)
    (the 3 counts the assignments performed by each swap of arr[i] and arr[min_i]).
  • But no matter what the initial state of the elements is, the i-th pass always finds the smallest element using n - i comparisons. The total number of comparisons over the whole sort is therefore always
    \sum_{i=2}^{n}(i - 1) = \frac{n(n-1)}{2}
  • This shows that the number of comparisons performed by selection sort has nothing to do with the original state of the sequence, and the time complexity of the algorithm is O(n^2).
  • Because the smallest remaining element is swapped with the first of the unsorted elements, and this exchange takes place between non-adjacent elements, the relative positions of elements with equal values may change. Selection sort is therefore an unstable sorting method.

2.5 Code Implementation

from typing import List

class Solution:
    def selectionSort(self, arr):
        for i in range(len(arr) - 1):
            # Record the index of the smallest number in the unsorted sequence
            min_i = i
            for j in range(i + 1, len(arr)):
                if arr[j] < arr[min_i]:
                    min_i = j
            # If a smaller number was found, swap the element at position i
            # with the element at position min_i
            if i != min_i:
                arr[i], arr[min_i] = arr[min_i], arr[i]
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.selectionSort(nums)
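The instability noted in 2.4 can be demonstrated on (key, tag) pairs, comparing keys only. In this standalone sketch (the data and function name are hypothetical), the two equal keys swap their relative order:

```python
def selection_sort_pairs(pairs):
    """Selection sort on (key, tag) pairs, comparing keys only."""
    for i in range(len(pairs) - 1):
        min_i = i
        for j in range(i + 1, len(pairs)):
            if pairs[j][0] < pairs[min_i][0]:
                min_i = j
        if i != min_i:
            pairs[i], pairs[min_i] = pairs[min_i], pairs[i]
    return pairs

# The first pass swaps (3, 'a') with (1, 'c'), so 'a' jumps behind 'b'
# even though both keys are 3.
print(selection_sort_pairs([(3, 'a'), (3, 'b'), (1, 'c')]))
# [(1, 'c'), (3, 'b'), (3, 'a')]
```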

3. Insertion sort algorithm

3.1 Algorithm Idea

The basic idea of Insertion Sort:

The whole sequence is divided into two parts: the first i - 1 elements form an ordered subsequence, and the last n - i + 1 elements form an unordered subsequence. On each pass, the first element of the unordered subsequence is inserted into its proper place in the ordered subsequence.

It can be summarized as: in each sort, the first element of the remaining unordered sequence is inserted into the appropriate position of the ordered sequence.

3.2 Algorithm Steps

  1. Treat the 1st element as the ordered subsequence, and elements 2 through n as the unordered subsequence.
  2. Take the first element from the unordered subsequence and scan the already-ordered subsequence from back to front.
  3. If a scanned element is larger than the taken element, move it one position to the right, and keep scanning forward until reaching an element in the ordered subsequence that is less than or equal to the taken element.
  4. Insert the taken element into that position.
  5. Repeat steps 2 to 4 until all elements are in the ordered subsequence.
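A single iteration of steps 2 to 4 can be sketched as follows (the helper and the sample data are hypothetical, for illustration only):

```python
def insert_step(arr, i):
    """Insert arr[i] into the ordered prefix arr[:i] (steps 2-4 above)."""
    temp = arr[i]                        # step 2: take the element
    j = i
    while j > 0 and arr[j - 1] > temp:   # step 3: shift larger elements right
        arr[j] = arr[j - 1]
        j -= 1
    arr[j] = temp                        # step 4: insert at the found position
    return arr

print(insert_step([1, 4, 6, 2], 3))  # [1, 2, 4, 6]
```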

3.3 Animation Demonstration

3.4 Algorithm Analysis

  • For a sequence of n elements, insertion sort performs n - 1 passes in total.
  • The whole sorting process requires only one auxiliary variable, temp.
  • When the original sequence is already ascending, each value of i requires only one comparison between elements, so the total number of comparisons is minimal:
    \sum_{i=2}^{n} 1 = n - 1
    and no elements (records) need to move; this is the best case.
  • In the worst case, the sequence starts in descending (reverse) order: each value of i requires i - 1 comparisons, and the total number of comparisons reaches its maximum:
    \sum_{i=2}^{n}(i - 1) = \frac{n(n-1)}{2}
  • If the initial state of the sequence is random, i.e., every permutation of the elements is equally likely, the number of comparisons is roughly the average of the minimum and maximum above, about n^2/4. Thus, the time complexity of the insertion sort algorithm is O(n^2).
  • Insertion sort is a stable sorting method.

3.5 Code Implementation

from typing import List

class Solution:
    def insertionSort(self, arr):
        for i in range(1, len(arr)):
            # temp is the first element taken from the unordered subsequence
            temp = arr[i]
            j = i
            # Shift larger scanned elements right to find the insertion position
            while j > 0 and arr[j - 1] > temp:
                arr[j] = arr[j - 1]
                j -= 1
            # Insert the taken element into the found position
            arr[j] = temp

        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.insertionSort(nums)

4. Shell sort algorithm

4.1 Algorithm Idea

The basic idea of Shell Sort:

Divide the whole sequence into several subsequences according to a chosen gap (interval) value and insertion-sort each subsequence separately. Then gradually shrink the gap, re-partition the sequence, and sort again; the final pass, with gap 1, is an insertion sort over the entire sequence.

4.2 Algorithm Steps

  1. First determine an element gap value gap; all elements whose positions differ by gap form one subsequence, so the sequence participating in the sort splits into gap subsequences, and insertion sort is carried out within each subsequence.
  2. Then reduce the gap, re-partition the whole sequence into subsequences according to the new gap, and sort each subsequence again; repeat until gap = 1.
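The gap sequence produced by the halving scheme used in 4.5 can be sketched as follows (the helper name is ours):

```python
def shell_gaps(n):
    """Gap sequence from 4.4: start at n // 2 and halve down to 1."""
    gaps = []
    gap = n // 2
    while gap > 0:
        gaps.append(gap)
        gap //= 2
    return gaps

print(shell_gaps(16))  # [8, 4, 2, 1]
# With gap = 4, positions {0, 4, 8, 12}, {1, 5, 9, 13}, ... each form one
# subsequence that is insertion-sorted on its own.
```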

4.3 Graphic Demonstration

4.4 Algorithm Analysis

  • The speed of Shell sort depends on the chosen gap sequence
    gap_i
    . It is difficult to work out how many comparisons a given gap sequence requires, and no complete mathematical analysis is available.
  • The algorithm above shrinks the gap by
    gap_i = \lfloor gap_{i-1}/2 \rfloor
    . For a sequence of n elements, if
    gap_1 = \lfloor n/2 \rfloor
    , then after
    p = \lfloor \log_2 n \rfloor
    rounds we reach
    gap_p = 1
    , so Shell sort performs
    \lfloor \log_2 n \rfloor
    rounds of sorting in total.
  • It can also be seen from the algorithm that the outermost while loop runs on the order of
    \log_2 n
    times and the middle loop on the order of n times. When there are many subsequences, each one holds few elements and the innermost loop is short; as the number of subsequences falls, each subsequence grows, but the whole sequence is then closer to ordered, so the loop count does not grow. The time complexity of Shell sort therefore lies between
    O(n \log_2 n)
    and
    O(n^2)
    .
  • Shell sort is an unstable sorting algorithm.

4.5 Code Implementation

from typing import List

class Solution:
    def shellSort(self, arr):
        size = len(arr)
        gap = size // 2

        while gap > 0:
            # Sort each subsequence separately
            for i in range(gap, size):
                temp = arr[i]
                j = i
                while j >= gap and arr[j - gap] > temp:
                    arr[j] = arr[j - gap]
                    j -= gap
                arr[j] = temp
            # Reduce the number of intervals
            gap = gap // 2
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.shellSort(nums)

5. Merge sort algorithm

5.1 Algorithm Idea

The basic idea of Merge Sort:

Using the classic divide-and-conquer strategy, recursively split the current sequence into two halves, then merge the sorted halves pairwise into progressively longer ordered sequences.

5.2 Algorithm Steps

  1. Initially, treat the sequence of n elements to be sorted as n ordered subsequences (any subsequence of one element is ordered), each of length 1.
  2. Merge the ordered subsequences in the current group pairwise; this halves the number of ordered subsequences and doubles the length of each.
  3. Repeat the pairwise merging on the doubled-length ordered subsequences, until a single ordered sequence of length n is obtained.

5.3 Animation Demonstration

5.4 Algorithm Analysis

  • The time complexity of merge sort is the number of merge rounds times the cost of each round. The helper merge(left_arr, right_arr) runs in O(n) per round, and there are
    \log_2 n
    rounds, so the total time complexity of the merge sort algorithm is
    O(n \log_2 n)
    .
  • Merge sort requires auxiliary space of the same size as the sequence being sorted. The space complexity of the algorithm is therefore O(n).
  • When two ordered subsequences are merged and the same value appears in both, merge(left_arr, right_arr) copies the element from the earlier (left) subsequence first, so the relative order of the equal elements does not change. Merge sort is therefore a stable sorting algorithm.

5.5 Code Implementation

from typing import List

class Solution:
    # Merge two sorted arrays into one
    def merge(self, left_arr, right_arr):
        arr = []
        while left_arr and right_arr:
            if left_arr[0] <= right_arr[0]:
                arr.append(left_arr.pop(0))
            else:
                arr.append(right_arr.pop(0))
        while left_arr:
            arr.append(left_arr.pop(0))
        while right_arr:
            arr.append(right_arr.pop(0))
        return arr
       
    def mergeSort(self, arr):
        # segmentation
        size = len(arr)
        if size < 2:
            return arr
        mid = len(arr) // 2
        left_arr, right_arr = arr[0: mid], arr[mid:]
        # Recursively shard and merge
        return self.merge(self.mergeSort(left_arr), self.mergeSort(right_arr))

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.mergeSort(nums)
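One caveat: `left_arr.pop(0)` removes from the front of a Python list in O(n) time, so each merge pass can degrade toward quadratic cost. A hedged alternative sketch that merges by index instead (the function names are ours, not from the original):

```python
def merge(left_arr, right_arr):
    """Index-based merge: O(len(left) + len(right)) per call."""
    arr = []
    i = j = 0
    while i < len(left_arr) and j < len(right_arr):
        if left_arr[i] <= right_arr[j]:   # <= keeps equal elements stable
            arr.append(left_arr[i])
            i += 1
        else:
            arr.append(right_arr[j])
            j += 1
    arr.extend(left_arr[i:])              # at most one of these is non-empty
    arr.extend(right_arr[j:])
    return arr

def merge_sort(arr):
    if len(arr) < 2:
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

This keeps the merge step linear, so the overall bound matches the O(n log n) analysis in 5.4.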

6. Quicksort algorithm

6.1 Algorithm Idea

The basic idea of Quick Sort:

One pass of sorting partitions the unordered sequence into two independent subsequences such that every value in the first is smaller than every value in the second. The two subsequences are then sorted recursively until the entire sequence is ordered.

6.2 Algorithm Steps

  1. Pick a pivot (base number) from the array.
  2. Partition the array into left and right parts by moving elements larger than the pivot to the right and elements smaller than the pivot to the left.
  3. Repeat the second step on the left and right parts separately until each part contains at most one element; the sort is then finished.
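Step 2 corresponds to the Lomuto partition scheme used by the code in 6.5; a standalone trace on hypothetical sample data:

```python
# Lomuto partition: pivot = last element; elements <= pivot end up left of it.
def partition(arr, low, high):
    i = low - 1
    pivot = arr[high]
    for j in range(low, high):
        if arr[j] <= pivot:          # grow the "small" region
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # drop the pivot into place
    return i + 1

arr = [3, 8, 2, 5, 4]
p = partition(arr, 0, len(arr) - 1)
print(p, arr)  # 2 [3, 2, 4, 5, 8] -- pivot 4 lands at index 2
```

Everything left of index `p` is ≤ 4 and everything right of it is > 4, so the two halves can be sorted independently.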

6.3 Animation Demonstration

6.4 Algorithm Analysis

  • Quicksort takes longest when the elements are already ordered at the start. The first pass makes n - 1 comparisons only to fix the first element in its original position, leaving a subsequence of length n - 1; the second pass makes n - 2 comparisons to fix the second element, leaving a subsequence of length n - 2; and so on. The total number of comparisons is
    (n-1) + (n-2) + \dots + 1 = \frac{n(n-1)}{2}
    so the time complexity in this case is O(n^2).

  • In the other extreme, if each pass places the pivot exactly in the middle of the sequence, splitting the current sequence into two subsequences of equal size, then the time T(n) needed to quicksort a sequence of length n satisfies:

    T(n) \le n + 2T(n/2) \le 2n + 4T(n/4) \le 3n + 8T(n/8) \le \dots \le (\log_2 n)n + nT(1) = O(n \log_2 n)

Therefore, the average time complexity of the quicksort method is O(n \log_2 n), which is better than the sorting algorithms discussed above.

  • Whether or not quicksort is written recursively, a stack or similar auxiliary structure is needed to store the bounds of the subsequences still to be sorted. In the worst case this space is O(n).

  • If the algorithm is modified to compare the lengths of the two subsequences after each pass and quicksort the shorter subsequence first, the required space complexity can be reduced to O(\log_2 n).

  • Quicksort is an unstable sorting algorithm, and it is also ill-suited to linked-list structures.

6.5 Code Implementation

import random
from typing import List


class Solution:
    # Select the pivot at random and move elements to the correct side of it
    def randomPartition(self, arr: [int], low: int, high: int):
        i = random.randint(low, high)
        arr[i], arr[high] = arr[high], arr[i]
        return self.partition(arr, low, high)

    # Take the last (high) element as the pivot and partition around it
    def partition(self, arr: [int], low: int, high: int):
        i = low - 1
        pivot = arr[high]

        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1
    
    # Sort recursively
    def quickSort(self, arr, low, high):
        if low < high:
            pi = self.randomPartition(arr, low, high)
            self.quickSort(arr, low, pi - 1)
            self.quickSort(arr, pi + 1, high)

        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.quickSort(nums, 0, len(nums) - 1)

7. Heap sort algorithm

7.1 Algorithm Idea

The basic idea of Heap Sort:

Design the sort around the "heap" structure: build the array into a max heap, then repeatedly remove the node with the largest value from the heap while keeping the remaining elements a max heap.

7.2 Definition of heap

A heap is a complete binary tree that satisfies one of the following two conditions:

  • Max heap: every node's value is ≥ its children's values.
  • Min heap: every node's value is ≤ its children's values.

7.3 Algorithm Procedure

  1. First, build the unordered sequence into an initial max heap, so that the largest of the n elements sits at the 1st position of the sequence.
  2. Then swap the 1st element of the sequence (the maximum) with the last element.
  3. After that, re-adjust the first n - 1 elements of the sequence into a new max heap; this brings the second-largest element to the 1st position. Swap the 1st element with the (n - 1)-th element.
  4. Then re-adjust the first n - 2 elements into a new max heap, and so on, until the whole sequence becomes ordered.

It can be seen that the heap sort algorithm rests on two procedures: building the initial heap and adjusting the heap.

7.3.1 Heap adjustment method

The heap adjustment method is to reconstruct the sequence of remaining elements after removing the maximum element into a new heap. The specific steps are as follows:

  • Starting at the root node, adjust node positions top-down so the tree becomes a heap again: swap node i with whichever of its left child (index 2 * i) and right child (index 2 * i + 1) holds the larger value.
  • Because of the swap, the heap property of the subtree rooted at the swapped child may be broken, so continue the same top-down adjustment starting from that child.
  • Repeat until the whole binary tree is a max heap.

7.3.2 Initial heap establishment method

  • Suppose the complete binary tree corresponding to the original sequence (not necessarily a heap) has depth d. Start from the rightmost branch node of level d - 1, whose index is
    \lfloor n/2 \rfloor
    : set
    i = \lfloor n/2 \rfloor
    and call the heap adjustment procedure.
  • After each call to the heap adjustment procedure, set i = i - 1 and call it again, until i == 1; the original sequence has then been adjusted into an initial heap.
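Note that the description above uses 1-based indices (children at 2 * i and 2 * i + 1, first branch node at ⌊n/2⌋), while the code in 7.6 indexes from 0. A small sketch of the corresponding 0-based formulas (the helper names are ours):

```python
def children(i):
    """0-based children of node i (1-based: 2i and 2i + 1)."""
    return 2 * i + 1, 2 * i + 2

def last_non_leaf(n):
    """0-based index of the last branch node of an n-element heap
    (the 0-based counterpart of floor(n / 2) in the 1-based description)."""
    return (n - 2) // 2

print(children(0))        # (1, 2)
print(last_non_leaf(10))  # 4 -- node 4's left child is node 9, the last leaf
```

This is why 7.6 starts heap construction from `(size - 2) // 2`.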

7.4 Animation Demonstration

7.4.1 Demonstration of heap adjustment method

7.4.2 Demonstration of initial heap establishment method

7.4.3 Demonstration of heap sort method

7.5 Algorithm Analysis

  • The time of heap sorting is mainly spent in two aspects:
    • The original sequence is constructed as an initial heap.
    • The sorting process continually removes the maximum element and then rearranges the remaining elements into a new heap.
  • Suppose the depth of the complete binary tree corresponding to the original sequence is d. The algorithm consists of two independent loops:
    • The first loop builds the initial heap: from level i = d - 1 up to level i = 1, the adjust procedure is called once for each branch node. Each call can move a node of level i at most down to level d, i.e., a maximum distance of d - i.
    • Level i holds at most
      2^{i-1}
      nodes, so each round of adjust calls moves nodes a total distance of at most
      2^{i-1} \times (d-i)
      . The time of the first loop is therefore bounded by the sum, over all levels, of the number of nodes on the level times the maximum distance those nodes can move:
      \sum_{i=d-1}^{1} 2^{i-1}(d-i) = \sum_{j=1}^{d-1} 2^{d-j-1} \times j = \sum_{j=1}^{d-1} 2^{d-1} \times \frac{j}{2^j} \le n \sum_{j=1}^{d-1} \frac{j}{2^j} < 2n
      so this part takes
      O(n)
      time.
    • In the second loop, each call to adjust can move a node at most the depth of the complete binary tree,
      d = \lfloor \log_2 n \rfloor + 1
      , and adjust is called n - 1 times in total, so the time cost of the second loop is
      (n-1)(\lfloor \log_2 n \rfloor + 1) = O(n \log_2 n)
      .
  • Therefore, the time complexity of heap sort is O(n \log_2 n).
  • Since heap sort needs only one record-sized auxiliary variable, the space complexity of heap sort is O(1).
  • Heap sort is an unstable sorting algorithm, and it is also unsuited to implementation on linked lists.

7.6 Code Implementation

from typing import List

class Solution:
    # Adjust the subtree rooted at index into a max heap
    def heapify(self, arr: [int], index: int, end: int):
        left = index * 2 + 1
        right = left + 1
        while left <= end:
            # The current node is a non-leaf node
            max_index = index
            if arr[left] > arr[max_index]:
                max_index = left
            if right <= end and arr[right] > arr[max_index]:
                max_index = right
            if index == max_index:
                # if no exchange is required, the exchange is complete
                break
            arr[index], arr[max_index] = arr[max_index], arr[index]
            # Continue to adjust the subtree
            index = max_index
            left = index * 2 + 1
            right = left + 1

    # Initialize the max heap
    def buildMaxHeap(self, arr: [int]):
        size = len(arr)
        # (size-2) // 2 is the last non-leaf node
        for i in range((size - 2) // 2, -1, -1):
            self.heapify(arr, i, size - 1)
        return arr

    # ascending heap sort
    # 1. Build the big top heap first
    # 2. Swap the top element with the last, then adjust the first element to the penultimate element to get the maximum value
    # 3. Swap the top element with the penultimate element, and then adjust the first element to the penultimate element, which gets the second largest value
    # 4. And so on until the last element is swapped.
    def maxHeapSort(self, arr: [int]):
        self.buildMaxHeap(arr)
        size = len(arr)
        for i in range(size):
            arr[0], arr[size - i - 1] = arr[size - i - 1], arr[0]
            self.heapify(arr, 0, size - i - 2)
        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.maxHeapSort(nums)

8. Counting sort algorithm

8.1 Algorithm Idea

The basic idea of Counting Sort:

Use an additional array counts, in which the i-th entry counts[i] records how many elements of the array arr to be sorted have value equal to i. The elements of arr can then be placed directly into their correct positions according to counts.

8.2 Algorithm Steps

  1. Find the largest and smallest elements in the array to be sorted.
  2. Count, for each value i, how many elements of the array equal i, and store the count in the i-th entry of counts.
  3. Accumulate the counts: starting from the first element of counts, add each entry to the one before it.
  4. Fill the target array in reverse: place each element i at position counts[i] of the new array, and decrement counts[i] by 1 after each placement.
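The four steps can be traced on a small hypothetical example:

```python
# Hypothetical input with values in 0..5:
arr = [2, 5, 3, 0, 2, 3, 0, 3]
counts = [0] * 6
for num in arr:                        # step 2: tally each value
    counts[num] += 1
print(counts)                          # [2, 0, 2, 3, 0, 1]
for i in range(1, 6):                  # step 3: prefix sums
    counts[i] += counts[i - 1]
print(counts)                          # [2, 2, 4, 7, 7, 8]
res = [0] * len(arr)
for num in reversed(arr):              # step 4: fill back to front (stable)
    counts[num] -= 1
    res[counts[num]] = num
print(res)                             # [0, 0, 2, 2, 3, 3, 3, 5]
```

After the prefix sums, `counts[v]` is exactly the number of elements ≤ v, which is why each value lands in its final sorted position.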

8.3 Animation Demonstration

8.4 Algorithm Analysis

  • When the n input elements are integers in the range 0 to k, the time complexity of counting sort is
    O(n + k)
    .
  • Because the length of the counting array counts depends on the range of the data to be sorted (the maximum of the array minus the minimum, plus 1), counting sort takes considerable time and memory when that range is large.
  • Counting sort is generally used to sort integers, not, say, names alphabetically.
  • Counting sort is a stable sort algorithm.

8.5 Code Implementation

from typing import List

class Solution:
    def countingSort(self, arr):
        # Find the maximum and minimum elements in the array to be sorted
        arr_min, arr_max = min(arr), max(arr)

        size = arr_max - arr_min + 1
        counts = [0 for _ in range(size)]
        # Count how often each value num occurs, storing the count at index num - arr_min
        for num in arr:
            counts[num - arr_min] += 1
        # All counts add up
        for j in range(1, size):
            counts[j] += counts[j - 1]
        
        # Reverse populate the target array
        res = [0 for _ in range(len(arr))]
        for i in range(len(arr) - 1, -1, -1):
            res[counts[arr[i] - arr_min] - 1] = arr[i]
            counts[arr[i] - arr_min] -= 1

        return res

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.countingSort(nums)

9. Bucket sort algorithm

9.1 Algorithm Idea

The basic idea of Bucket Sort:

Divide the unsorted array into several buckets and sort the elements of each bucket separately.

9.2 Algorithm Procedure

  1. Divide the value range into n subintervals of equal size, each called a bucket.
  2. Iterate over the array and put each element into its corresponding bucket.
  3. Sort the elements within each bucket individually (using insertion sort, merge sort, quicksort, and so on).
  4. Finally, concatenate the buckets' elements in order.
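The mapping from value to bucket in step 2 can be sketched as follows (the parameters are hypothetical, matching the default `bucket_size=5` in 9.5):

```python
# With bucket_size = 5 and arr_min = 0, values 0-4 land in bucket 0,
# values 5-9 in bucket 1, and so on.
arr_min, bucket_size = 0, 5
indices = [(num - arr_min) // bucket_size for num in [3, 7, 12, 14]]
print(indices)  # [0, 1, 2, 2]
```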

9.3 Graphic Demonstration

9.3.1 Delimit the molecular interval

9.3.2 Putting array elements into a bucket and sorting the elements in the bucket separately

9.3.3 Merge elements in a Bucket and sort them

9.4 Algorithm Analysis

  • Bucket sort can run in linear time: when the number of input elements is n and the number of buckets is m, the time complexity of bucket sort is
    O(n + m)
    .
  • Since bucket sort uses auxiliary space for the buckets, the space complexity of bucket sort is O(n + m).
  • If a stable algorithm such as insertion sort is used inside the buckets, bucket sort is also a stable sorting algorithm.

9.5 Code Implementation

from typing import List

class Solution:
    # Insertion sort (used to sort the elements within each bucket)
    def insertionSort(self, arr):
        for i in range(1, len(arr)):
            temp = arr[i]
            j = i
            while j > 0 and arr[j - 1] > temp:
                arr[j] = arr[j - 1]
                j -= 1
            arr[j] = temp

        return arr

    def bucketSort(self, arr, bucket_size=5):
        # Determine the number of buckets and create them
        arr_min, arr_max = min(arr), max(arr)
        bucket_count = (arr_max - arr_min) // bucket_size + 1
        buckets = [[] for _ in range(bucket_count)]
        
        # Put each element into its corresponding bucket
        for num in arr:
            buckets[(num - arr_min) // bucket_size].append(num)
        
        res = []
        # Insertion-sort each bucket and append the sorted elements to the result
        for bucket in buckets:
            self.insertionSort(bucket)
            res.extend(bucket)

        return res

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.bucketSort(nums)

10. Radix sort algorithm

10.1 Algorithm Idea

Basic idea of Radix Sort:

Split integers into individual digits by place value, then sort by distributing and collecting on one digit position at a time.

10.2 Algorithm Steps

The radix sort algorithm can proceed "Least Significant Digit first" (LSD) or "Most Significant Digit first" (MSD); the LSD method is the most common.

We take the LSD method as an example to explain the algorithm steps.

  1. Iterate over the array to find its largest element and determine that element's number of digits.
  2. Distribute the array elements into buckets using the ones digit as the index.
  3. Collect the buckets back into the array.
  4. Repeat with the tens digit, the hundreds digit, …, up to the highest digit of the largest element; after the final collection the array is sorted.
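The steps above can be traced pass by pass on hypothetical sample data (max(arr) = 802 has 3 digits):

```python
arr = [170, 45, 75, 90, 802, 24, 2, 66]
for i in range(3):
    buckets = [[] for _ in range(10)]
    for num in arr:
        buckets[num // (10 ** i) % 10].append(num)       # step 2: distribute by digit i
    arr = [num for bucket in buckets for num in bucket]  # step 3: collect
    print(arr)
# after the ones pass:     [170, 90, 802, 2, 24, 45, 75, 66]
# after the tens pass:     [802, 2, 24, 45, 66, 170, 75, 90]
# after the hundreds pass: [2, 24, 45, 66, 75, 90, 170, 802]
```

Each pass is stable, so ties on the current digit preserve the order established by the lower digits; that is what makes LSD radix sort correct.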

10.3 Animation Demonstration

10.4 Algorithm Analysis

  • Radix sort runs in
    O(k \times n)
    time, where n is the number of elements to sort and k is the number of digit positions. The size of k depends on the choice of digit base (decimal digits, binary digits) and on the range of the data type of the elements being sorted.
  • Radix sort as described here is only suitable for sorting non-negative integers.
  • The space complexity of radix sort is O(n + k).
  • Radix sort is a stable sorting algorithm.

10.5 Code Implementation

from typing import List

class Solution:
    def radixSort(self, arr):
        # Count the number of digits of the largest element
        size = len(str(max(arr)))
        
        # Iterate over digit positions, from the ones digit up to the highest digit
        for i in range(size):
            buckets = [[] for _ in range(10)]
            for num in arr:
                buckets[num // (10 ** i) % 10].append(num)
            arr.clear()
            # Collect the buckets back into the array
            for bucket in buckets:
                for num in bucket:
                    arr.append(num)

        return arr

    def sortArray(self, nums: List[int]) -> List[int]:
        return self.radixSort(nums)