Foreword

Recently I used heap sort to solve a few problems while writing code, noticed something a little odd along the way, and decided to write it down. That turned into this article: an analysis of the common sorting algorithms together with their code implementations, as a way to keep in practice. Once you know the code, sorting holds no fear. Click follow and like if you find it helpful.

Bubble sort

Bubble sort is a simple sorting algorithm. It repeatedly walks through the list of elements to be sorted, comparing adjacent elements in turn and swapping them if they are in the wrong order. The passes are repeated until no adjacent elements need to be swapped, at which point the list is sorted.

How it works

  • Compare adjacent elements. If the first is bigger than the second, swap them.
  • Do the same for each pair of adjacent elements, from the first pair to the last pair. At this point, the last element is the largest.
  • Repeat the above steps over all elements except the last one.
  • Keep repeating over fewer and fewer elements each pass, until there are no more pairs to compare.

Algorithm stability

The average time complexity of bubble sort is O(n²). Bubble sort moves small elements forward (or large elements backward); every comparison involves two adjacent elements, and swaps happen only between those two. Equal elements are therefore never swapped, and even if two equal elements start out non-adjacent, the pairwise swaps can only bring them next to each other, never past each other. The relative order of equal elements does not change, so bubble sort is a stable sorting algorithm.

Code implementation

public void bubbleSort(int[] sourceArray){
    for (int i = 1; i < sourceArray.length; i++){
        // If the flag stays true, no swap happened this pass: the array is sorted and we can stop early.
        boolean flag = true;
        for (int j = 0; j < sourceArray.length - i; j++){
            if (sourceArray[j] > sourceArray[j + 1]){
                // XOR swap; safe here because j and j + 1 are always distinct indices.
                sourceArray[j] = sourceArray[j] ^ sourceArray[j + 1];
                sourceArray[j + 1] = sourceArray[j] ^ sourceArray[j + 1];
                sourceArray[j] = sourceArray[j] ^ sourceArray[j + 1];
                flag = false;
            }
        }
        if (flag){
            break;
        }
    }
}
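
A quick sanity check might look like the following (a hypothetical driver, not from the original post; the class name Sorts is assumed, and java.util.Arrays must be imported):

public static void main(String[] args) {
    int[] data = {5, 1, 4, 2, 8};
    new Sorts().bubbleSort(data);              // assumes the methods in this article live in a class named Sorts
    System.out.println(Arrays.toString(data)); // prints [1, 2, 4, 5, 8]
}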

Quick sort

Quicksort, also known as partition-exchange sort, is a sorting algorithm first proposed by Tony Hoare. Quicksort is usually significantly faster than other sorting algorithms because its inner loop can be implemented efficiently on most architectures.

How it works

  • Pick an element from the sequence, called the pivot.
  • Reorder the sequence so that every element smaller than the pivot comes before it and every element larger than the pivot comes after it (equal elements can go to either side). After this step the pivot is in its final position. This is called the partition operation.
  • Recursively sort the sub-sequence of elements smaller than the pivot and the sub-sequence of elements larger than the pivot.

Algorithm stability

On average, quicksort sorts n items in O(n log n) time, and in practice it is usually noticeably faster than other O(n log n) algorithms. It is not stable, however: the partition step can move the pivot past equal elements. Take the sequence 5, 3, 3, 4, 3, 8, 9, 10, 11. Partitioning around the pivot 5 eventually swaps it with the 3 at position five (counting from 1), which moves that 3 in front of the other two and breaks the relative order of the equal elements. So quicksort is an unstable sorting algorithm.

Code implementation

public void quickSort(int[] sourceArray, int left, int right){
    if (left >= right) return;
    int l = left;
    int r = right;
    int mid = sourceArray[left]; // use the leftmost element as the pivot
    while (l < r) {
        // scan from the right for an element smaller than the pivot and move it into the hole on the left
        while (l < r && sourceArray[r] >= mid)
            r--;
        if (l < r)
            sourceArray[l++] = sourceArray[r];
        // scan from the left for an element not smaller than the pivot and move it into the hole on the right
        while (l < r && sourceArray[l] < mid)
            l++;
        if (l < r)
            sourceArray[r--] = sourceArray[l];
    }
    sourceArray[l] = mid; // the pivot lands in its final position
    quickSort(sourceArray, left, l - 1);
    quickSort(sourceArray, l + 1, right);
}
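
Note that the bounds are inclusive, so sorting a whole array looks like this (a usage sketch, not from the original post):

int[] data = {3, 7, 1, 9, 4};
quickSort(data, 0, data.length - 1); // sorts data in place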

Quickselect

Here is a hybrid aside: quickselect is quicksort with pruning, used to find the k-th largest (or smallest) number. Its average time complexity is O(n). Like quicksort, it was devised by Tony Hoare, so it is also known as Hoare's selection algorithm. The method proceeds much like quicksort. For simplicity, note that the k-th largest element is also the (n − k)-th smallest (zero-based), so an algorithm for the k-th smallest solves the problem.

First, we select a pivot and determine its position in the sorted array in linear time, with the help of the partition algorithm.

To partition, walk along the array, compare each element with the pivot, and move every element smaller than the pivot to the pivot's left.

Thus, after partitioning, the pivot is in its proper position: everything smaller is to its left, everything greater than or equal to it is to its right, and the array is split into two parts. Plain quicksort would then recursively sort both parts in O(n log n); here, since we know which side contains the (n − k)-th smallest element, we only need to recurse into one of them, which brings the average time complexity down to O(n). The final algorithm is quite straightforward:

  • Choose a pivot at random.
  • Use the partition algorithm to put the pivot at its proper position pos in the array, moving elements smaller than the pivot to its left and elements greater than or equal to it to its right.
  • Compare pos with n − k to decide which side to recurse into; a minimal sketch follows this list.
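
A minimal quickselect sketch in the spirit of the partition scheme used above (not from the original post; for simplicity it uses the leftmost element as the pivot rather than a random one, and k is zero-based):

public int quickSelect(int[] arr, int left, int right, int k) {
    int l = left;
    int r = right;
    int pivot = arr[left];
    while (l < r) {
        while (l < r && arr[r] >= pivot) r--;
        if (l < r) arr[l++] = arr[r];
        while (l < r && arr[l] < pivot) l++;
        if (l < r) arr[r--] = arr[l];
    }
    arr[l] = pivot; // the pivot is now at its final sorted index
    if (l == k) return arr[l];
    return k < l ? quickSelect(arr, left, l - 1, k)
                 : quickSelect(arr, l + 1, right, k);
}

// The k-th largest of n elements is the (n - k)-th smallest, zero-based:
// int kthLargest = quickSelect(arr, 0, arr.length - 1, arr.length - k);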

Selection sort

Selection sort is a simple, intuitive sorting algorithm. It works by repeatedly selecting the smallest (or largest) element from the unsorted elements and placing it at the end of the sorted part, until every element has been placed. Selection sort is an unstable sorting method.

How it works

  • First, find the smallest (or largest) element in the unsorted sequence and store it at the start of the sorted sequence.
  • Then find the smallest (or largest) of the remaining unsorted elements and append it to the end of the sorted sequence.
  • Repeat the second step until all elements are sorted.

Algorithm stability

Selection sort runs in O(n²). Within a single selection pass, if the minimum found lies after an element equal to the element currently being placed, the swap breaks stability. For example, in the sequence 5, 8, 5, 2, 9, the first pass swaps the first 5 with 2, destroying the relative order of the two 5s in the original sequence. So selection sort is an unstable sorting algorithm.

Code implementation

public void selectSort(int[] sourceArray){
    // N - 1 rounds of comparisons
    for (int i = 0; i < sourceArray.length - 1; i++){
        int cur = i;
        // N - i comparisons per round
        for (int j = i + 1; j < sourceArray.length; j++){
            if (sourceArray[cur] > sourceArray[j]) {
                // record the index of the smallest element found so far
                cur = j;
            }
        }
        // swap the minimum found with the element at position i
        if (i != cur){
            // XOR swap; the i != cur guard is required, since XOR-swapping an element with itself zeroes it
            sourceArray[i] = sourceArray[i] ^ sourceArray[cur];
            sourceArray[cur] = sourceArray[i] ^ sourceArray[cur];
            sourceArray[i] = sourceArray[i] ^ sourceArray[cur];
        }
    }
}

Insertion sort

Insertion sort is a simple, intuitive sorting algorithm. It works by building an ordered sequence: for each unsorted element, it scans the sorted part from back to front, finds the appropriate position, and inserts it there. Insertion sort is usually implemented in place (needing only O(1) extra space), so during the back-to-front scan, sorted elements are repeatedly shifted backward to open a slot for the new element.

How it works

  • Start with the first element, which can be considered sorted
  • Take the next element and scan backwards through the sorted sequence
  • If the sorted element is larger than the new element, shift it one position to the right
  • Repeat step 3 until you find a sorted element less than or equal to the new element
  • Insert the new element at that position
  • Repeat steps 2 to 5

Algorithm stability

On average, insertion sort runs in O(n²). If the element being inserted equals an element already in place, it is inserted after the equal element, so the relative order of equal elements does not change: the order they had in the unsorted input is the order they keep in the sorted output. Insertion sort is therefore stable.

Code implementation

public void insertSort(int[] sourceArray){
    // Start inserting from index 1; the single element at index 0 is trivially sorted
    for (int i = 1; i < sourceArray.length; i++){
        // Remember the value to be inserted
        int temp = sourceArray[i];
        // Scan the sorted part from right to left, shifting every element greater than temp one slot right
        int j = i;
        while (j > 0 && temp < sourceArray[j - 1]){
            sourceArray[j] = sourceArray[j - 1];
            j--;
        }

        // Drop temp into the gap (skip the write if nothing moved)
        if (j != i) sourceArray[j] = temp;
    }
}

Merge sort

Merge sort is an efficient sorting algorithm based on the merge operation and a typical application of divide and conquer: first make each subsequence ordered, then merge the ordered subsequences into longer ordered runs until the whole sequence is ordered. Merging two ordered lists into one ordered list is called a two-way merge.

How it works

  • First, allocate a buffer as large as the two sorted sequences combined; it will hold the merged result
  • Second, set two pointers, one at the start of each sorted sequence
  • Third, compare the elements the two pointers point to, copy the smaller one into the merge buffer, and advance that pointer
  • Repeat step 3 until one pointer runs past the end of its sequence
  • Copy all remaining elements of the other sequence directly to the end of the merged result

Algorithm stability

Merge sort always runs in O(n log n), at the cost of extra memory. With one or two elements, a single element is never swapped, and if two elements are equal nobody swaps them deliberately, so stability is not broken there. Is stability broken while merging short ordered runs, then? No: during the merge, whenever the two current elements are equal, we take the element from the earlier run first, which preserves their order. So merge sort is also a stable sorting algorithm.

Code implementation

public int[] mergeSort(int[] sourceArray){
    if (sourceArray.length < 2) return sourceArray;
    int mid = sourceArray.length / 2;
    int[] left = Arrays.copyOfRange(sourceArray, 0, mid);
    int[] right = Arrays.copyOfRange(sourceArray, mid, sourceArray.length);

    return merge(mergeSort(left), mergeSort(right));
}

private int[] merge(int[] left, int[] right){
    int[] nums = new int[left.length + right.length];
    int l = 0;
    int r = 0;
    int i = 0;
    while (l < left.length && r < right.length){
        // take from the left run on ties to keep the sort stable
        if (left[l] <= right[r]) {
            nums[i] = left[l];
            l++;
        } else {
            nums[i] = right[r];
            r++;
        }
        i++;
    }

    // copy whatever remains of the run that has not been exhausted
    while (l < left.length){ nums[i] = left[l]; l++; i++; }
    while (r < right.length){ nums[i] = right[r]; r++; i++; }
    return nums;
}

Radix sort

Radix sort belongs to the family of distribution sorts, also called bucket or bin sorts: as the name suggests, it distributes the elements to be sorted into buckets based on part of the key. Radix sort is a stable sort with time complexity O(n · log_r(m)), where r is the radix taken and m is the number of buckets. In some situations, radix sort is more efficient than other stable sorts.

How it works

Suppose we start with the following values: 73, 22, 93, 43, 55, 14, 28, 65, 39, 81. First distribute them into buckets numbered 0 to 9 according to the ones digit:

0
1 81
2 22
3 73 93 43
4 14
5 55 65
6
7
8 28
9 39

The buckets are then concatenated in order to form the new sequence: 81, 22, 73, 93, 43, 14, 55, 65, 28, 39. Then distribute again, this time by the tens digit:

0
1 14
2 22 28
3 39
4 43
5 55
6 65
7 73
8 81
9 93

The bucket contents are then concatenated again, giving 14, 22, 28, 39, 43, 55, 65, 73, 81, 93, and the whole sequence is now sorted. If the keys had three or more digits, the passes would continue up to the highest digit.

This lowest-digit-first scheme is the LSD (least significant digit) variant of radix sort, which suits keys with few digits; when keys have many digits, the MSD (most significant digit) variant is more efficient. MSD works the other way round: it distributes by the highest digit first, and instead of concatenating the buckets back into an array after each pass, it creates sub-buckets inside each bucket and distributes each bucket's values by the next digit. Only after distributing on the lowest digit is everything merged back into a single array, as sketched below.
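
A short MSD sketch, reusing getMaxValue and getNumLength from the LSD implementation below (an illustrative version of my own, assuming non-negative keys; requires java.util.ArrayList and java.util.List):

private void msdRadixSort(int[] arr, int dev) {
    if (arr.length < 2 || dev == 0) return;
    List<List<Integer>> buckets = new ArrayList<>();
    for (int d = 0; d < 10; d++) buckets.add(new ArrayList<>());
    for (int v : arr) buckets.get((v / dev) % 10).add(v); // distribute by the current (high) digit
    int pos = 0;
    for (List<Integer> bucket : buckets) {
        int[] sub = new int[bucket.size()];
        for (int m = 0; m < sub.length; m++) sub[m] = bucket.get(m);
        msdRadixSort(sub, dev / 10);      // sort each sub-bucket on the next lower digit
        for (int v : sub) arr[pos++] = v; // merge back only after the sub-bucket is fully sorted
    }
}

// Entry point: start from the highest digit of the largest key, e.g.
// msdRadixSort(arr, (int) Math.pow(10, getNumLength(getMaxValue(arr)) - 1));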

Algorithm stability

The time complexity of radix sort is O(n · log_r(m)). Sometimes keys have attributes with priorities: we sort first by the low-priority attribute and then by the high-priority one, so that the final order puts the high-priority attribute first and, within equal high-priority values, preserves the low-priority order. Radix sort distributes and collects pass by pass in exactly this way, so it is a stable sorting algorithm.

Code implementation

public void radixSort(int[] sourceArray){
    int maxDigit = getMaxDigit(sourceArray);
    int mod = 10;
    int dev = 1;

    for (int i = 0; i < maxDigit; i++, dev *= 10, mod *= 10) {
        // 2 * mod buckets: the +mod offset below shifts negative digits into valid indices
        int[][] counter = new int[mod * 2][0];
        for (int j = 0; j < sourceArray.length; j++){
            int bucket = ((sourceArray[j] % mod) / dev) + mod;
            counter[bucket] = arrayAppend(counter[bucket], sourceArray[j]);
        }

        // concatenate the buckets back into the array, in bucket order
        int pos = 0;
        for (int[] bucket : counter){
            for (int value : bucket){
                sourceArray[pos++] = value;
            }
        }
    }
}

private int getMaxDigit(int[] arr){
    int maxValue = getMaxValue(arr);
    return getNumLength(maxValue);
}

private int getMaxValue(int[] arr){
    int maxValue = arr[0];
    for (int value : arr){
        if (maxValue < value){
            maxValue = value;
        }
    }
    return maxValue;
}

private int getNumLength(long num){
    if (num == 0) {
        return 1;
    }
    int length = 0;
    for (long temp = num; temp != 0; temp /= 10){
        length++;
    }
    return length;
}

private int[] arrayAppend(int[] arr, int value){
    arr = Arrays.copyOf(arr, arr.length + 1);
    arr[arr.length - 1] = value;
    return arr;
}

Shell sort

Shell's sort, also known as diminishing increment sort, is a more efficient, improved version of straight insertion sort. Shell sort is an unstable sorting algorithm. The method is named after D. L. Shell, who proposed it in 1959.

How it works

First choose an integer d1 smaller than n as the first increment, and split the records into groups: all records whose positions differ by a multiple of d1 fall into the same group, and a straight insertion sort is performed within each group. Then take a second, smaller increment d2 and repeat the grouping and sorting, continuing with decreasing increments (1 = dk < … < d2 < d1) until the increment reaches 1 and all records form a single group for a final straight insertion sort.

Algorithm stability

Shell sort's time complexity is better than O(n²). It is essentially multiple insertion sorts: a single insertion sort is stable and does not change the relative order of equal elements, but across the different gap-based passes, equal elements may move within their own groups and leapfrog each other, destroying stability. So Shell sort is unstable.

Code implementation

public void shellSort(int[] sourceArray){
    int gap = sourceArray.length;

    while (true){
        gap /= 2;
        // one insertion sort per group; group i contains elements i, i + gap, i + 2 * gap, ...
        for (int i = 0; i < gap; i++){
            for (int j = i + gap; j < sourceArray.length; j += gap){
                int temp = sourceArray[j];
                int k = j - gap;
                while (k >= 0 && sourceArray[k] > temp){
                    sourceArray[k + gap] = sourceArray[k];
                    k -= gap;
                }
                sourceArray[k + gap] = temp;
            }
        }
        if (gap == 1) break;
    }
}

Heap sort

Heapsort is a sorting algorithm built on the heap data structure. A heap is a nearly complete binary tree that satisfies the heap property: the key of each child node is always smaller than (or, in a min-heap, larger than) that of its parent.

How it works

  • Build a max-heap H[0 … n−1];
  • Swap the heap head (the maximum) with the heap tail;
  • Shrink the heap size by 1 and call shift_down(0) to sift the new top element into place;
  • Repeat steps 2 and 3 until the heap size is 1.

Algorithm stability

The average time complexity of heap sort is O(n log n). In a heap, the children of node i are nodes 2i and 2i + 1 (1-based). A max-heap requires each parent to be greater than or equal to both of its children; a min-heap, less than or equal to both. In a sequence of length n, heapification starts from node n/2, choosing the largest (max-heap) or smallest (min-heap) among a parent and its two children; a choice among these three elements does not by itself harm stability. But as the parents n/2 − 1, n/2 − 2, …, 1 select elements, stability can break: the n/2-th parent may swap down one instance of a value while the (n/2 − 1)-th parent leaves an equal, earlier instance in place, reversing the relative order of the two equal elements. So heap sort is not a stable sorting algorithm.

Code implementation

public void heapSort(int[] sourceArray){
    int len = sourceArray.length;
    buildMaxHeap(sourceArray, len);
    for (int i = len - 1; i > 0; i--){
        swap(sourceArray, 0, i);      // move the current maximum to the end
        len--;
        heapify(sourceArray, 0, len); // restore the heap property on the shrunken heap
    }
}

private void buildMaxHeap(int[] arr, int len){
    for (int i = len / 2; i >= 0; i--){
        heapify(arr, i, len);
    }
}

private void heapify(int[] arr, int i, int len){
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    int largest = i;

    if (left < len && arr[left] > arr[largest]){
        largest = left;
    }

    if (right < len && arr[right] > arr[largest]){
        largest = right;
    }

    if (largest != i){
        swap(arr, i, largest);
        heapify(arr, largest, len);
    }
}

private void swap(int[] arr, int i, int j){
    // XOR swap; safe here because it is only ever called with i != j
    arr[i] = arr[i] ^ arr[j];
    arr[j] = arr[i] ^ arr[j];
    arr[i] = arr[i] ^ arr[j];
}

Counting sort

Counting sort is a stable, linear-time sorting algorithm. It uses an extra array C in which the i-th element counts how many elements of the input array A are equal to i; the elements of A are then placed into their correct positions according to C.

How it works

  • Find the largest and smallest elements in the array to be sorted
  • Count the number of occurrences of each value i in the array, storing it in the i-th entry of array C
  • Accumulate the counts (starting from the first element of C, add each entry to the one before it)
  • Fill the target array in reverse: place each element with value i at position C(i) − 1 of the output, decrementing C(i) by 1 each time

Algorithm stability

The time complexity of counting sort is O(n + k). Because the length of the counting array C depends on the range of the data to be sorted (the difference between the maximum and minimum values, plus 1), counting sort costs a great deal of time and memory when the range is large. For example, counting sort is an excellent algorithm for sorting numbers between 0 and 100, but it is not suitable for sorting names alphabetically (it can, however, handle large key ranges when used as the per-digit step inside radix sort). Duplicate values need special handling to guarantee stability, which is why we fill the target array in reverse and decrement each value's count by 1. So counting sort is stable.

Code implementation

private static void countingSort(int[] arr, int maxValue) {
    int bucketLen = maxValue + 1;
    int[] bucket = new int[bucketLen];

    for (int value : arr) {
        bucket[value]++;
    }

    int sortedIndex = 0;
    for (int j = 0; j < bucketLen; j++) {
        while (bucket[j] > 0) {
            arr[sortedIndex++] = j;
            bucket[j]--;
        }
    }
}

private static int getMaxValue(int[] arr) {
    int maxValue = arr[0];
    for (int value : arr) {
        if (maxValue < value) {
            maxValue = value;
        }
    }
    return maxValue;
}
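
The version above simply expands the counts back out. The stable, cumulative-count variant described in the principle section would look something like this (a sketch of my own, assuming non-negative keys; it returns a new array instead of sorting in place):

private static int[] stableCountingSort(int[] arr, int maxValue) {
    int[] count = new int[maxValue + 1];
    for (int value : arr) {
        count[value]++;               // 1. count each value's occurrences
    }
    for (int i = 1; i <= maxValue; i++) {
        count[i] += count[i - 1];     // 2. prefix sums: count[i] is now the number of elements <= i
    }
    int[] output = new int[arr.length];
    for (int i = arr.length - 1; i >= 0; i--) {
        output[--count[arr[i]]] = arr[i]; // 3. fill from the back so equal values keep their relative order
    }
    return output;
}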