Preface

Searching and sorting algorithms are the entry point to the study of algorithms: their classical ideas reappear in many other algorithms, their implementations are short, and they are widely applied, so interview questions about sorting are very common. However many variations appear, they all come back to the same core ideas, and once you are familiar with those ideas, flexible use is not difficult. Quicksort and merge sort are the most frequently tested in interviews, and interviewers often ask candidates to write the code for these two sorts on the spot, so that code must be at your fingertips. There are also insertion sort, bubble sort, heap sort, radix sort, bucket sort and so on; an interviewer may ask you to compare their pros and cons, explain the idea behind each algorithm and its usage scenarios, and analyze their time and space complexity. Searching and sorting questions often open an interview, and if they are answered poorly the interviewer may lose interest in continuing. So to get started, you need to master the ideas and characteristics of the common sorting algorithms, and be able to write the code proficiently when required.

Let's take a look at the common sorting algorithms and their usage scenarios. Owing to limited space, detailed demonstrations and illustrations of some algorithms are left to the references.

Bubble sort

Bubble sort is one of the simplest sorts. The general idea is to move small numbers toward the front by comparing and swapping adjacent elements; the process resembles a bubble rising, hence the name. For example, to bubble sort the unordered sequence 5,3,8,6,4, first bubble from back to front: compare 4 with 6 and swap 4 forward, giving 5,3,8,4,6. In the same way 4 and 8 swap, giving 5,3,4,8,6; 3 and 4 do not swap; finally 5 and 3 swap, giving 3,5,4,8,6. One bubbling pass is done and has brought the smallest number, 3, to the front. Repeat the bubbling on the rest of the sequence and you obtain an ordered sequence. Bubble sort takes O(n^2) time.

Implementation code:

/**
 *@Description:<p>Bubble sort algorithm implementation</p>
 *@author
 *@time 2016-3-3 8:54:27 PM
 */
public class BubbleSort {

    public static void bubbleSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;
        for(int i=0; i<arr.length-1; i++) {
            for(int j=arr.length-1; j>i; j--) {
                if(arr[j] < arr[j-1]) {
                    swap(arr, j-1, j);
                }
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
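A small, commonly asked refinement: if a whole pass makes no swap, the array is already sorted and the remaining passes can be skipped, giving a best case of O(n) on already-sorted input. A hedged sketch (the class name `BubbleSortOpt` is just for illustration):

```java
public class BubbleSortOpt {

    // Same bubbling as above, plus an early-exit flag:
    // if a full pass performs no swap, the array is sorted and we stop.
    public static void bubbleSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        for (int i = 0; i < arr.length - 1; i++) {
            boolean swapped = false;
            for (int j = arr.length - 1; j > i; j--) {
                if (arr[j] < arr[j - 1]) {
                    int t = arr[j];
                    arr[j] = arr[j - 1];
                    arr[j - 1] = t;
                    swapped = true;
                }
            }
            if (!swapped)
                break; // best case O(n) on an already sorted array
        }
    }
}
```

The worst case is still O(n^2); the flag only helps on inputs that are already (or quickly become) ordered.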

Selection sort

The idea of selection sort is similar to bubble sort in that each pass places the smallest remaining element at the front, but the process differs: bubble sort works by comparing and swapping neighbors, while selection sort selects over the whole remaining range. For example, to run a simple selection sort on 5,3,8,6,4, first swap 5 with the smallest of the other numbers, i.e. swap 3 and 5; after one pass the sequence becomes 3,5,8,6,4. Selecting and swapping over the remaining subsequence, pass by pass, yields an ordered sequence. In fact, selection sort can be seen as an optimization of bubble sort: the goal of each pass is the same, but selection sort only swaps once the minimum has been determined, greatly reducing the number of swaps. Selection sort takes O(n^2) time.

Implementation code:

/**
 *@Description:<p>Simple selection sort algorithm implementation</p>
 *@author
 *@time 2016-3-3 9:13:35 PM
 */
public class SelectSort {

    public static void selectSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;
        int minIndex = 0;
        for(int i=0; i<arr.length-1; i++) { // only n-1 passes are needed
            minIndex = i;
            for(int j=i+1; j<arr.length; j++) { // start from i+1: minIndex defaults to i, so there is no need to compare i with itself
                if(arr[j] < arr[minIndex]) {
                    minIndex = j;
                }
            }
            if(minIndex != i) { // if minIndex is not i, a smaller value has been found; swap it into place
                swap(arr, i, minIndex);
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}

Insertion sort

Insertion sort works not by swapping positions but by comparing to find the right place to insert each element. Most of us have played cards; how do you tidy your hand when there are many cards? You pick up a card and insert it into its proper place among the cards already in order; that is exactly the principle of insertion sort. For example, to run a simple insertion sort on 5,3,8,6,4, assume the first number is already in the right position, just as the first card you pick up needs no arranging. Then 3 is inserted in front of 5, and 5 moves back one place, giving 3,5,8,6,4. Next 8 stays put; 6 is inserted in front of 8, moving 8 back one place; 4 is inserted in front of 5, moving 5, 6 and 8 each back one place. Note that when a number is being inserted, the numbers in front of it are already in order. Simple insertion sort also takes O(n^2) time.

Implementation code:

/**
 *@Description:<p>Simple insertion sort algorithm implementation</p>
 *@author Wang
 *@time 2016-3-3 9:38:55 PM
 */
public class InsertSort {

    public static void insertSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;

        for(int i=1; i<arr.length; i++) { // assume the first element is already in place, so start from the second

            int j = i;
            int target = arr[i]; // To be inserted

            // shift larger elements backward
            while(j > 0 && target < arr[j-1]) {
                arr[j] = arr[j-1];
                j --;
            }

            // insert
            arr[j] = target;
        }
    }
}

Quick sort

Quicksort sounds like a fancy name, and it is indeed the best-performing sorting algorithm in practice. Despite the fancy name, it actually grows out of the idea of bubble sort: bubble sort compares and swaps neighboring elements so that the smallest element bubbles to the top, whereas quicksort compares and swaps small and large numbers across the array, so that not only do small numbers bubble up but large numbers sink down as well.

For example, to quicksort the unordered sequence 5,3,8,6,4, the idea is: the right pointer finds an element smaller than the base number, the left pointer finds one larger than the base number, and the two are swapped.

5,3,8,6,4: using 5 as the base for comparison, we will eventually move everything smaller than 5 to its left and everything larger than 5 to its right.

5,3,8,6,4: first set pointers i and j to the two ends; the j pointer scans first (think about why). When j finds 4, which is smaller than 5, it stops. Then i scans; when i finds 8, which is bigger than 5, it stops. Swap the elements at i and j.

5,3,4,6,8: then the j pointer scans again, and the two pointers meet where j reaches 4. Stop, and swap 4 with the base number.

4,3,5,6,8: this partition achieves the goal that everything left of 5 is smaller and everything right of 5 is larger. Recursively sorting the left and right subsequences then yields the fully ordered sequence.

So why must the j pointer move first? First of all, this is not absolute; it depends on the position of the base number. When the two pointers finally meet, the base number is swapped to the meeting position. We usually pick the first number as the base, so it sits on the left; the element at the meeting position must be swapped with the base and must therefore be smaller than the base. So the j pointer moves first, looking for something smaller than the base number.

Quicksort is unstable, and its average time complexity is O(nlogn).

Implementation code:

/**
 *@Description:<p>Quicksort algorithm implementation</p>
 *@author Wang
 *@time 2016-3-3 5:07:29 PM
 */
public class QuickSort {
    // a partition
    public static int partition(int[] arr, int left, int right) {
        // The logic of the partition is the same as that of the partition of the quicksort subproblem, so it is encapsulated as a function
        int pivotKey = arr[left];
        int pivotPointer = left;

        while(left < right) {
            while(left < right && arr[right] >= pivotKey)
                right --;
            while(left < right && arr[left] <= pivotKey)
                left ++;
            swap(arr, left, right); // swap the big one to the right and the small one to the left
        }
        swap(arr, pivotPointer, left); // Finally switch the pivot to the middle
        return left;
    }

    public static void quickSort(int[] arr, int left, int right) {
        // Recursive boundary conditions
        if(left >= right)
            return ;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos-1);
        quickSort(arr, pivotPos+1, right);
    }
	// wrapper around quickSort, making the function easier to call
    public static void sort(int[] arr) {
        // This is an invalid input and not a recursive boundary condition
        if(arr == null || arr.length == 0)
            return ;
        quickSort(arr, 0, arr.length-1);
    }

    public static void swap(int[] arr, int left, int right) {
        int temp = arr[left];
        arr[left] = arr[right];
        arr[right] = temp;
    }
}

In fact, the code above can be further optimized. The base number is already saved in pivotKey, so there is no need for a temp variable on every exchange: the left and right pointers can overwrite each other's slot in turn. This reduces both the space usage and the number of assignment operations. The optimized code is as follows:

/**
 *@Description:<p>Quicksort algorithm implementation (optimized)</p>
 *@author
 *@time 2016-3-3 5:07:29 PM
 */
public class QuickSort {

    /**
     * Partition the array
     * @param arr
     * @param left
     * @param right
     * @return the final position of the pivot
     */
    public static int partition(int[] arr, int left, int right) {
        int pivotKey = arr[left];

        while(left < right) {
            while(left < right && arr[right] >= pivotKey)
                right --;
            arr[left] = arr[right]; // move the small one to the left
            while(left < right && arr[left] <= pivotKey)
                left ++;
            arr[right] = arr[left]; // move the big one to the right
        }
        arr[left] = pivotKey; // finally, place the pivot in the middle
        return left;
    }

    /**
     * Recursive quicksort over [left, right]
     * @param arr
     * @param left
     * @param right
     */
    public static void quickSort(int[] arr, int left, int right) {
        if(left >= right)
            return ;
        int pivotPos = partition(arr, left, right);
        quickSort(arr, left, pivotPos-1);
        quickSort(arr, pivotPos+1, right);
    }

    public static void sort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;
        quickSort(arr, 0, arr.length-1);
    }
}

To sum up the idea of quicksort: bubbling + binary partition + recursive divide and conquer. Take your time to absorb it.
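One caveat worth knowing for interviews: with the first element always taken as the pivot, an already-sorted input degrades quicksort to O(n^2). A common remedy is to swap a random element into the pivot slot before partitioning. The sketch below is a hedged variant of the hole-filling partition above (the class name `RandomizedQuickSort` is an illustration, not a standard API):

```java
import java.util.Random;

public class RandomizedQuickSort {

    private static final Random RAND = new Random();

    public static int partition(int[] arr, int left, int right) {
        // Swap a random element into the pivot slot first, making the
        // sorted-input worst case unlikely; the rest matches the
        // hole-filling partition shown above.
        int r = left + RAND.nextInt(right - left + 1);
        int t = arr[left]; arr[left] = arr[r]; arr[r] = t;

        int pivotKey = arr[left];
        while (left < right) {
            while (left < right && arr[right] >= pivotKey)
                right--;
            arr[left] = arr[right];   // move the small one to the left
            while (left < right && arr[left] <= pivotKey)
                left++;
            arr[right] = arr[left];   // move the big one to the right
        }
        arr[left] = pivotKey;
        return left;
    }

    public static void quickSort(int[] arr, int left, int right) {
        if (left >= right)
            return;
        int p = partition(arr, left, right);
        quickSort(arr, left, p - 1);
        quickSort(arr, p + 1, right);
    }

    public static void sort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        quickSort(arr, 0, arr.length - 1);
    }
}
```

Median-of-three pivot selection is another common fix with the same goal.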

Heap sort

Heap sort is a selection sort implemented with the help of a heap; the overall idea is the same as simple selection sort. The following takes the big-top (max-) heap as an example. Note: use a big-top heap to sort in ascending order and a small-top heap to sort in descending order, because on each pass the heap-top element is swapped to the end of the sequence.

First, implementing heap sort requires solving two problems:

1. How to build a heap from an unordered sequence?

2. After the top element is output, how to adjust the remaining elements into a new heap?

For the first problem, a heap can be represented directly by a linear array, and building a heap from an initially unordered sequence requires adjusting from the bottom up, starting with the last non-leaf element and working toward the root.

For the second question, how do we adjust the heap? First swap the heap top with the last element. Then, since the left and right subtrees of the new top both still satisfy the heap condition, repeatedly compare the current top element with the larger of its two children (for a big-top heap) and swap downward, until a leaf is reached. We call this adjustment from the top of the heap down to a leaf a sift-down (filtering).

Building a heap from an unordered sequence is a process of repeated sift-downs. If the sequence is viewed as a complete binary tree, the last non-terminal node is the element at position n/2 (rounding down), and the sifting can start there. Here's an example:

The process of building the initial heap and adjusting it is as follows:

Implementation code:

/**
 *@Description:<p>Heap sort algorithm implementation, using the big-top heap as an example.</p>
 *@author Wang Xu
 *@time 2016-3-4 9:26:02 AM
 */
public class HeapSort {

    /**
     * Sift down so that the subtree rooted at start (within [start, end]) becomes a big-top heap.
     */
    public static void heapAdjust(int[] arr, int start, int end) {
        int temp = arr[start];
        for(int i=2*start+1; i<=end; i=2*i+1) { // the children of node i are 2*i+1 and 2*i+2
            // pick the larger of the left and right children
            if(i < end && arr[i] < arr[i+1]) {
                i ++;
            }
            if(temp >= arr[i]) {
                break; // already a big-top heap; using >= keeps equal keys in place
            }
            arr[start] = arr[i]; // move the larger child up
            start = i;
        }
        arr[start] = temp;
    }

    public static void heapSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;
        // build the big-top heap, from the last non-leaf node up
        for(int i=arr.length/2; i>=0; i--) {
            heapAdjust(arr, i, arr.length-1);
        }
        // repeatedly swap the top to the end and re-adjust the rest
        for(int i=arr.length-1; i>=0; i--) {
            swap(arr, 0, i);
            heapAdjust(arr, 0, i-1);
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
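Java's standard library also ships a binary heap, `java.util.PriorityQueue`, which is a small-top (min-) heap: repeatedly polling it yields elements in ascending order. The hedged sketch below (the helper name `heapSorted` is just for illustration) mirrors the big-top/small-top discussion, but unlike the in-place heap sort above it uses O(n) extra space:

```java
import java.util.PriorityQueue;

public class HeapDemo {

    // PriorityQueue is a small-top heap: poll() always removes the minimum,
    // so draining it produces the elements in ascending order.
    // Heap sort achieves the same effect in place with a big-top heap,
    // by swapping the top to the end of the array instead of polling.
    public static int[] heapSorted(int[] arr) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int x : arr)
            pq.add(x);                 // each add sifts up: O(log n)
        int[] out = new int[arr.length];
        for (int i = 0; i < out.length; i++)
            out[i] = pq.poll();        // each poll sifts down: O(log n)
        return out;
    }
}
```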

Shell sort

Shell sort is an efficient refinement of insertion sort, also known as diminishing increment sort. In simple insertion sort, if the sequence to be sorted is already in order the time complexity is O(n), and if the sequence is basically ordered, direct insertion sort is very efficient. Shell sort takes advantage of this feature. The basic idea: divide the whole sequence of records into several subsequences and direct-insertion sort each of them; then, once the records in the whole sequence are basically ordered, run one final direct insertion sort over all the records.

Here’s an example:

As can be seen from the sorting process above, the characteristic of Shell sort is that the subsequences are not simple contiguous segments, but are formed by records separated by some increment. In the example above, the increment of the first pass is 5 and the increment of the second pass is 3. Because in the first two passes the records are compared against records that lie far apart, smaller keys do not move forward step by step but in leaps, so by the final pass the sequence is basically ordered and only a small number of comparisons and moves remain. That is why Shell sort is more efficient than direct insertion sort.

The analysis of Shell sort is complex: its time complexity is a function of the chosen increment sequence, which involves some unresolved mathematical difficulties. However, based on extensive experiments, the time complexity can reach O(n^1.3) when n is in a certain range.

Implementation code:

/**
 *@Description:<p>Shell sort algorithm implementation</p>
 *@author
 *@time 2016-3-3 10:53:55 PM
 */
public class ShellSort {

    /**
     * Insertion sort with increment d
     */
    public static void shellInsert(int[] arr, int d) {
        for(int i=d; i<arr.length; i++) {
            int j = i - d;
            int temp = arr[i]; // record the data to be inserted
            while(j >= 0 && arr[j] > temp) { // from back to front, find the position of the first number not larger than it
                arr[j+d] = arr[j]; // move backward
                j -= d;
            }
            if(j != i - d) { // a smaller position was found; insert
                arr[j+d] = temp;
            }
        }
    }

    public static void shellSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;
        int d = arr.length / 2;
        while(d >= 1) {
            shellInsert(arr, d);
            d /= 2;
        }
    }
}
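The halving gap sequence above is the simplest choice, but not the best known. A hedged sketch using Knuth's increments 1, 4, 13, 40, ... (h = 3h + 1), which tends to behave better in practice (the class name `ShellSortKnuth` is an illustration):

```java
public class ShellSortKnuth {

    // Same gapped insertion sort, but with Knuth's gap sequence
    // h = 3h + 1 (1, 4, 13, 40, ...) instead of halving the length.
    public static void shellSort(int[] arr) {
        if (arr == null || arr.length == 0)
            return;
        int d = 1;
        while (d < arr.length / 3)
            d = 3 * d + 1;            // largest Knuth gap below n/3
        while (d >= 1) {
            for (int i = d; i < arr.length; i++) {
                int temp = arr[i];
                int j = i - d;
                while (j >= 0 && arr[j] > temp) {
                    arr[j + d] = arr[j]; // shift backward within the gapped subsequence
                    j -= d;
                }
                arr[j + d] = temp;
            }
            d /= 3;                    // walk back down the gap sequence
        }
    }
}
```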

Merge sort

Merge sort is a different kind of sort, and it is quite easy to understand because it uses the idea of recursive divide and conquer. The basic idea is to solve the subproblems recursively and then merge the results: view the unsorted sequence as composed of two ordered subsequences, merge the two subsequences, then view each subsequence in turn as composed of two ordered sequences... Looked at from the bottom up, it merges two-by-two, then four-by-four... until it finally forms one ordered sequence. The space complexity is O(n) and the time complexity is O(nlogn).

Here’s an example:

Implementation code:

/**
 *@Description:<p>Merge sort algorithm implementation</p>
 *@author Wang Xu
 *@time 2016-3-4 8:14:20 AM
 */
public class MergeSort {

    public static void mergeSort(int[] arr) {
        mSort(arr, 0, arr.length-1);
    }

    public static void mSort(int[] arr, int left, int right) {
        if(left >= right)
            return ;
        int mid = (left + right) / 2;
        mSort(arr, left, mid);        // sort the left half
        mSort(arr, mid+1, right);     // sort the right half
        merge(arr, left, mid, right); // merge the two halves
    }

    /**
     * Merge two ordered subarrays
     * @param arr the array to merge in
     * @param left left boundary
     * @param mid middle boundary
     * @param right right boundary
     */
    public static void merge(int[] arr, int left, int mid, int right) {
        // merges [left, mid] and [mid+1, right]
        int[] temp = new int[right - left + 1];
        int i = left;
        int j = mid + 1;
        int k = 0;

        while(i <= mid && j <= right) {
            if(arr[i] <= arr[j]) {
                temp[k++] = arr[i++];
            }
            else {
                temp[k++] = arr[j++];
            }
        }

        while(i <= mid) {
            temp[k++] = arr[i++];
        }

        while(j <= right) {
            temp[k++] = arr[j++];
        }

        for(int p=0; p<temp.length; p++) {
            arr[left + p] = temp[p];
        }
    }
}
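The "two-by-two, then four-by-four" view described above can also be coded directly, without recursion. A hedged sketch of bottom-up merge sort (the class name `BottomUpMergeSort` is an illustration):

```java
public class BottomUpMergeSort {

    // Iterative merge sort: merge runs of width 1, then 2, then 4, ...
    // exactly the bottom-up picture of the recursive version.
    public static void mergeSort(int[] arr) {
        if (arr == null || arr.length < 2)
            return;
        int n = arr.length;
        int[] temp = new int[n];
        for (int width = 1; width < n; width *= 2) {
            // merge each adjacent pair of runs [left, mid) and [mid, right)
            for (int left = 0; left < n; left += 2 * width) {
                int mid = Math.min(left + width, n);
                int right = Math.min(left + 2 * width, n);
                int i = left, j = mid, k = left;
                while (i < mid && j < right)
                    temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
                while (i < mid)
                    temp[k++] = arr[i++];
                while (j < right)
                    temp[k++] = arr[j++];
            }
            System.arraycopy(temp, 0, arr, 0, n); // runs of 2*width are now sorted
        }
    }
}
```

Same O(nlogn) time and O(n) space as the recursive form, but with no recursion depth to worry about.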

Counting sort

If an interviewer asks you to write a sorting algorithm with O(n) time complexity, do not immediately say: impossible! Although the lower bound for the comparison-based sorts above is O(nlogn), there really are sorts with linear time complexity, under a condition: the numbers to be sorted must be integers within a limited range. Counting sort also requires a fair amount of auxiliary space. The basic idea: use each number to be sorted as an index into a count array and tally how many times each number occurs, then output the numbers in order to obtain an ordered sequence.

Implementation code:

import java.util.Arrays;

/**
 *@Description:<p>Counting sort algorithm implementation</p>
 *@author Wang Xu
 *@time 2016-3-4 4:52:02 PM
 */
public class CountSort {

    public static void countSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;

        int max = max(arr);

        int[] count = new int[max+1];
        Arrays.fill(count, 0);

        for(int i=0; i<arr.length; i++) {
            count[arr[i]] ++;
        }

        int k = 0;
        for(int i=0; i<=max; i++) {
            for(int j=0; j<count[i]; j++) {
                arr[k++] = i;
            }
        }

    }

    public static int max(int[] arr) {
        int max = Integer.MIN_VALUE;
        for(int ele : arr) {
            if(ele > max)
                max = ele;
        }

        return max;
    }
}
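The version above reconstructs the array by writing each value count[i] times, which loses the identity of equal records. A hedged sketch of the stable prefix-sum variant, the form that radix sort relies on (the signature here, taking max as a parameter and returning a new array, is an assumption for illustration):

```java
public class StableCountSort {

    // Stable counting sort: count occurrences, turn counts into ending
    // positions via a prefix sum, then place elements from the back so
    // equal keys keep their original relative order.
    public static int[] countSort(int[] arr, int max) {
        int[] count = new int[max + 1];
        for (int x : arr)
            count[x]++;
        for (int i = 1; i <= max; i++)
            count[i] += count[i - 1];          // count[v] = # of elements <= v
        int[] out = new int[arr.length];
        for (int i = arr.length - 1; i >= 0; i--) {
            out[--count[arr[i]]] = arr[i];     // backwards pass keeps stability
        }
        return out;
    }
}
```

For plain ints stability is invisible, but when the keys are one digit of a larger record, as in radix sort, it is essential.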

Bucket sort

Bucket sort is an improvement and generalization of counting sort, but many materials online confuse the two. Bucket sort is considerably more complicated than counting sort.

The following analysis and explanation is adapted from this article on bucket sort: hxraid.iteye.com/blog/647759

The basic idea of bucket sorting:

Suppose there is an unsorted sequence of keywords K[1..N] of length N. First divide the value range into M sub-intervals (buckets). Then, using some mapping function, map each keyword k to the i-th bucket (that is, to index i of the bucket array B), so that keyword k becomes an element of B[i] (each bucket B[i] is a sequence of size about N/M). Next, comparison-sort all the elements within each bucket B[i] (quicksort can be used). Then enumerating and outputting B[0]...B[M-1] in turn yields the ordered sequence. The mapping is bindex = f(k), where bindex is the index into the bucket array B (i.e. the bindex-th bucket) and k is the keyword to be sorted. The key to making bucket sort efficient is the mapping function, which must guarantee: if k1 < k2 then f(k1) <= f(k2). In other words, the smallest number in B[i] must be larger than the largest number in B[i-1]. Obviously, the choice of mapping function depends heavily on the characteristics of the data itself.

Here’s an example:

Suppose K = {49, 38, 35, 97, 76, 73, 27, 49}. These numbers are all between 1 and 100, so we set up 10 buckets and define the mapping function f(k) = k/10. The first keyword 49 is placed in the 4th bucket (49/10 = 4). All keywords are dropped into the buckets in turn, and a quicksort is carried out within each non-empty bucket, as shown in the figure. Just output each B[i] in order to get the ordered sequence.

Bucket sorting analysis:

Bucket sort eliminates almost all of the comparison work through the function mapping. In effect, computing f(k) plays the same role as the partition in quicksort, the subsequences in Shell sort, and the subproblems in merge sort: it splits a large amount of data into blocks (buckets) that are basically ordered relative to one another. Then only a small amount of data inside each bucket needs an advanced comparison sort.

The time complexity of bucket sorting for N keywords is divided into two parts:

(1) Calculate the bucket mapping function of each keyword in a cycle, and the time complexity is O(N).

(2) The advanced comparison sorting algorithm is used to sort all the data in each bucket, whose time complexity is ∑ O(Ni*logNi). Where Ni is the data amount of the ith bucket.

Obviously, part (2) is the determinant of bucket sorting performance. Minimizing the amount of data in buckets is the only way to improve efficiency (because the best average time complexity based on comparison sorting is only O(N*logN)). Therefore, we need to try to do the following two things:

(1) The mapping function F (k) can evenly distribute N data to M buckets, so that each bucket has [N/M] data quantities.

(2) Increase the number of barrels as much as possible. In the extreme case, only one data can be obtained per bucket, thus completely avoiding the operation of “comparison” sorting of data within the bucket. Of course, it is not easy to do this. In the case of a huge amount of data, f(k) function will cause a huge number of bucket sets and a serious waste of space. This is a tradeoff between time cost and space cost.

For N data to be sorted and M buckets, the average bucket sorting time complexity of [N/M] data is as follows:

O(N)+O(M*(N/M)log(N/M))=O(N+N(logN-logM))=O(N+NlogN-NlogM)

When N = M, i.e. in the limiting case of one datum per bucket, the best efficiency of bucket sort is O(N).

Summary: the average time complexity of bucket sort is linear, O(N+C), where C = N*(logN-logM). The larger the number of buckets M relative to the same N, the higher the efficiency, with the optimal time complexity reaching O(N). Of course, the space complexity of bucket sort is O(N+M); if the input data is huge and the number of buckets is also huge, the space cost is undoubtedly expensive. In addition, bucket sort is stable, provided the sort used within each bucket is stable.

Implementation code:

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

/**
 *@Description:<p>Bucket sort algorithm implementation</p>
 *@author Wang
 *@time 2016-3-4 7:39:31 PM
 */
public class BucketSort {

    public static void bucketSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;

        int bucketNums = 10; // here the data is assumed to lie in [0, 100)
        List<List<Integer>> buckets = new ArrayList<List<Integer>>();
        for(int i=0; i<bucketNums; i++) {
            buckets.add(new LinkedList<Integer>());
        }
        // distribute into buckets
        for(int i=0; i<arr.length; i++) {
            buckets.get(f(arr[i])).add(arr[i]);
        }
        // sort each bucket
        for(int i=0; i<buckets.size(); i++) {
            if(!buckets.get(i).isEmpty()) {
                Collections.sort(buckets.get(i));
            }
        }
        // collect the buckets back into the array
        int k = 0;
        for(List<Integer> bucket : buckets) {
            for(int ele : bucket) {
                arr[k++] = ele;
            }
        }
    }

    /**
     * Mapping function
     * @param x
     * @return the bucket index
     */
    public static int f(int x) {
        return x / 10;
    }
}

Radix sort

Radix sort differs from the preceding sorts in that it does not require comparisons between record keywords. Radix sort is a method that sorts on a single logical keyword by borrowing the idea of multi-keyword sorting. Multi-keyword sorting applies when there are several keywords with different priorities, for example ranking exam results: if two people have the same total score, the one with the higher Chinese score ranks first; if the Chinese scores are also equal, the higher math score ranks first... When sorting numbers, the ones, tens and hundreds digits are keywords of different priorities; to sort in ascending order, the ones, tens and hundreds digits increase in priority in turn. Radix sort works through multiple rounds of distribution and collection, with the lowest-priority keyword distributed and collected first.

Here’s an example:

Implementation code:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

/**
 *@Description:<p>Radix sort algorithm implementation</p>
 *@author Wang Xu
 *@time 2016-3-4 8:29:52 PM
 */
public class RadixSort {

    public static void radixSort(int[] arr) {
        if(arr == null || arr.length == 0)
            return ;

        int maxBit = getMaxBit(arr);

        for(int i=1; i<=maxBit; i++) {
            List<List<Integer>> buf = distribute(arr, i); // distribute
            collecte(arr, buf); // collect
        }
    }

    /**
     * Distribute by the iBit-th digit
     * @param arr the array to distribute
     * @param iBit which digit to distribute on
     * @return the buckets
     */
    public static List<List<Integer>> distribute(int[] arr, int iBit) {
        List<List<Integer>> buf = new ArrayList<List<Integer>>();
        for(int j=0; j<10; j++) {
            buf.add(new LinkedList<Integer>());
        }
        for(int i=0; i<arr.length; i++) {
            buf.get(getNBit(arr[i], iBit)).add(arr[i]);
        }
        return buf;
    }

    /**
     * Collect
     * @param arr
     * @param buf the buckets to collect from
     */
    public static void collecte(int[] arr, List<List<Integer>> buf) {
        int k = 0;
        for(List<Integer> bucket : buf) {
            for(int ele : bucket) {
                arr[k++] = ele;
            }
        }
    }

    /**
     * Get the maximum number of digits
     * @param arr
     * @return
     */
    public static int getMaxBit(int[] arr) {
        int max = Integer.MIN_VALUE;
        for(int ele : arr) {
            int len = (ele+"").length();
            if(len > max)
                max = len;
        }
        return max;
    }

    /**
     * Get the n-th digit of x (counting from the right), or 0 if x has fewer than n digits.
     * @param x
     * @param n
     * @return
     */
    public static int getNBit(int x, int n) {

        String sx = x + "";
        if(sx.length() < n)
            return 0;
        else
            return sx.charAt(sx.length()-n) - '0';
    }
}

Conclusion

In the introduction and analysis above, we covered three simple sorts: bubble sort, selection sort and insertion sort, along with their more efficient descendants: quicksort, heap sort and Shell sort. We then analyzed merge sort, based on recursive divide and conquer, and three linear-time sorts: counting sort, bucket sort and radix sort. We can see that a sorting algorithm is either simple and cheap to implement, an improvement that exploits the characteristics of a simple sort, or an efficient sort that trades space for time under particular conditions. No single sorting method is fixed best for all cases; they must be chosen, and even combined, according to the specific requirements and scenario in order to be efficient and stable. There is no best sort, only the most suitable sort.

The following summarizes the usage scenarios and applicable occasions of each sorting algorithm.

1. In terms of average time, quicksort is the most efficient, but its worst-case time performance is worse than heap sort and merge sort. Comparing the latter two: when n is large, merge sort takes less time than heap sort but uses more auxiliary space.

2. The simple sorts mentioned above include all of bubble sort, insertion sort and simple selection sort (Shell sort excepted). Among them, direct insertion sort is the simplest; when the sequence is basically ordered or n is small, direct insertion sort is a good choice, so it is often combined with other sorting methods such as quicksort and merge sort.

3. The time complexity of radix sort can also be written O(d*n), so it is best suited to sequences where n is large and the keywords are small. If the keywords are also large, but the most significant keywords of most records differ, the sequence can first be split into several small subsequences by that most significant keyword, and then each can be direct-insertion sorted.

4. Comparing the stability of the methods: radix sort is a stable internal sorting method, and the simple O(n^2) sorts bubble sort and insertion sort are also stable (simple selection sort, as implemented here with swaps, is not). Quicksort, heap sort, Shell sort and other methods with better time performance are all unstable. Stability must be weighed according to the specific requirements.

5. Most of the algorithms above are implemented over linear (array) storage. Algorithms like insertion sort can also be implemented with linked lists, saving the time spent moving elements; the exact storage structure varies with the implementation.

Appendix: proof that comparison-based sorting has the time lower bound O(nlogn):

The lower bound for comparison-based sorting is proved with a decision tree: the height of the decision tree is at least log(n!) = O(nlogn), which yields the lower bound.

The first step is to introduce the decision tree. A decision tree is a binary tree: each node represents a set of possible orderings among the elements, consistent with the comparisons made so far, and the results of the comparisons are the edges of the tree. Now, if T is a binary tree of depth d, then T has at most 2^d leaves; conversely, a binary tree with L leaves has depth at least logL. A decision tree that sorts n elements must have n! leaves (because n numbers have n! possible permutations), so its depth is at least log(n!). Furthermore, log(n!) = logn + log(n-1) + log(n-2) + ... + log2 + log1 >= logn + log(n-1) + ... + log(n/2) >= (n/2)log(n/2) = (n/2)logn - n/2 = O(nlogn). Therefore any sorting algorithm that uses only comparisons has time complexity at least O(nlogn).
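The chain of inequalities in the proof can be written out compactly:

```latex
\log(n!) = \sum_{k=1}^{n} \log k
         \;\ge\; \sum_{k=n/2}^{n} \log k
         \;\ge\; \frac{n}{2}\log\frac{n}{2}
         \;=\; \frac{n}{2}\log n - \frac{n}{2}
         \;=\; \Omega(n \log n)
```

The middle step keeps only the larger half of the terms, each of which is at least log(n/2).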

References:

  • Yan Weimin, Wu Weimin, Data Structures
  • Bucket sorting analysis: hxraid.iteye.com/blog/647759
  • Some sorting algorithms analysis and introduce: www.cnblogs.com/weixliu/arc…
  • Summary] [sort algorithms (www.cnblogs.com/onepixel/ar…