This is the second installment of the Data Structures & Algorithms in Swift series. It is divided into the following two parts:

  • Algorithm fundamentals: a brief introduction to the concepts of algorithms, time complexity, space complexity, and recursion, as background for the second part of this article.
  • Sorting algorithms: bubble sort, selection sort, insertion sort, merge sort, and quicksort, each explained alongside a Swift implementation.

Algorithm fundamentals

This section is for readers who are not yet familiar with algorithms and related concepts. If you already are, you can skip it and go straight to the second part of this article: sorting algorithms.

This part is not the focus of the article, so it avoids overly formal treatment; the goal is simply to give readers who are new to algorithms enough background to read and understand the second part.

Concepts of algorithms

An algorithm is a description of the steps required to solve a particular problem, and is represented in a computer as a finite sequence of instructions, each of which represents one or more operations.

From Big Talk Data Structures

An algorithm is simply "a solution to a problem." There may be many different solutions to the same problem. Although these solutions produce the same results, the time and space resources each one consumes can vary greatly.

Starting from the time dimension, let's see how much the efficiency of two different solutions to the same problem can differ:

Now let’s solve the problem: add up the numbers from 1 to 100.

Compare the two approaches that are easy to think of:

  1. Loop from 1 to 100, adding step by step
  2. Use the arithmetic-series summation formula

Implement each as a Swift function:

func sumOpration1(_ n: Int) -> Int {
    
    var sum = 0
    
    for i in 1...n {
        sum += i
    }
    return sum
}

sumOpration1(100) // 5050



func sumOpration2(_ n: Int) -> Int {
    
    return (1 + n) * n / 2
}

sumOpration2(100) // 5050

In the code above, sumOpration1 uses loop traversal, while sumOpration2 uses the arithmetic-series summation formula.

Although both functions produce the correct result, it is not hard to see how different their efficiency is:

The time required to traverse the sum is dependent on the size of n passed in, whereas the time required by the arithmetic summation is completely independent of the size of n passed in.

In the traversal sum, if the n passed in is 100, you need 100 iterations of addition to get the result; what if n is a million?

The arithmetic-summation function, however, needs only a single formula, no matter how large n is.

There is a small lesson here that generalizes widely: countless (algorithmic) problems share this situation, same problem, same result, but very different execution efficiency. So is there a way to measure an algorithm's performance, so that we can compare algorithms and choose between them? The answer is yes.

I will introduce the two dimensions of the resources an algorithm consumes: time complexity and space complexity.

Time complexity and space complexity

Time complexity

The time complexity of an algorithm refers to the time resources the algorithm consumes. In general, an algorithm's running time is a function f(n) of the problem size n, so its time complexity is written as T(n) = O(f(n)).

Common time complexities are: constant order O(1), logarithmic order O(log n), linear order O(n), linearithmic order O(n log n), quadratic order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), and exponential order O(2^n). In that order, as the problem size n grows, the complexity grows faster and faster, and the algorithm's execution efficiency drops.

Compare a few of them:

From the figure above, we can see that the quadratic order O(n^2) grows steeply as n increases, the linear order O(n) grows in a straight line with n, and the constant order O(1) stays flat no matter how large n gets.
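To get a concrete feel for these growth rates, here is a minimal sketch (the helper name is mine, purely for illustration) that prints a few representative orders for several values of n:

import Foundation

// Print approximate values of several common growth orders for a few sizes n.
func printGrowthRates(_ sizes: [Int]) {
    for n in sizes {
        let logN = log2(Double(n))
        print("n = \(n): O(log n) ≈ \(Int(logN)), O(n) = \(n), "
            + "O(n log n) ≈ \(Int(Double(n) * logN)), O(n^2) = \(n * n)")
    }
}

printGrowthRates([10, 100, 1000])
// n = 10: O(log n) ≈ 3, O(n) = 10, O(n log n) ≈ 33, O(n^2) = 100
// n = 100: O(log n) ≈ 6, O(n) = 100, O(n log n) ≈ 664, O(n^2) = 10000
// n = 1000: O(log n) ≈ 9, O(n) = 1000, O(n log n) ≈ 9965, O(n^2) = 1000000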

Looking back at the summation problem in the previous section: the traversal-summation algorithm has linear complexity O(n), since its running time grows linearly with n, while the arithmetic-series summation has constant complexity O(1): its running time has nothing to do with the size of the input n.

The reader can try to think of an algorithm whose running time is proportional to the square of the input size n.

Here is an example: given an array, find the indexes of two elements whose sum equals a target value. The array is [0, 2, 1, 3, 6] and the target is 8:

func findTwoSum(_ array: [Int], target: Int) -> (Int, Int)? {
    
    guard array.count > 1 else {
        return nil
    }
    
    for i in 0..<array.count {
        
        let left = array[i]
        
        for j in (i + 1)..<array.count {
            let right = array[j]
            if left + right == target {
                return (i, j)
            }
        }
    }
    return nil
}

let array = [0, 2, 1, 3, 6]
if let indexes = findTwoSum(array, target: 8) {
    print(indexes) // (1, 4)
} else {
    print("No pairs are found")
}

The above algorithm correctly finds the indexes of the two elements: 1 and 4. Because it uses two nested levels of traversal, its complexity is quadratic, O(n^2).

Instead of two levels of traversal, we only need one: while traversing, we know the current element's value a, so we only have to check whether we have already seen an element whose value equals (target - a). The complexity of this algorithm is therefore O(n).

Take a look at the code implementation:

//O(n)
func findTwoSumOptimized(_ array: [Int], target: Int) -> (Int, Int)? {
    
    guard array.count > 1 else {
        return nil
    }
    
    var diffs = Dictionary<Int, Int>()
    
    for i in 0..<array.count {
        
        let left = array[i]
        
        let right = target - left
        
        if let foundIndex = diffs[right] {
        
            return (foundIndex, i) // earlier index first, matching (1, 4) above
            
        } else {
            
            diffs[left] = i
        }
    }
    
    return nil
}

Again, although the two algorithms achieve the same effect, when n is very large their efficiency diverges sharply: at n = 1000, the time needed to get the result may differ by several hundred times. It is fair to say that quadratic O(n^2) algorithms are unacceptable for large data volumes.

Space complexity

The space complexity of an algorithm refers to the space resources it consumes. It is calculated and expressed much like time complexity, generally using the same asymptotic notation. Compared with time complexity, space complexity is much simpler to analyze, and it is not the focus of this article, so I won't cover it further here.
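Still, a tiny illustration of the idea costs nothing (both functions are mine, purely for illustration): the first uses O(1) extra space, a single accumulator no matter how large the input; the second allocates a result array as large as its input, so its extra space is O(n).

// O(1) extra space: only one accumulator, regardless of the input size.
func total(_ array: [Int]) -> Int {
    var sum = 0
    for value in array { sum += value }
    return sum
}

// O(n) extra space: builds a second array as large as the input.
func doubled(_ array: [Int]) -> [Int] {
    var result = [Int]()
    for value in array { result.append(value * 2) }
    return result
}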

Recursion

In the implementation of algorithm, traversal and recursion are two common operations.

Traversal, iterating over the elements of a collection with a for loop, should already be familiar. Recursion may be less so. Since two of the (important) algorithms in the second part of this article rely on recursion, it is worth covering recursion separately to help you understand them.

Let’s look at the concept of recursion.

The notion of recursion

Recursion, in mathematics and computer science, refers to the use of a function within the definition of the function itself.

From Wikipedia

With recursion, a large, complex problem can be transformed, layer by layer, into smaller problems similar to the original. A small amount of code can therefore describe the many repeated steps needed to solve the problem, greatly reducing the amount of code in the program.

Here’s an example to get a feel for recursion:

You should be familiar with the factorial: 3! = 3 * 2 * 1; 4! = 4 * 3 * 2 * 1.

It is not hard to see that the same operation, decrement by 1 and multiply, is performed repeatedly. It would be convenient to express this repetition with code that calls itself, that is, with recursion:

func factorial(_ n: Int) -> Int {
    return n < 2 ? 1 : n * factorial(n - 1)
}

factorial(3) // 6

In the code above, factorial calls itself: it returns 1 when n < 2; otherwise it keeps calling itself.

It's not hard to see from the code that the function calls itself, but how is the result 6 actually computed? That's where the mechanics of recursion come in.

The implementation principle of recursion

Recursive calls are carried out on the call stack. Here is a diagram of what happens between calling factorial(3) and getting the result 6:

From the figure you can see that the whole recursive process closely resembles push and pop operations on a stack: the rounded rectangle with the orange background is the frame currently executing, the gray rounded rectangles are the remaining frames stacked in the order they were called, and the text inside each shows the code being executed.

Now let me explain the whole process in chronological order, left to right:

  • After 3 is passed in initially, it satisfies n >= 2, so the function calls itself again: 3 * factorial(2) is pushed onto the stack.
  • After 2 is passed in, it satisfies n >= 2, so the function calls itself again: 2 * factorial(1) is pushed.
  • After 1 is passed in, it satisfies n < 2, so the function stops calling itself, returns 1, and its frame is popped.
  • The top frame is now 2 * factorial(1), and factorial(1) just returned 1, so this becomes 2 * 1 = 2 and the frame is popped.
  • Likewise, the top frame is now 3 * factorial(2), and factorial(2) returned 2, so this becomes 3 * 2 = 6 and the frame is popped.
  • Finally, factorial(3) returns 6, leaves the stack, and the recursion ends.

As I understand it, the whole recursive process can be roughly described as: keep making recursive calls, holding each call's context (temporary variables, parameters, and so on) on a stack, until the condition that keeps the recursion going becomes false. At that point the values are returned in the reverse of the order in which they were pushed, passed back one frame at a time, until the result is finally returned at the level where the original call was made.

To simplify: the "recurse" in recursion is the pushing, passing the call down; the "return" is the popping, handing the return value back up.
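If you'd like to watch the push/pop order for yourself, here is a sketch of the factorial above with tracing added (factorialTraced is my name for the variant):

func factorialTraced(_ n: Int) -> Int {
    print("push: factorial(\(n))")                   // the "recurse" half: a frame is pushed
    let result = n < 2 ? 1 : n * factorialTraced(n - 1)
    print("pop: factorial(\(n)) returns \(result)")  // the "return" half: the frame is popped
    return result
}

factorialTraced(3)
// push: factorial(3)
// push: factorial(2)
// push: factorial(1)
// pop: factorial(1) returns 1
// pop: factorial(2) returns 2
// pop: factorial(3) returns 6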

The boundary between pushing and returning is the recursion's termination condition, and it obviously plays a critical role in the process: if the condition could never be met, frames would be pushed forever and the stack would overflow.

Issues to be aware of when using recursion

Building on the factorial example above, let's remove the recursive termination condition:

func factorialInfinite(_ n: Int) -> Int {
    return n * factorialInfinite(n - 1)
}

factorialInfinite(3)

Run this in a playground and it will report a runtime error after a short while (a few seconds or more). You can also add a print statement above the return to print a string each time: you'll see it print over and over until the runtime error hits and the stack overflows.
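For example, the suggested print might look like this (a throwaway sketch; expect it to crash):

func factorialInfinite(_ n: Int) -> Int {
    print("still pushing, n = \(n)") // n just keeps decreasing; no base case is ever reached
    return n * factorialInfinite(n - 1)
}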

So when you write recursive code, check carefully that the termination condition is reasonable: merely having a condition doesn't make it a reasonable one. Consider this example:

func sumOperation(_ n: Int) -> Int {
    if n == 0 {
        return 0
    }
    return n + sumOperation(n - 1)
}

sumOperation(2) // 3

The code above is similar to the factorial: it adds up all values from the parameter down to zero. If we pass in 2, frames are pushed until n = 0 and then popped: 2 + 1 + 0 = 3. That looks fine, but what if we pass in -1 at the start? Frames keep piling up until the stack is exhausted, because the condition n == 0 can never terminate the pushing: starting from -1 and repeatedly subtracting 1 never reaches zero.

So n == 0 is not a reasonable condition; a reasonable one is n <= 0.

func sumOperation(_ n: Int) -> Int {
    if n <= 0 {
        return 0
    }
    return n + sumOperation(n - 1)
}

sumOperation(-1) // 0

At this point, the reader should have a basic understanding of how recursion is used and called, and what to watch out for.

That concludes the basic introduction to algorithms; let's move on to sorting algorithms.

Sorting algorithm

Before diving in, let's look at a comparison of several common sorting algorithms:

Sorting algorithm | Average    | Best            | Worst           | Stability | Space complexity
Bubble sort       | O(n^2)     | O(n)            | O(n^2)          | stable    | O(1)
Selection sort    | O(n^2)     | O(n^2)          | O(n^2)          | unstable  | O(1)
Insertion sort    | O(n^2)     | O(n)            | O(n^2)          | stable    | O(1)
Shell sort        | O(n log n) | depends on gaps | depends on gaps | unstable  | O(1)
Heap sort         | O(n log n) | O(n log n)      | O(n log n)      | unstable  | O(1)
Merge sort        | O(n log n) | O(n log n)      | O(n log n)      | stable    | O(n)
Quick sort        | O(n log n) | O(n log n)      | O(n^2)          | unstable  | O(log n)

The concepts of best case, worst case, and stability are beyond the scope of this article; interested readers can look them up.

Now just look at the average performance:

  • Bubble sort, selection sort, and insertion sort have time complexity O(n^2).
  • Shell sort, heap sort, merge sort, and quicksort are linearithmic, O(n log n).

This article introduces bubble sort, selection sort, insertion sort, merge sort, and quicksort.

Shell sort builds on insertion sort; once you understand insertion sort, Shell sort is easy to pick up, so it is not covered here. Heap sort involves a completely new data structure, the heap, so I will cover that data structure and heap sort in the next article.

A first look at sorting

Before covering the real sorting algorithms, let's look at one of the simplest (and least efficient, but easiest to understand) sorting algorithms, which I'll call "swap sort."

Note that this name is my own; the algorithm has no standard name in online resources or the technical literature.

Algorithm description

Nested traversal with two loops:

  • The outer loop traverses the array from index 0 to the end, with index i.
  • The inner loop traverses the array from i + 1 to the end, with index j.
  • Whenever the element at i is larger than the element at j, the two are swapped, so that by the end of the inner loop the element at index i is the smallest of the remainder.

Let’s use an example to see how the swap works:

Given an initial array: array = [4, 1, 2, 5, 0]

When i = 0:

  • array[0] > array[1]: swap 4 and 1 → [1, 4, 2, 5, 0]. The inner loop continues, j++.
  • array[0] > array[4]: swap 1 and 0 → [0, 4, 2, 5, 1]. The outer iteration for i = 0 ends, i++.

When i = 1:

  • array[1] > array[2]: swap 4 and 2 → [0, 2, 4, 5, 1]. The inner loop continues, j++.
  • array[1] > array[4]: swap 2 and 1 → [0, 1, 4, 5, 2]. The outer iteration for i = 1 ends, i++.

When i = 2:

  • array[2] > array[4]: swap 4 and 2 → [0, 1, 2, 5, 4]. The outer iteration for i = 2 ends, i++.

When i = 3:

  • array[3] > array[4]: swap 5 and 4 → [0, 1, 2, 4, 5]. The outer iteration for i = 3 ends, i++.

When i = 4: the inner loop's boundary condition is not met, so the inner loop does not run, and the sort ends.

So how do you do that in code?

Code implementation

func switchSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 0 ..< array.count {
        
        for j in i + 1 ..< array.count {
          
            if array[i] > array[j] {
                array.swapAt(i, j)
            }
        }
    }
    
    return array
}

swapAt is a built-in method on Swift's Array that swaps the elements at two indexes; we'll use it frequently from here on.
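A one-line illustration:

var numbers = [10, 20, 30]
numbers.swapAt(0, 2)
print(numbers) // [30, 20, 10]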

To verify the swaps in code, we can print the array after each swap, right below the swapAt call:

var originalArray = [4, 1, 2, 5, 0]
print("original array:\n\(originalArray)\n")

func switchSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 0 ..< array.count {
        
        for j in i + 1 ..< array.count {
          
            if array[i] > array[j] {
                array.swapAt(i, j)
                print("\(array)")
            }
        }
    }
    
    return array
}

switchSort(&originalArray)

Print result:

original array:
[4, 1, 2, 5, 0]

switch sort...
[1, 4, 2, 5, 0]
[0, 4, 2, 5, 1]
[0, 2, 4, 5, 1]
[0, 1, 4, 5, 2]
[0, 1, 2, 5, 4]
[0, 1, 2, 4, 5]

The output confirms that the results match the analysis above.

You can also make up your own starting array, write out the expected result of each swap by hand before running the code, and then compare your answer against the algorithm's output. This is very helpful for understanding an algorithm, and I recommend it ~

Please make sure you understand the logic above; writing out the intermediate results helps you internalize it, and it will make the sorting algorithms below much easier to follow.

Did anything about the swapping bother you? Notice that 1 and 2 start near the front of the array, yet during the sort they get pushed toward the back, only to be swapped forward again later. That is clearly inefficient; it feels like wasted work.

So is there a way to optimize the swapping so that each swap moves elements roughly toward their final positions in the array?

The answer is yes, which brings us to the first sorting algorithm formally introduced in this article: bubble sort.

Bubble sort

Algorithm description

Like swap sort, bubble sort is implemented with two nested loops, but with two differences:

  • Loop boundaries: bubble sort's outer loop runs over [0, array.count - 1); the inner loop runs over [0, array.count - 1 - i).

  • Swap sort compares the elements at the outer and inner indexes (array[i] and array[j]), whereas bubble sort compares two adjacent elements at the inner index: array[j] and array[j+1].

Using the same array as above, here is how the elements are swapped:

Initial array: array = [4, 1, 2, 5, 0]

When i = 0:

  • array[0] > array[1]: swap 4 and 1 → [1, 4, 2, 5, 0]. The inner loop continues, j++.
  • array[1] > array[2]: swap 4 and 2 → [1, 2, 4, 5, 0]. The inner loop continues, j++.
  • array[2] < array[3]: no swap; the inner loop continues, j++.
  • array[3] > array[4]: swap 5 and 0 → [1, 2, 4, 0, 5]. The outer iteration for i = 0 ends, i++.

When i = 1:

  • array[2] > array[3]: swap 4 and 0 → [1, 2, 0, 4, 5]. The remaining comparisons find nothing to swap; the outer iteration for i = 1 ends, i++.

When i = 2:

  • array[1] > array[2]: swap 2 and 0 → [1, 0, 2, 4, 5]. The inner loop runs to its (shrunken) end and exits; the outer iteration for i = 2 ends, i++.

When i = 3:

  • array[0] > array[1]: swap 1 and 0 → [0, 1, 2, 4, 5]. The inner loop exits; the outer iteration for i = 3 ends, i++.

When i = 4: the outer loop's boundary condition is no longer met, so the loop does not run and the sort is complete.

Code implementation

Let’s look at the bubble sort code:

func bubbleSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 0 ..< array.count - 1 {

        for j in 0 ..< array.count - 1 - i {
          
            if array[j] > array[j+1] {
                array.swapAt(j, j+1)
            }
        }
    }
    
    return array
}

From the code above we can clearly see the loop boundaries and where the swaps happen. As before, let's add logging and print the array after each swap in bubble sort (for comparison, the swap-sort log is shown too):

original array:
[4, 1, 2, 5, 0]

switch sort...
[1, 4, 2, 5, 0]
[0, 4, 2, 5, 1]
[0, 2, 4, 5, 1]
[0, 1, 4, 5, 2]
[0, 1, 2, 5, 4]
[0, 1, 2, 4, 5]

bubble sort...
[1, 4, 2, 5, 0]
[1, 2, 4, 5, 0]
[1, 2, 4, 0, 5]
[1, 2, 0, 4, 5]
[1, 0, 2, 4, 5]
[0, 1, 2, 4, 5]

The two sets of prints show that bubble sort fixes the weaknesses of swap sort:

  • The first two elements, 1 and 2, stay at the front throughout the sort.
  • The 0, which started at the end, moves forward one position at a time in bubble sort until it lands where it belongs.

So bubble sort improves on swap sort by comparing adjacent elements pairwise and swapping them only when they are out of order.

It follows that if the array becomes fully sorted partway through, continuing the pairwise comparisons is pointless.

To demonstrate this limitation of the algorithm above, let's use a new test case:

var originalArray = [2, 1, 3, 4, 5]

This time, besides logging after each swap, we also count the comparisons:

func bubbleSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    var compareCount = 0
    
    for i in 0 ..< array.count - 1 {

        for j in 0 ..< array.count - 1 - i {

            compareCount += 1
            print("No.\(compareCount) compare \(array[j]) and \(array[j+1])")
            
            if array[j] > array[j+1] {
                array.swapAt(j, j+1) // keep the smaller value at index j
                print("after swap: \(array)")
            }
        }
    }
    
    return array
}

Print result:

original array:
[2, 1, 3, 4, 5]


bubble sort...
No.1 compare 2 and 1
after swap: [1, 2, 3, 4, 5] //already sorted, but keep comparing
No.2 compare 2 and 3
No.3 compare 3 and 4
No.4 compare 4 and 5
No.5 compare 1 and 2
No.6 compare 2 and 3
No.7 compare 3 and 4
No.8 compare 1 and 2
No.9 compare 2 and 3
No.10 compare 1 and 2

As the printout shows, the array is already in order after the very first swap, yet the algorithm keeps comparing, doing a lot of useless work. Is there a way to end the pairwise comparisons early once the array is known to be sorted? The answer is yes.

Ending early is easy: we just break out of the outer loop. The key is the timing: the algorithm needs a way to know that the array is already sorted.

Have you guessed it? If an entire inner pass completes without a single swap, the array is already in order, and there is no need to run another, narrower inner pass. So we set a Boolean flag to record whether any swap occurred:

We'll call this algorithm advanced bubble sort.

func bubbleSortAdvanced(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 0 ..< array.count - 1 {
        
        //bool switch
        var swapped = false
      
        for j in 0 ..< array.count - i - 1 {
            
            if array[j] > array[j+1] {
                array.swapAt(j, j+1)
                swapped = true
            }
        }
        
        //if there was no swap in the inner loop, the range just scanned is already sorted,
        //so it's time to break
        if swapped == false { break }
    }
    
    return array
}

As you can see, compared with the first bubble sort this version adds only one Boolean, swapped, which starts out false on each pass:

  • If no swap happened during the current inner pass, everything in that range is in order; and since the inner range only ever shrinks, the whole array must be sorted, so we can break out of the loop.
  • Conversely, if a swap did happen, the range may still be unordered (it may in fact be ordered, but that can only be confirmed by the next pass, so we cannot exit early; one more inner pass is needed).

To verify that the improved bubble sort fixes the original's problem, let's log the comparison counts:

original array:
[2, 1, 3, 4, 5]

bubble sort...
No.1 compare 2 and 1
after swap: [1, 2, 3, 4, 5]
No.2 compare 2 and 3
No.3 compare 3 and 4
No.4 compare 4 and 5
No.5 compare 1 and 2
No.6 compare 2 and 3
No.7 compare 3 and 4
No.8 compare 1 and 2
No.9 compare 2 and 3
No.10 compare 1 and 2
bubble sort time duration : 1.96ms

advanced bubble sort...
No.1 compare 2 and 1
after swap: [1, 2, 3, 4, 5]
No.2 compare 2 and 3
No.3 compare 3 and 4
No.4 compare 4 and 5
No.5 compare 1 and 2
No.6 compare 2 and 3
No.7 compare 3 and 4

We can see three fewer comparisons with the improved bubble sort. It does not stop immediately after the swap that makes the array ordered, because orderedness cannot be confirmed within the pass in which the swap happened; one more swap-free pass is needed to verify it.
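For reference, here is one way the advanced-bubble-sort log above could have been produced: a sketch combining bubbleSortAdvanced with the counter from the earlier instrumented bubbleSort (the exact harness may have differed):

func bubbleSortAdvancedCounted(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    var compareCount = 0
    
    for i in 0 ..< array.count - 1 {
        
        var swapped = false
        
        for j in 0 ..< array.count - i - 1 {
            
            compareCount += 1
            print("No.\(compareCount) compare \(array[j]) and \(array[j+1])")
            
            if array[j] > array[j+1] {
                array.swapAt(j, j+1)
                swapped = true
                print("after swap: \(array)")
            }
        }
        
        if !swapped { break } // a swap-free pass means the array is sorted
    }
    
    print("total compare count: \(compareCount)")
    return array
}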

Because the array has few elements, the improvement may not look like much. Let's increase the element count and compare the totals:

original array:
[2, 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

bubble sort...
total compare count: 91

advanced bubble sort...
total compare count: 25

The comparison shows a large gap between the two algorithms on this test sample, and the gap keeps widening as the element count grows (because ever more meaningless comparisons are made).

This test sample is extreme, but it shows how the original bubble sort can be meaningfully optimized. Most bubble sort implementations you'll find online include this optimization.

We now know the optimized bubble sort can stop early once the array is known to be in order. Still, all that swapping remains expensive. Is there a way to put the current element in place with just one move? The answer is yes, which brings us to selection sort.

Selection sort

Algorithm description

Selection sort also uses two nested loops:

  • The outer loop runs over [0, array.count - 1) with index i.
  • The inner loop runs over [i + 1, array.count) with index j. Note that the inner range also shrinks, but here the front end moves backward while the back end stays fixed.

The specific approach is:

  • At the start of each outer iteration, record i as the index of the current minimum (it may not yet hold the array's actual minimum).
  • The inner loop then finds the index of the smallest value in its range and compares it with the recorded index:
    • If it differs from the recorded index, swap the two elements.
    • If it is the same, do nothing.

Let's walk through selection sort by hand, using the same array as for swap sort and (non-optimized) bubble sort above: [4, 1, 2, 5, 0]

When i = 0:

  1. Record 0 as the index of the current minimum; the current minimum value is 4.
  2. The inner loop scans [1, 5) and finds the smallest value, 0, at index 4. Since 4 differs from the recorded index 0, swap them: [0, 1, 2, 5, 4]. The inner loop ends, i++.

When i = 1:

  1. Record 1 as the index of the current minimum; the current minimum value is 1.
  2. The inner loop scans [2, 5) and finds nothing smaller than 1, so the recorded index stays 1 and no swap happens. The inner loop ends, i++.

When i = 2:

  1. Record 2 as the index of the current minimum; the current minimum value is 2.
  2. The inner loop scans [3, 5) and finds nothing smaller than 2, so no swap happens. The inner loop ends, i++.

When i = 3:

  1. Record 3 as the index of the current minimum; the current minimum value is 5.
  2. The inner loop scans [4, 5) and finds the smaller value 4 at index 4. Since 4 differs from the recorded index 3, swap them: [0, 1, 2, 4, 5]. The inner loop ends, i++.

When i = 4: the outer loop's boundary condition is no longer met, so the loop ends and the sort is complete.

Notice that on the same starting array, selection sort swaps only twice: because it first finds exactly which value is the minimum, it performs almost no pointless swaps.

Code implementation

Let’s use code to implement the above selection sort algorithm:

func selectionSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }

    for i in 0 ..< array.count - 1 {
        
        var min = i
        
        for j in i + 1 ..< array.count {
            
            if array[j] < array[min] {
                min = j
            }
        }
        
        //if min has changed, there was a value smaller than array[i]
        //if min has not changed, there was no value smaller than array[i]
        if i != min {
            array.swapAt(i, min)
        }
    }
    
    return array
}

As the code shows, the min variable records the index of the smallest value seen so far in the current outer iteration; whenever the inner loop finds a smaller value, min is updated to that index, and at the end the two elements are swapped if needed.

Let's use logging to see how few swaps selection sort performs:

original array:
[4, 1, 2, 5, 0]

advanced bubble sort...
after swap: [1, 4, 2, 5, 0]
after swap: [1, 2, 4, 5, 0]
after swap: [1, 2, 4, 0, 5]
after swap: [1, 2, 0, 4, 5]
after swap: [1, 0, 2, 4, 5]
after swap: [0, 1, 2, 4, 5]

selection sort...
after swap: [0, 1, 2, 5, 4]
after swap: [0, 1, 2, 4, 5]

The contrast should be obvious from the log above.

To further check selection sort's performance, I found two utilities online:

  • A helper that measures a program's running time: executionTimeInterval.swift
  • An extension that generates various kinds of random arrays: Array+Extension.swift

First, the implementation of executionTimeInterval.swift:

import Foundation
import QuartzCore

//time interval
public func executionTimeInterval(block: () -> ()) -> CFTimeInterval {
    let start = CACurrentMediaTime()
    block()
    let end = CACurrentMediaTime()
    return end - start
}


//formatted time
public extension CFTimeInterval {
    public var formattedTime: String {
        return self >= 1000 ? String(Int(self)) + "s"
            : self >= 1 ? String(format: "%.3gs", self)
            : self >= 1e-3 ? String(format: "%.3gms", self * 1e3)
            : self >= 1e-6 ? String(format: "%.3gµs", self * 1e6)
            : self < 1e-9 ? "0s"
            : String(format: "%.3gns", self * 1e9)
    }
}

The first function takes a closure wrapping the code whose running time you want to measure, and returns how long it took.

The second piece is an extension on CFTimeInterval that formats the interval with an appropriate unit: nanoseconds, microseconds, milliseconds, or seconds for anything at or above one second.

To use these, drag the two Swift files into the playground's Sources folder; then, in the playground itself:

var selectionSortedArray = [Int]()

var time4 = executionTimeInterval {
    selectionSortedArray = selectionSort(&originalArray4) // the function to test
}

print("selection sort time duration : \(time4.formattedTime)") // print the elapsed time

Now take a look at Array+Extension.swift:

Let's start with the method that generates random numbers:

import Foundation

extension Array {
    
    static public func randomArray(size: Int, maxValue: UInt) -> [Int] {
        var result = [Int](repeating: 0, count: size)
        
        for i in 0 ..< size {
            result[i] = Int(arc4random_uniform(UInt32(maxValue)))
        }
        
        return result
    }
}

This method takes the desired array length and a maximum value, and generates a random array whose elements stay below that maximum.

For example, we want to generate an array of length 10 and a maximum of 100:

var originalArray = Array<Int>.randomArray(size: 10, maxValue: 100)
//originalArray: [87, 56, 54, 20, 86, 33, 41, 9, 88, 55]

With these two tools we can generate our own test cases and print the execution time of whichever algorithm we use. Let's generate an array of length 10 with a maximum of 100, and compare optimized bubble sort against selection sort:

original array:
[1, 4, 80, 83, 92, 63, 83, 23, 9, 85]

advanced bubble sort...
advanced bubble sort result: [1, 4, 9, 23, 63, 80, 83, 83, 85, 92]
time duration : 8.53ms

selection sort...
selection sort result: [1, 4, 9, 23, 63, 80, 83, 83, 85, 92]
time duration : 3.4ms

Now let's make the array longer: length 100, maximum 200:

advanced bubble sort...
advanced bubble sort sorted elements: 100
time duration : 6.27s

selection sort...
selection sort sorted elements: 100
time duration : 414ms

As you can see, the difference is roughly fifteenfold. That's a big gap: a job that takes one day with selection sort would take about two weeks with bubble sort.

Having studied selection sort, we know it improves sorting performance by reducing the number of swaps.

But besides swapping, comparing also takes time: selection sort finds the minimum of each inner range through continuous comparison, and only then decides whether to swap.

So can the number of comparisons be reduced too? Once again, the answer is yes, which brings us to the next algorithm: insertion sort.

Insertion sort

Algorithm description

The basic idea of insertion sort is to treat one element (usually the first) as a sorted region, then take the remaining elements in order and insert each into that region: if the element is smaller than a sorted element, it goes to that element's left; otherwise, to its right. Overall it looks a bit like sorting the cards in your hand as you pick them up in poker.

Insertion sort also uses two loops:

  • The outer loop runs over [1, array.count) with index i.
  • The inner loop starts at j = i and is a while loop with the condition j > 0 && array[j] < array[j - 1]. Its body swaps the elements at j and j - 1 and decrements j. In other words: as long as the current element is smaller than the one before it, keep moving it left; otherwise, move on to the next outer iteration.

Let's run insertion sort by hand on the same array: [4, 1, 2, 5, 0]

When i = 1:

  1. j = 1: array[1] < array[0], swap 4 and 1 → [1, 4, 2, 5, 0]. After j -= 1, j = 0 fails the inner loop condition; exit the inner loop, i += 1.

When i = 2:

  1. j = 2: array[2] < array[1], swap 4 and 2 → [1, 2, 4, 5, 0]. j moves left; array[1] > array[0], so the inner loop condition fails; exit the inner loop, i += 1.

When i = 3:

  1. j = 3: array[3] > array[2], so the inner loop condition fails immediately; exit the inner loop, i += 1.

When i = 4:

  1. j = 4: array[4] < array[3], swap 5 and 0 → [1, 2, 4, 0, 5]. j -= 1.
  2. j = 3: array[3] < array[2], swap 4 and 0 → [1, 2, 0, 4, 5]. j -= 1.
  3. j = 2: array[2] < array[1], swap 2 and 0 → [1, 0, 2, 4, 5]. j -= 1.
  4. j = 1: array[1] < array[0], swap 1 and 0 → [0, 1, 2, 4, 5]. j -= 1 gives j = 0, which fails the inner loop condition; exit the inner loop. i += 1 gives i = 5, which fails the outer loop condition, and the sort ends.

Note that insertion sort's inner loop exits as soon as array[j] >= array[j - 1]. That is safe because the region to the left of the current element is kept sorted from the very beginning: smaller in front, larger behind.

Code implementation

Let’s see how to implement the insertion sort algorithm:

func insertionSort(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 1..<array.count {
        
        var j = i
        while j > 0 && array[j] < array[j - 1] {
            array.swapAt(j - 1, j)
            j -= 1
        }
    }
    
    return array
}

The inner loop's condition is visible in the code: j > 0 && array[j] < array[j - 1]. The swapping continues as long as the current element is smaller than the one before it; as soon as it is greater than or equal to that element, the loop exits immediately.

Compared with selection sort, insertion sort can reduce the number of comparisons. Let's compare the two by printing comparison counts:

Using a random array of 50 elements with a maximum value of 50:

selection sort...
compare times:1225
selection sort time duration : 178ms

insertion sort...
compare times:519
insertion sort time duration : 676ms

Selection sort performs more than twice as many comparisons as insertion sort; yet, perhaps surprisingly, selection sort still wins on overall time here.

That is, although insertion sort compares less, it swaps more than selection sort, so its overall performance can sometimes be worse.

Note that this does not contradict what I said earlier: insertion sort beats selection sort at reducing comparisons, not necessarily overall.
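As an aside, the standard way to reduce insertion sort's write traffic is to shift instead of swap: hold the current element in a temporary, slide the larger elements one slot to the right, and write the held element into the gap once. Here is a sketch of that well-known variant (not code from the original utilities):

func insertionSortShifting(_ array: inout [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    for i in 1 ..< array.count {
        
        let current = array[i]      // the element being inserted
        var j = i
        
        while j > 0 && array[j - 1] > current {
            array[j] = array[j - 1] // shift the larger element right by one slot
            j -= 1
        }
        
        array[j] = current          // one write into the final position
    }
    
    return array
}

A swap performs multiple writes where a shift performs one, so write traffic drops noticeably on swap-heavy inputs.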

So what kind of array plays to insertion sort's strengths?

When insertion sort processed [4, 1, 2, 5, 0] above, the 0 sitting at the end cost us a long chain of swaps at i = 4 to move it to the front.

Taking that observation to its extreme: if the elements' positions were already roughly their final sorted positions, we wouldn't need nearly as much swapping. That may sound like a tautology, but "nearly sorted" arrays are common in practice: mostly in order, with a few elements out of place. For example:

[2, 1, 3, 6, 4, 5, 9, 7, 8]

I'd describe such arrays as globally ordered but locally disordered.

Let's simply run selection sort and insertion sort on this array:

selection sort...
compare times:36
selection sort time duration : 4.7ms

insertion sort...
compare times:12
insertion sort time duration : 3.2ms

Insertion sort indeed does better on a nearly sorted test case. To make the gap more obvious, I added a method to Array+Extension.swift that generates nearly sorted random arrays:

static public func nearlySortedArray(size: Int, gap: Int) -> [Int] {

    var result = [Int](repeating: 0, count: size)

    for i in 0 ..< size {
        result[i] = i
    }

    let count: Int = size / gap
    var arr = [Int]()
    
    for i in 0 ..< count {
        arr.append(i * gap)
    }

    for j in 0 ..< arr.count {
        let swapIndex = arr[j]
        result.swapAt(swapIndex, swapIndex + 1)
    }
    
    return result
}

The function takes the array length and the span (gap) between the indexes to scramble. Its implementation:

  • First generate a fully ordered sequence.
  • Divide the array length by the span to get the number of indexes to swap, count.
  • From count, compute the indexes to swap and collect them in a new array, arr.
  • Finally, iterate over arr and swap each recorded index of the fully ordered array with the element at index + 1.

For example, to generate a roughly ordered array of length 12 with a span of 3:

var originalArray = Array<Int>.nearlySortedArray(size: 12, gap: 3)
//[1, 0, 2, 4, 3, 5, 7, 6, 8, 10, 9, 11]

A span of 3 means 12 / 3 = 4 positions get scrambled, at indexes 0, 3, 6, and 9. So the pairs at indexes (0, 1), (3, 4), (6, 7), and (9, 10) are swapped, and as you can see, the resulting array is still largely in order.

Now we can verify this with a larger array:

var originalArray = Array<Int>.nearlySortedArray(size: 100, gap: 10)

The result is:

selection sort...
compare times:4950
selection sort time duration : 422ms

insertion sort...
compare times:108
insertion sort time duration : 50ms

The difference is significant: insertion sort runs nearly 10 times faster than selection sort here.

Merge sort

Algorithm description

Merge sort applies the divide-and-conquer idea: as the name suggests, break a big problem into smaller, similar subproblems and solve them one by one. Concretely, merge sort first splits the array to be sorted down to its smallest components (usually single elements), then merges them back together step by step.

Here is a diagram of the merge sort process:

The dotted arrows above represent the splitting process; the solid ones represent merging. Looking closely, the split and merge operations are each repeated over and over, which is exactly the situation where recursion shines.

Let’s first look at the merge operation:

Merging combines two arrays (which usually have roughly the same number of elements) into a single fully ordered array.

The implementation steps of the merge operation are:

  • Create an empty array to hold the merged, ordered result.
  • Compare the two input arrays element by element, each starting at index 0: append the smaller element to the new array and advance that array's index by 1; the larger element stays put and its index is unchanged. Continue these pairwise comparisons until one index reaches the end.
  • If the arrays have different lengths (or one is exhausted first), append the remaining elements of the other array to the result.

Code implementation

Let’s look at the code implementation of the merge sort algorithm:

func _merge(leftPile: [Int], rightPile: [Int]) -> [Int] {
    
    var leftIndex = 0   //left pile index, starts from 0
    var rightIndex = 0  //right pile index, starts from 0
    
    var sortedPile = [Int]()  //sorted pile, empty in the first place
    
    while leftIndex < leftPile.count && rightIndex < rightPile.count {
        
        //append the smaller value into sortedPile
        if leftPile[leftIndex] < rightPile[rightIndex] {
            
            sortedPile.append(leftPile[leftIndex])
            leftIndex += 1
            
        } else if leftPile[leftIndex] > rightPile[rightIndex] {
            
            sortedPile.append(rightPile[rightIndex])
            rightIndex += 1
            
        } else {
            
            //same value, append both of them and move the corresponding indexes
            sortedPile.append(leftPile[leftIndex])
            leftIndex += 1
            sortedPile.append(rightPile[rightIndex])
            rightIndex += 1
        }
    }
    
    //left pile is not empty
    while leftIndex < leftPile.count {
        sortedPile.append(leftPile[leftIndex])
        leftIndex += 1
    }
    
    //right pile is not empty
    while rightIndex < rightPile.count {
        sortedPile.append(rightPile[rightIndex])
        rightIndex += 1
    }
    
    return sortedPile
}

The underscore prefix on the function name is just a convention I use for helpers called inside the merge sort function; it is only a naming choice, nothing required.

From the code above, you can see the implementation logic of the merge:

  • Create an empty array and initialize both input indexes to 0.
  • Compare the elements at the two indexes pairwise, appending the smaller one to the new array and advancing its index by 1.
  • Finally, append any remaining elements to the new array.

Now that we understand the merge, let's look at the split. The splitting uses recursion:

func mergeSort(_ array: [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    let middleIndex = array.count / 2
    let leftArray = mergeSort(Array(array[0..<middleIndex]))             // recursively split left part of original array
    let rightArray = mergeSort(Array(array[middleIndex..<array.count]))  // recursively split right part of original array

    return _merge(leftPile: leftArray, rightPile: rightArray)            // merge left part and right part
}

We can see mergeSort calling itself; its recursion terminates when !(array.count > 1), that is, an array with a single element is returned as-is.

The purpose of this recursive function is to keep splitting the incoming array at its midpoint: as long as an array has more than one element, the splitting continues. The _merge call at the bottom only executes once the recursion bottoms out and the stack starts unwinding. In other words, merging begins only after splitting has finished, which matches the description of the merge algorithm above.

That last point is worth reading twice.

To watch the split-then-merge order for ourselves, let's add logging to the _merge function:

func _merge(leftPile: [Int], rightPile: [Int]) -> [Int] {
    
    print("\nmerge left pile: \(leftPile)  |  right pile: \(rightPile)")
    ...
    print("sorted pile: \(sortedPile)")
    return sortedPile
}

For easy comparison with the figure above, use the same initial array: [3, 5, 9, 2, 7, 4, 8, 0]. Run it and look at the output:

original array:
[3, 5, 9, 2, 7, 4, 8, 0]

merge sort...

merge left pile: [3]  |  right pile: [5]
sorted pile: [3, 5]

merge left pile: [9]  |  right pile: [2]
sorted pile: [2, 9]

merge left pile: [3, 5]  |  right pile: [2, 9]
sorted pile: [2, 3, 5, 9]

merge left pile: [7]  |  right pile: [4]
sorted pile: [4, 7]

merge left pile: [8]  |  right pile: [0]
sorted pile: [0, 8]

merge left pile: [4, 7]  |  right pile: [0, 8]
sorted pile: [0, 4, 7, 8]

merge left pile: [2, 3, 5, 9]  |  right pile: [0, 4, 7, 8]
sorted pile: [0, 2, 3, 4, 5, 7, 8, 9]

Notice that the algorithm fully splits and merges the left half of the array before touching the right half. Why?

Let's look at how the function is called initially:

func mergeSort(_ array: [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    let middleIndex = array.count / 2
    let leftArray = mergeSort(Array(array[0..<middleIndex]))             // 1
    let rightArray = mergeSort(Array(array[middleIndex..<array.count]))  // 2

    return _merge(leftPile: leftArray, rightPile: rightArray)            // 3
}

The recursion begins at line //1. The original 8-element array is halved to 4 elements; since 4 > 1, the termination condition is not met and the recursion keeps splitting until single-element arrays (such as [3]) are reached. The recursion then starts returning, merging on the way back up, and eventually produces the left half's result: [2, 3, 5, 9].

Similarly, the right half of the original array (line //2) is split and merged in the same fashion, producing the right half's result: [0, 4, 7, 8].

With only the outermost mergeSort frame left on the call stack, it performs the final merge (line //3), calling _merge to produce the final result: [0, 2, 3, 4, 5, 7, 8, 9].

Thanks to divide and conquer plus recursion (at the cost of some extra memory), merge sort outperforms all of the sorts above, provided the element count is not too small.

Let's compare selection sort with merge sort on a random array of length 500 with a maximum value of 500:

selection sort...
selection sort time duration : 12.7s

merge sort...
merge sort time duration : 5.21s

You can see that merge sort clearly beats selection sort.

So merge sort uses divide and conquer plus recursion to sort arrays efficiently. There is another algorithm built on the same two ideas that does even better: quicksort.

Quick sort

Quicksort is counted among the top ten algorithms of the 20th century, and it is one of the most popular interview topics.

Algorithm description

The basic idea of quicksort: one pass of partitioning splits the records being sorted into two independent parts, where every key in one part is smaller than every key in the other; each part is then sorted the same way, until the whole sequence is in order.

The above text is excerpted from Big Talk Data Structures

Its implementation steps are as follows:

  1. Select an element from the sequence, whether at random or by some other strategy; this element is called the "pivot."
  2. Reorder the array: all elements smaller than the pivot go before it, all elements larger go after it, and equal elements can go to either side. This step is called partitioning.
  3. Recurse on the partitions: sort the sub-array of elements smaller than the pivot and the sub-array of elements larger than it.

As this description shows, partitioning is the core of quicksort. Let's walk through one partition with an example.

Start with the array: [5, 4, 9, 1, 3, 6, 7, 8, 2]

  • Choose 5 as the pivot.
  • Set two markers on the remaining elements: low at the left end, and high at the far right (on the 2).
  • Look at high first: 2 < 5, so 2 and the pivot 5 trade places; high stays where it is: [2, 4, 9, 1, 3, 6, 7, 8, 5].
  • Now look at low: 2 < 5, low++; 4 < 5, low++; 9 > 5, so 9 and the pivot trade places; low stays where it is: [2, 4, 5, 1, 3, 6, 7, 8, 9]. The scans alternate like this, low and high closing in, until they meet; the pivot then sits in its final position, with smaller elements on its left and larger ones on its right.

Code implementation

Use Swift’s filter function

Swift arrays have a filter function that selects the values matching a predicate, so let me first show a simple quicksort built on it:

func quickSort0<T: Comparable>(_ array: [T]) -> [T] {
    
    guard array.count > 1 else { return array }
    
    let pivot = array[array.count/2]
    let less = array.filter { $0 < pivot }
    let equal = array.filter { $0 == pivot }  // without this, the pivot (and any duplicates) would be lost
    let greater = array.filter { $0 > pivot }
    
    return quickSort0(less) + equal + quickSort0(greater)
}

The recursion is easy to spot: after the pivot is chosen, the array is split into parts, which are sorted and concatenated. Although it leans on Swift's built-in filter to form the partitions, this example gives a good feel for how quicksort works.
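A quick check with the partition-walkthrough array from above:

let sorted = quickSort0([5, 4, 9, 1, 3, 6, 7, 8, 2])
print(sorted) // [1, 2, 3, 4, 5, 6, 7, 8, 9]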

Use a partition function with pivot index = 0

Besides Swift's built-in filter, we can implement the partitioning ourselves; a custom partition function is the usual approach.

func _partition(_ array: inout [Int], low: Int, high: Int) -> Int {

    var low  = low
    var high = high

    let pivotValue = array[low]

    while low < high {

        while low < high && array[high] >= pivotValue {
            high -= 1
        }
        array[low] = array[high]
        
        while low < high && array[low] <= pivotValue {
            low += 1
        }
        array[high] = array[low]
    }

    array[low] = pivotValue
  
    return low
}

As the code shows, pivotValue is taken from the first element of the current range.

high then moves leftward from the far right: while array[high] >= pivotValue, high -= 1; otherwise array[high] is moved down into the slot at low. The low index then does the mirror-image scan from the left.

The net effect is that the range is partitioned around the chosen pivotValue: smaller elements in front, larger ones behind, with the pivot dropped into the slot where low and high meet.

How do you use _partition?

func quickSort1(_ array: inout [Int], low: Int, high: Int) {
  
    guard array.count > 1 else { return }
    
    if low < high {
        let pivotIndex = _partition(&array, low: low, high: high)
        quickSort1(&array, low: low, high: pivotIndex - 1)
        quickSort1(&array, low: pivotIndex + 1, high: high)
    }
}

The outer quickSort1 is a recursive function that keeps partitioning until every sub-range is sorted.
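Since quickSort1 takes explicit bounds, the first call must cover the whole array; a typical invocation (reusing the randomArray helper from earlier) looks like this:

var numbers = Array<Int>.randomArray(size: 10, maxValue: 100)
quickSort1(&numbers, low: 0, high: numbers.count - 1)
print(numbers) // the 10 random values, in ascending order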

Let's compare the merge sort implemented above against both quicksort versions, the filter-based one and the custom-partition one:

merge sort...
merge sort time duration : 4.85s

quick sort...
quick sort0 time duration : 984ms  //swift filter function
quick sort1 time duration : 2.64s  //custom partition

Now the same comparison on a nearly sorted array with the same number of elements:

merge sort...
merge sort time duration : 4.88s

quick sort...
quick sort0 time duration : 921ms
quick sort1 time duration : 11.3s

Same element count, far worse performance. Why? Because our partition always takes the first element as the pivot. On a nearly sorted array the first element is close to the smallest, so each partition is badly unbalanced (almost everything lands on one side); and since partitioning is applied recursively, every level repeats the imbalance and performance plummets. A reasonable fix is to pick the pivot index at random.

Use a partition function that selects pivotValue at random

The change is simple: generate the pivot index randomly inside the partition function:

func _partitionRandom(_ array: inout [Int], low: Int, high: Int) -> Int {
    
    let x = UInt32(low)
    let y = UInt32(high)
    
    let pivotIndex = Int(arc4random() % (y - x)) + Int(x)
    let pivotValue = array[pivotIndex]
    // ... the rest proceeds as in _partition (typically after first swapping
    // the randomly chosen pivot into the low position)
    ...
}

Now test the random-pivot version on a nearly sorted array of the same length as the test case above:

merge sort...
merge sort time duration : 4.73s

quick sort...
quick sort0 time duration : 866ms
quick sort1 time duration : 15.1s  //fixed pivot index
quick sort2 time duration : 4.28s  //random pivot index

Picking the pivot index at random makes it run more than three times faster.

All three implementations so far split the array into two parts around pivotValue. But when the array contains many duplicate elements, the pivot is likely to be one of them, and none of the algorithms above handles the case of many elements equal to the pivot. For sorting arrays like that, three-way quicksort is the right tool.

Three-way quicksort

Three-way quicksort partitions the elements into three groups: less than, equal to, and greater than pivotValue. Let's look at the implementation; this time the partitioning is written inline:

func swap(_ arr: inout [Int], _ j: Int, _ k: Int) {
    
    guard j != k else {
        return
    }
    
    let temp = arr[j]
    arr[j] = arr[k]
    arr[k] = temp
}


func quickSort3W(_ array: inout [Int], low: Int, high: Int) {
    
    if high <= low { return }
    
    var lt = low       // arr[low+1...lt] < v
    var gt = high + 1  // arr[gt...high] > v
    var i  = low + 1   // arr[lt+1...i) == v
    
    let pivoteIndex = low
    let pivoteValue = array[pivoteIndex]
    
    while i < gt {
        
        if array[i] < pivoteValue {
          
            swap(&array, i, lt + 1)
            i += 1
            lt += 1
           
        } else if pivoteValue < array[i] {
       
            swap(&array, i, gt - 1)
            gt -= 1
            
        } else {
            i += 1
        }
    }
    
    swap(&array, low, lt)
    quickSort3W(&array, low: low, high: lt - 1)
    quickSort3W(&array, low: gt, high: high)
}


func quickSort3(_ array: inout [Int]) {
    
    quickSort3W(&array, low: 0, high: array.count - 1)
}

The quickSort3W method maintains three ranges: elements less than the pivot, equal to it, and greater than it.
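A quick sanity check on a duplicate-heavy array:

var arr = [3, 1, 2, 3, 1, 2, 3]
quickSort3(&arr)
print(arr) // [1, 1, 2, 2, 3, 3, 3]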

Now let's generate a random array of 500 elements with a maximum value of only 5 (so it is full of duplicates) and compare the quicksort variants:

quick sort1 time duration : 6.19s  //fixed pivot index
quick sort2 time duration : 8.1s   //random pivot index
quick sort3 time duration : 4.81s  //quick sort 3 way

Three-way quicksort indeed works best on arrays with many repeated elements.

Three-way quicksort can also be written with Swift's built-in filter function:

func quicksort4(_ array: [Int]) -> [Int] {
    
    guard array.count > 1 else { return array }
    
    let pivot = array[array.count/2]
    let less = array.filter { $0 < pivot }
    let equal = array.filter { $0 == pivot }
    let greater = array.filter { $0 > pivot }
    
    return quicksort4(less) + equal + quicksort4(greater)
}

That covers five quicksort implementations in Swift.

The last word

Conclusion

This article explained some basic concepts of algorithms and walked through bubble sort, selection sort, insertion sort, merge sort, and quicksort with Swift implementations. I hope reading it has given you a better understanding of these algorithms. Since I have only recently started studying this field, some phrasing is bound to be imperfect; readers, please share your opinions and suggestions.

Thinking about algorithm learning

I would like to share some thoughts on learning algorithms. They may not all be right, but I feel they are worth writing down here, in the hope of prompting your own thinking:

In the figure above, "Question" means a problem; "Mind" means an idea, a strategy for solving the problem; "Code" means the code implementation.

Algorithm learning from articles or books usually follows the solid-line paths 1, 2, and 3 in the figure:

  • Path 1: given a problem, the material immediately presents a strategy for solving it.
  • Path 2: given a problem, the material immediately presents an implementation.
  • Path 3: given an implementation, the material tells you what the code means.

These paths are essential when learning algorithms, but they easily create the illusion that "I have learned the algorithm." Merely walking them is not enough to truly understand an algorithm or to apply it later, for two reasons:

  • The problems we meet in the future will almost never be exactly the ones we are studying now, so we must understand the why and abstract the problem itself in order to reason by analogy.
  • Given a new idea, it is very hard to code it well without enough implementation experience; the ability to translate ideas into code has to be practiced deliberately.

The first point corresponds to path 4 in the figure: given a strategy or design, think about what kind of problem it solves, so that you understand its significance. The second corresponds to path 5: given a strategy, implement it correctly and cleanly in code. And path 6 matters too, in my view: given code that solves a problem, can you work out what the corresponding problem is?

In short, I believe learning algorithms requires moving back and forth among problems, strategies, and code; only then can you truly apply what you learn.


Swift code

The code shown in this article is already in the GitHub repository:

  • Basic part of the Algorithm: Algorithm Introduction
  • Sort Algorithms section: Sort Algorithms

References & websites

Wikipedia: Algorithms

Big Talk Data Structure

Data Structure and Algorithm Analysis: C Language Description

Coming next

The next article introduces the heap data structure and the heapsort algorithm.

This post has been synced to my blog: Portal

———————— Updated on July 17, 2018 ————————

Follow me!!

I recently started a personal official account, mainly sharing articles on programming, reading notes, and reflections.

  • Programming: selected technical articles I've published before, plus new (mostly original) ones; the focus will gradually move beyond iOS toward improving general programming ability.
  • Reading notes: notes on books about programming, thinking, psychology, and careers.
  • Reflections: my thoughts on technology and life.

Because the official account limits how many posts can be pushed, not all of my earlier selected articles are on it yet; they will be published gradually.

And because of the various restrictions of the major blogging platforms, some short, concise pieces will appear only on the official account; follow it to catch the meatier posts ~

Scan the QR code of the official account below and tap follow. I look forward to growing together with you!