Some time ago, a joke went viral on the Internet: "After writing this sorting algorithm, I was fired." Let's see what kind of code the author wrote that made his supervisor angry enough to fire him.

When I first saw this code, my reaction was that the author has quite an imagination. But don't mock his intelligence just yet: if you look it up, this is a classic algorithm called sleep sort. Graphic, isn't it?

So, in case you ever get fired for writing a sorting algorithm, or in case you're a manager tempted to fire someone over one, let me take you through some of the most imaginative, interesting, and bizarre sorting algorithms in history.

1. Sleep Sort

The first imaginative sorting algorithm is the one from the joke above: sleep sort. The principle is so simple that you can understand it just by looking at the code.

To sort six numbers by size, we create six threads, one per number, and each thread sleeps for a time proportional to its number. For example, to sort {102, 338, 62, 9132, 580, 666}, we let the six threads sleep for 102 ms, 338 ms, 62 ms, 9132 ms, 580 ms, and 666 ms respectively. The thread that wakes up first prints the smallest number; the thread that wakes up last prints the largest.

So the total running time of this sorting algorithm is the sleep time of the thread holding the largest number. You might object: if the largest number is huge, won't it take a very long time for the last thread to wake up? You're right. However, we can make threads sleep at a smaller time granularity, for example by changing the sleep times above from milliseconds (ms) to microseconds (μs).
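As a sketch, sleep sort might look like this in Python (the `scale` parameter and the `worker` helper are my own additions; since ordering depends on OS thread scheduling, this is a toy, not a reliable sort):

```python
import threading
import time

def sleep_sort(values, scale=0.001):
    """Toy sleep sort: one thread per value; each thread sleeps for a
    time proportional to its value, so smaller values wake up first."""
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * scale)
        with lock:              # record each value as its thread wakes
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:           # wait until every thread has woken up
        t.join()
    return result

print(sleep_sort([3, 1, 2], scale=0.05))
```

With well-separated values this prints them in ascending order, but scheduling jitter can reorder values whose sleep times are close together.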

And don’t underestimate this seemingly unrealistic sorting algorithm. If one day we can actually build super-fast quantum computers, then this sorting algorithm might actually make sense.

2. Monkey Sort (Bogo Sort)

If sleep sort is at least somewhat useful, the next algorithm is both impractical and spectacularly inefficient.

The name of this sorting algorithm comes from the infinite monkey theorem. The idea is simple: if we had an infinite number of monkeys typing at random on keyboards for an infinite amount of time, one of them would eventually type out the complete works of Shakespeare. It's purely a matter of probability.

With the infinite monkey theorem in mind, let's look at monkey sort itself. The algorithm is also very simple, and it also leans on probability theory. For the data to be sorted, we randomly generate one permutation at a time and check whether it is fully ordered from smallest to largest. If not, we randomly generate another permutation, and repeat until a sorted permutation appears. Given enough attempts, it will eventually stumble on the ordered arrangement.

while not isInOrder(deck):
    shuffle(deck)
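The pseudocode above translates almost directly into runnable Python (`is_sorted` is a helper I've added to play the role of `isInOrder`):

```python
import random

def is_sorted(deck):
    """True if the list is ordered from smallest to largest."""
    return all(deck[i] <= deck[i + 1] for i in range(len(deck) - 1))

def bogo_sort(deck):
    """Keep shuffling until the deck happens to come out sorted."""
    while not is_sorted(deck):
        random.shuffle(deck)
    return deck

print(bogo_sort([3, 1, 2]))  # prints [1, 2, 3], feasible only for tiny inputs
```

The expected number of shuffles grows as n!, so anything beyond a handful of elements will effectively never finish.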

3. Slow Sort algorithm

This sorting algorithm was published in 1986 by Andrei Broder and Jorge Stolfi. The name says it all: slow sort. Structurally it looks a bit like merge sort, and the pseudocode is as follows.

procedure slowsort(A, i, j)
  if i >= j then return
  m := ⌊(i + j) / 2⌋
  slowsort(A, i, m)     // Sort the first half first
  slowsort(A, m+1, j)   // Sort the second half
  if A[j] < A[m] then swap A[j] and A[m]  // Find the maximum and put it at the end
  slowsort(A, i, j-1)   // Sort all data except the maximum
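Translated into Python, the pseudocode above might look like this (the default-argument handling is my own convenience):

```python
def slowsort(A, i=0, j=None):
    """Slow sort (Broder & Stolfi): deliberately inefficient
    divide-and-conquer, following the pseudocode above."""
    if j is None:
        j = len(A) - 1
    if i >= j:
        return A
    m = (i + j) // 2
    slowsort(A, i, m)            # sort the first half
    slowsort(A, m + 1, j)        # sort the second half
    if A[j] < A[m]:              # the maxima of the halves sit at m and j;
        A[j], A[m] = A[m], A[j]  # move the larger one to position j
    slowsort(A, i, j - 1)        # sort everything except the maximum
    return A

print(slowsort([102, 338, 62, 9132, 580, 666]))
# prints [62, 102, 338, 580, 666, 9132]
```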

From the code, the algorithm looks impressive: it uses both divide-and-conquer and recursion. To sort the elements between indices i and j, we sort the first half, sort the second half, move the maximum to position j, and then recursively sort everything between i and j-1 again.

The algorithm is correct and does sort an unordered data set. But if we write down the recurrence for its running time, you'll see just how expensive it is.

T(n) = 2 T(n/2) + T(n-1) + C (constant time)

The full derivation of this recurrence is involved, but it can be shown that T(n) grows roughly like n raised to a power proportional to log n, which is faster than any polynomial. In other words, this is not a polynomial-time algorithm, and for practical purposes it is useless.

4. Gnome Sort

This algorithm was originally proposed as "stupid sort" by Hamid Sarbazi-Azad in 2000, and was later renamed "gnome sort" by Dick Grune. As either name suggests, it is not a very clever algorithm. How does it work? Let's look at the pseudocode first and then explain it.

procedure gnomeSort(a[]):
  pos := 0
  while pos < length(a):
   if (pos == 0 or a[pos] >= a[pos-1]):
     pos := pos + 1
   else:
     swap a[pos] and a[pos-1]
     pos := pos - 1
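The same logic in Python, virtually unchanged from the pseudocode:

```python
def gnome_sort(a):
    """Gnome sort: walk forward while the pair behind you is in order;
    on an inversion, swap the pair and step back one position."""
    pos = 0
    while pos < len(a):
        if pos == 0 or a[pos] >= a[pos - 1]:
            pos += 1                          # ordered so far, move on
        else:
            a[pos], a[pos - 1] = a[pos - 1], a[pos]
            pos -= 1                          # step back and recheck
    return a

print(gnome_sort([102, 338, 62, 9132, 580, 666]))
# prints [62, 102, 338, 580, 666, 9132]
```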

The idea fits the way we organize things in daily life. Suppose we are standing at index pos, and the elements from 0 to pos-1 are already sorted from smallest to largest. If a[pos-1] is less than or equal to a[pos], we advance one position; conversely, if a[pos-1] is greater than a[pos], we swap the two elements and step back one position (pos-1) to continue comparing against the sorted part.

In fact, from this analysis of how it works, the algorithm is quite similar to bubble sort. And despite its name, its performance is not that bad: the worst-case time complexity is O(n^2).

5. Stooge Sort

The stooge sort algorithm is, in its recursive structure, somewhat similar to slow sort. However, it performs slightly better, at O(n^(log 3 / log 1.5)) = O(n^2.7095…).

So how does Stooge sort work? Let’s take a look at the pseudo-code implementation.

function stoogesort(array L, i = 0, j = length(L) - 1){
  if L[i] > L[j] then
    swap L[i], L[j]
  if (j - i + 1) > 2 then
    t = (j - i + 1) / 3
    stoogesort(L, i  , j-t)
    stoogesort(L, i+t, j)
    stoogesort(L, i  , j-t)
  return L
}
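In Python, with the integer division made explicit, the same function might look like this:

```python
def stooge_sort(L, i=0, j=None):
    """Stooge sort: fix the endpoints, then recursively sort the first
    two-thirds, the last two-thirds, and the first two-thirds again."""
    if j is None:
        j = len(L) - 1
    if L[i] > L[j]:
        L[i], L[j] = L[j], L[i]
    if (j - i + 1) > 2:
        t = (j - i + 1) // 3
        stooge_sort(L, i, j - t)      # first 2/3
        stooge_sort(L, i + t, j)      # last 2/3
        stooge_sort(L, i, j - t)      # first 2/3 again
    return L

print(stooge_sort([102, 338, 62, 9132, 580, 666]))
# prints [62, 102, 338, 580, 666, 9132]
```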

In terms of code, this algorithm is remarkably neat. Let me explain briefly.

If the last element is less than the first, we swap the two. If the current range contains three or more elements, we recursively sort the first two-thirds, then the last two-thirds, then the first two-thirds again.

It's a pretty neat trick, but does the algorithm actually end with a sorted array?

In fact, the proof of correctness is quite simple. Divide the array into three equal segments: call the first third A, the middle third B, and the last third C.

After the first recursive call, [AB] is sorted, which means every element in B is at least as large as every element in A. After the second call, [BC] is sorted, which means every element in C is at least as large as everything in [AB]; in other words, C now holds the largest third of the array.

At this point, is the data in [AB] still in order? No: the second call may have disturbed it, so we need one more pass over [AB], which is the final recursive call in the code. Doesn't that feel a bit like the Tower of Hanoi?

Today I've only shown you a sampling of these weird sorting algorithms; if any of them interests you, dig deeper on your own. What other imaginative sorting algorithms do you know? Feel free to share them in the comments section.

Zheng Wang, a former Google engineer, is the author of "The Beauty of Data Structures and Algorithms" and "The Beauty of Design Patterns". WeChat official account: Xiao Zhengge. Follow the account and reply "PDF" to get 100+ pages of a Google engineer's notes on algorithm learning and interview experience.