### 1. The definition of an algorithm ###

Broadly speaking, an algorithm is simply a set of instructions for completing a task, so almost any piece of code could be called an algorithm. The algorithms worth studying, though, are the ones that are very fast, or that solve particularly interesting problems, or both. And an algorithm is not tied to any particular programming language.

### 2. Who should learn algorithms? ###

People who study algorithms need to know a certain amount of math, in particular basic algebra: functions like the one below. If you can answer it, congratulations, you are ready to learn algorithms.

    f(x) = x * 10;
    f(5) = ?

### 3. Binary search ###

Suppose there is an ordered list, such as 1, 2, 3 … 100. If I asked you to guess which number in this list I had in mind, how would you go about it?

The easiest way is to guess 1, then 2, and so on all the way up to 100. Obviously, if the number I am thinking of is 100, you might need 100 guesses. Wouldn't that be a waste of time?

In the same way, if you look for a particular character in a line of 100 characters, do you really have to check up to 100 times?

There is a faster way. In the first example, split the range of 100 in half and guess 50 first, then decide what to do next based on my answer: if I say "too small", guess between 50 and 100; if I say "too big", guess between 1 and 50. Each guess halves the remaining range: 100 -> 50 -> 25 -> 13 -> 7 -> 4 -> 2 -> 1, so you can find the number I have in mind in at most 7 guesses.

Halving the search range like this is called binary search. An algorithm based on binary search is often simply called the binary method, or dichotomy for short.

In general, for a list of n elements, binary search takes at most log2(n) steps.

You might not remember the concept of a logarithm, but you probably remember powers. log10(100) asks how many 10s you have to multiply together to get 100. The answer is of course 2, because 10 * 10 = 100, so log10(100) = 2.

In general, logarithms are the inverse of powers.
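
You can check this relationship directly in JavaScript (a quick illustration of the idea, not code from the original article):

    Math.pow(10, 2);  // 100, so log10(100) = 2
    Math.log2(100);   // ≈ 6.64, so binary search on 100 elements needs at most 7 steps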

Does binary search require the list to be ordered? Yes, it needs an ordered sequence to work on. In practice we store the elements of such a list in an array, much like putting things into buckets and labelling the buckets in order. The numbering starts from zero: the first bucket is index 0, the second is index 1, and so on.

Suppose I have the ordered array [1, 2, 3 … 100] and I want to find the index of the number 75; obviously that index is 74. We can write this using the dichotomy, as follows (the example is JavaScript, but any other language would work just as well):

    var arr = [];
    for(var i = 1; i <= 100; i++){
        arr.push(i);
    }
    var checknum = 75;
    var low = 0;
    var high = arr.length - 1;
    while(low <= high){
        // guess the middle of the remaining range
        var center = Math.floor((low + high) / 2);
        var result = arr[center];
        if(result === checknum){
            console.log(center); // found it: print the index
            break;
        }else if(result < checknum){
            // the guess is too small, so raise the lower bound
            low = center + 1;
        }else{
            // the guess is too big, so lower the upper bound
            high = center - 1;
        }
    }

So the output is 74.


We can also encapsulate this as a function that takes an array and the value to look for, as follows:

    function getArrIndex(arr, result){
        var low = 0, high = arr.length - 1;
        while(low <= high){
            var center = Math.floor((low + high) / 2); // recompute the middle each round
            if(arr[center] === result){
                return center;
            }else if(arr[center] < result){
                low = center + 1;
            }else{
                high = center - 1;
            }
        }
        return 'no index'; // not found
    }
    // test
    getArrIndex([1, 4, 6, 8, 10], 6); // 2

### 4. Big O notation ###

Earlier, when I wrapped up the dichotomy, I mentioned its running time. In general, when we write a program we choose the most efficient algorithm, so that it uses as little running time or space as possible.

If you simply search a list from 1 to 100, it takes up to 100 guesses. What if the list is longer, say 100 million elements? Then it could take up to 100 million guesses. In other words, the maximum number of guesses is the same as the length of the list, and running time that grows this way is called linear time.

Binary search is different: for 100 elements we only have to guess log2(100) ≈ 7 times, and for 100 million elements only log2(100,000,000) ≈ 26 times. Running time that grows this way is called logarithmic time.

We can express this running time using a special notation, the big O notation. The big O notation indicates how fast the algorithm is.

Before we understand what big O notation is, let’s look at an example:

Assume a list of 100 elements and assume each check takes about 1 millisecond. Binary search needs roughly 7 checks (log2(100) ≈ 7), so about 7 milliseconds. Simple search? It needs up to 100 checks, about 100 milliseconds, roughly 15 times as long as binary search.

For a list of 100 million elements, a simple search would take about 100 million milliseconds, while a binary search would take about 26 milliseconds (log2(100,000,000) ≈ 26). Simple search is therefore far slower; put the other way round, binary search is nearly four million times faster.

What is the point of this?

It shows that the running times of different algorithms grow at different rates. As the number of elements increases, binary search needs only a little extra time, while simple search needs a great deal more. We call this the growth rate. It is not enough to know an algorithm's running time on one input; we also need to know how fast that running time grows as the list grows, and that is exactly what big O notation expresses.

In general, for a list of n elements, a simple search takes n operations, which we write in big O notation as O(n). The unit does not have to be milliseconds or seconds; big O notation just tells you how fast the algorithm grows. Binary search, likewise, can be written as O(log2(n)).
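
For contrast with the binary search above, here is a minimal sketch of "simple search" (linear search), which checks each element in turn and therefore runs in O(n) time; the function name is my own, not from the article:

    function simpleSearch(arr, target){
        for(var i = 0; i < arr.length; i++){
            if(arr[i] === target){
                return i; // found the target after i + 1 checks
            }
        }
        return null; // not found after checking all n elements
    }
    simpleSearch([1, 2, 3, 4, 5], 4); // 3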

Let's take another example. Suppose we want to draw a 16 × 16 grid. The simple approach is to draw the cells one by one, which takes 256 drawing operations. But if we instead keep folding a piece of paper in half, every fold doubles the number of cells, so we only need log2(256) = 8 folds.

In big O notation: simple search runs in O(n) time, and binary search runs in O(log2(n)) time.

The big O notation indicates the worst-case time. What does that mean? Here’s another example:

If you were to find an article in a book using simple search, the time required would be O(n). That means in the worst case you may have to check every page of the book. If the article happens to be on the first page and you find it on the first try, does the simple search run in O(1) or O(n)? The answer is still O(n), because big O notation describes the worst-case running time. Finding it in one flip is merely the best case.

From these cases, we can draw the following conclusions:

1. The speed of an algorithm is measured not in elapsed time but in how the number of operations grows.
2. When we talk about an algorithm's speed, we are talking about how fast its running time increases as the input grows.
3. The running time of an algorithm is expressed in big O notation.
4. O(log2(n)) is faster than O(n), and the more elements there are to search, the bigger the advantage.

In fact, there is another kind of running time: O(n!), the factorial algorithm. Consider an example: suppose a salesperson needs to visit four cities and wants the total trip to be as short as possible. Checking the permutations, four cities give about 24 possible orders, and five cities about 120. In other words, as the number of cities grows, roughly n! operations are needed. This is a valid algorithm, but a terrible one.
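
To make the n! growth concrete, here is a rough sketch (my own, not from the article) that enumerates every possible visiting order of the cities; the number of orders is n!, which is why this brute-force approach is O(n!):

    function permutations(cities){
        if(cities.length <= 1){
            return [cities];
        }
        var result = [];
        for(var i = 0; i < cities.length; i++){
            // fix cities[i] as the first stop and permute the rest
            var rest = cities.slice(0, i).concat(cities.slice(i + 1));
            permutations(rest).forEach(function(p){
                result.push([cities[i]].concat(p));
            });
        }
        return result;
    }
    permutations(['A', 'B', 'C', 'D']).length; // 24 = 4!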

### 5. The selection sort algorithm ###

Before we can understand the selection sort algorithm, we need to understand big O notation, arrays and linked lists.

If you spend much time with computers you will certainly have heard the word "memory". So what is memory? Let's use a vivid metaphor to explain.

Suppose you have a cabinet with many drawers, and each drawer can put something in it. In other words, when we put things in the drawers, the cabinets keep them. One drawer for one thing, two drawers for two things. That’s how computer memory works. Computer memory is more like a collection of drawers.

When you need to store data in a computer, you ask the computer for storage space, and the computer gives you a storage address. When you need to store several pieces of data, there are two basic data structures to choose from: arrays and linked lists.

An array is just a list of elements. For example, you might want to write a to-do list application (a todo list, in front-end terms), and you need to store a lot of to-do items in memory. Using an array means the elements are stored tightly next to each other in memory. Why is that?

The elements of an array are numbered, and the position of each element is called its index. For example, take the array [10, 20, 30, 40]:

In this array, the index of the element 20, that is, its position, is 1, because arrays are numbered starting at 0. This can be confusing to many novice programmers, but almost all programming languages number array elements from 0, and JavaScript is no exception.
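
As a quick illustration (my own snippet, not from the article) of zero-based indexes and of the insertion methods mentioned below:

    var arr = [10, 20, 30, 40];
    arr[0];               // 10 -> the first element has index 0
    arr[1];               // 20 -> the index of 20 is 1
    arr.push(50);         // insert at the end:    [10, 20, 30, 40, 50]
    arr.unshift(0);       // insert at the front:  [0, 10, 20, 30, 40, 50]
    arr.splice(3, 0, 25); // insert in the middle: [0, 10, 20, 25, 30, 40, 50]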

Suppose you want to add one more element to [10, 20, 30, 40]. With the array methods JavaScript provides (as in the snippet above), you can insert an element at the front, at the end, or in the middle. In memory, though, things are not that simple. Again, a metaphor.

Everyone has been to the movies. Imagine you go to the cinema, find a seat and sit down. Then a friend arrives and wants to sit next to you, but the seat next to you is taken, so you both have to move. In the computer this means asking for a new chunk of memory and moving everything to the new location. As a result, adding new elements to an array can be slow.

So, is there a solution?

Plenty of people have faced this before: one person grabs a seat and also saves the seat next to them, not only at the movies but on the bus too. Let's call this "seat reservation".

The same trick works in computers. When we first ask the computer for memory, we ask it to allocate some spare capacity as well; these are our "reserved seats". Suppose we reserve 20 extra places. This seems like a good measure, but it has drawbacks:

The extra space you requested may never be used, which wastes memory. It is like saving a seat for a friend who never shows up: nobody else knows the friend is not coming, so nobody takes the seat, and it stays empty.

And if more than 20 extra people show up, the reserved seats are not enough and you have to ask to move all over again.

Problems like this can be solved with a linked list.

A linked list is another data structure, one whose elements can be stored anywhere in memory. Each element stores the address of the next element, which strings a series of scattered memory locations together.

This is a bit like minesweeper: revealing one cell tells you about the neighbouring cells, which lets you decide where to go next. Adding an element to a linked list is therefore easy: put the element anywhere in memory and store its address in the previous element.
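
Here is a minimal sketch of a linked list in JavaScript (my own illustration; the names are not from the article): each node stores its value plus a reference, its "address", to the next node.

    function createNode(value){
        return { value: value, next: null };
    }
    var head = createNode(10);
    head.next = createNode(20);       // appending only means linking the new node
    head.next.next = createNode(30);  // to the previous one

    // reading all the elements means following the links one by one
    var current = head;
    while(current !== null){
        console.log(current.value);   // 10, 20, 30
        current = current.next;
    }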

Also, with a linked list you never have to move elements around. As long as there is enough free memory in total, the computer can allocate space for the list. For example, suppose you want 100 slots for an array and memory does contain 100 free slots, but they are not next to each other. In that case the array cannot be allocated, but a linked list still can.

The linked list seems to automatically say, “OK, let’s sit in these seats separately.”

But linked lists are not without their drawbacks. Let’s do another metaphor:

Suppose you finish the first chapter of a novel, decide it is not very good, and want to skip ahead a few chapters. Often there is no such operation: the only control is "next chapter", so to read chapter fifty you have to click "next chapter" 49 times, which is genuinely annoying (using the table of contents doesn't count).

Reading a linked list is similar: to read the last element you have to follow the stored addresses from element to element until you reach it. Linked lists are efficient if you are reading all the elements in order, but if you want to jump around, they are inefficient.

That is the downside of linked lists. Arrays are different: because the elements are numbered and stored together, you know the memory address of every element, and a bit of simple arithmetic tells you exactly where any element is.

Using big O notation, we can express the running times of arrays and linked lists as follows:

|        | Array | Linked list |
| ------ | ----- | ----------- |
| Read   | O(1)  | O(n)        |
| Insert | O(n)  | O(1)        |

Where O(1) is called constant time and O(n) is linear time.

What if you want to delete an element? Linked lists are again the better choice, because you only need to change what the previous element points to.
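
Continuing the little linked-list sketch from above (again my own illustration), deleting the middle node only means re-pointing the previous node's reference:

    var head = { value: 10, next: { value: 20, next: { value: 30, next: null } } };
    head.next = head.next.next; // 10 -> 30; the node holding 20 is simply unlinked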

So the question is, which data structure is most used? Arrays or linked lists?

It depends, but arrays are used far more often because they support random access. There are two ways to access elements: sequential access and random access. Sequential access means reading the elements one by one in order; random access means jumping straight to any element. Because array elements are numbered, arrays are well suited to random access.

With that background, let's learn the selection sort algorithm. Suppose your computer stores some videos and you keep a record of how many times each one has been played, for example:

Video 1: 50 plays
Video 2: 35 plays
Video 3: 25 plays
Video 4: 55 plays
Video 5: 60 plays

Now you need to rank the videos by play count, from fewest to most. How do you do that?

First, iterate through the list and find the least-played element, add it to a new list, and remove it from the original list. Then do it again to find the second least-played element, and so on…

In the end, you have an ordered list

Video 3: 25 plays
Video 2: 35 plays
Video 1: 50 plays
Video 4: 55 plays
Video 5: 60 plays

Write the code as follows:

    var arr = [50, 35, 25, 55, 60];
    function selectSort(arr){
        var len = arr.length;
        // minIndex tracks the position of the smallest remaining element,
        // ele temporarily holds a value while swapping
        var minIndex, ele;
        for(var i = 0; i < len; i++){
            minIndex = i;
            ele = arr[i];
            for(var j = i + 1; j < len; j++){
                if(arr[j] < arr[minIndex]){
                    minIndex = j;
                }
            }
            // swap the smallest remaining element into position i
            arr[i] = arr[minIndex];
            arr[minIndex] = ele;
        }
        return arr;
    }
    // test
    selectSort(arr); // [25, 35, 50, 55, 60]

Let's look at the running time of this code. You go through the list once for each element, which is O(n), and finding the smallest remaining element each time also means checking each element, which is another O(n). So the total time is O(n × n) = O(n^2), that is, on the order of n^2 operations.
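
The selectSort() above swaps elements in place. For comparison, here is a small sketch (function names are my own) of the variant described earlier, which repeatedly pulls the smallest element out into a new array:

    function findSmallestIndex(arr){
        var smallest = 0;
        for(var i = 1; i < arr.length; i++){
            if(arr[i] < arr[smallest]){
                smallest = i;
            }
        }
        return smallest;
    }
    function selectSortCopy(arr){
        var result = [];
        while(arr.length > 0){
            // remove the smallest remaining element and append it to the result
            result.push(arr.splice(findSmallestIndex(arr), 1)[0]);
        }
        return result;
    }
    selectSortCopy([50, 35, 25, 55, 60]); // [25, 35, 50, 55, 60]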

Selection sort is a neat algorithm, but it is not the fastest; the fastest common sorting algorithm is quicksort.

### 6. Recursive algorithms ###

Recursion in JavaScript has long been both loved and hated by developers, because it is fun and it is hard. The classic example is a factorial function, which looks like this:

    function fn(num){
        if(num <= 1){
            num = 1;
        }else{
            num = num * fn(num - 1);
        }
        return num;
    }
    fn(5); // 5 * 4 * 3 * 2 * 1 = 120

Faced with a result like this, many developers wonder why it works. Let's set the explanation aside for a moment and start with a familiar example from everyday life:

Say I have a box with a box inside it, which has a box inside it, and so on, n boxes deep, and the nth box contains a book. If you had to find this book, what would you do?

Here is a schematic method:

First, define a box:

    var box = {
        box: {
            box: {
                box: {
                    // ......
                }
            }
        }
    };

Next, as long as the box is not empty, we take out the box inside it, as follows:

    // comparing box !== {} would always be true, so we check whether it has any keys
    if(Object.keys(box).length !== 0){
        box = box.box; // take out the inner box
    }

Now add another check: if what we took out is the book, we have found it and no longer need to keep opening boxes. That is:

    var book = 'book';
    if(Object.keys(box).length !== 0){ // the box is not empty
        box = box.box;                 // take out the inner box
        if(box === book){
            console.log('Found it!');
        }else{
            box = box.box;             // ...keep opening boxes
        }
    }

But if it is not the book, we keep opening boxes, which is the else branch above.

Usually, we can use a loop to do this, as follows:

    var box = {
        // ......
    };
    var book = 'book';
    while(Object.keys(box).length !== 0){
        if(box.box === book){
            console.log('Found it!');
            break;
        }
        box = box.box; // open the next box and keep looking
    }

There is a drawback to doing it this way: if the boxes are nested very deep, the loop has to run n times, and everything hinges on that loop. Instead, we can define a function, pass it the box, and have the function call itself over and over, so that the repetition is expressed by the function itself rather than by an explicit loop. As follows:

    // pass in the box object
    var box = {
        // ......
    };
    var book = 'book';
    function checkOutBook(box){
        for(var key in box){
            if(box[key] === book){
                console.log('Found it!');
            }else{
                checkOutBook(box[key]); // open the next box and keep looking
            }
        }
    }

With this, the principle of recursion becomes clear: recursion means repeatedly calling yourself. In the case of a function, it calls itself over and over until some condition is no longer met, at which point the recursion stops.

Now, let’s look at the factorial function above again:

The function takes a numeric argument and checks it: if the argument is less than or equal to 1, we set it to 1; otherwise we set it to the product of the argument and fn of the argument minus 1.

    fn(5) => num = 5, 5 <= 1 is false => num = 5 * fn(4)
    fn(4) => num = 4, 4 <= 1 is false => num = 5 * 4 * fn(3)
    fn(3) => num = 3, 3 <= 1 is false => num = 5 * 4 * 3 * fn(2)
    fn(2) => num = 2, 2 <= 1 is false => num = 5 * 4 * 3 * 2 * fn(1)
    fn(1) => num = 1, so fn(1) returns 1 => num = 5 * 4 * 3 * 2 * 1 = 120

Combined with the final return statement that returns num, it is not hard to see how the result comes out. Now let's try to compute the factorial with a loop:

    function fn(num){
        var i = 1, result = 1;
        while(i <= num){
            result *= i; // multiply the running result by each number up to num
            i++;
        }
        return result;
    }
    fn(5); // 120

In terms of performance, recursion is no better than a loop, but it is often easier to understand; in other words, recursion can make a program easier to read.

What are the characteristics of recursive functions?

Let’s look at a recursive function:

    function fn(i){
        console.log(i);
        fn(i - 1);
    }
    fn(10);

Run this function and the program will keep calling itself until it crashes (a stack overflow). So when we write a recursive algorithm, we must tell the program when to stop recursing.

Every recursive function has two parts: the baseline condition and the recursive condition. The recursive condition is where the function calls itself; the baseline condition is where it stops calling itself, so the recursion ends.

Such as:

    function fn(i){
        console.log(i);
        if(i <= 1){  // baseline condition: stop recursing
            return;
        }else{       // recursive condition: call itself again
            fn(i - 1);
        }
    }
    fn(10);

Stack: A stack is a data structure. We have already talked about inserting, deleting and reading elements in arrays and linked lists. In a stack, inserting is called pushing and removing is called popping, and only the most recently pushed element can be popped. An array or a linked list can easily be used as a stack.
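
A tiny illustration (my own) of stack behaviour using a JavaScript array, where push adds an element on top and pop removes the most recently added one:

    var stack = [];
    stack.push('a'); // push in
    stack.push('b');
    stack.pop();     // pop out -> 'b' (last in, first out)
    stack.pop();     // -> 'a'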

Let's go back to the recursive fn() example from earlier:

    function fn(i){
        console.log(i);
        if(i <= 1){  // baseline condition: stop recursing
            return;
        }else{       // recursive condition: call itself again
            fn(i - 1);
        }
    }
    fn(10);

Every call to fn() allocates a block of memory for its own copy of the variable i, and each call passes i minus one along, so every call gets its own block. The computer keeps these blocks on a stack. When one function calls another, the calling function is paused until the called function finishes.

The stack that stores the variables of all these function calls is known as the call stack. We do not have to keep track of the memory blocks ourselves; the call stack does it for us. Let's look at a simple example:

    function greet(){
        console.log('hello');
    }
    greet();

    function bye(){
        console.log('bye');
    }
    bye();

When greet() is called, a variable object is created for it and pushed onto the call stack, taking up a block of memory, and 'hello' is printed. When greet() returns, its frame is popped off. Then bye() is called, and it gets its own variable object and its own block of memory in the same way.

Using the call stack is convenient, but it has a cost: storing too much information on the stack can use up a great deal of your computer's memory.

### 7. The quicksort algorithm ###

An important idea in algorithms is D&C, divide and conquer, a well-known recursive problem-solving technique.

Understanding divide and conquer gets you to the heart of many algorithms, and quicksort is the first important algorithm built on this idea.

Let’s take an example to illustrate this idea.

Suppose you want to divide a rectangle 1000 units long and 800 units wide into uniform squares, making those squares as large as possible.

How do we do that?

We can solve this with the D&C strategy, and D&C algorithms are recursive. There are two steps:

1. Find the baseline condition, and keep it as simple as possible.
2. Keep breaking the problem down until it meets the baseline condition.

We know that the largest square we can carve out of a 1000 × 800 rectangle is 800 × 800, which leaves a 200 × 800 rectangle. Dividing that the same way gives a 200 × 200 square and a 200 × 600 rectangle, then another 200 × 200 square and a 200 × 400 rectangle, and so on, until only 200 × 200 squares remain. So the largest uniform square is 200 × 200.
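
Here is a minimal recursive sketch of that reasoning (my own code, following the text's steps): keep carving the largest possible square off the rectangle until what remains is itself a square. This is essentially Euclid's algorithm.

    function largestSquare(width, height){
        if(width === height){
            return width; // baseline condition: the rectangle is already a square
        }
        if(width > height){
            // carve off a height x height square and recurse on the remainder
            return largestSquare(width - height, height);
        }
        return largestSquare(width, height - width);
    }
    largestSquare(1000, 800); // 200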

The binary search we discussed at the start is really a divide-and-conquer idea as well. Let's look at another example:

Suppose I have the array [2, 4, 6], and you need to add up the numbers and return the result. With a loop this is easy to solve, as follows:

    var arr = [2, 4, 6], total = 0;
    for(var i = 0; i < arr.length; i++){
        total += arr[i];
    }

But how do you do that using a recursive algorithm?

First, we need the baseline condition. If the array has no elements the sum is 0, and if it has one element the sum is that element's value. That is our baseline condition.

On each recursive call we take one element away, so we get closer and closer to an empty array or an array with a single element. Like this:

    sum([2, 4, 6]) => 2 + sum([4, 6]) => 2 + 4 + sum([6]) => 2 + 4 + 6 = 12

Therefore, we can write the following code:

    if(arr.length <= 1){
        // baseline condition: return the sum directly
    }else{
        // recursive condition: take one element off and recurse
    }

Now, let’s look at the complete code:

    function sum(arr, res){
        if(arr.length <= 1){
            return res + arr[0];
        }else{
            // remove the first element and add it to the running total
            res += arr.splice(0, 1)[0];
            return sum(arr, res);
        }
    }
    // test
    sum([2, 4, 6], 0); // => 12

You might be thinking: why use recursion when a loop solves this so easily? If you know functional programming, you will get it. Functional languages often have no loop construct at all, and the way to compute a sum there is precisely a recursive algorithm.

The reason I brought up divide and conquer is that the quicksort algorithm below needs to be understood this way. Quicksort is much faster than selection sort. We need to know which arrays still need sorting and which do not; obviously an array with no elements, or with only one element, does not need sorting.

An empty array: []. An array with only one element: [2].

Arrays like these need no sorting, so in our function we can write:

    function quickSort(arr){
        if(arr.length < 2){
            return arr;
        }
    }

When an array has two elements, such as [12, 11], we can simply check whether the first element is larger than the second and swap them if necessary. But what about three elements, or even more? Can we still do that?

We can use the idea of divide and conquer, the D&C strategy: decompose the array until it meets the baseline condition. In other words, we do it recursively. Quicksort works by selecting one element from the array, called the reference value (the pivot).

We then compare the remaining elements against this reference value, splitting them into those that are smaller and those that are larger; this is called partitioning. After partitioning you get:

1. A subarray of elements smaller than the reference value.
2. The reference value itself.
3. A subarray of elements larger than the reference value.

The subarrays we get here are of course still unordered; if they happened to be ordered, sorting would be easy, because we could simply join the pieces back together with concat(). So we sort the two subarrays recursively and then combine them the same way.

The choice of reference value is not fixed: you can pick the middle element, the first element, or even the last one. Take the array [5, 2, 3, 1, 4]:

Any of the five elements can serve as the reference value, and each choice partitions the array into different subarrays.

By working through these cases and generalising, we can convince ourselves that the algorithm always produces the right result; this way of showing that an algorithm works is called an inductive proof. Combining an inductive proof with the quicksort algorithm makes the algorithm both convincing and interesting.

Now, let’s implement the quicksort algorithm. The code is as follows:

    function quickSort(arr){
        var newArr = [];
        if(arr.length < 2){
            return arr;
        }
        // pick the middle element as the reference value
        var standIndex = Math.floor(arr.length / 2);
        var standNum = arr.splice(standIndex, 1)[0];
        // partition the remaining elements into smaller and larger ones
        var minArr = [], maxArr = [];
        for(var i = 0, len = arr.length; i < len; i++){
            if(arr[i] < standNum){
                minArr.push(arr[i]);
            }else{
                maxArr.push(arr[i]);
            }
        }
        // sort both partitions recursively and merge them around the reference value
        newArr = quickSort(minArr).concat(standNum, quickSort(maxArr));
        return newArr;
    }
    // test
    quickSort([1, 3, 2, 4, 5]); // [1, 2, 3, 4, 5]

The quicksort algorithm is unusual in that its speed depends on the reference value you choose. Before discussing quicksort's running time, let's briefly list the big O running times of the common algorithms we have met:

O(log n): logarithmic time, for example binary search.
O(n): linear time, for example simple search.
O(nlogn): for example merge sort and other fast sorting algorithms.
O(n^2): for example selection sort, a slow sorting algorithm.
O(n!): for example the brute-force travelling-salesman approach, a very slow algorithm.

Estimates attached to these running times are never exact; they only give a sense of how differently the running times grow, and real computers perform far more operations per second. (Note too that the base of the logarithm in a time complexity is not fixed: splitting the problem in half each time gives base 2, splitting it into thirds gives base 3, and so on.)

Each running time has algorithms associated with it. For example, the selection sort algorithm runs in O(n^2), which is quite slow.

There is also the merge sort algorithm, whose running time is O(nlogn), much faster than selection sort. The quicksort algorithm is trickier: because it depends on the chosen reference value, in the worst case quicksort runs in the same O(n^2) time as selection sort.

On average, however, quicksort takes the same time as merge sort: O(nlogn).

At this point you might be wondering: what exactly are the worst case and the average case here? If quicksort is only O(nlogn) on average while merge sort is always O(nlogn), doesn't that make merge sort faster and safer? Why not just use merge sort?

Before comparing merge sort with quicksort, we need to talk briefly about what the merge sort algorithm actually is.

### 8. The merge sort algorithm ###

Merge sort is also known as merging sort. At its core is the divide-and-conquer idea, similar to the dichotomy: split the array down the middle into two arrays, and keep splitting until every piece has length one; then work back up, merging the pieces into ordered arrays until the whole array is sorted.

Now, let’s look at the code to implement the merge sort algorithm.

    function mergeSort(arr){
        if(arr.length <= 1){
            return arr;
        }
        // split the array down the middle
        var mid = Math.floor(arr.length / 2),
            left = arr.slice(0, mid),
            right = arr.slice(mid);
        // sort both halves recursively
        var newArr = [],
            leftArr = mergeSort(left),
            rightArr = mergeSort(right);
        // merge the two sorted halves by repeatedly taking the smaller front element
        while(leftArr.length && rightArr.length){
            if(leftArr[0] < rightArr[0]){
                newArr.push(leftArr.shift());
            }else{
                newArr.push(rightArr.shift());
            }
        }
        return newArr.concat(leftArr, rightArr);
    }
    // test
    console.log(mergeSort([1, 3, 2, 4, 6, 5, 7])); // [1, 2, 3, 4, 5, 6, 7]

Now that we understand the merge sort algorithm, let’s compare the quicksort algorithm to the merge sort algorithm.

Comparing the quicksort algorithm with the merge sort algorithm

Let’s start with an example. Suppose we have the following function:

    var list = [1, 2, 3, 4, 5];
    function printItem(list){
        list.forEach(function(item){
            console.log(item);
        });
    }
    printItem(list);

This defines a function that iterates through an array and prints each item to the console. Since it touches every item in list, its running time is O(n). Now suppose we delay each print by one second, as follows:

    var list = [1, 2, 3, 4, 5];
    function printAfterItem(list){
        list.forEach(function(item){
            setTimeout(function(){
                console.log(item);
            }, 1000);
        });
    }
    printAfterItem(list);

The first function simply prints 1, 2, 3, 4, 5. The second waits a second before each item is printed. Both functions iterate over the whole array, so both have a running time of O(n), yet printItem() is clearly faster because it has no delay. The n in O(n) describes how the work grows and deliberately ignores constant factors like this delay.

We can call such a fixed amount of time a constant, C. The one second in the example above is such a constant. The first function, printItem(), also has a constant, say 10 ms per item, giving 10ms * n, while printAfterItem() takes 1s * n.

It is easy to see which function runs faster. In general we ignore this constant, because for two algorithms with different big O running times the constant rarely matters. Take binary search and simple search: suppose binary search costs 1 second per step and simple search only 10 ms per element. Looking only at the constants, you might think simple search, at 10 ms, would be much faster. But is it?

For example, to find an element in a list of 4 billion elements, the times are roughly:

Simple search: 10 ms × 4 billion ≈ 463 days.
Binary search: 1 s × log2(4 billion) ≈ 32 seconds.
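
A quick back-of-the-envelope check of those figures (my own snippet):

    var n = 4e9;                          // 4 billion elements
    var simpleMs = 10 * n;                // 10 ms per element
    var binaryMs = 1000 * Math.log2(n);   // 1 s (1000 ms) per step, about 32 steps
    simpleMs / 1000 / 60 / 60 / 24;       // ≈ 463 days
    binaryMs / 1000;                      // ≈ 32 seconds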

As you can see, binary search is still vastly faster; the constants turn out to be almost negligible.

Sometimes, however, the constant does make a big difference, and that is exactly the case with quicksort and merge sort. Quicksort's constant is smaller than merge sort's, so when both run in O(nlogn) time, quicksort is faster. And although quicksort has an average case and a worst case, the average case is what happens the vast majority of the time.

So at this point, you might ask yourself, what is the average case? What’s the worst-case scenario?

Average and Worst Cases

The performance of quicksort depends heavily on the reference value you choose. Suppose you always pick the first element of the array as the reference value, and suppose the array you are given is already sorted. Quicksort does not check whether its input is sorted; it simply goes ahead and sorts it anyway:

    [1,2,3,4,5,6,7,8] -> reference value 1: left [] | right [2,3,4,5,6,7,8]
    [2,3,4,5,6,7,8]   -> reference value 2: left [] | right [3,4,5,6,7,8]
    [3,4,5,6,7,8]     -> reference value 3: left [] | right [4,5,6,7,8]
    [4,5,6,7,8]       -> reference value 4: left [] | right [5,6,7,8]
    [5,6,7,8]         -> reference value 5: left [] | right [6,7,8]
    [6,7,8]           -> reference value 6: left [] | right [7,8]
    [7,8]             -> reference value 7: left [] | right [8]
    [8]

Because the first element is always chosen as the reference value, the smaller subarray is always empty, so the stack is called eight times and the call stack becomes very long. Now, what happens if you choose a middle element as the reference value instead?

    [1,2,3,4,5,6,7,8] -> reference value 4: left [1,2,3] | right [5,6,7,8]
    [1,2,3]           -> reference value 2: left [1]     | right [3]
    [5,6,7,8]         -> reference value 6: left [5]     | right [7,8]
    [7,8]             -> reference value 7: left []      | right [8]

Because the array is split roughly in half each time, far fewer recursive calls are needed. The baseline condition is reached quickly, so the call stack is much shorter.

The first example shows the worst case and the second the best case. In the worst case the stack height is O(n); in the best case it is O(logn).

Now look at the first level of the stack: one element is taken as the reference value and the remaining elements are split into two subarrays. That touches all eight elements of the array, so it takes O(n) time. In fact, every level of the stack touches O(n) elements in total, so every level costs O(n), even in the best case where the middle value is chosen. The only difference is the number of levels: in the first example the call stack is O(n) levels high (in technical terms, the height of the call stack is O(n)), while in the second example its height is O(logn). So the whole algorithm takes O(n) * O(logn) = O(nlogn) time in the second example and O(n) * O(n) = O(n^2) in the first. That is the difference between the best-case and worst-case running times.

What we need to know here is that the best case is effectively also the average case: as long as you pick a random element as the reference value each time, quicksort's average running time is O(nlogn). And because, on average, its constant is smaller than merge sort's, quicksort is one of the fastest sorting algorithms and a model example of D&C.
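
As a sketch of that last point, the earlier quickSort() only needs its pivot selection changed to pick a random element (this variant is my own adaptation, not from the article):

    function quickSortRandom(arr){
        if(arr.length < 2){
            return arr;
        }
        // pick a random index instead of the middle one
        var pivotIndex = Math.floor(Math.random() * arr.length);
        var pivot = arr.splice(pivotIndex, 1)[0];
        var minArr = [], maxArr = [];
        for(var i = 0; i < arr.length; i++){
            (arr[i] < pivot ? minArr : maxArr).push(arr[i]);
        }
        return quickSortRandom(minArr).concat(pivot, quickSortRandom(maxArr));
    }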

### 11. The bucket sort algorithm ###

Let's start with an example for the bucket sort algorithm. Xiao Ming's class has only five students, and in an exam they scored 5, 3, 5, 2 and 8 points respectively (the full score is 10). We want to sort these scores by size.

First, let's use an array to hold the scores of the five students, that is:

let scoreArr = [5, 3, 5, 2, 8];

We also define an array to hold the sorted result, as follows:

  let result = [];

The full score is 10, which means we need 11 buckets, one for each possible score: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Do you see the pattern? Right, we can represent the buckets as an array of length 11, with every count initialised to 0: so far zero students scored 0, zero students scored 1, and so on, as follows:

    let bucketArr = [];
    for(let i = 0; i <= 10; i++){
        // the initial count for every score is 0, meaning 0 students got that score
        bucketArr.push(0);
    }

Next, we iterate over the array holding the scores, scoreArr, and count each score into its bucket. As follows:

    scoreArr.forEach(item => {
        bucketArr[item]++; // one more student got this score
    });

Now that every score has been counted into its bucket, we walk over the buckets in order and push each score into the result as many times as it was counted, which gives us the sorted array of scores.

    bucketArr.forEach((item, index) => {
        for(let j = 1; j <= item; j++){
            result.push(index);
        }
    });

Finally, we get the sorted array of scores: [2, 3, 5, 5, 8]. The complete code is as follows:

    var arr = [5, 3, 5, 2, 8];
    // count is the highest possible score (the number of buckets minus one)
    function sortArray(arr, count){
        let result = [];
        let arrNum = [];
        for(let i = 0; i <= count; i++){
            arrNum.push(0); // every bucket starts at 0
        }
        for(let i = 0; i < arr.length; i++){
            arrNum[arr[i]]++; // count each score into its bucket
        }
        for(let i = 0; i <= count; i++){
            for(let j = 1; j <= arrNum[i]; j++){
                result.push(i);
            }
        }
        return result;
    }
    console.log(sortArray(arr, 10)); // [2, 3, 5, 5, 8]

The running time of this bucket sort is roughly 2(m + n): we loop over the m buckets twice (once to initialise them, once to read them out) and over the n scores twice (once while counting them, and once in total while pushing them into the result). The constant 2 is negligible, so the time complexity is O(m + n), which makes this a very fast sorting algorithm.