“This is the 16th day of my participation in the Gwen Challenge in November. Check out the details: The Last Gwen Challenge in 2021.”

# Evaluation of algorithms

An algorithm is a well-defined set of steps for manipulating data and solving programming problems.

Different algorithms for the same problem may produce the same result, but the time and resources they consume along the way can differ enormously. A wrench and a pair of pliers may both get a bolt loose, but the pliers will certainly not do it as efficiently as the wrench.

# Fibonacci numbers introduce complexity analysis

The Fibonacci sequence: from the third term on, each term equals the sum of the previous two. Below are a recursive and an iterative implementation.

```js
// Recursive implementation
function fun1(n) {
  if (n <= 1) return n;
  return fun1(n - 1) + fun1(n - 2);
}

// Iterative implementation
function fun2(n) {
  if (n <= 1) return n;
  let first = 0;
  let second = 1;
  for (let i = 0; i < n - 1; i++) {
    let sum = first + second;
    first = second;
    second = sum;
  }
  return second;
}
```

Timing tool

```js
function check(title, task, num) {
  console.log(title);
  let start = new Date().getTime();
  console.log('Start time', start);
  task(num);
  let end = new Date().getTime();
  console.log('End time', end);
  console.log('Elapsed (s)', (end - start) / 1000);
}

check('Recursion', fun1, 45);
check('Iteration', fun2, 1111111145);

// Recursion  Start time 1637648114480  End time 1637648126090  Elapsed 11.61
// Iteration  Start time 1637648126091  End time 1637648130718  Elapsed 4.627
```

The difference between recursion and iteration is surprisingly large.

So how do we measure the strengths and weaknesses of different algorithms?

## 1. Post-hoc statistics

This approach compares the running times of different algorithms measured with computer timers, and judges efficiency from the resulting statistics. It has serious limitations:

• Test results are highly dependent on the test environment
• Test results are greatly influenced by the size of the data

## 2. Pre-analysis and estimation

Before the program is ever run, estimate the algorithm's cost analytically.

Think about it: when we want to implement a feature, we want to quickly identify the best of several candidate solutions and implement that one, rather than spend the effort to build every solution and then benchmark them all, which would be far too inefficient. So we need to evaluate the factors that affect a piece of code's efficiency (time cost, space cost, and so on) before executing it. That is what complexity analysis gives us, and we will focus on time complexity, the topic that comes up most often in interviews.

• Time dimension: the time it takes to execute the algorithm, usually described by its “time complexity”.

• Space dimension: the amount of memory the algorithm needs, usually described by its “space complexity”.

# Deriving the big-O order

The execution efficiency of an algorithm is, roughly speaking, the execution time of its code. But how do you estimate the execution time of a piece of code "by eye," without actually running it? Here are some very simple functions; let's walk through an estimate of their execution times.

```js
function cal01(age) { // constant number of operations, regardless of input
  if (age > 58) {
    console.log(1);
  } else if (age > 28) {
    console.log(2);
  } else {
    console.log(3);
  }
}

function cal02(n) { // (3 + 3n) * unit-time
  let sum = 0;      // executes once
  let i = 1;        // executes once
  for (; i <= n; i++) { // the check and increment each run n times
    sum += i;           // runs n times
  }
  return sum;
}

function cal03(n) { // (1 + 3n) * unit-time; n is an array here
  for (let i = 0; i < n.length; i++) {
    console.log(n[i]);
  }
}

function cal04(n) { // 1 + 2n + n(1 + 3n) = (3n^2 + 3n + 1) * unit-time
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log('666');
    }
  }
}

function cal05(n) { // 1 + 2n + n * (1 + 3 * 20) = (1 + 63n) * unit-time
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < 20; j++) {
      console.log('666');
    }
  }
}

function cal06(n) { // (1 + log2(n)) * unit-time
  let i = 1;
  while (i < n) {
    i = i * 2; // i doubles each pass: 2^x = n gives x = log2(n)
  }
}

function cal07(n) { // (1 + (3 + 2n) * log2(n)) * unit-time
  for (let i = 1; i < n; i += i) { // i += i doubles i, so the outer loop runs log2(n) times
    for (let j = 0; j < n; j++) {
      console.log('666');
    }
  }
}
```

Although we do not know the specific value of unit-time, deriving these execution times reveals a very important rule: the total execution time T(n) of a piece of code is proportional to the number of times each line of code executes, f(n). We can write this rule as the formula T(n) = O(f(n)), where:

• T(n) represents the execution time of the code;
• n represents the data size;
• f(n) represents the total number of times each line of code executes;
• O means that the execution time T(n) is proportional to f(n).
1. The low-order terms, constants, and coefficients in the formula do not determine the growth trend, so they can be ignored. In big-O notation, the time complexities of the two pieces of code above are written T(n) = O(n) and T(n) = O(n²).
2. Big-O time complexity does not represent the actual execution time of the code; it represents the trend of execution time as the data size grows, which is why it is also called asymptotic time complexity (time complexity for short). In general, as n increases, the algorithm whose T(n) grows most slowly is the best algorithm.

## Conclusion

• Replace all additive constants in the running-time function with the constant 1
• In the resulting function, keep only the highest-order term
• If the highest-order term exists and its coefficient is not 1, drop the coefficient
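To make these rules concrete, here is a small sketch (the counter function `countOps` is an illustration of mine, not from the original text): a piece of code whose exact step count is n² + 2n + 1, which the rules above reduce to O(n²).

```js
// Hypothetical operation counter: simulates code whose exact cost is
// f(n) = n^2 + 2n + 1 primitive steps.
function countOps(n) {
  let ops = 1;                  // one-time setup: +1
  for (let i = 0; i < n; i++) {
    ops += 2;                   // per-iteration overhead: +2n in total
    for (let j = 0; j < n; j++) {
      ops += 1;                 // inner body: +n^2 in total
    }
  }
  return ops;
}

console.log(countOps(10)); // 121 = 10^2 + 2*10 + 1
```

Dropping the constant 1 and the low-order term 2n, and keeping only n², gives O(n²).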

# Common orders of time complexity

### Constant order O(1)

Let's first look at the time complexity of sequential structures.

No matter how many lines of code execute, as long as there are no loops or other complex structures, the time complexity is O(1).

```js
let i = 1;
let j = 2;
++i;
j++;
let m = i + j;
```

When this code executes, the time it consumes does not grow with any variable, so no matter how long such code is — even tens of thousands of lines — its time complexity is O(1). Note: whatever the constant is, it is written O(1), never O(3), O(12), or any other number.

For a branch structure, whether the condition is true or false, the number of executions is constant and does not change as n grows, so a simple branch structure (one not nested inside a loop) is also O(1).

### Linear order O(n)

To determine the order of an algorithm, you must determine how many times particular statements run; the key to analyzing an algorithm's complexity is therefore analyzing its loop structures.

A single loop: the number of operations the algorithm performs is a linear function of the input size n, i.e. T(n) = O(n).

```js
for (let i = 1; i <= n; i++) {
  console.log(i);
}
```

Two loops in sequence: the number of operations doubles, but it is still a linear function of n.

```js
for (let i = 1; i <= n; i++) {
  console.log(i);
}
for (let i = 1; i <= n; i++) {
  console.log(i);
}
```

With two sequential (not nested) for loops the statement executes 2n times; ignoring the constant factor, the complexity is still O(n).
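We can verify the 2n count by counting directly; `countSingle` and `countSequential` below are illustrative helpers of mine, not from the original:

```js
// Count statement executions for one loop vs. two sequential loops.
function countSingle(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) count++; // n executions
  return count;
}

function countSequential(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) count++; // n executions
  for (let i = 1; i <= n; i++) count++; // another n executions
  return count;
}

console.log(countSingle(100));     // 100 -> O(n)
console.log(countSequential(100)); // 200 = 2n -> still O(n)
```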

### Logarithmic order O(logn)

```js
let i = 1;
while (i < n) {
  i = i * 2;
}
```

As the code shows, i is multiplied by 2 on every pass through the while loop, so i gets closer and closer to n. Suppose the loop ends after x iterations, when i first reaches n; then 2^x = n, so x = log₂n.

That is, the loop ends after log₂n iterations. The base is conventionally omitted, so the time complexity of this code is O(logn).
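One way to convince yourself of the log₂n count is to count the doublings explicitly; `doublings` below is an illustrative helper of mine:

```js
// Count how many doublings it takes i to reach n.
function doublings(n) {
  let count = 0;
  let i = 1;
  while (i < n) {
    i = i * 2;
    count++;
  }
  return count;
}

console.log(doublings(1024)); // 10, since 2^10 = 1024
console.log(doublings(8));    // 3,  since 2^3 = 8
```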

### Square order O(n²)

For example:

```js
for (let i = 1; i <= n; i++) {
  for (let j = 1; j <= n; j++) {
    console.log(i, j);
  }
}
```

This code nests two loops of n iterations each, so its time complexity is O(n*n), i.e. O(n²). If the n in one of the loops is changed to m, that is:

```js
for (let i = 1; i <= m; i++) {
  for (let j = 1; j <= n; j++) {
    console.log(i, j);
  }
}
```

then its time complexity becomes O(m×n). In short, the time of a loop equals the time of the loop body multiplied by the number of iterations. Now, what is the time complexity of this nested loop?

```js
for (let i = 0; i < n; i++) {
  for (let j = i; j < n; j++) {
    console.log(1);
  }
}
```

When i = 0 the inner loop executes n times; when i = 1, it executes n−1 times; …; when i = n−1, it executes once. So the total is n + (n−1) + (n−2) + … + 1 = n(n+1)/2 = n²/2 + n/2. Applying the big-O derivation rules, this works out to O(n²).
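The closed form n(n+1)/2 can be checked by brute-force counting; `innerCount` is my own helper name:

```js
// Count how many times the inner body of the nested loop above runs.
function innerCount(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = i; j < n; j++) {
      count++;
    }
  }
  return count;
}

console.log(innerCount(100)); // 5050 = 100 * 101 / 2
```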

### Linearithmic order O(nlogn)

Linearithmic order O(nlogn) is easy to understand: run an O(logn) piece of code n times and the total is n × O(logn), i.e. O(nlogn).

Take the code above with a small modification as an example:

```js
for (let m = 1; m < n; m++) {
  let i = 1;
  while (i < n) {
    i = i * 2;
  }
}
```
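Counting the iterations of this loop confirms the n × logn shape: the outer loop runs n − 1 times and the inner while runs ⌈log₂n⌉ times for each. The counter `countNLogN` is an illustrative helper of mine:

```js
function countNLogN(n) {
  let count = 0;
  for (let m = 1; m < n; m++) { // n - 1 outer iterations
    let i = 1;
    while (i < n) {             // log2(n) doublings per outer iteration
      i = i * 2;
      count++;
    }
  }
  return count;
}

console.log(countNLogN(1024)); // 10230 = 1023 * 10
```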

Cubic order O(n³) and, more generally, k-th power order O(n^k) can be understood by analogy with O(n²): O(n³) corresponds to three nested loops of n iterations, and so on.
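For example, three nested loops of n iterations execute their body n × n × n = n³ times (`countCubic` is my own illustrative helper):

```js
function countCubic(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      for (let k = 0; k < n; k++) {
        count++; // runs n^3 times in total
      }
    }
  }
  return count;
}

console.log(countCubic(10)); // 1000 = 10^3
```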

But O(n³) is already impractical for large n, and O(2^n) and O(n!) are worse still: for those, even n = 100 gives a nightmarish running time. Algorithms with such impractical time complexities are generally not discussed.

Function generation tool: zh.numberempire.com/

Ordered from low to high — O(1), O(logn), O(n), O(nlogn), O(n²), O(n³), O(2^n), O(n!) — the time complexity grows and execution efficiency drops, even when the data size is small.

The gap becomes dramatic when the data is large.

## Analyze the time complexity of Fibonacci numbers

```js
// Recursive version: O(2^n) — each call branches into two more calls,
// so the number of calls roughly doubles each time n grows by one.
function fun1(n) {
  if (n <= 1) return n;
  return fun1(n - 1) + fun1(n - 2);
}

// Iterative version: O(n) — a single loop that runs n - 1 times.
function fun2(n) {
  if (n <= 1) return n;
  let first = 0;
  let second = 1;
  for (let i = 0; i < n - 1; i++) { // runs n - 1 times
    let sum = first + second;
    first = second;
    second = sum;
  }
  return second;
}
```
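For completeness, a third option not shown in the original is memoized recursion: caching each result turns the O(2^n) recursion into O(n). The function name `fibMemo` and the use of a `Map` cache are my own choices:

```js
// Memoized recursion: each subproblem is computed once and cached,
// so fibMemo(45) returns almost instantly instead of taking ~11 seconds.
function fibMemo(n, cache = new Map()) {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n);
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}

console.log(fibMemo(45)); // 1134903170
```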

# Leetcode, Fibonacci number

Fibonacci numbers, usually denoted by F(n), form a sequence called the Fibonacci sequence. The sequence starts with 0 and 1, and each successive digit is the sum of the preceding two. That is:

F(0) = 0, F(1) = 1

F(n) = F(n-1) + F(n-2), where n > 1

Given n, compute F(n).

Example 1:

Input: 2

Output: 1

Explanation: F(2) = F(1) + F(0) = 1 + 0 = 1

Example 2:

Input: 3

Output: 2

Explanation: F(3) = F(2) + F(1) = 1 + 1 = 2

Example 3:

Input: 4

Output: 3

Explanation: F(4) = F(3) + F(2) = 2 + 1 = 3
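A possible solution sketch, reusing the O(n) iterative approach from earlier in the article (the function name `fib` is mine):

```js
function fib(n) {
  if (n <= 1) return n;
  let first = 0;
  let second = 1;
  for (let i = 0; i < n - 1; i++) {
    const sum = first + second;
    first = second;
    second = sum;
  }
  return second;
}

console.log(fib(2)); // 1
console.log(fib(3)); // 2
console.log(fib(4)); // 3
```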

# Introduction to Data Structure

## What is a data structure?

Data structures are the way computers store and organize data.

The backbone of every program or software is two entities: data and algorithms. Algorithms transform data into something that programs can use productively. Therefore, it is important to understand how to structure the data so that algorithms can quickly maintain, exploit, and iterate over the data.

The data structures that exist in programming languages are very similar to real-world systems.

Imagine going to the supermarket. In a crowded store with no signs indicating where the goods are, it is hard to find what you want! If goods are categorized and signs are added, finding them becomes many times more efficient.

Similarly, data is our merchandise, and data structures are the signs: they give us a way to organize information in digital space so that we can find and manipulate data quickly and easily.

Common data structures

• Linear structures: array, linked list, stack, queue, hash table
• Tree structures: binary tree, AVL tree, red-black tree, B-tree, heap, Trie, Huffman tree, disjoint set (union-find)
• Graph structures: adjacency matrix, adjacency list

## How are data structures used?

Data structures deal with four main functions for us:

1. Input information (how to receive data)
2. Processing information (how data is processed in a data structure)
3. Maintaining data (how to organize data within a structure)
4. Retrieving information (how to find and return data stored in the structure)

Different types of data are entered, processed, stored, and retrieved in different ways. That’s why we have several data structures to choose from!

## Choose the best data structure

Different data structures suit different tasks. Choosing the wrong one can make your code slow or unresponsive (and mess up your program!), so consider the following factors when deciding:

1. Does some data structure provide built-in functionality best suited to this purpose?
2. Is the chosen data structure optimal for searching, sorting, and iterating?
3. Do you need to reserve memory to store the data?
4. How long do different data structures take to complete various tasks, and how does the runtime behave?

# Linear table – array

The linear table is the most basic, simplest, and most commonly used data structure: an ordered sequence of n data elements with the same properties.

Sequential storage places the data elements in one contiguous block of memory, so logically adjacent elements are also adjacent in address.

## Array

An array is a linear list of sequential storage. The memory addresses of all elements are contiguous.

```js
let array = new Array(11, 66, 88);
```
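Here is a short sketch of what contiguity buys us, using the sample values above: index access is O(1) because the address of element i can be computed directly from the start address.

```js
let array = [11, 66, 88];  // literal form of the array above

console.log(array[0]);     // 11 — random access by index is O(1)
array.push(99);            // append at the tail
console.log(array.length); // 4
```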