Complexity analysis is the essence of algorithm learning: once you master it, you have essentially mastered half of data structures and algorithms.

1. What is complexity analysis?

  1. Data structures and algorithms address the question of "how to make the computer solve a problem in less time and with less space".

  2. Therefore, the performance of a data structure or algorithm should be evaluated along two dimensions: execution time and space occupied.

  3. Time complexity and space complexity describe performance along these two dimensions respectively, and are collectively called complexity.

  4. Complexity describes how an algorithm's execution time (or space) grows with the size of the data.

2. Why complexity analysis?

  1. Compared with performance testing, complexity analysis is independent of the execution environment, low-cost, efficient, easy to perform, and a strong guide.

  2. Mastering complexity analysis enables you to write better-performing code, which helps reduce the cost of system development and maintenance.

3. How to perform complexity analysis?

3.1 Big O notation

The execution time of an algorithm is proportional to the number of times each line of code is executed, expressed as T(n) = O(f(n)), where T(n) represents the total execution time of the algorithm, f(n) represents the total number of times each line of code is executed, and n represents the size of the data. This is the big O time complexity notation.

3.2 Time Complexity

1) Definition

The time complexity of an algorithm is a measure of its running time.

Big O time complexity notation does not represent the actual execution time of the code; rather, it represents the trend of the code's execution time as the data size grows. Therefore, it is also called asymptotic time complexity (time complexity for short).

Example 1:

function aFun() {
    console.log("Hello, World!"); // executed once
    return 0;                     // executed once
}

So this method performs two operations.

Example 2:

function bFun(n) {
    for (let i = 0; i < n; i++) {     // executed n + 1 times
        console.log("Hello, World!"); // executed n times
    }
    return 0;                         // executed once
}

So this method performs (n + 1) + n + 1 = 2n + 2 operations.

Example 3:

 function cal(n) {
   let sum = 0; // 1 time
   let i = 1;   // 1 time
   let j = 1;   // 1 time
   for (; i <= n; ++i) {   // n times
     j = 1;                // n times
     for (; j <= n; ++j) { // n * n times
       sum = sum + i * j;  // n * n times
     }
   }
 }

Notice that this is a two-layer for loop, so the inner statement runs n * n = n^2 times. Also note that the loops here use ++i, unlike i++ in example 2; the difference is pre-increment versus post-increment.

So this method performs n^2 + n^2 + n + n + 1 + 1 + 1 = 2n^2 + 2n + 3 operations.

2) Characteristics

Take time complexity as an example: since it describes the growth trend of the algorithm's execution time relative to the data size, constants, low-order terms, and coefficients have no decisive influence on that trend, so these terms are ignored in time complexity analysis.

Therefore, the time complexity of example 1 is T(n) = O(1), that of example 2 is T(n) = O(n), and that of example 3 is T(n) = O(n^2).

3.3 Time complexity analysis

    1. Focus only on the piece of code that loops the most times

For a single piece of code, look at the highest-frequency part: loops, for example.

function cal(n) { 
   let sum = 0;
   let i = 1;
   for (; i <= n; ++i) {
     sum = sum + i;
   }
   return sum;
 }

The part that executes the most is the for loop and the code inside it, which runs n times, so the time complexity is O(n).

    2. Rule of addition: the total complexity is equal to the complexity of the code with the highest order of magnitude

For multiple pieces of code, take the maximum: for example, if a piece of code contains both a single loop and nested loops, take the complexity of the nested loops.

function cal(n) {
   let sum_1 = 0;
   let p = 1;
   for (; p < 100; ++p) {
     sum_1 = sum_1 + p;
   }

   let sum_2 = 0;
   let q = 1;
   for (; q < n; ++q) {
     sum_2 = sum_2 + q;
   }
 
   let sum_3 = 0;
   let i = 1;
   let j = 1;
   for (; i <= n; ++i) {
     j = 1; 
     for (; j <= n; ++j) {
       sum_3 = sum_3 + i * j;
     }
   }
   return sum_1 + sum_2 + sum_3;
 }

The above code is divided into three parts, computing sum_1, sum_2, and sum_3 respectively; we mainly look at the loop in each part.

The first part, sum_1, clearly executes 100 times, which is independent of n: a constant execution time that reflects no growth trend, so its time complexity is O(1).

For the second and third parts, sum_2 and sum_3, the time complexity depends on n: O(n) and O(n^2) respectively.

So, taking the maximum magnitude of the three pieces of code, the final time complexity of the above example is O(n^2).

Similarly, with 3 nested for loops the time complexity is O(n^3), and with 4 it is O(n^4).

So, the total time complexity is equal to the time complexity of the code with the highest magnitude.
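To make the three-loop case concrete, here is a small sketch (not from the original article; the function name cal3 is made up for illustration) in which the innermost statement runs n * n * n times:

```javascript
// Hypothetical example: three nested loops over the same n give O(n^3).
function cal3(n) {
  let sum = 0;
  for (let i = 1; i <= n; ++i) {
    for (let j = 1; j <= n; ++j) {
      for (let k = 1; k <= n; ++k) {
        sum = sum + 1; // the innermost line runs n * n * n times
      }
    }
  }
  return sum; // equals n^3
}

console.log(cal3(5)); // 125, i.e. 5^3
```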

    3. Product rule: the complexity of nested code is equal to the product of the complexities of the inner and outer code

For nested code, take the product: for example, recursion or nested loops.

function cal(n) {
   let ret = 0;
   let i = 1;
   for (; i < n; ++i) {
     ret = ret + f(i); // calls f(i) on each iteration
   }
 }

function f(n) {
  let sum = 0;
  let i = 1;
  for (; i < n; ++i) {
    sum = sum + i;
  } 
  return sum;
 }

The loop in cal calls f, which itself contains a loop.

So, the time complexity of the whole cal() function is T(n) = T1(n) * T2(n) = O(n * n) = O(n^2).

    4. Add multiple scales: for example, if the method has two parameters that each control a loop count, then add the two complexities
function cal(m, n) {
  let sum_1 = 0;
  let i = 1;
  for (; i < m; ++i) {
    sum_1 = sum_1 + i;
  }

  let sum_2 = 0;
  let j = 1;
  for (; j < n; ++j) {
    sum_2 = sum_2 + j;
  }

  return sum_1 + sum_2;
}

The above code also computes sums: sum_1 over data size m and sum_2 over data size n, so the time complexity is O(m + n).

Formula: T1(m) + T2(n) = O(f(m) + g(n)).

    5. Multiply multiple scales: for example, if a method has two parameters that each control the count of one of two nested loops, then multiply the two complexities
function cal(m, n) {
  let sum_3 = 0;
   let i = 1;
   let j = 1;
   for (; i <= m; ++i) {
     j = 1; 
     for (; j <= n; ++j) {
       sum_3 = sum_3 + i * j;
     }
   }
}

The above code also computes a sum with a two-layer for loop over data sizes m and n, so the time complexity is O(m * n).

Formula: T1(m) * T2(n) = O(f(m) * g(n)).

3.4 Common time complexity analysis

    1. Polynomial: as the data size grows, the algorithm's execution time and space grow in polynomial proportion.

These include O(1) (constant order), O(logn) (logarithmic order), O(n) (linear order), O(nlogn) (linear-logarithmic order), O(n^2) (quadratic order), and O(n^3) (cubic order).

All except O(logn) and O(nlogn) can be seen in the examples above.

Here is an example of O(logn) (logarithmic order):

let i=1;
while (i <= n)  {
   i = i * 2;
}

The value of i starts at 1 and is multiplied by 2 on each iteration; when it exceeds n, the loop ends.

This is just the geometric progression we learned in high school: the successive values of i form a geometric sequence. Written out mathematically:

2^0, 2^1, 2^2, … , 2^k, … , 2^x = n

So if we know the value of x, we know how many times this line of code executes. Solving 2^x = n gives x = log_2 n, so the time complexity of the above code is O(log_2 n).

In fact, whether the base is 2, 3, or 10, we can write all logarithmic-order time complexities as O(logn). Why is that?

Because log_3 n = log_3 2 * log_2 n, we have O(log_3 n) = O(C * log_2 n), where C = log_3 2 is a constant.

As noted earlier, since time complexity describes the growth trend of execution time relative to data size, constants, low-order terms, and coefficients have no decisive influence on that trend and are ignored in the analysis.

Therefore, in logarithmic-order time complexity notation, we ignore the base of the logarithm and write it uniformly as O(logn).
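As a quick numeric sanity check (a sketch, not part of the original article), we can confirm that logarithms of different bases differ only by a constant factor:

```javascript
// Change of base: log_3(n) = log_3(2) * log_2(n), so the two sides differ
// only by the constant C = log_3(2), which big O notation ignores.
const log2 = (n) => Math.log(n) / Math.log(2);
const log3 = (n) => Math.log(n) / Math.log(3);
const C = Math.log(2) / Math.log(3); // log_3(2), a constant

for (const n of [8, 1024, 1e6]) {
  console.log(log3(n), C * log2(n)); // both columns print the same value
}
```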

Here is an example of O(nlogn) (linear-logarithmic order):

function aFun(n){
  let i = 1;
  while (i <= n)  {
     i = i * 2;
  }
  return i
}

function cal(n) { 
   let sum = 0;
   for (let i = 1; i <= n; ++i) {
     sum = sum + aFun(n);
   }
   return sum;
 }

The time complexity of aFun is O(logn), while the loop in cal executes n times, so the time complexity of the above code is T(n) = T1(logn) * T2(n) = O(logn * n) = O(nlogn).

    2. Non-polynomial: as the data size grows, the algorithm's execution time increases dramatically; such algorithms perform very poorly.

These include O(2^n) (exponential order) and O(n!) (factorial order).

Example of O(2^n) (exponential order):

function aFunc(n) {
    if (n <= 1) {
        return 1;
    } else {
        return aFunc(n - 1) + aFunc(n - 2);
    }
}

Obviously, T(0) = T(1) = 1, and T(n) = T(n - 1) + T(n - 2) + 1, where the extra 1 accounts for the addition. This is the Fibonacci recurrence, and it can be proved by induction that T(n) < (5/3)^n for n >= 1 and T(n) >= (3/2)^n for n > 4. Therefore, the time complexity of this method can be expressed as O((5/3)^n), which is commonly simplified to O(2^n). As you can see, the running time of this method grows exponentially. If you're interested, measure the running time with input sizes of 1, 10, and 100, and you'll feel the charm of time complexity.
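One way to feel that growth without a stopwatch is to count the recursive calls (an illustrative sketch; the countCalls wrapper is not part of the original article):

```javascript
// Counts how many times the naive recursion from the example above is invoked.
function countCalls(n) {
  let calls = 0;
  function aFunc(m) {
    calls++;
    if (m <= 1) return 1;
    return aFunc(m - 1) + aFunc(m - 2);
  }
  aFunc(n);
  return calls;
}

console.log(countCalls(10)); // 177 calls
console.log(countCalls(20)); // 21891 calls
```

Going from n = 10 to n = 20 multiplies the work by over a hundred rather than merely doubling it, which is exactly the exponential trend O(2^n) describes.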

3.5 Time Complexity Classification

Time complexity can be divided into:

  • Best Case Time Complexity: The time complexity of executing the code in the best case.
  • Worst case time Complexity: The time complexity of executing the code in the worst case.
  • Average case time complexity: the weighted average of the number of times the code executes over all possible cases. Also called weighted average time complexity or expected time complexity.
  • Amortized time complexity: when most executions of a piece of code have a low order of complexity and only some cases, occurring in a predictable sequence, have a higher order, the higher cost can be amortized over the cheaper executions; the amortized result is generally the lower-order complexity.

For example:

// n indicates the length of the array
function find(array, n, x) {
  let i = 0;
  let pos = -1;
  for (; i < n; ++i) {
    if (array[i] == x) {
      pos = i;
      break;
    }
  }
  return pos;
}

The find function finds an item in an array with the value x and returns the index, or -1 if none is found.

Best case time complexity, worst case time complexity

If the first element of the array happens to equal x, the time complexity is O(1); if the array does not contain x at all, we have to traverse the whole array, and the time complexity is O(n). So the time complexity of this code differs from case to case.

So the code above has a best-case time complexity of O(1) and a worst-case time complexity of O(n).
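For instance (illustrative calls; find is repeated here so the snippet runs on its own):

```javascript
// Same find as above: returns the index of x in array, or -1 if absent.
function find(array, n, x) {
  let i = 0;
  let pos = -1;
  for (; i < n; ++i) {
    if (array[i] == x) {
      pos = i;
      break;
    }
  }
  return pos;
}

const arr = [3, 7, 1, 9, 5];
console.log(find(arr, arr.length, 3)); // 0  -- best case: one comparison, O(1)
console.log(find(arr, arr.length, 2)); // -1 -- worst case: all n compared, O(n)
```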

Average case time complexity

How do we analyze average time complexity? When the complexity differs by orders of magnitude from case to case, we use the weighted average of the number of times the code executes over all possible cases.

The variable x can be at any of n + 1 places: positions 0 to n - 1 in the array, or not in the array at all. Adding up the number of elements traversed in each case and dividing by n + 1 gives the average number of elements traversed, that is:

(1 + 2 + 3 + … + n + n) / (n + 1) = n(n + 3) / (2(n + 1))

Omitting the coefficient, low-order terms, and constant from this formula, the simplified average time complexity is O(n).

We know that the variable x is either in the array or not, and we assume each of these two cases has probability 1/2. In addition, the probability that x appears at each of the n positions 0 to n - 1 is the same, 1/n. So, by the product rule of probability, the probability that x appears at any particular position from 0 to n - 1 is 1/(2n).

So, the biggest problem with the previous derivation is that it does not take the probability of each case into account. If we weight each case by its probability, the calculation of average time complexity looks like this:

1 * 1/(2n) + 2 * 1/(2n) + … + n * 1/(2n) + n * 1/2 = (3n + 1) / 4

This value is the weighted average from probability theory, also known as the expected value, so the full name of average time complexity is weighted average time complexity or expected time complexity.

Therefore, according to the conclusion above, the average time complexity obtained is still O(n).
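The weighted average can also be checked numerically (a sketch assuming, as above, that x is in the array with probability 1/2 and is equally likely at each position):

```javascript
// Expected comparisons: position i (0-based) costs i + 1 comparisons with
// probability 1/(2n); "not in the array" costs n comparisons with probability 1/2.
function expectedComparisons(n) {
  let expected = 0;
  for (let i = 0; i < n; ++i) {
    expected += (i + 1) / (2 * n);
  }
  expected += n / 2;
  return expected; // equals (3n + 1) / 4
}

console.log(expectedComparisons(100)); // ~75.25, i.e. (3 * 100 + 1) / 4
```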

Amortized time complexity

Amortized time complexity is a special kind of average time complexity (its application scenarios are quite special and limited, so it is not covered in detail here).
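As one classic illustration of amortization (a sketch not drawn from this article): a dynamic array that doubles its capacity when full occasionally pays O(n) to copy elements, yet the cost per push is O(1) amortized, because n pushes trigger fewer than 2n element copies in total:

```javascript
// Minimal doubling dynamic array; `copies` counts element moves during resizes.
class DynArray {
  constructor() {
    this.capacity = 1;
    this.size = 0;
    this.data = new Array(1);
    this.copies = 0;
  }
  push(x) {
    if (this.size === this.capacity) {
      this.capacity *= 2; // double when full
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.size; ++i) {
        bigger[i] = this.data[i];
        this.copies++;
      }
      this.data = bigger;
    }
    this.data[this.size++] = x;
  }
}

const a = new DynArray();
for (let i = 0; i < 1000; ++i) a.push(i);
console.log(a.copies); // 1023 copies for 1000 pushes: about 1 per push
```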

3.6 Time Complexity Summary

The commonly used time complexities, ordered from least to most time consumed:

O(1) < O(logn) < O(n) < O(nlogn) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)


3.7 Space Complexity Analysis

The full name of time complexity is asymptotic time complexity, which represents the growth relationship between an algorithm's execution time and the data size.

By analogy, the full name of space complexity is asymptotic space complexity, which represents the growth relationship between an algorithm's storage space and the data size.

Definition: the space complexity of an algorithm measures the storage space it requires, written S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage space occupied by the statements.

function print(n) {
  const newArr = []; // line 2
  newArr.length = n; // line 3
  for (let i = 0; i < n; ++i) {
    newArr[i] = i * i;
  }

  for (let j = n - 1; j >= 0; --j) {
    console.log(newArr[j]);
  }
}

As with time complexity analysis, we can see that line 2 allocates space for the variable newArr, an empty array, and line 3 sets its length to n, making it an array of n elements, each initially undefined. The rest of the code occupies no additional space that grows with n, so the space complexity of the whole code is O(n).

Common space complexities are O(1), O(n), and O(n^2); logarithmic space complexities such as O(logn) or O(nlogn) rarely come up.
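For contrast, here is an O(1)-space sketch (illustrative, not from the article): it computes the same squares as print above but accumulates into a single variable, so the extra storage does not grow with n:

```javascript
// Only a fixed number of variables regardless of n -> space complexity O(1).
function sumOfSquares(n) {
  let sum = 0;
  for (let i = 0; i < n; ++i) {
    sum += i * i;
  }
  return sum;
}

console.log(sumOfSquares(4)); // 0 + 1 + 4 + 9 = 14
```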

4. How to master the complexity analysis method?

The key to complexity analysis is practice. Practice makes perfect.

When we write code, decisions such as trading space for time or time for space can be weighed according to the algorithm's time complexity and space complexity.

5. The last

The author's original (and most frequently updated) address: Github

If you think this project and article are good or helpful to you, please give it a star; your affirmation is the biggest motivation for me to keep creating.

Reference article:

1. Complexity analysis (I) : How to analyze and statistics the execution efficiency and resource consumption of algorithms?

2. (Data structure) 10 minutes to solve the algorithm time complexity