Today’s sharing begins. Please feel free to offer advice ~

Preface

  1. The time complexity we usually talk about is the worst case of the algorithm.
  2. Time complexity is expressed as a function, so it can only roughly estimate an algorithm’s running time.
  3. Space complexity is a measure of how much extra storage an algorithm takes up while it runs.

1. Algorithm efficiency

Algorithm efficiency analysis comes in two kinds: the first is time efficiency, the second is space efficiency. Time efficiency is called time complexity, and space efficiency is called space complexity. Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures how much extra space an algorithm needs.

2. Time complexity

1. Concept of time complexity

In computer science, the time complexity of an algorithm is a function that quantitatively describes the running time of the algorithm.

The number of basic operations performed in the algorithm is the time complexity of the algorithm. In theory, the exact running time cannot be calculated; you only know it once you put the program on a machine and run it. But do we need to machine-test every algorithm? We could, but it would be a hassle, and that is exactly why time complexity analysis exists.

  • The time taken by an algorithm is directly proportional to the number of statements executed;
  • The number of basic operations performed in the algorithm is the time complexity of the algorithm.

2. Big O asymptotic notation

In practice, when we calculate time complexity, we don’t actually need the exact number of executions, only the approximate order, so we use big O asymptotic notation.

  • Big O notation: A mathematical notation used to describe the asymptotic behavior of a function

(1) How to derive the big O order

  1. Replace all additive constants in the running time with the constant 1.
  2. In the modified running-time function, keep only the highest-order term.
  3. If the highest-order term exists and its coefficient is not 1, remove the coefficient. The result is the big O order.

The code is as follows (example):
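The original listing did not survive, so here is a minimal C sketch of the kind of function being analyzed; the name Func matches the text, but the exact body is an assumption chosen to reproduce the counts quoted below:

```c
#include <stdio.h>

void Func(int N) {
    int count = 0;                       // runs once: the first "+1"
    for (int i = 0; i < N; ++i)          // N * N iterations of the nested loops
        for (int j = 0; j < N; ++j)
            ++count;                     // runs N² times
    for (int k = 0; k < 2 * N; ++k)
        ++count;                         // runs 2N times
    int M = 10;                          // runs once: the second "+1"
    while (M--)
        ++count;                         // runs 10 times
    printf("%d\n", count);
}
```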

  • So the Func function executes 1 + N² + 2N + 1 + 10 basic operations.

Look at the number of executions: when N is very large, say N = 100, the +1 and +10 terms can be ignored, because 100² = 10000, and next to 10000 the +1 and the +10 are vanishingly small; they make no real difference.

Now let’s apply the big O derivation rules from earlier. First:

  • Replace all additive constants in the running time with the constant 1.

It becomes 1 + N² + 2N + 1 + 1.

Next, look at the second rule:

  • In the modified running-time function, keep only the highest-order term.

This simplifies to N².

  • If the highest-order term exists and its coefficient is not 1, remove the coefficient. The result is the big O order.

Here our highest-order term is N², and its coefficient is already 1, so there is nothing to remove. If instead the term were 2N², we would have to drop the 2 so that it becomes N².

So, using big O asymptotic notation, the time complexity of Func is O(N²).

From the above we can see that big O asymptotic notation strips out the terms that have little effect on the result, giving a concise representation of the number of executions. And because time complexity is expressed as a function, it can only estimate the running time of the algorithm.

3. Three cases of time complexity

Some algorithms have different best-case, average-case, and worst-case time complexities.

(1) Worst case

Worst case: the maximum number of runs for any input size (an upper bound). For example, searching an array of length N for a value x takes at most N comparisons, so the worst case is O(N).

Here N is the size of the problem.

(2) Best case

Best case: the minimum number of runs for any input size (a lower bound). For the same search, x might be found on the very first comparison, so the best case is O(1).

(3) Average case

Average case: the expected number of runs for any input size. For the search example, about N/2 comparisons are needed on average, which is O(N).

Note: the average case here is not simply the midpoint of the best and worst cases, but the number of runs we expect; sometimes the average case coincides with the best or worst case.

In general, when we talk about time complexity, we mean the worst case of the algorithm.

4. Common examples of time complexity calculation

Example 1:
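A sketch under the same assumptions as Func above, with the quadratic loop removed so that the count quoted below comes out:

```c
#include <stdio.h>

void Func2(int N) {
    int count = 0;                   // +1
    for (int k = 0; k < 2 * N; ++k)
        ++count;                     // 2N times
    int M = 10;                      // +1
    while (M--)
        ++count;                     // 10 times
    printf("%d\n", count);
}
```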

The basic operations execute 1 + 2N + 1 + 10 times; after applying the big O derivation rules, the time complexity is O(N).

Example 2:
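A sketch of a function whose work depends on two independent sizes M and N (the name Func3 is an assumption):

```c
#include <stdio.h>

void Func3(int N, int M) {
    int count = 0;
    for (int k = 0; k < M; ++k)
        ++count;                     // M times
    for (int k = 0; k < N; ++k)
        ++count;                     // N times
    printf("%d\n", count);           // about M + N operations in total
}
```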

The two loops run M and N times respectively, so the time complexity is O(M+N).

Example 3:
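A sketch of a function that takes N but never uses it, so the work is constant (the name Func4 is an assumption):

```c
#include <stdio.h>

void Func4(int N) {
    (void)N;                         // N is deliberately unused
    int count = 0;
    for (int k = 0; k < 100; ++k)    // always 100 iterations
        ++count;
    printf("%d\n", count);
}
```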

The time complexity here is O(1), because the N passed in is never used: the loop always runs a constant number of times.

2. Bubble sort time complexity

Example 4: This is a bubble sort, and let’s find the time complexity of its best, worst and average cases.
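The original listing was not preserved; this is a sketch of an optimized bubble sort with an early-exit flag, matching the sorted variable that the space complexity section refers to later:

```c
void BubbleSort(int* a, int n) {
    for (int end = n; end > 1; --end) {
        int sorted = 1;                  // assume this pass swaps nothing
        for (int i = 1; i < end; ++i) {
            if (a[i - 1] > a[i]) {
                int tmp = a[i - 1];      // swap adjacent out-of-order pair
                a[i - 1] = a[i];
                a[i] = tmp;
                sorted = 0;
            }
        }
        if (sorted)                      // a clean pass: already ordered
            break;
    }
}
```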

  • Best: O(N)
  • Worst: O(N²)
  • Average: O(N²)

This is an optimized bubble sort: in the best case the data is already in order, so it takes only one pass, which is O(N).

In the worst case every pass has to walk the remaining part of the array, about N²/2 comparisons in total, which is O(N²).

And the average case, the number of runs we expect over random inputs, is still on the order of N², so it is O(N²).

3. Time complexity of binary search

We know that the time complexity we quote is the worst case. Binary search hits its worst case only when the interval has shrunk to the last one or two candidates, so let’s assume we are searching for a number at the very edge of the array.
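A sketch of the standard iterative binary search on an ascending sorted array:

```c
// Returns the index of x in a[0..n-1] (sorted ascending), or -1 if absent.
int BinarySearch(int* a, int n, int x) {
    int left = 0, right = n - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;   // avoids (left+right) overflow
        if (a[mid] < x)
            left = mid + 1;                    // discard the left half
        else if (a[mid] > x)
            right = mid - 1;                   // discard the right half
        else
            return mid;
    }
    return -1;
}
```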

Each comparison halves the remaining interval, so after x halvings we have N/2ˣ = 1, i.e. x = log₂N. The time complexity of binary search is therefore O(log₂N).

4. Time complexity of recursion

Time complexity of recursion = number of recursive calls × number of operations performed per call.

Example 1:
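Judging from the description below (N recursive calls, one ternary operator per call), this is presumably the classic recursive factorial; a sketch, with the name Factorial assumed:

```c
#include <stddef.h>

long long Factorial(size_t N) {
    // one ternary operator per call, about N calls in total
    return N < 2 ? 1 : Factorial(N - 1) * N;
}
```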

There are N recursive calls, no loops, and each call evaluates one ternary operator. So the total is about N + 1 operations, which is O(N).

Example 2: This is a recursive Fibonacci sequence.
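A sketch of the textbook doubly-recursive Fibonacci (the name Fib is an assumption):

```c
#include <stddef.h>

long long Fib(size_t N) {
    // each call spawns two more: the call tree roughly doubles per level
    return N < 2 ? N : Fib(N - 1) + Fib(N - 2);
}
```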

The number of calls in the recursive Fibonacci is actually the sum of a geometric series; the total number of executions is 2ᴺ - 1, and after applying the big O derivation rules, the time complexity is O(2ᴺ).

Time complexity is only an approximation; when N is large enough, the dropped constant makes no difference to the calculation.

3. Space complexity

1. Concept of space complexity

Space complexity is a measure of how much extra storage an algorithm temporarily takes up while it is running.

Space complexity is not the number of bytes the program occupies, which would not be very meaningful; instead, it counts the number of extra variables.

The rules for calculating space complexity are basically similar to those for time complexity, and they also use big O asymptotic notation.

2. Calculation of space complexity

(1) Bubble sort

The space complexity of this bubble sort (the sketch shown in the time complexity section above) is O(1).

Why is that? Space complexity counts the extra variables created to solve the problem. The array here is not extra, it is required input; the sorted flag is extra, and it is created on every pass. So why isn’t the complexity O(N)? Because when a pass ends, the variable goes out of scope and its space is released, and the next pass reuses that same space. At any moment only a constant number of extra variables exist, hence O(1).

(2) Fibonacci sequence
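A sketch of the kind of code the text describes, filling an extra array of N + 1 Fibonacci numbers (names and details are assumptions):

```c
#include <stdlib.h>

// Returns a heap-allocated array holding Fib(0)..Fib(N): N + 1 extra slots.
long long* Fibonacci(size_t N) {
    long long* fibArray = (long long*)malloc((N + 1) * sizeof(long long));
    fibArray[0] = 0;
    if (N > 0)
        fibArray[1] = 1;
    for (size_t i = 2; i <= N; ++i)
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    return fibArray;   // caller is responsible for free()
}
```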

The space complexity here is O(N). To compute Fibonacci numbers 1 to N, the code allocates an extra array of N + 1 elements; since the 1 is a constant term, the space complexity of this code is O(N).

(3) Recursion

This is the recursive factorial again (see the Factorial sketch above), and its space complexity is O(N), because every recursive call creates its own temporary variables in a new stack frame, and at the deepest point about N frames are alive at once.

Summary

In the early days of computing, machines had very little storage, so space complexity mattered a great deal. But with the rapid development of the computer industry, storage capacity has grown enormously, so we no longer need to pay special attention to the space complexity of an algorithm. Because memory is no longer as expensive as it used to be, you often hear of sacrificing space for time.

Today’s sharing has ended. Please forgive any shortcomings and offer your advice!