1. Time complexity of the algorithm

1.1 Two ways to measure the execution time of a program (algorithm)

1) Post-hoc measurement

Run the program and measure how long it takes. This works, but there are two problems:

  • First, to evaluate the performance of the designed algorithm, you must actually implement and run the program.

  • Second, the measured time depends on the computer's hardware, software, and other environmental factors. To compare which of two algorithms is faster, both must be run on the same machine under the same conditions.

2) A priori estimation

Analyze the time complexity of an algorithm before running it, and use the analysis to determine which algorithm is better.

1.2 Time frequency

1. Basic introduction

Time frequency:

The time an algorithm takes is proportional to the number of statements executed in the algorithm: whichever algorithm executes more statements takes more time. The number of statements executed in an algorithm is called its statement frequency or time frequency, denoted T(n).

2. Examples – Basic cases

For example, to calculate the sum of all numbers from 1 to 100, we designed two algorithms:
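A minimal sketch of the two approaches in Java (the original code is not shown here, so this assumes the usual loop-versus-formula pair):

    public class SumTo100 {
        // Algorithm 1: add the numbers one by one.
        // Statement frequency roughly T(n) = n + 1.
        static int sumLoop(int n) {
            int total = 0;
            for (int i = 1; i <= n; i++) {
                total += i;        // executed n times
            }
            return total;          // executed once
        }

        // Algorithm 2: Gauss's formula n(n+1)/2.
        // Statement frequency T(n) = 1, independent of n.
        static int sumFormula(int n) {
            return n * (n + 1) / 2;
        }

        public static void main(String[] args) {
            System.out.println(sumLoop(100));    // 5050
            System.out.println(sumFormula(100)); // 5050
        }
    }

Both produce 5050, but the second executes a constant number of statements regardless of n.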

3. Examples – Ignoring constant terms

Conclusion:

  1. For 2n+20 and 2n: as n grows toward infinity, the two execution curves converge, so the constant 20 can be ignored.

  2. For 3n+10 and 3n: as n grows toward infinity, the two execution curves converge, so the constant 10 can be ignored.
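A quick worked check of the first conclusion: the ratio of the two functions tends to 1, so the constant term contributes nothing as n grows:

    (2n+20) / (2n) = 1 + 10/n → 1 as n → ∞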

4. Examples – Ignoring coefficients

Conclusion:

  1. As n increases, the execution curves of 5n²+7n and 3n²+2n stay together, indicating that in this case the coefficients 5 and 3 can be ignored.

  2. The curves of n³+5n and 6n³+4n, however, separate from the quadratic curves: the power of n is what is decisive.
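A worked comparison makes both conclusions precise: functions with the same power of n differ only by a constant factor, while a higher power eventually dominates any coefficient:

    (5n²+7n) / (3n²+2n) → 5/3 as n → ∞ (a nonzero constant: same order of magnitude)
    (6n³+4n) / (5n²+7n) → ∞ as n → ∞ (no finite limit: the cubic term dominates)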

1.3 Time Complexity

  1. In general, the number of times the basic operation statements in an algorithm are repeated is some function of the problem size n, call it T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a constant not equal to zero, then f(n) is said to be a function of the same order of magnitude as T(n), denoted T(n) = O(f(n)). O(f(n)) is called the asymptotic time complexity of the algorithm, or time complexity for short.

  2. Different T(n) can have the same time complexity. For example, T(n) = n²+7n+6 and T(n) = 3n²+2n+2 are different functions, but their time complexity is the same: both are of order n², i.e. O(n²).
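Checking this against the definition above with f(n) = n²:

    (n²+7n+6) / n² → 1 and (3n²+2n+2) / n² → 3 as n → ∞

Both limits are nonzero constants, so both functions are O(n²).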

  3. Method for calculating time complexity:

  • Replace all additive constants in the run-time function with the constant 1:

    T(n) = n²+7n+6 => T(n) = n²+7n+1

  • In the modified run-time function, keep only the highest-order term:

    T(n) = n²+7n+1 => T(n) = n²

  • Remove the coefficient of the highest-order term:

    T(n) = n² => O(n²)
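As an illustration (hypothetical code, not from the original), here is a nested loop whose statement count is quadratic; the three steps above reduce its T(n) to O(n²):

    // Hypothetical example: sum of all pairwise products.
    // Counting only the innermost statement: it runs n * n times,
    // so T(n) = n² plus lower-order bookkeeping, which reduces to O(n²).
    static long pairSum(int[] a) {
        int n = a.length;
        long total = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                total += (long) a[i] * a[j]; // executed n² times
            }
        }
        return total;
    }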

1.4 Common time complexity

Description:

    1. The time complexities of common algorithms, in increasing order, are: O(1) < O(log2n) < O(n) < O(nlog2n) < O(n²) < O(n³) < O(n^k) < O(2^n). As the problem size n grows, the time complexity grows and the algorithm's execution efficiency drops.
    2. As the growth curves show, we should avoid exponential-order algorithms whenever possible.

Classification:

  1. Constant order O(1)

  2. Logarithmic order O(log2n)

  3. Linear order O(n)

  4. Linear-logarithmic order O(nlog2n)

  5. Quadratic order O(n²)

  6. Cubic order O(n³)

  7. k-th power order O(n^k)

  8. Exponential order O(2^n)

By analogy with the O(n²) case, O(n³) corresponds to three nested n-loops, and so on; the sketch below shows typical loop shapes for several of these orders.
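A minimal sketch (hypothetical code) of the loop shapes that typically produce these orders:

    public class ComplexityShapes {
        public static void main(String[] args) {
            int n = 1024;

            // O(1): constant order - statement count does not depend on n
            int constant = n + 1;

            // O(log2n): logarithmic order - i doubles, so about log2(n) passes
            int i = 1;
            while (i < n) {
                i *= 2;
            }

            // O(n): linear order - a single pass over the range
            long sum = 0;
            for (int k = 0; k < n; k++) {
                sum += k;
            }

            // O(nlog2n): a logarithmic loop nested inside a linear loop
            long steps = 0;
            for (int k = 0; k < n; k++) {
                for (int m = 1; m < n; m *= 2) {
                    steps++;
                }
            }

            // O(n²): two nested n-loops (a third nesting level gives O(n³))
            long pairs = 0;
            for (int a = 0; a < n; a++) {
                for (int b = 0; b < n; b++) {
                    pairs++;
                }
            }

            System.out.println(constant + " " + i + " " + sum + " " + steps + " " + pairs);
        }
    }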

1.5 Average and Worst Time complexity

    1. The average time complexity is the running time of the algorithm when all possible input instances occur with equal probability.
    2. The time complexity in the worst case is called the worst-case time complexity. The time complexity generally discussed is the worst-case time complexity.

    The reason is that the worst-case time complexity is an upper bound on the running time for any input instance, which guarantees that the algorithm will never run longer than the worst case.

    3. Whether the average time complexity agrees with the worst-case time complexity depends on the algorithm; quicksort, for instance, is O(nlog2n) on average but O(n²) in the worst case.
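A simple illustration (hypothetical code): counting comparisons in linear search. If the target is equally likely to be at any position, about n/2 comparisons are made on average; in the worst case (target absent or last) all n elements are compared. Here both cases are O(n), so average and worst-case complexity agree; for quicksort, as noted above, they do not.

    // Linear search: average ~n/2 comparisons, worst case n comparisons.
    static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i;   // found after i + 1 comparisons
            }
        }
        return -1;          // worst case: all n elements were compared
    }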

2. The space complexity of the algorithm

  1. Similar to the discussion of time complexity, the space complexity of an algorithm is defined as the amount of storage it consumes as a function of the problem size n.

  2. Space complexity measures the storage space an algorithm temporarily occupies while it runs. For some algorithms, the number of temporary work units needed depends on the problem size n and grows as n grows; when n is large, they occupy more storage units. Quicksort, merge sort, and radix sort are examples.

  3. When we do algorithm analysis, we mainly discuss time complexity, because from the perspective of user experience the speed of program execution matters more. Some caching products (Redis, memcache) and algorithms (radix sort) essentially trade space for time.
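As an illustration of trading space for time (hypothetical code, not from the original), a memoized Fibonacci spends O(n) extra storage on a cache to cut the running time from exponential to linear:

    import java.util.HashMap;
    import java.util.Map;

    public class SpaceForTime {
        // Naive recursion: O(2^n) time, no extra storage beyond the stack.
        static long fibNaive(int n) {
            return n <= 1 ? n : fibNaive(n - 1) + fibNaive(n - 2);
        }

        // Memoized version: O(n) time, bought with an O(n) cache.
        static Map<Integer, Long> cache = new HashMap<>();
        static long fibMemo(int n) {
            if (n <= 1) return n;
            Long cached = cache.get(n);
            if (cached != null) return cached;
            long value = fibMemo(n - 1) + fibMemo(n - 2);
            cache.put(n, value);
            return value;
        }

        public static void main(String[] args) {
            System.out.println(fibNaive(35)); // millions of redundant calls
            System.out.println(fibMemo(35));  // one cached result per n
        }
    }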