Many students have only a vague understanding of time complexity: they roughly know what it is, but they can't explain it clearly.

Here, from the perspective of coding interviews, is a clear explanation of time complexity.

What exactly is time complexity

Time complexity is used by developers to estimate how long a program will take to run.

How do we estimate a program's running time? We usually count the number of operation units the algorithm performs as a proxy for the running time, assuming by default that each unit takes the CPU the same amount of time to run.

Assume the problem size of the algorithm is n, and let the function f(n) denote the number of operation units.

As the data size n increases, the algorithm's execution time grows at the same rate as f(n). This is called the asymptotic time complexity of the algorithm, time complexity for short, denoted O(f(n)).
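To make "counting operation units" concrete, here is a minimal sketch (my own illustration; the function name is hypothetical). The operation unit is the addition in the loop body, so f(n) = n and the time complexity is O(n):

```cpp
#include <iostream>

// Sums 1..n. The loop body executes n times, so f(n) = n
// and the time complexity is O(f(n)) = O(n).
long long sumToN(int n) {
    long long sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i; // one operation unit per iteration
    }
    return sum;
}

int main() {
    std::cout << sumToN(100) << std::endl; // prints 5050
}
```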

What is Big O

So let's talk about big O. What is big O? Many of you say O(n) or O(n^2) when talking about time complexity, but don't know what big O itself means.

**Big O is used to denote an upper bound.** When used as an upper bound on an algorithm's worst-case running time, it bounds the running time for any input.

*Introduction to Algorithms* gives an example: take insertion sort, whose time we all say is O(n^2).

But when the data is already ordered, the time is O(n). Taking all possible inputs into account, the worst case is O(n^2), so we call insertion sort O(n^2).
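To see why, here is a minimal insertion sort sketch (my own; the article itself contains no code). On already-sorted input the inner while loop never runs, which is what makes the best case O(n):

```cpp
#include <vector>

// Insertion sort: worst case O(n^2); O(n) on already-sorted input,
// because the shifting loop below never executes in that case.
void insertionSort(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) { // shifts happen only for out-of-order data
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```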

Similarly with quicksort: we all know quicksort is O(nlogn), but when the data is already ordered, a quicksort that picks the first element as the pivot degrades to O(n^2). So strictly by the definition of big O, quicksort's time should be stated as O(n^2).
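A minimal sketch of such a quicksort (my own illustration, using the classic first-element pivot). On sorted input every partition is maximally unbalanced, so the recursion depth reaches n and the total work is O(n^2):

```cpp
#include <vector>

// Quicksort with the first element as pivot. On random input it averages
// O(nlogn); on already-sorted input every partition leaves all remaining
// elements on one side, giving O(n^2).
void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[lo];
    int i = lo, j = hi;
    while (i < j) {
        while (i < j && a[j] >= pivot) j--; // scan from right for element < pivot
        a[i] = a[j];
        while (i < j && a[i] <= pivot) i++; // scan from left for element > pivot
        a[j] = a[i];
    }
    a[i] = pivot;
    quickSort(a, lo, i - 1);
    quickSort(a, i + 1, hi);
}
```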

But we still say quicksort is O(nlogn); that is the accepted convention in the industry. The big O we quote describes the general case, not a strict upper bound.

This is worth keeping in mind.

In an interview, the interviewer will not quibble over the strict definition of big O when discussing quicksort's complexity; just understand that the complexity under discussion is the general-case one.

You should also develop intuition for an algorithm's time complexity.

Note that the time complexity of the same algorithm is not fixed; it depends on the form of the input data.

The main thing we care about is what the data looks like in general.

When an interviewer asks for an algorithm's time complexity, they mean the general case.

But if the interviewer digs into the implementation and performance of a specific algorithm, keep in mind that different input data can give different time complexities, and be ready to point that out.

How to describe time complexity

If you plot the running times of algorithms of different time complexities against the input size, you can see how much they differ at different data scales.

When choosing an algorithm, it is not simply a matter of picking the lowest time complexity; the data size matters too. If the data size is small, you may even prefer an O(n^2) algorithm over an O(n) one.

Compare 5n^2 with 100n: for n below 20, 5n^2 is clearly the smaller of the two, so that algorithm takes less time.
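A quick numeric check of that crossover (my own snippet; 5n^2 = 100n exactly at n = 20):

```cpp
#include <cstdio>

// Print 5*n^2 and 100*n around the crossover point n = 20.
int main() {
    int ns[] = {5, 10, 20, 40, 80};
    for (int n : ns) {
        printf("n=%2d  5*n^2=%6d  100*n=%5d\n", n, 5 * n * n, 100 * n);
    }
}
```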

So why do we drop the constant coefficients when we state time complexity, treating O(100n) as O(n) and O(5n^2) as O(n^2), and say by default that O(n) is better than O(n^2)?

This comes back to the definition of big O.

Big O really describes the time complexity once the data size passes a certain point and becomes very large; beyond that point, the constant coefficients no longer play a decisive role.

In the example above, that point is n = 20: once n exceeds 20, the constant coefficients are no longer decisive.

Therefore, we omit the constant coefficients when stating time complexity, because we generally assume the data size is large enough. On that assumption, the common time complexities rank as follows:

O(1) (constant) < O(logn) (logarithmic) < O(n) (linear) < O(n^2) (quadratic) < O(n^3) (cubic) < O(2^n) (exponential)

What base is the log in O(logn)?

When we say an algorithm's time complexity is O(logn), is that log base 2 of n?

Not necessarily. It could be log base 10 of n, or log base 20 of n. We just write logn, meaning we ignore the base.

Why is it okay to do that?

Suppose we have two algorithms, one with time complexity log base 2 of n and the other log base 10 of n.

If you remember your high school math, you'll know the change-of-base formula: log base 2 of n equals log base 2 of 10 times log base 10 of n.

And log base 2 of 10 is just a constant; as shown above, we ignore constant coefficients when computing time complexity.

More abstractly, log base i of n equals log base i of j times log base j of n. Since log base i of j is just a constant, we can ignore the base i and simply write logn.

So it should now make sense why we can ignore the base.
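Written out as formulas (this is just the standard change-of-base identity, added here for clarity):

$$
\log_i n = \log_i j \cdot \log_j n
\quad\Longrightarrow\quad
O(\log_i n) = O(\log_j n) = O(\log n),
$$

since $\log_i j$ is a constant that gets absorbed into the hidden coefficient.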

If time complexity is a complex expression, how can we simplify it

Sometimes when we calculate time complexity, the result is not a simple O(n) or O(n^2) but a complex expression, such as:

O(2*n^2 + 10*n + 1000)

How do we usually describe the time complexity of such an algorithm? One way is to simplify the expression step by step.

First, remove the additive constant term (as n grows, a constant does not increase the number of operations):

O(2*n^2 + 10*n)

Next, remove the constant coefficients (we just went over in detail why constants can be dropped):

O(n^2 + n)

Finally, keep only the highest-order term and drop the n that is an order of magnitude smaller (because n^2 grows far faster than n), which simplifies to:

O(n^2)

If that last step bothers you, you can instead factor out n, giving O(n(n + 1)), and then drop the additive constant inside, again arriving at:

O(n^2)

So we end up saying: The time of our algorithm is O(n^2).

Another way to simplify: for n greater than 40, the whole expression is always less than 3n^2:

2*n^2 + 10*n + 1000 < 3*n^2 (for n > 40)

So the expression is bounded by a constant times n^2; dropping the constant coefficient, the final time complexity is again O(n^2).
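A quick arithmetic check of that bound (my own, not in the original):

$$
3n^2 - (2n^2 + 10n + 1000) = n^2 - 10n - 1000 \ge 40^2 - 10 \cdot 40 - 1000 = 200 > 0
\quad \text{for } n \ge 40,
$$

since $n^2 - 10n - 1000$ is increasing for $n \ge 5$.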

An example of calculating time complexity

Let's work through a problem and see how to calculate its time complexity:

Find two identical strings out of n strings (assuming there are only two identical strings)

Some of you might think this can be solved by brute-force enumeration in O(n^2) time.

This time complexity is actually wrong.

That answer ignores the time spent comparing the strings themselves: comparing two strings is not as cheap as comparing two ints.

On top of the O(n^2) pairwise traversal, each string comparison takes up to m operations (where m is the string length), so the time complexity is O(m*n*n).
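A minimal sketch of the brute-force approach (my own code; the function name is hypothetical):

```cpp
#include <string>
#include <vector>

// Brute force: O(n^2) pairs, each compared in up to O(m) character
// operations, for O(m*n*n) total.
std::string findDuplicateBruteForce(const std::vector<std::string>& strs) {
    int n = (int)strs.size();
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (strs[i] == strs[j]) return strs[i]; // O(m) string comparison
        }
    }
    return ""; // no duplicate found
}
```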

So let’s think about another way to do this

Instead, sort the n strings in lexicographical order; once sorted, the two identical strings must end up next to each other.

Then iterate over the n strings once, comparing neighbors, to find the two that are the same.
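A matching sketch of this approach (again my own code, hypothetical names; the vector is taken by value so the caller's copy is left unsorted):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sort, then scan adjacent pairs. Sorting performs O(nlogn) string
// comparisons at O(m) each; the scan adds O(n) more, so the total is
// O(m*n*logn + n*m).
std::string findDuplicateBySorting(std::vector<std::string> strs) {
    std::sort(strs.begin(), strs.end()); // lexicographic order
    for (int i = 1; i < (int)strs.size(); i++) {
        if (strs[i] == strs[i - 1]) return strs[i]; // duplicates are adjacent
    }
    return ""; // no duplicate found
}
```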

So let’s look at the time complexity of this algorithm

Quicksort takes O(nlogn) comparisons, and with strings of length m, each comparison costs up to m character comparisons, so sorting is O(m*n*logn).

Then we iterate over the n strings to find the two that are the same. Remember that we are still comparing strings as we iterate, so that pass costs O(n*m), and the total time is O(m*n*logn + n*m).

We simplify O(m*n*logn + n*m): factor out m*n to get O(m*n*(logn + 1)).

Dropping the additive constant leaves a final time complexity of O(m*n*logn). Now let's compare the efficiency of O(m*n*logn) with the first method's O(m*n*n).

Clearly, O(m*n*logn) is better than O(m*n*n).

So sorting the collection of strings first and then finding the identical pair is faster than brute-force enumeration.

I hope this example gives you a preliminary sense of how time complexity is calculated.

I believe many of you feel at a loss when facing nearly 2,000 LeetCode problems. I spent half a year organizing the GitHub project: the LeetCode problem-solving guide. It includes a recommended order for 200 classic algorithm problems, detailed illustrated explanations running to 600,000 words, summaries of commonly used algorithm templates, and video walkthroughs of the hard problems. Follow the list and work through it, and please support it with a star!

You can also follow my videos and find me on Bilibili.

In addition, I have organized the algorithm articles from "Code Random Thoughts" into a book, ordered from easy to hard following the problem list, with a PDF version being released progressively.


If it helps, give it a thumbs up.