See the Data Structures & Algorithms Must-Know series: 1

2. Time & Space Complexity (PART 1)

Time complexity analysis

Constant execution complexity is O(1)

Before we get into the drills, one rule: constant execution complexity is always O(1).

To understand this principle, take a look at this 🌰 (a Golang implementation, which should not be difficult to read):

for a := 0; a < 100000; a++ { // both loop bounds are constants
    for b := 0; b < a; b++ {
        fmt.Printf("%d ", b)
    }
}

Run it at Play.golang.org/p/itly0TWKU… and you will hit the prompt "timeout running program".

So what is this code's complexity (big O)? 🙌 O(1)

Even if the constant were bigger, say 1000000000…, anything constant can be ignored, although it does have a big impact on the actual execution time.

That is because "time complexity" describes the trend of execution time as the data size grows, not the absolute running time, so no matter how large the constant is, it can be ignored. For example, if T(n) = 1000000 + 2n, the constant term is eventually dwarfed as n grows, so T(n) = O(n).

Practical analysis routines

OK, time for the routines. Here is the first 🌰:

func calcSum(n int) int {
	sum_1, sum_2, sum_3 := 0, 0, 0

	for p := 0; p <= 100000; p++ {
		sum_1 = sum_1 + p
	}

	for q := 0; q < n; q++ {
		sum_2 = sum_2 + q
	}

	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			sum_3 = sum_3 + i*j
		}
	}

	return sum_1 + sum_2 + sum_3
}

Without reading the implementation closely, what do you observe? What shape does the code take?

🙌 Two loops + one nested loop

OK, let's take them one at a time: loop 1 is constant; loop 2 is O(n); loop 3 is nested, and it is easy to see that it is O(n²).

So let's just add it up: complexity = O(1) + O(n) + O(n²)

If you find the above analysis difficult, please go back to the previous chapter.

Play.golang.org/p/e4Uaka1Ze… — you can Go Play with it

As n goes to infinity, the overall complexity is dominated by the highest-order term, n², so the overall complexity = O(n²).

Although this is a very simple 🌰, we can already distill the following routines:

  1. Focus only on the piece of code that loops the most
  2. Addition rule: the total complexity equals the complexity of the code with the highest order of magnitude
  3. Multiplication rule: the complexity of nested code equals the product of the complexities of the inner and outer code (think of a nested loop)

The 🌰 in this article is still fairly simple; real code will also have function calls, recursion, and so on, so let's extend the steps as follows:

  1. Strip the noise first (ignore constants and lower-order terms)
  2. Work from the inside out, starting at the innermost level
  3. If you encounter a function call, drill down into the function and analyze it (see the sketch below)
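
To make step 3 concrete, here is a minimal sketch (the function names sumTo and callInLoop are made up for illustration): a loop that calls a function which itself contains a loop, so you have to drill into the callee before totaling up.

func sumTo(n int) int {
	// a single loop over n -> O(n)
	sum := 0
	for i := 0; i < n; i++ {
		sum = sum + i
	}
	return sum
}

func callInLoop(n int) int {
	// calls the O(n) function n times;
	// multiplication rule: O(n) * O(n) = O(n^2)
	total := 0
	for i := 0; i < n; i++ {
		total = total + sumTo(n)
	}
	return total
}

Looking only at callInLoop, the outer loop seems to be O(n); drilling into sumTo reveals the hidden O(n) inside it, so the whole thing is O(n²).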

Maybe it still feels a little fuzzy. That's okay. Keep reading: more algorithms, more analysis, and you'll develop a "feel" for the routines and the steps.

Common time complexity

Below are the common time-complexity orders, each with a corresponding 🌰 and its analysis.

Although code varies widely (different languages, different authors, different implementations), the common complexity orders are few, so I'll summarize them here.

Setting aside the multi-variable orders (m, n), we can roughly divide them into two categories: polynomial and non-polynomial. There are only two common non-polynomial orders: O(2^n) and O(n!).

O(m+n), O(m*n)

Complexity analysis usually involves a single data size, but two may appear (more than two is rarely discussed).

Straight to the 🌰:

func calcSum(n int, m int) int {
	sum_1, sum_2 := 0, 0

	for i := 0; i < n; i++ {
		sum_1 = sum_1 + i
	}

	for i := 0; i < m; i++ {
		sum_2 = sum_2 + i
	}

	return sum_1 + sum_2
}

n and m are two independent data sizes; since we can't know in advance which one dominates, neither term can be dropped, so the overall complexity = O(m+n).

Play.golang.org/p/1P1n3q75E… — you can Go Play with it

For multiplication, another 🌰 follows:

func calcSum(n int, m int) int {
	sum := 0
	for i := 0; i < n; i++ {
		for j := 0; j < m; j++ {
			sum = sum + i*j
		}
	}
	return sum
}

Here 🤚 the multiplication rule applies: the complexity of nested code equals the product of the complexities of the inner and outer code.

So, complexity = O(m*n)

Play.golang.org/p/VoZcACv3B… — you can Go Play with it

O(1)

Once again, O(1) is just the notation for constant time complexity; it does not mean the code is literally one line.

Generally, as long as an algorithm contains no loop or recursive statements, its time complexity is O(1), even if it runs to thousands of lines of code.

The constant execution complexity is always O(1).
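
As a minimal sketch (hypothetical code, not from the original): straight-line statements with no loop or recursion run a fixed number of times regardless of input, so this stays O(1) even if you add a thousand more such lines.

func constantWork(n int) int {
	// every statement here executes exactly once -> O(1)
	a := n + 1
	b := a * 2
	c := b - 3
	return c
}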

O(logn), O(nlogn)

Logarithmic time complexity is very common, and compared with the others it is relatively difficult to analyze.

Continuing with another 🌰, the code is relatively simple (note: Go omits while; a for loop with only a condition plays that role):

func calc(n int) int {
	i := 1
	for i <= n {
		i = i * 2
	}
	return i
}

Here we need routine 1: focus on the line that runs the most, i = i * 2.

Following that routine, let's list the values i takes across the iterations:

1, 2, 4, 8, 16, 32 ... 2^x

As you can see, the loop 🔚 ends once 2^x grows past n, so counting the iterations is really just solving an equation: 2^x = n, which gives x = log₂n.

Based on the earlier rule that big-O ignores constant coefficients, and since changing a logarithm's base only introduces a constant factor (for instance, log₃n = log₂n / log₂3), we can ignore the base: the order is O(logn).
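
A quick sketch of why the base doesn't matter (a hypothetical variant of the code above): triple i instead of doubling it and the loop runs about log₃n times, which differs from log₂n only by a constant factor, so it is still O(logn).

func calc3(n int) int {
	i := 1
	for i <= n {
		i = i * 3 // i triples each pass -> about log3(n) iterations
	}
	return i
}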

What about O(nlogn)? 🙌 The multiplication rule.

If a piece of code takes O(logn) and a loop executes it n times, the total time is O(nlogn).
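
As a minimal sketch of that shape (a hypothetical function, not from the original): the doubling loop from before, nested inside a loop that runs n times.

func calcNLogN(n int) int {
	count := 0
	for i := 0; i < n; i++ { // outer loop: n iterations
		for j := 1; j <= n; j = j * 2 { // inner loop: O(logn)
			count++
		}
	}
	return count // total: n * logn -> O(nlogn)
}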

O(nlogn) is also a common algorithm time complexity. For example, merge sort and quicksort have O(nlogn) time complexity.

O(n²)

🙌 The multiplication rule, without further ado: nested loops. There is also the third power, O(n³), but it is not very common, so I won't cover it here.

Complexity comparison

Ranked from slowest-growing to fastest-growing: O(1) < O(logn) < O(n) < O(nlogn) < O(n²) < O(2^n) < O(n!)

O(2^n) and O(n!)

As mentioned, there are only two common non-polynomial orders: O(2^n) and O(n!).

As the data size n grows, the execution time of a non-polynomial algorithm rises sharply, and the time to solve the problem quickly becomes impractical.
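
A classic sketch of this blow-up (a hypothetical example, not from the original text): naive recursive Fibonacci, whose call tree roughly doubles at each level and is commonly bounded as O(2^n).

func fib(n int) int {
	// recomputes the same subproblems again and again;
	// the number of calls grows exponentially, commonly bounded as O(2^n)
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}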

So non-polynomial time complexities mark very inefficient algorithms, and I won't go into them further here.

Master theorem

If you're good at math 👦, consider using the "master theorem" (especially for analyzing complex recursions).

I don't fully understand it myself… For those interested, see Introduction to Algorithms: solving recurrences with the master theorem to compute algorithm complexity.
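
For reference, the recurrence form the master theorem handles (as stated in Introduction to Algorithms) is T(n) = a*T(n/b) + f(n) with a >= 1 and b > 1, and it distinguishes three cases:

  1. If f(n) = O(n^(log_b(a) − ε)) for some ε > 0, then T(n) = Θ(n^(log_b(a)))
  2. If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) * logn)
  3. If f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and a*f(n/b) ≤ c*f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n))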

Personal experience

(*´▽`) Honestly, my personal experience of complexity analysis boils down to one word: "feel".

Some algorithm geeks and LeetCode veterans are far more practiced than the rest of us and can read off the complexity at a glance. Respect ~

Quote from the columnist (Wang Zhengda):

Complexity analysis is not difficult; practice is the key. Later, I'll take you through the time and space complexity of each data structure and algorithm. Just follow my lead and practice, and you'll soon be able to see the complexity of most code at a glance, and work out the rest with a little analysis.

Trust the big shot and set a Flag: keep learning the common algorithms and practice every 🌰 thoroughly.