When learning “data structures and algorithms”, “recursion” and “dynamic programming” are two abstract topics that are relatively hard to grasp, because people are used to thinking in a flat, straightforward way.

Programmer Xiao Wu uses animations to help explain “recursion”, and then extends the concept of “recursion” to introduce the idea of the “dynamic programming” algorithm.

What is recursion

A recursive algorithm is an algorithm that calls its own function or method directly or indirectly.

In layman’s terms, the essence of a recursive algorithm is to decompose a problem into smaller sub-problems of the same kind and then call the same method recursively to express the solution. Recursion has the following characteristics:

    1. The solution of a problem can be decomposed into several sub-problems
    2. The problem and its decomposed sub-problems have exactly the same solution idea; only the data size differs
    3. There is a definite recursive termination condition, called the recursive exit

Let’s analyze these characteristics one by one with the animation.

1. The solution of a problem can be decomposed into several subproblems

A sub-problem is the same problem with a smaller data size.

In the GIF, problem No. ① (the large area) is divided into problem No. ②, which consists of two sub-problems (the two middle areas).

2. This problem is exactly the same as the decomposed sub-problem except for the different data scale

“No. ① is divided into No. ②” and “No. ② is divided into No. ③” have the same logic and the solution idea is the same.

3. There are recursive termination conditions, that is, there are recursive exits

Decomposing a problem into sub-problems, and sub-problems into further sub-problems, layer by layer, cannot go on forever; there must be a termination condition.

No. ① is divided into No. ②, No. ② is divided into No. ③, and No. ③ is divided into No. ④. When we reach No. ④, each region contains only one problem that cannot be divided further, which shows that a recursive termination condition exists.

Let’s start with a few classic examples of recursion

I. Array summation

Sum(arr[0...n-1]) = arr[0] + Sum(arr[1...n-1])


Each call to Sum solves the same problem on a smaller range than the previous call.

Sum(arr[1...n-1]) = arr[1] + Sum(arr[2...n-1])


And so on, until we reach the most basic problem: the sum of an empty array, which is zero.

Sum(arr[n-1...n-1]) = arr[n-1] + Sum([])

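A minimal sketch of this recursion in code (an illustration written for this article, not code from the original; the extra parameter l, which marks where the remaining sub-array begins, is an assumption of the sketch):

// Sum of the sub-array arr[l...n-1]
int Sum(const int arr[], int l, int n) {
    if (l == n) return 0;                // recursive exit: the sum of an empty array is 0
    return arr[l] + Sum(arr, l + 1, n);  // arr[l] + Sum(arr[l+1...n-1])
}

For example, with int a[] = {1, 2, 3}, Sum(a, 0, 3) returns 6.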

II. The Tower of Hanoi problem

The Tower of Hanoi is also a classic recursion problem. It is described as follows:

Question: In ancient times there was a pagoda with three towers A, B and C. On tower A there were 64 plates, with the largest plates at the bottom and the smallest at the top. A monk wanted to move the plates from tower A to tower C, but only one plate could be moved at a time, and on all three towers the larger plates always had to stay below the smaller ones.

  • ① If there is only one plate, there is no need to use tower B; move the plate directly from A to C.

  • ② If there are two plates, first move plate 1 (the one on top of plate 2) to B; then move plate 2 to C; finally move plate 1 to C. This tells us that two plates can be moved from A to C with the help of B, and likewise two plates can be moved from A to B with the help of C.

  • ③ If there are three plates, then by the conclusion for two plates, the two plates on top of plate 3 can be moved from A to B with the help of C; then move plate 3 from A to C, and A becomes empty; finally, with the help of tower A, move the two plates on B to C.

  • ④ By analogy, the same idea extends to the case of n plates. The top n-1 plates are regarded as a whole, which is exactly the sub-problem we need. With the help of tower C, the n-1 plates on A can be moved from A to B; then move A’s largest plate to C, and A becomes an empty tower; then, with the help of the empty tower A, move the n-2 plates on B to A, move B’s largest plate to C, and B becomes an empty tower… and so on. (A minimal recursive sketch follows this list.)
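
Here is a minimal sketch of this recursion (an illustration written for this article, not code from the original). hanoi(n, from, via, to) moves n plates from tower from to tower to, using tower via as the helper:

#include <cstdio>

void hanoi(int n, char from, char via, char to) {
    if (n == 1) {                                          // recursive exit: one plate moves directly
        printf("move plate 1 from %c to %c\n", from, to);
        return;
    }
    hanoi(n - 1, from, to, via);                           // move the top n-1 plates from 'from' to 'via'
    printf("move plate %d from %c to %c\n", n, from, to);  // move the largest plate to the target tower
    hanoi(n - 1, via, from, to);                           // move the n-1 plates from 'via' to 'to'
}

For example, hanoi(3, 'A', 'B', 'C') prints the seven moves needed for three plates.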

III. Climbing the stairs

Problem Description:

A person climbing stairs can take only 1 or 2 steps at a time. Given n steps, how many different ways can the person climb to the top?

Start with the simplest way, climbing one step at a time, using four steps as an example:

Then climb by taking two steps once and the rest one step at a time:

Now consider this: all the ways of climbing can be divided into two categories according to the first move:

  • ① In the first category, the first move is one step
  • ② In the second category, the first move is two steps

So the number of ways to climb n steps equals the number of ways to climb the remaining n-1 steps after first taking one step, plus the number of ways to climb the remaining n-2 steps after first taking two steps.

It can be expressed by the formula:

f(n) = f(n-1)+f(n-2)

With the recursive formula in hand, the recursive code is almost half done. Next, consider the recursive termination condition.

When there is only one step left, there is no need to keep recursing; there is only one way to climb it.

So f(1) = 1.

But trying it out with small numbers such as n = 2 and n = 3 shows that this termination condition alone is not enough.

When n is 2, f(2) = f(1) + f(0). If the only termination condition is f(1) = 1, then f(2) cannot be computed and the recursion never terminates. So besides f(1) = 1 we would also need f(0) = 1, meaning there is one way to climb 0 steps. From an intuitive point of view, and from the GIF, that seems a little illogical. To make it easier to understand, we instead use f(2) = 2 as a termination condition: for two steps there are two ways to climb, either two single steps or one double step.

The summary is as follows:

  • ① If there is only one step, there is only one way to climb it: a single step
  • ② If there are two steps, there are two ways to climb: two single steps, or one double step

From this we get the recursive conditions:

f(1) = 1;
f(2) = 2;
f(n) = f(n-1)+f(n-2)


It’s easy to derive the recursive code:

int f(int n) {
  if (n == 1) return 1;
  if (n == 2) return 2;
  return f(n-1) + f(n-2);
}

Using the above three examples, we can summarize how to write recursive code:

  • 1. Figure out how to break down big problems into smaller ones
  • 2. Write the recursive formula through the rule
  • 3. Deduce the termination condition through the critical point of the recursive formula
  • 4. Translate recursive formulas and termination conditions into code

What is dynamic programming

Before introducing dynamic programming, let’s start with the divide-and-conquer strategy.

Divide and conquer strategy

Divide the original problem into several smaller sub-problems similar to the original, conquer these sub-problems recursively, and then combine their solutions to obtain the solution of the original problem.

Because solving the big problem requires recursively solving the smaller ones, divide and conquer is generally implemented with “recursion”, that is, from the top down.

Dynamic Programming

In fact, dynamic programming is similar to the divide-and-conquer strategy: it also decomposes the original problem into several smaller sub-problems, solves those sub-problems recursively, and then combines their solutions to obtain the solution of the original problem. The difference is that the sub-problems overlap, so a sub-problem that has already been solved may need to be solved again; the idea is therefore to store the solutions of these sub-problems and simply look them up the next time the same sub-problem appears. In other words, the problems solved by dynamic programming are a subset of the problems solvable by divide and conquer, but for this subset dynamic programming achieves a shorter running time. Divide and conquer can certainly solve the problems that dynamic programming solves; it just takes longer to run. Therefore, divide and conquer is generally used for problems whose sub-problems are independent of each other, called standard divide and conquer, while dynamic programming is used for problems whose sub-problems overlap.

Concepts similar to the “divide-and-conquer strategy” and “dynamic programming” also include the “greedy algorithm” and the “backtracking algorithm”. Due to space constraints, Programmer Xiao Wu will not expand on them here; later articles will introduce the “greedy algorithm”, the “backtracking algorithm” and the “divide-and-conquer algorithm” respectively. Stay tuned 🙂

Extracting the key points of the concept, “dynamic programming” can be described as follows:

  • 1. Dynamic programming attempts to solve each sub-problem only once
  • 2. Once the solution of a sub-problem has been calculated, it is memorized and stored, so that the next time the same sub-problem is needed it can simply be looked up in the table (see the memoized sketch after this list)
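
To make the “solve once, store, look up” idea concrete, here is a minimal memoized sketch of the stair-climbing recursion (an illustration written for this article; the names climb and memo are assumptions of the sketch, and the article’s own bottom-up version appears in the next section):

#include <vector>

// memo[n] caches f(n); 0 means "not yet computed"
int climb(int n, std::vector<int>& memo) {
    if (n == 1) return 1;                               // boundary
    if (n == 2) return 2;                               // boundary
    if (memo[n] != 0) return memo[n];                   // sub-problem already solved: just look it up
    memo[n] = climb(n - 1, memo) + climb(n - 2, memo);  // solve it once and store the result
    return memo[n];
}

int climb(int n) {
    std::vector<int> memo(n + 1, 0);                    // one slot per step count
    return climb(n, memo);
}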

From recursion to dynamic programming

Returning to the stair-climbing problem: solved recursively as above, this method has a time complexity of O(2^n). See my previous article “A Song of Ice and Fire: Time Complexity and Space Complexity” for details.

In the GIF, the same color marks the parts of the stair-climbing calculation that are repeated during the recursion.

It can be seen from the picture that the recursion proceeds from the top down: for example, f(n) is the sum of f(n-1) and f(n-2), and f(n-1) is in turn the sum of f(n-2) and f(n-3).

Think about it: What if, instead, you took a bottom-up, iterative approach to derivation?

The bottom-up derivation of f(n) is explained with the following table.

Number of steps   1   2   3   4   5   6   7   8   9
Number of ways    1   2

The first row of the table shows the number of steps, and the second row shows the number of ways to climb them. Here f(1) = 1 and f(2) = 2 are the results established above.

In the first iteration, for 3 steps the number of ways is 3, obtained from f(3) = f(2) + f(1).

Number of steps   1   2   3   4   5   6   7   8   9
Number of ways    1   2   3

In the second iteration, for 4 steps the number of ways is 5, obtained from f(4) = f(3) + f(2).

Number of steps   1   2   3   4   5   6   7   8   9
Number of ways    1   2   3   5

Thus, in each iteration only the previous two states need to be kept in order to derive the new state.

show me the code

int f(int n) {
    if (n == 1) return 1;
    if (n == 2) return 2;
    int a = 1, b = 2;
    int temp = a + b;
    for (int i = 3; i <= n; i++) {
        temp = a + b;
        a = b;
        b = temp; 
    }
    return temp; 
}


The program iterates from i = 3 up to i = n. Each iteration computes the number of ways for one more step. During the iteration, only two variables, a and b, are kept, holding the result from two iterations ago and the result from the previous iteration respectively. For ease of understanding, the temp variable is introduced; temp holds the result of the current iteration.

In fact, not much code changes. With this simple optimization the time complexity drops to O(n) and the space complexity becomes O(1). That is the power of “dynamic programming”!

Explain dynamic programming

There are three important concepts in dynamic programming:

  • “Optimal substructure”
  • “Boundary”
  • “State transition formula”

In the “stair-climbing problem”:

  • f(10) = f(9) + f(8) is the “optimal substructure”
  • f(1) and f(2) are the “boundary”
  • f(n) = f(n-1) + f(n-2) is the “state transition formula”

The “stair climbing problem” is a relatively simple problem in dynamic programming because it has only one dimension of variation, and becomes much more complicated when multiple dimensions are involved.

The difficulty lies in identifying these three concepts in dynamic programming.

For example, “The king and the gold mines.”

The king and the gold mines

A country discovered five gold mines. Each mine holds a different amount of gold and needs a different number of workers to mine it. The total number of workers available is 10. Each gold mine must be either mined completely or not mined at all; you cannot send half the workers to mine half of a mine. Write a program that works out which gold mines should be mined in order to obtain as much gold as possible.

Identify these three concepts in dynamic programming

Optimal substructure in the king and gold mines problem

There are two optimal substructures in the king and gold mines problem:

  • ① The optimal selection for 10 workers among the first 4 gold mines
  • ② The optimal selection for (10 - 5) workers among the first 4 gold mines (the workers left over after the 5 needed for the fifth mine)

The relation between the optimal choice for the first 4 gold mines and the optimal choice for all 5 gold mines is:

MAX[(gold obtained by 10 workers from the first 4 mines), (gold obtained by 5 workers from the first 4 mines + gold obtained from the fifth mine by the other 5 workers)]

Boundary in the king and gold mines problem

There are two boundary cases in the king and gold mines problem:

  • ① When there is only one gold mine, the only choice is to mine that mine (given enough workers), and the gold obtained is the amount in that mine
  • ② When the given number of workers is not enough to mine the one gold mine, the gold obtained is 0

State transition formula in the king and gold mines problem

Let the number of gold mines be n, the number of workers be w, the amount of gold in each mine be the array g[], and the number of workers each mine needs be the array p[]. We obtain the state transition formula:

  • Boundary value: F(n,w) = 0 (n <= 1, w < p[0])

  • F(n,w) = g[0] (n==1, w >= p[0])

  • F(n,w) = F(n-1,w) (n > 1, w < p[n-1])

  • F(n,w) = max(F(n-1,w), F(n-1,w-p[n-1]) + g[n-1]) (n > 1, w >= p[n-1])
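
As a concrete instance: with n = 5 and w = 10, and assuming (as in the comparison above) that the fifth mine needs p[4] = 5 workers, the last line becomes F(5,10) = max(F(4,10), F(4,5) + g[4]), which is exactly the MAX expression for the optimal substructure given earlier.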

Implementation of the king and gold mines problem

First, let’s use a few animations to understand how “workers” and “gold mines” can be matched up.

1. Only dig the first gold mine

With only one or two workers, the first gold mine cannot be mined and the gold obtained is 0. Once there are three workers, the yield becomes 200, and adding more workers does not change it, because there is only one gold mine to mine.

2. Dig the first and second gold mines

When the first and second gold mines are considered, one or two workers still earn zero, because w < 3 and therefore F(n,w) = F(n-1,w) = 0.

When there are three workers, arrange them to dig the first gold mine and start generating revenue of 200.

When there are four workers, the choice changes: they are assigned to mine the second gold mine instead, which yields 300.

When there are five or six workers, the workers left over beyond the four needed for the second mine are not enough to also mine the first mine, so the yield stays at 300.

When there are seven workers, the first and second gold mines can be mined at the same time, starting to generate revenue of 500.

3. Dig the first three gold mines

This is one of the most important animations in the “king and gold mines” problem and is worth watching several times.

4. Dig the first four gold mines

This is one of the most important animations in the “king and gold mines” problem and is worth watching several times.

The king and the Gold Mine

A closer look at the above animation shows:

  • Comparing “mining the first and second gold mines” with “mining the first three gold mines”: in the latter, the yield of 7 workers on 3 mines comes from the results of 7 workers on 2 mines and of 4 workers on 2 mines plus the third mine’s gold, Max(500, 300 + 350) = 650;

  • Comparing “mining the first three gold mines” with “mining the first four gold mines”: in the latter, the yield of 10 workers on 4 mines comes from the results of 10 workers on 3 mines and of 5 workers on 3 mines plus the fourth mine’s gold, Max(850, 400 + 300) = 850;

Dynamic programming code for the king and gold mines problem

Code source: https://www.cnblogs.com/SDJL/archive/2008/08/22/1274312.html

// maxGold[i][j] saves the maximum amount of gold that i people can get from the first j gold mines
// (every entry is initialized to -1, meaning "not yet calculated")
int maxGold[max_people][max_n];

int GetMaxGold(int people, int mineNum) {
    int retMaxGold;                                // the maximum amount of gold to return
    if (maxGold[people][mineNum] != -1) {          // if this sub-problem has been calculated before
        retMaxGold = maxGold[people][mineNum];     // use the saved value
    } else if (mineNum == 0) {                     // if there is only one gold mine [the "boundary" of dynamic programming]
        if (people >= peopleNeed[mineNum])
            retMaxGold = gold[mineNum];            // the maximum is the amount of gold in that mine
        else
            retMaxGold = 0;                        // not enough people, the maximum is 0
    } else if (people >= peopleNeed[mineNum]) {    // if there are enough people to mine this gold mine ["optimal substructure"]
        // consider both mining and not mining this mine, and take the larger result
        retMaxGold = max(GetMaxGold(people - peopleNeed[mineNum], mineNum - 1) + gold[mineNum],
                         GetMaxGold(people, mineNum - 1));
    } else {                                       // otherwise there are not enough people to mine this gold mine ["optimal substructure"]
        retMaxGold = GetMaxGold(people, mineNum - 1);
    }
    maxGold[people][mineNum] = retMaxGold;         // save the result
    return retMaxGold;
}
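
For comparison, here is a bottom-up sketch of the same state transition formula (an alternative written for this article, not the original author’s code). It fills a one-dimensional table of “best gold obtainable with j workers”, bringing the mines in one by one, just as the stair-climbing table was filled from the bottom up:

#include <algorithm>
#include <vector>

// results[j] = maximum gold obtainable with j workers, considering the mines handled so far
int GetMaxGoldBottomUp(int n, int w, const int gold[], const int peopleNeed[]) {
    std::vector<int> results(w + 1, 0);
    for (int i = 0; i < n; i++) {                      // bring in gold mine i
        for (int j = w; j >= peopleNeed[i]; j--) {     // go downwards so each mine is used at most once
            results[j] = std::max(results[j],                            // do not mine mine i
                                  results[j - peopleNeed[i]] + gold[i]); // mine mine i
        }
    }
    return results[w];                                 // corresponds to F(n, w)
}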

I hope that through this article you have gained a basic understanding of “recursion” and “dynamic programming”. In later articles, building on “dynamic programming”, we will study the multiple knapsack problem, Dijkstra’s algorithm and other deeper algorithm problems. More concepts of “recursion” will also be explored further in the chapter on the “divide-and-conquer algorithm”. Stay tuned to Programmer Xiao Wu 🙂