
Hello, everyone. After the analysis in the previous two articles, I believe you now have a certain understanding of dynamic programming and a feel for how powerful the idea is. Today we will summarize which problems can be solved with dynamic programming, and what the thinking process for solving a dynamic programming problem looks like. Let's get started.

I. Problem model

Dynamic programming is generally used to find optimal solutions. Solving such a problem requires going through multiple stages of decision making, and each stage corresponds to a set of states. We need to find a sequence of decisions that leads to the optimal solution of the problem. This kind of problem is abstracted as the "multi-stage decision optimal model".

Dynamic programming also has three characteristics: optimal substructure, no aftereffect, and repeated subproblems. Only when the original problem satisfies all three can we apply the dynamic programming idea to solve it. Let's look at each of the three characteristics in turn.

1. Optimal substructure

Optimal substructure describes the relationship between the original problem and its subproblems: the optimal solution of the original problem contains the optimal solutions of its subproblems. Put another way, we can derive the optimal solution of the original problem from the optimal solutions of its subproblems.

2. No aftereffect

(1) No aftereffect means that when deducing the state of a later stage, we rely only on the states of earlier stages and do not care how those states were obtained step by step. For example, in the Fibonacci sequence, F(5) = F(4) + F(3): F(5) depends only on the values of the two states F(4) and F(3), regardless of how they were computed.
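To make this concrete, here is a minimal sketch of computing F(n) iteratively: at every step the new state is derived only from the two previous states, never from how those states were reached.

```python
def fib(n):
    # Each new state depends only on the previous two states,
    # not on how those states were computed: this is "no aftereffect".
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # F(k) = F(k-1) + F(k-2)
    return curr

print(fib(5))  # 5
```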

(2) Once a state is determined, it is not affected by decisions made in later stages.

3. Repeated subproblems

When the original problem is decomposed into many subproblems, some of those subproblems overlap, so the same subproblem ends up being computed more than once.
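A quick way to see the overlap is to count the recursive calls of a naive Fibonacci implementation: for n = 10 there are only 11 distinct subproblems F(0)..F(10), yet plain recursion solves subproblems 177 times in total.

```python
calls = 0

def naive_fib(n):
    # Plain recursion with no caching: the same subproblems
    # (e.g. F(8), F(7), ...) are recomputed many times.
    global calls
    calls += 1
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

result = naive_fib(10)
print(result, calls)  # 55 177
```

Caching each result (the "memo" used later in this article) brings the call count back down to one per distinct subproblem.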

Now let’s combine the examples to have a more thorough understanding of the above theoretical part.


Problem description: suppose we have an n*n matrix w[n][n] that stores positive integers. A piece moves from the top-left corner of the matrix to the bottom-right corner, and at each step it can only move one cell to the right or one cell down. The piece can reach the end through many different paths. If we sum the numbers on the cells along each path, what is the shortest path length, that is, the smallest such sum?


Going from start to end takes 2*(n-1) steps in total, which correspond to 2*(n-1) stages. Each stage has two decisions, go down or go right, and different decisions lead to different states. So this fits the multi-stage decision model, and since the goal is the shortest path length, it matches the problem model of dynamic programming.
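As a side note on why brute force is expensive here: each path consists of 2(n-1) steps, of which exactly n-1 go right, so the number of distinct paths is the binomial coefficient C(2(n-1), n-1). A quick check in Python (math.comb requires Python 3.8+):

```python
import math

n = 5
# Choose which n-1 of the 2(n-1) steps go right; the rest go down.
paths = math.comb(2 * (n - 1), n - 1)
print(paths)  # 70 distinct paths for a 5x5 matrix
```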

Now, does it meet the three characteristics of dynamic programming? There are two ways to go from position (0,0) to position (1,1): right then down, or down then up. Reaching (1,1) is the same subproblem either way, so the problem satisfies the repeated subproblem characteristic.

Now let's look at the no aftereffect characteristic. To reach (i, j) we can only come through (i-1, j) or (i, j-1). In other words, we only care about the states at (i-1, j) and (i, j-1), not how we got from (0,0) to those positions. Also, since we can only move right or down and never back, once the state of an earlier stage is determined, it will not be changed by decisions in later stages. So this problem satisfies "no aftereffect".

Let min(i, j) denote the shortest path length from (0,0) to (i, j). Because we can only move right or down, we can only reach (i, j) from (i-1, j) or (i, j-1). In other words, min(i, j) can be derived from just the two states min(i-1, j) and min(i, j-1). That is the optimal substructure.

```
min(i, j) = w[i][j] + min(min(i-1, j), min(i, j-1))
```

This problem therefore fits the dynamic programming problem model, so we can use the idea of dynamic programming to solve it.

II. Solution approaches

There are two methods to solve dynamic programming problems: state transition table method and state transition equation method.

1. State transition table method

First, we define a state table, which is usually two-dimensional. Then we fill in each state in the table stage by stage, from front to back, following the order in which decisions are made and the recurrence relation. Finally, we translate this table-filling process into code.

Let’s look at how to solve the shortest path problem using the state transition table method.

Let's draw a two-dimensional table. The rows and columns represent the positions of the piece, and each value in the table is the shortest path length from the starting point to that position. We then fill in the table following the decision process, starting with the first row and first column.
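The first stage of the filling is easy to state in code: since only one path reaches any cell in the first row or first column, the shortest path values there are plain prefix sums. A sketch using the 5x5 sample matrix from this article:

```python
from itertools import accumulate

data = [[1, 3, 5, 7, 2],
        [3, 6, 5, 2, 1],
        [7, 4, 1, 6, 5],
        [1, 3, 8, 2, 3],
        [4, 3, 1, 6, 4]]

# Only one path reaches any cell in the first row or first column,
# so the shortest path values there are prefix sums.
first_row = list(accumulate(data[0]))
first_col = list(accumulate(row[0] for row in data))
print(first_row)  # [1, 4, 9, 16, 18]
print(first_col)  # [1, 4, 11, 12, 16]
```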

Let’s see how the code works.

```python
def minDis(data, n):
    # status[i][j] holds the shortest path length from (0, 0) to (i, j)
    status = [[0 for _ in range(n)] for _ in range(n)]
    row_sum = 0
    for j in range(n):            # initialize the first row
        row_sum += data[0][j]
        status[0][j] = row_sum
    col_sum = 0
    for i in range(n):            # initialize the first column
        col_sum += data[i][0]
        status[i][0] = col_sum
    for i in range(1, n):         # fill the rest of the table stage by stage
        for j in range(1, n):
            status[i][j] = data[i][j] + min(status[i - 1][j], status[i][j - 1])
    return status[n - 1][n - 1]

data = [[1, 3, 5, 7, 2],
        [3, 6, 5, 2, 1],
        [7, 4, 1, 6, 5],
        [1, 3, 8, 2, 3],
        [4, 3, 1, 6, 4]]
n = 5
print(minDis(data, n))  # 29
```

2. State transition equation method

The state transition equation method is closer to recursion. We write the recurrence formula from the optimal substructure; this formula is called the state transition equation. Then, based on the state transition equation, we write the implementation. There are generally two ways to implement it: one is recursion with memoization (top-down), the other is bottom-up iteration.

Let’s look at the shortest path example, whose state transition equation looks like this:

```
min(i, j) = w[i][j] + min(min(i-1, j), min(i, j-1))
```

Here we implement it with recursion plus a memo. The other approach, bottom-up iteration, produces the same code as the state transition table method; only the way of thinking differs.

```python
import sys

data = [[1, 3, 5, 7, 2],
        [3, 6, 5, 2, 1],
        [7, 4, 1, 6, 5],
        [1, 3, 8, 2, 3],
        [4, 3, 1, 6, 4]]
n = 5
mem = [[0 for _ in range(n)] for _ in range(n)]  # memo: cached subproblem results

def minDis(i, j):
    if i == 0 and j == 0:
        return data[0][0]
    if mem[i][j] > 0:          # already computed, reuse it
        return mem[i][j]
    minLeft = sys.maxsize
    if j - 1 >= 0:
        minLeft = minDis(i, j - 1)
    minUp = sys.maxsize
    if i - 1 >= 0:
        minUp = minDis(i - 1, j)
    current = data[i][j] + min(minLeft, minUp)
    mem[i][j] = current
    return current

print(minDis(n - 1, n - 1))  # 29
```

So that’s it for dynamic programming. Hopefully you already have a grasp of the idea of dynamic programming algorithms. If you don’t understand it, don’t worry, do a few more questions, and then go back and look at it again.