How hard is dynamic programming?

Dynamic programming is a term borrowed from other fields.

The general idea is to break a problem down into stages and then move between them to reach a goal. Since there are usually multiple possible transition directions, a decision has to be made about which one to take.

The goal of a dynamic programming problem is usually to reach a specific target, and that target is often the optimal solution. And:

  1. There are transitions between stages, which is what makes it dynamic.
  2. Reaching a feasible solution (the target stage) requires repeated transitions, so how should we transition in order to reach the optimal solution? That is the programming (planning) part.

Each stage is abstracted into states (represented by circles), and transitions may occur between states (represented by arrows). You could draw something like this:

So what sequence of decisions should we make in order to get the optimal outcome? In other words, how should each state choose its next state so that we eventually reach the target state? This is the problem that dynamic programming studies.

Each decision does not really take subsequent decisions into account, only previous states. Figuratively speaking, it is a kind of short-sighted thinking: take one step at a time. Why is this short-sightedness a good way to find the optimal solution? Because:

  1. We simulate all the possible transitions and finally pick an optimal solution.
  2. The problem has no aftereffect (we'll cover this later; just note it for now).

And if you don't simulate all the possibilities but go straight for what looks like the best choice, that's a greedy algorithm.

That's right: dynamic programming is all about finding the optimal solution. Sometimes, incidentally, you can also work out the total number of solutions and similar quantities; those are really byproducts of dynamic programming.

OK, breaking the term "dynamic programming" into its two parts may give you a sense of what it is about, but that doesn't really help you solve problems. So what is dynamic programming, algorithmically?

Algorithmically, dynamic programming and table-lookup recursion (also known as memoization recursion) have a lot in common. I suggest you start with memoization recursion. This article also begins with memoization recursion and then introduces dynamic programming step by step.

Memoization recursion

So what is recursion? What is table lookup (memoization)? Let's take a look.

What is recursion?

Recursion refers to a function calling itself within its own body.

Meaningful recursion usually breaks the problem down into smaller subproblems of the same kind; when a subproblem shrinks to a trivial case, we know its solution directly. The original problem can then be solved by establishing the relationship (transition) between the recursive calls.

Is that a bit like divide and conquer? Divide and conquer means splitting a problem into several parts and then combining the sub-solutions into one; that is not what is meant here.

For a problem to be solved with recursion there must be a termination condition (the finiteness of the algorithm), that is, the recursion must eventually shrink to a trivial case.

Although the following code is also recursive, it is not a valid algorithm because it cannot be terminated:

def f(x):
  return x + f(x - 1)

The above code will execute forever and never stop unless the outside world intervenes.

So it should look more like this:

def f(n):
  if n == 1: return 1
  return n + f(n - 1)

Using recursion usually makes code shorter and sometimes more readable. In algorithms, recursion makes it very easy to implement logic that is awkward to express with loops, such as pre-order, in-order, and post-order traversal of a binary tree.
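For instance, here is a minimal sketch of recursive in-order traversal (the TreeNode class and the tiny example tree are just for illustration):

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder(node):
    # in-order traversal: left subtree, current node, right subtree
    if node is None:          # base case: empty subtree
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

# a tiny tree:  2
#              / \
#             1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(inorder(root))  # [1, 2, 3]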

Recursion is widely used in algorithms, including functional programming, which is becoming increasingly popular.

Recursion plays a very important role in functional programming. In pure functional programming, there are no loops, only recursion.

In fact, besides expressing recursion through a function that calls itself, we can also define recursive data structures. Trees, linked lists and so on are recursive data structures.

Node {
    value: any; // the value of the current node
    children: Array<Node>; // pointers to its children
}

The code above defines a multi-way tree. As you can see, children is a collection of Node, so this is a recursive data structure.

Not just any recursive function

The recursive functions used in memoization recursion in this article are actually special recursive functions, namely ordinary recursive functions that satisfy the following conditions:

  1. Recursive functions do not depend on external variables
  2. Recursive functions do not change external variables

Why do these two conditions matter? Because we need the function's return value to be fully determined by its arguments. Only then can we memoize it. We'll talk about memoization shortly.

If you know anything about functional programming, the recursion here is, strictly speaking, a function in the functional programming sense. If you don't, think of it as a function in the mathematical sense.

Let’s review functions in mathematics:

In a changing process, if there are two variables x and y, and for any value of x there is a unique corresponding y, then x is called the independent variable and y is a function of x. The set of values x can take is called the domain of the function, and the set of corresponding values of y is called the range.

All of the recursions in this article refer to functions in this kind of math.

Take the recursive function above for example:

def f(x):
  if x == 1: return 1
  return x + f(x - 1)
  • x is the argument, and the set of all possible values of x is the domain.
  • f(x) is the function.
  • The set of all possible return values of f(x) is the range.

A recursive function can also have more than one argument, like f(x1, x2, x3).

Describing problems through functions, and describing the relations between problems through the calls between those functions, is the core of memoization recursion.

In fact, every dynamic programming problem can be abstracted as a mathematical function. The domain of this function is the set of all possible argument values, and the range is the set of all possible answers. Our goal is to fill in the body of this function so that a given argument x maps uniquely to a value y. (Of course, there may be more than one argument, in which case the recursive function has more than one parameter.)

Solving the dynamic programming problem can be thought of as filling the black box of the function so that the numbers in the domain map correctly to the range.

Recursion is not an algorithm; it is a programming technique, the counterpart of iteration. It's just that we usually solve problems with the help of recursion. For example, we define a recursive function f(n) and describe the problem in terms of f(n). That is exactly the same as describing the problem with f[n] in ordinary dynamic programming, where f is a dp array.

What is memoization?

To help you understand the content of this section, let’s start with an example:

A person can only climb one or two steps at a time. If there are n steps, how many different ways can the person climb the stairs?

Ideas:

Since the n-th step can only be reached from the (n-1)-th or the (n-2)-th step, the number of ways to reach the n-th step is the number of ways to reach step n-1 plus the number of ways to reach step n-2.

Recursive code:

function climbStairs(n) {
  if (n === 1) return 1;
  if (n === 2) return 2;
  return climbStairs(n - 1) + climbStairs(n - 2);
}

We use a recursion tree to visualize the following (each circle represents a subproblem) :

Red indicates duplicate calculations: f(n-2) and f(n-3) are each evaluated twice, when once would be enough. If you compute f(n-2) once, then the next time you need f(n-2) you can simply return the stored result. The reason we can do this is, as mentioned earlier, that our recursive function is a mathematical function: for a fixed argument, the return value never changes. Therefore, when we run into the same argument again, we can return the value computed last time instead of recalculating. The time saved is proportional to the number of overlapping subproblems.

In this case, you would otherwise need on the order of $2^n$ calculations, but with memoization you only need about $n$. That's the magic.

In the code, we can use a Hashtable to cache the results of intermediate calculations, thus eliminating unnecessary calculations.

Let's use memoization to transform the code above:

memo = {}
def climbStairs(n):
  if n == 1: return 1
  if n == 2: return 2
  if n in memo: return memo[n]
  ans = climbStairs(n - 1) + climbStairs(n - 2)
  memo[n] = ans
  return ans
climbStairs(10)

Here I use a hash table called memo to store the return values of the recursive function, where the key is the argument and the value is the return value of the recursive function.

A key of the form (x, y) is a tuple. In general, dynamic programming has more than one parameter, so we can use a tuple of the parameters as the memo key. Alternatively, we can use a multidimensional array; for the figure above, a two-dimensional array would do.
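For instance, here is a minimal sketch of memoizing a two-parameter recursive function with tuple keys (the binomial-coefficient function is purely illustrative, not one of the problems above):

comb_memo = {}

def comb(n, k):
    # number of ways to choose k items out of n
    if k == 0 or k == n:              # trivial cases
        return 1
    if (n, k) in comb_memo:           # already computed: just look it up
        return comb_memo[(n, k)]
    comb_memo[(n, k)] = comb(n - 1, k - 1) + comb(n - 1, k)
    return comb_memo[(n, k)]

print(comb(20, 10))  # 184756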

You can get a feel for memoization by removing and re-adding the memo in your code.

summary

The advantage of using recursive functions is that the logic is simple and clear. The disadvantage is that too deep a call can lead to stack overflow. Here I list a few algorithm problems that can be easily written recursively:

  • Recursive implementation of summation
  • Binary tree traversal
  • Staircase problem
  • Tower of Hanoi
  • Yang Hui's triangle (Pascal's triangle)

Repeated computation in a recursion (the overlapping subproblems we'll discuss later) is one of the strongest signals to use memoization recursion (or dynamic programming) to solve a problem. The core of dynamic programming is to eliminate the computation of repeated subproblems by memoizing them. If the number of such repeated subproblems is exponential or worse, the payoff from memoization recursion (or dynamic programming) is huge.

To eliminate this repeated computation we use a lookup table. In recursion, we use a "record table" (such as a hash table or an array) to keep track of what has already been computed; the next time we encounter the same subproblem, if it has been computed, we simply return the stored result instead of recomputing it. In dynamic programming, the dp array plays exactly the same role as this record table.
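As an illustration (this is a sketch of the idea, not code from the original article), the "record table" can even be wrapped up as a small decorator:

from functools import wraps

def memoize(fn):
    table = {}                        # the record table, keyed by the call arguments

    @wraps(fn)
    def wrapper(*args):
        if args not in table:         # not computed yet: compute once and record it
            table[args] = fn(*args)
        return table[args]            # every later call becomes a table lookup
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # returns quickly, because repeated subproblems are looked up, not recomputed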

If you're new to recursion, I encourage you to practice it first and come back to this article later. A simple way to practice is to rewrite your iterative code in recursive form. For example, a program that "prints a string in reverse order" is very easy to write with iteration; can you write it recursively? With this kind of practice you will gradually get used to writing programs recursively.
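For example, a recursive version of "print a string in reverse order" might look like this minimal sketch:

def print_reversed(s):
    if not s:                  # base case: empty string, nothing left to print
        return
    print(s[-1])               # print the last character first
    print_reversed(s[:-1])     # then recurse on the rest

print_reversed("abc")  # prints c, b, a (one character per line)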

Once you’re comfortable with recursion, let’s move on to dynamic programming.

Dynamic programming

So with all this recursion and memorization, we finally get to our hero.

Basic concepts of dynamic programming

Let’s start with two of the most important concepts of dynamic programming: optimal substructure and no aftereffect.

Among them:

  • The absence of after-effects determines whether dynamic programming can be used to solve the problem.
  • The optimal substructure determines exactly how to solve it.

Optimal substructure

Dynamic programming is typically applied to problems with overlapping subproblems and optimal substructure. We already talked about overlapping subproblems, so what is optimal substructure? Here's the definition from Wikipedia:

If the optimal solution of the problem contains the optimal solution of the subproblem, we say that the problem has the optimal substructure property (that is, satisfies the optimization principle). Optimal substructure properties provide important clues for dynamic programming algorithms to solve problems.

For example, if the total test score is defined as f, then the problem can be broken into Chinese, math, English, and so on. Obviously, when each subproblem (each subject) is optimal, the total score for the big problem is also optimal.

Another example is the 0-1 knapsack problem: define f(weights, values, capacity). Suppose we want the optimal solution of f([1,2,3], [2,2,4], 10). We can split it into subproblems:

  • Put the third item into the knapsack, i.e. f([1,2], [2,2], 10 - 3) plus the third item's value 4
  • Do not put the third item into the knapsack, i.e. f([1,2], [2,2], 10)

Obviously these two subproblems are still complex and need to be broken down further, but how to do that is not the point here.

The original problem f([1,2,3], [2,2,4], 10) equals the maximum of the two subproblems above. The whole is optimal only if both subproblems are optimal, and that works because the subproblems do not affect each other.
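As a rough illustration of this decomposition, here is a brute-force sketch (no memoization yet; the function mirrors the f(weights, values, capacity) definition above):

def knapsack(weights, values, capacity):
    # maximum value achievable with the given items and remaining capacity
    if not weights or capacity <= 0:   # trivial case: no items left or no room left
        return 0
    # option 1: do not put the last item into the knapsack
    best = knapsack(weights[:-1], values[:-1], capacity)
    # option 2: put the last item into the knapsack, if it fits
    if weights[-1] <= capacity:
        best = max(best, values[-1] + knapsack(weights[:-1], values[:-1], capacity - weights[-1]))
    return best

print(knapsack([1, 2, 3], [2, 2, 4], 10))  # 8: all three items fit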

No aftereffect

That is, once the solution of a subproblem is determined, it does not change; it is not affected by later decisions about the larger problems that contain it.

Continue with the two examples above.

  • Doing well on the math test does not affect the English test (in reality it might: for example, with fixed total study time, investing more in English leaves less for other subjects).
  • In the knapsack problem, whether f([1,2,3], [2,2,4], 10) takes the third item must not affect whether earlier items were taken. If taking the third item somehow made the second item worth more or less, that would violate the no-aftereffect property.

Three elements of dynamic programming

State definition

What is the central point of dynamic programming? If I may say so, that is to define states.

The first step in dynamic programming is to define the state. Once you've defined the state, you can draw the recursion tree, focus on the optimal substructure, and write the transition equation. That is why I say the state definition is the core of dynamic programming; and seeing the right state for a dynamic programming problem is genuinely not easy.

Being able to draw the recursion tree means you have partitioned the problem, which in technical terms is exactly defining the state. So how do we define states?

Fortunately, state definitions follow characteristic patterns. For a single string, the state is usually dp[i], meaning "... of the string s ending at index i". For two strings, the state is usually dp[i][j], meaning "... where s1 ends at index i and s2 ends at index j".

That means there are usually different ways to define states, and you can learn and summarize them as you go along. But there are a lot of them, so how do you do it?

To be honest, you just have to practice more and pick up the patterns as you go. Refer to the dynamic programming topic for the specific patterns, and then think about general state-definition directions for different types of problems.

Two examples

The state definition is so important that I call it the heart of dynamic programming, so I think it's worth giving you a couple of examples. I'll take the first two straight from LeetCode's dynamic programming tag.

The first problem: "5. Longest Palindromic Substring", medium difficulty

Given a string s, find the longest palindromic substring in s. Example 1: Input: s = "babad" Output: "bab" Explanation: "aba" is also a valid answer. Example 2: Input: s = "cbbd" Output: "bb" Example 3: Input: s = "a" Output: "a" Example 4: Input: s = "ac" Output: "a" Constraints: 1 <= s.length <= 1000; s consists of digits and English letters (uppercase and/or lowercase) only.

This problem is about a string, so when we convert it into smaller subproblems, the string certainly gets shorter, and the boundary condition should be an empty string or a single character.

Therefore:

  • One way to define the state is f(s1), meaning the longest palindromic substring of the string s1, where s1 is a substring of the string s in the problem; the answer is then f(s).
  • Since a smaller size just means a shorter string, we can also describe the substring with two variables, which saves the overhead of actually slicing the string. The two variables could be start index + length, end index + length, or start index + end index; take your pick, I'll use start index + end index. The state definition is then f(start, end), meaning the longest palindromic substring of s[start:end+1], and the answer is f(0, len(s) - 1).

s[start:end+1] refers to the contiguous substring that includes s[start] but excludes s[end+1].

This is certainly a valid way to define the state, but with this definition the state transition equation becomes hard to write (in fact, many dynamic programming problems have this issue, such as the longest increasing subsequence problem). So how exactly should we define the state? I'll come back to that when we discuss state transition equations. Let's look at the next problem first.

The second problem: "10. Regular Expression Matching", hard difficulty

Given a string s and a pattern p, implement regular expression matching with support for '.' and '*', where '.' matches any single character and '*' matches zero or more of the preceding element. The match must cover the entire string s, not just part of it. Example 1: Input: s = "aa", p = "a" Output: false Explanation: "a" cannot match the entire string "aa". Example 2: Input: s = "aa", p = "a*" Output: true Explanation: '*' means zero or more of the preceding element, here 'a', so "aa" can be seen as 'a' repeated. Example 3: Input: s = "ab", p = ".*" Output: true Explanation: ".*" means zero or more ('*') of any character ('.'). Example 4: Input: s = "aab", p = "c*a*b" Output: true Explanation: 'c' appears zero times and 'a' is repeated, so it matches "aab". Example 5: Input: s = "mississippi", p = "mis*is*p*." Output: false Constraints: 0 <= s.length <= 20; 0 <= p.length <= 30; s may be empty and contains only lowercase letters a-z; p may be empty and contains only lowercase letters a-z plus the characters '.' and '*'; it is guaranteed that whenever '*' appears, it is preceded by a valid character.

There are two arguments in this problem, one is s, one is p. Following the above line of thought, we have two ways to define the state.

  • One way to define the state is f(s1, p1), meaning whether p1 matches the string s1, where s1 is a substring of the string s in the problem and p1 is a substring of the string p; the answer is then f(s, p).
  • Another is f(s_start, s_end, p_start, p_end), meaning whether p[p_start:p_end+1] matches the string s[s_start:s_end+1]; the answer is then f(0, len(s) - 1, 0, len(p) - 1).

For this problem there is actually a much simpler way to define the state, though the basic idea is the same. I'll keep you in suspense for now; we'll reveal it when we get to the transition equation.

Once you've defined the state, you'll find that the time and space complexity become obvious. That's another reason I keep saying the state definition is the heart of dynamic programming.

How obvious is the complexity of time and space?

First, space complexity. As I said before, dynamic programming is essentially table lookup, so its space complexity is basically the size of the table; more concretely, the size of the memo hash table above. And the size of the memo is basically the number of states. What is the number of states? That depends on how you define the state. Take f(s1, p1) above: the number of states is the Cartesian product of the sizes of the ranges of the parameters. s1 has len(s) possible values and p1 has len(p) possible values, so the total number of states is len(s) * len(p), and the space complexity is $O(m * n)$, where m and n are the lengths of s and p respectively.

When I say the space complexity equals the number of states, I'm ignoring state compression for the moment.

Then there is time complexity, which is harder to pin down. But since we have to enumerate all the states anyway, the time complexity is at least the total number of states, so for the state defined above it is $O(m * n)$.

If for each state you also need to iterate over every character of s, then the time complexity becomes $O(m^2 * n)$.

In the stair-climbing example above, we defined the state f(n) as the number of ways to reach the n-th step, so the total number of states is n, and both space and time complexity are on the order of $n$. (Still not considering scrolling array optimization.)

62. Unique Paths

A robot is located in the upper left corner of an M x N grid (the starting point is marked "Start" in the figure below). The robot can only move down or to the right one step at a time. The robot tries to reach the lower right corner of the grid (marked "Finish" in the image below). How many different paths are there?

This problem is very similar to the stair-climbing problem above, except it goes from one dimension to two; I call it two-dimensional stair climbing. Many problems are re-skinned variants like this, and you'll come to recognize them over time.

For this problem, I define f(i, j) as the total number of ways the robot can reach the cell (i, j). The total number of states is then the Cartesian product of the possible values of i and j, which is m * n.
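A minimal memoized sketch of this state definition might look like the following (cells are 0-indexed, with (0, 0) as the start):

from functools import lru_cache

def unique_paths(m, n):
    @lru_cache(None)
    def f(i, j):
        # number of ways for the robot to reach cell (i, j) from (0, 0)
        if i == 0 or j == 0:                  # first row or first column: only one way
            return 1
        return f(i - 1, j) + f(i, j - 1)      # come from above or from the left
    return f(m - 1, n - 1)

print(unique_paths(3, 7))  # 28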

In general, the space and time complexity of dynamic programming is based on the number of states, and the number of states is usually the Cartesian product of the parameters; this is determined by the no-aftereffect property of dynamic programming.

The critical condition is usually the easiest part

Once you have defined the state, there are only three things left:

  1. Critical condition
  2. State transition equation
  3. Enumeration state

In the stair climbing problem above, if we use f(n) to indicate how many ways to climb n steps, then:

f(1) and f(2) are the boundary conditions; f(n) = f(n-1) + f(n-2) is the state transition equation.

Let me express this in dynamic programming form:

dp[0] and dp[1] are the boundary conditions; dp[n] = dp[n-1] + dp[n-2] is the state transition equation.

You can see how similar memoization recursion is to dynamic programming.

In fact, the critical condition is relatively simple; do a few problems and you will get a feel for it. The hard parts are finding the state transition equation and enumerating the states, and both of them build on a state that has already been abstracted. For example, if we use f(n) to describe the number of ways to climb n steps, then f(1), f(2), ... are the individual states.

Now that we’ve defined the state, let’s look at the state transition equation.

State transition equation

In dynamic programming, the state of the current stage is usually the result of the state of the previous stage and the decision made at the previous stage. There are two key phrases here:

  • The previous state
  • Decisions made at the previous stage

In other words, given the state s[k] at stage k and the decision choice(s[k]) made at that stage, the state s[k+1] at stage k+1 is completely determined. As a formula: s[k] + choice(s[k]) -> s[k+1]. This is the state transition equation. Note that there may be more than one possible choice, so each stage can produce more than one state s[k+1].

Continuing with the stair problem: the n-th step can only be reached from step n-1 or step n-2, so the number of ways to reach the n-th step is the number of ways to reach step n-1 plus the number of ways to reach step n-2.

This understanding is the core, and it is our state transition equation; in code, f(n) = f(n-1) + f(n-2).

In practice, the transition may be as intuitive as in the stair-climbing problem and easy to spot. It can also be deeply hidden, or require extra dimensions. If you really can't see it, try drawing the recursion tree to open up your thinking; that's what I did when I was first learning dynamic programming. Once you've done enough problems, your intuition will kick in and you won't need to draw anymore.

For example, once we have defined the state, we can identify the initial state and the target state from it. Then we focus on the optimal substructure and think about how each state expands to get closer and closer to the target state.

As shown in the figure below:

That's about it for the theory; let's digest it with a few concrete problems.

OK, now it's time to reveal the answers. We skipped the transition equation for the last two problems, so let's fill that in here.

The first problem: "5. Longest Palindromic Substring", medium difficulty. Neither of the states we defined earlier works well, but with a small change the transition equation becomes much easier to write. This technique shows up in many dynamic programming problems, such as the longest increasing subsequence, so it's worth mastering.

Take the f(start, end) defined above, meaning the longest palindromic substring within s[start:end+1]. We keep the same parameters but change the meaning to: the longest palindromic substring of s[start:end+1] that must include both start and end. With this definition, f(start, end) doesn't even need to return a length; a Boolean is enough. If it returns true, the longest palindromic substring with these endpoints has length end - start + 1; otherwise it contributes 0.

Thus, the transition equation can be written as:

f(i, j) = f(i + 1, j - 1) and s[i] == s[j]
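Purely for illustration, a minimal memoized sketch built on this Boolean-valued f(start, end) could look like this, enumerating every (start, end) pair and keeping the longest one that returns true:

from functools import lru_cache

def longest_palindrome(s):
    @lru_cache(None)
    def f(start, end):
        # True if s[start:end+1] is a palindrome (so a length of end - start + 1 counts)
        if start >= end:                          # empty or single character
            return True
        return s[start] == s[end] and f(start + 1, end - 1)

    best = ""
    for start in range(len(s)):
        for end in range(start, len(s)):
            if f(start, end) and end - start + 1 > len(best):
                best = s[start:end + 1]
    return best

print(longest_palindrome("babad"))  # "bab" ("aba" would also be a valid answer)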

The second problem: "10. Regular Expression Matching", hard difficulty.

Earlier we defined f(s_start, s_end, p_start, p_end) to mean whether the substring p[p_start:p_end+1] matches the string s[s_start:s_end+1].

In fact, there is a simpler definition: f(s_end, p_end), meaning whether p[:p_end+1] matches s[:s_end+1]. That is, the starting point is fixed at index 0. This is also a very common technique, so be sure to master it.

Thus, the transition equation can be written as:

  1. If p[j] is a lowercase letter, whether it matches depends on whether s[i] equals p[j]:

$$ f(i,j)=\left\{ \begin{aligned} f(i-1, j-1) & & s[i] == p[j] \\ false & & s[i] != p[j] \\ \end{aligned} \right. $$

  2. If p[j] == '.', it matches any single character, so:

f(i, j) = f(i - 1, j - 1)

  3. If p[j] == '*':

$$ f(i,j)=\left\{ \begin{aligned} f(i-1, j) & & \text{match one more time} \\ f(i, j-2) & & \text{match zero times} \\ \end{aligned} \right. $$

I believe you can work through this analysis, and writing the code is then not difficult. For the full code, see my LeetCode solutions repository; we won't go through it here.
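That said, purely for reference, a minimal memoized sketch of these transition ideas might look like the following; it uses prefix lengths i and j (how many characters of s and p have been consumed) instead of the exact indices above:

from functools import lru_cache

def is_match(s, p):
    @lru_cache(None)
    def f(i, j):
        # does s[:i] match p[:j]?
        if j == 0:
            return i == 0                         # an empty pattern matches only an empty string
        if p[j - 1] == '*':
            # match zero of the preceding element, or consume one more character of s
            return f(i, j - 2) or (i > 0 and p[j - 2] in (s[i - 1], '.') and f(i - 1, j))
        # '.' or a literal character must consume exactly one character of s
        return i > 0 and p[j - 1] in (s[i - 1], '.') and f(i - 1, j - 1)

    return f(len(s), len(p))

print(is_match("aab", "c*a*b"))               # True
print(is_match("mississippi", "mis*is*p*."))  # False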

Did you notice? All of the state transition equations above are described with mathematical formulas. Yes, every transition equation can be described this way. I encourage you, for every dynamic programming problem you do, to write out such a formula. You may find it tedious at first, but trust me, if you keep at it you'll find yourself getting stronger, just as I urge you to analyze the complexity of every problem. Dynamic programming requires not only understanding the transition equation but also being able to write it out as a complete mathematical formula, as I did above.

Does writing out a state transition equation feel troublesome? Here's a small tip: use LaTeX; its syntax makes it very easy to write formulas like these. In addition, the LeetCode plugin includes a one-click feature for generating dynamic programming transition equation formulas, so you can produce the formula as quickly as possible. Plugin address: https://leetcode-pp.github.io…

There is really no silver bullet for state transition equations; different problems call for different approaches. The state transition equation is the hardest and most critical part of solving a dynamic programming problem, so practice a lot to build your problem sense. Next, let's look at something less difficult but more often overlooked by beginners: how to enumerate the states.

Of course, there may be more than one valid state transition equation, and different equations can differ greatly in efficiency. Which one is better is somewhat a matter of feel; you build that intuition by doing problems.

How to Enumerate States

The key to enumerating states is to enumerate all of them without repetition or omission.

  • If it’s a one-dimensional state, then we can do it with a loop.
for i in range(1, n + 1):
  pass

  • If it’s a two-dimensional state, then we can do it with a two-level loop.
for i in range(1, m + 1):
  for j in range(1, n + 1):
    pass

  • And so on for higher dimensions.

But the actual operation process has many details, such as:

  • For a one-dimensional state, do I enumerate from the left or from the right (i.e. iterate left to right, or right to left)?
  • For a two-dimensional state, do I start from the top-left, top-right, bottom-left or bottom-right corner?
  • What about the order of the inner and outer loops (can they be swapped)?
  • And so on.

In fact, this thing is related to many factors, it is difficult to summarize a rule, and I think it is completely unnecessary to summarize the rule.

But here’s the key point, which is this:

  • If you are not using the scrolling array technique, the traversal order is dictated by the state transition equation. For example:
for i in range(1, n + 1):
  dp[i] = dp[i - 1] + 1

Here we need to traverse from left to right, simply because dp[i] depends on dp[i - 1], so dp[i - 1] must already be computed when we compute dp[i].

It’s the same thing in two dimensions, so you can try it out.

  • If you are using the scrolling array technique, you may be free to traverse in either direction, but different traversal orders usually mean different things. For example, compressing two dimensions into one:
for i in range(1, n + 1):
  for j in range(1, n + 1):
    dp[j] = dp[j - 1] + 1;

That works: dp[j - 1] here actually refers to dp[i][j - 1] before compression.

And:

for j in range(n, 0, -1):
  dp[j] = dp[j - 1] + 1

That works too, but dp[j - 1] now actually refers to dp[i - 1][j - 1] before compression. Which traversal you should use therefore depends on the problem. I wrote a dedicated article on the complete knapsack pattern, using "1449. Form Largest Integer With Digits That Add up to Target" as a concrete example of how the different traversal orders actually differ; I strongly recommend reading it. A small sketch comparing the two directions also follows at the end of this list.

  • As for the inner and outer loops, the principle is very similar.

This is more subtle; you can refer to the article on 0518. coin-change-2 to understand it.
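As a generic illustration of why the direction matters after compressing to one dimension (a sketch, not tied to any specific problem above): in the 0-1 knapsack, traversing the capacity from right to left keeps dp[c - w] as "previous row" data so each item is used at most once, while in the complete knapsack, traversing left to right lets dp[c - w] already include the current item, so it can be reused.

def zero_one_knapsack(weights, values, capacity):
    # 0-1 knapsack: each item used at most once
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # reverse: dp[c - w] still holds the previous row
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def complete_knapsack(weights, values, capacity):
    # complete knapsack: each item can be reused any number of times
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(w, capacity + 1):       # forward: dp[c - w] already reflects the current row
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(zero_one_knapsack([1, 2, 3], [2, 2, 4], 4))   # 6 (take the items of weight 1 and 3)
print(complete_knapsack([1, 2, 3], [2, 2, 4], 4))   # 8 (take the weight-1 item four times)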

summary

How to determine the critical condition is usually relatively simple, do a few more problems can quickly grasp.

As for determining the state transition equation, that is genuinely hard. Fortunately there are recurring patterns: for one string, the state is usually dp[i], meaning "... of the string s ending at index i"; for two strings, it is usually dp[i][j], meaning "... where s1 ends at index i and s2 ends at index j". So when you meet a new problem you can try these patterns first; if none fits, honestly draw the recursion tree and keep observing to improve your problem sense.

As for how to enumerate the states: if there is no scrolling array, the transition equation determines the traversal order. If a scrolling array is used, pay attention to how the dp entries correspond before and after compression.

Dynamic programming vs. memoization recursion

We solved the stair-climbing problem with memoization recursion. So how does dynamic programming solve it?

In dynamic programming you have a dp table, and having a dp table is just like having the memo in memoization recursion.

When we build the dp table, the array index usually corresponds to the recursive function's parameters, and the array value corresponds to the recursive function's return value.

So there seems to be no conceptual difference between the two; the only difference is how they are written. That's right. But this difference in writing brings a few related practical differences, which we'll discuss later.

What does the code look like if we use dynamic programming for the stair climbing problem above? Let’s take a look:

function climbStairs(n) {
  if (n == 1) return 1;
  const dp = new Array(n);
  dp[0] = 1;
  dp[1] = 2;

  for (let i = 2; i < n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }
  return dp[dp.length - 1];
}

If you don't see it right away, that's fine; let's compare with the recursive code from before, just renaming the function:

function dp(n) {
  if (n === 1) return 1;
  if (n === 2) return 2;
  return dp(n - 1) + dp(n - 2);
}

After this change, compare dp[n] with dp(n). See it now? The only difference is that recursion enumerates states via the call stack, while dynamic programming enumerates them via iteration.

If multi-dimensional enumeration is required, iteration can also be used inside a memoization recursion, as in the longest increasing subsequence problem.
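For example, here is a memoized-recursion sketch of the longest increasing subsequence, with iteration over earlier states inside the recursive function (a sketch assuming the usual problem statement):

from functools import lru_cache

def length_of_lis(nums):
    if not nums:
        return 0

    @lru_cache(None)
    def f(i):
        # length of the longest increasing subsequence ending at index i
        best = 1
        for j in range(i):                     # iterate over all earlier states
            if nums[j] < nums[i]:
                best = max(best, f(j) + 1)
        return best

    return max(f(i) for i in range(len(nums)))

print(length_of_lis([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (for example 2, 3, 7, 101)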

The dynamic programming table lookup process would look like this if it were plotted:

The dotted line represents the table lookup process

Scrolling array optimization

For the stair-climbing problem we don't even need a one-dimensional array; two variables are enough, giving O(1) space complexity. Code:

function climbStairs(n) {
  if (n === 1) return 1;
  if (n === 2) return 2;

  let a = 1;
  let b = 2;
  let temp;

  for (let i = 3; i <= n; i++) {
    temp = a + b;
    a = b;
    b = temp;
  }

  return temp;
}

We can do this because the state transition equation of the stair-climbing problem only involves the previous two states, so only those two need to be stored. Many dynamic programming problems allow this kind of space-saving trick, which is called the scrolling (rolling) array technique.

The stair-climbing problem is about the simplest dynamic programming there is, because it involves only a single varying factor. Problems involving multiple factors are more complicated, such as the famous knapsack problem and the gold mining problem.

With a single factor we need at most a one-dimensional array; with higher dimensions, as in the knapsack problem, we need a two-dimensional array.

To answer the earlier question: there is no essential difference between memoization recursion and dynamic programming other than that one uses recursion and the other uses iteration. So what's the practical difference? I think the biggest one is that memoization recursion cannot use scrolling array optimization (try it on the stair-climbing problem above), and the call stack of memoization carries extra overhead (the complexity is the same; think of it as a larger constant factor on space), though usually not enough to cause TLE or MLE. My suggestion: if there is no requirement to optimize space, just use memoization; otherwise use iterative DP.

Again:

  • If recursion works backwards from the result of the problem until it shrinks to a trivial size, then dynamic programming starts from the trivial cases and gradually expands, building up to the original problem via the optimal substructure.
  • Memoization recursion is not fundamentally different from dynamic programming: both enumerate the states and derive the solution step by step from the relations between states.
  • Dynamic programming usually performs a bit better, partly because of recursion's stack overhead and partly because of the scrolling array technique.

Basic types of dynamic programming

  • Backpack DP (we have a special topic on this)
  • Interval DP

Interval DP is an extension of linear DP. When dividing the problem into stages, it depends heavily on the order in which elements appear and on which elements of the previous stage are merged. If the state $f(i,j)$ represents the maximum value obtainable by merging all elements from index $i$ to index $j$, then $f(i,j)=\max\{f(i,k)+f(k+1,j)+cost\}$, where $cost$ is the cost of merging the two groups of elements.

Characteristics of interval DP:

Merge: the integration of two or more parts, or vice versa;

Features: Can decompose the problem into the form of pair-to-pair combination;

Solve: Set the optimal value for the whole problem, enumerate the merging points, decompose the problem into two parts, and finally merge the optimal value of the two parts to the optimal value of the original problem.

Two questions are recommended:

  • 877. Stone Game
  • 312. Burst Balloons

  • State compression DP (bitmask DP)

For state compression DP, please refer to an article I wrote before: What is state compression DP? That write-up will get you started.

  • Digit DP

Digit DP problems usually look like this: given a closed interval, count how many numbers in that interval satisfy some condition.

  • Increasing Digits

  • Count DP and probability DP

I won't go into those two in detail, because there is no fixed pattern for them.

Counting DP is listed here for two reasons:

  1. Just to let you know that it does exist.
  2. Counting is a by-product of dynamic programming.

Probability DP is special: its state transition formula generally expresses how likely it is to move into a state from some other state, which looks more like computing an expectation, so it is also called expectation DP.

  • 91. Decode Ways
  • 837. New 21 Game

For more problem types and recommended problems, see the learning route in the LeetCode plugin. To get the plugin, follow the 力扣加加 official account and reply "插件" (plugin).

When should you use memoization recursion?

  • Memoization recursion is convenient when you traverse from both ends of an array toward the middle, which is exactly interval DP. For example the stone game problems, or this problem: https://binarysearch.com/prob…

If you write the interval DP iteratively, the traversal looks something like this:

class Solution:
    def solve(self, s):
        n = len(s)
        dp = [[0] * n for _ in range(n)]
        # traverse the left boundary i in reverse order
        for i in range(n - 1, -1, -1):
            for j in range(i + 1, n):
                pass  # do something
        return dp[0][n - 1]

If you use memoization recursion instead, you don't need to think about the traversal order at all.

Code:

from functools import lru_cache

class Solution:
    def solve(self, s):
        @lru_cache(None)
        def helper(l, r):
            if l >= r:
                return 0

            if s[l] == s[r]:
                return helper(l + 1, r - 1)

            return 1 + min(helper(l + 1, r), helper(l, r - 1))

        return helper(0, len(s) - 1)
  • Memoization recursion is also the better choice when the reachable states are scattered rather than contiguous, like the moves of a knight on a chessboard.

So when should you not use memoization recursion? There is really only one case: the ordinary dp table has one important capability that memoization recursion cannot replace, namely scrolling array optimization. If you need to optimize space, always use the dp table.

Warm-up

Now that the theory is done, let's try it out on a problem.

Let’s practice our hands with a classic backpack problem.

Problem: 322. Coin Change

You are given coins of different denominations and a total amount of money. Write a function to compute the fewest number of coins needed to make up that amount. If the amount cannot be made up by any combination of the coins, return -1. You may assume that you have an infinite number of each kind of coin. Example 1: Input: coins = [1, 2, 5], amount = 11 Output: 3 Explanation: 11 = 5 + 5 + 1

This problem has two parameters: coins and amount.

We can define the state f(i, j) as the minimum number of coins needed to make up amount j using only the first i coins, so the answer is f(len(coins) - 1, amount).

By basic combinatorics, if we tracked the exact selection of coins there would be $2^{len(coins)}$ selection states; combined with the amount, the total number of states would be the Cartesian product, $2^{len(coins)} * (amount + 1)$.

The + 1 is because making up 0 yuan also counts as a state.

With that in mind, what we need to figure out is how the states transition, that is, how to get from the trivial states to f(len(coins) - 1, amount).

How to determine the state transition equation? We need:

  • Focus on the optimal substructure
  • Make a choice and pick the best option among the choices (or sum them, if it is a counting DP)

In this case, we have two options:

  • Choose coins[i]
  • Do not choose coins[i]

This enumeration is certainly complete. But by simply choosing or not choosing each coin, the number of selection states is $2^n$, where n is the length of coins.

Simply enumerating like this will definitely time out, because the number of states is already exponential.

In fact, exactly which coins were chosen is not important; what matters is how much of the amount remains.

Therefore, we can define f(i, j) as the minimum number of coins needed to make up amount j using the first i coins, without caring which specific coins were chosen.

For example, with coins = [1,2,3], choosing [1,2] and choosing [3] are different selections (both sum to 3), but we don't care about the difference at all; whichever contributes more to the result (fewer coins) is the one we keep.

Using coins = [1,2,3] and amount = 6, we can draw the following recursion tree.

(image from https://leetcode.com/problems…

So the transition equation is dp[i][j] = min(dp[i][j - 1], dp[i - coins[j - 1]][j] + 1): the minimum number of coins needed to make up amount i with the first j coins is the smaller of not using the j-th coin (dp[i][j - 1]) and using it at least once (dp[i - coins[j - 1]][j] + 1).

The formula is:

$$ dp[i][j]=\left\{ \begin{aligned} min(dp[i][j - 1], dp[i - coins[j - 1]][j] + 1) & & i >= coins[j - 1] \\ dp[i][j - 1] & & i < coins[j - 1] \\ \end{aligned} \right. $$

Here amount + 1 plays the role of "no solution". Since the coin denominations are all positive integers, no valid solution could ever need amount + 1 coins.

code

Memoization recursion:

from functools import lru_cache

class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        @lru_cache(None)
        def dfs(amount):
            if amount < 0: return float('inf')
            if amount == 0: return 0
            ans = float('inf')
            for coin in coins:
                ans = min(ans, 1 + dfs(amount - coin))
            return ans
        ans = dfs(amount)
        return -1 if ans == float('inf') else ans

Two-dimensional dp:

class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        if amount < 0:
            return -1
        dp = [[amount + 1 for _ in range(len(coins) + 1)]
              for _ in range(amount + 1)]
        for j in range(len(coins) + 1):
            dp[0][j] = 0
        for i in range(1, amount + 1):
            for j in range(1, len(coins) + 1):
                if i - coins[j - 1] >= 0:
                    dp[i][j] = min(dp[i][j - 1], dp[i - coins[j - 1]][j] + 1)
                else:
                    dp[i][j] = dp[i][j - 1]
        return -1 if dp[-1][-1] == amount + 1 else dp[-1][-1]

dp[i][j] depends only on dp[i][j - 1] and dp[i - coins[j - 1]][j] + 1, which is a signal that we can optimize the space down to one dimension.

One-dimensional DP (scrolling array optimization):

class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        dp = [amount + 1] * (amount + 1)
        dp[0] = 0

        for j in range(len(coins)):
            for i in range(1, amount + 1):
                if i >= coins[j]:
                    dp[i] = min(dp[i], dp[i - coins[j]] + 1)

        return -1 if dp[-1] == amount + 1 else dp[-1]

Recommended Practice Questions

Finally, I recommend a few problems; try to solve each of them with both memoization recursion and dynamic programming. When using dynamic programming, optimize the space with a scrolling array whenever possible.

  • 0091.decode-ways
  • 0139.word-break
  • 0198.house-robber
  • 0309.best-time-to-buy-and-sell-stock-with-cooldown
  • 0322.coin-change
  • 0416.partition-equal-subset-sum
  • 0518.coin-change-2

conclusion

This article summarized two commonly used techniques in algorithms: recursion and dynamic programming. For recursion, you can practice with tree problems. For dynamic programming, work through the problems recommended above, and then consider going through LeetCode's dynamic programming tag.

When you are first learning dynamic programming, try solving a problem with memoization recursion and then transforming it into dynamic programming; practice this a few times and you'll get a feel for it. After that, practice the scrolling array technique, which is useful and relatively easy to pick up.

The core of dynamic programming is to define the state. Once you define the state, everything else will follow.

The difficulty of dynamic programming lies in enumerating all the states (without repetition or omission) and in finding the state transition equation.

reference

  • OI Wiki's DP section is a very comprehensive resource and recommended reading for everyone, though it is better suited to readers who already have some foundation; you can read it alongside this article.

In addition, you can try Recursion I in LeetCode's Explore section for interactive learning.