preface

When grinding through LeetCode, we often run into dynamic programming problems. Dynamic programming is a classic, technique-heavy topic that big companies love to ask about in interviews. Today, let's learn the patterns of dynamic programming together. If anything in this article is wrong, please point it out. Thanks!

  • What is dynamic programming?
  • The core idea of dynamic programming
  • An example goes into dynamic programming
  • Problem solving routines of dynamic programming
  • Leetcode case study

WeChat official account: a boy picking up snails

What is dynamic programming?

Dynamic programming (DP) is a method used in mathematics, management science, computer science, economics, and bioinformatics to solve complex problems by breaking the original problem down into relatively simple subproblems. Dynamic programming is typically applied to problems with overlapping subproblems and optimal substructure.

"Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems."

The definition above is from Wikipedia, but it still feels a bit abstract. Put simply, dynamic programming means: given a problem, break it down into subproblems until the subproblems can be solved directly; save the answers to the subproblems to avoid repeated computation; then derive the solution to the original problem from the answers to the subproblems.

In general, these subproblems are very similar and can be related by a recurrence. Dynamic programming then focuses on solving each subproblem only once, to avoid repeated computation. The Fibonacci sequence, for example, can be seen as the entry-level classic dynamic programming problem.
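As a taste of that idea, here is a minimal sketch of Fibonacci with a memo (the class and method names are ours, for illustration): each value is computed once and then reused.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal memoized Fibonacci: each subproblem is solved once and remembered.
public class Fib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    public static long fib(int n) {
        if (n <= 2) return 1;                  // base cases: fib(1) = fib(2) = 1
        Long cached = memo.get(n);
        if (cached != null) return cached;     // answer already remembered
        long result = fib(n - 1) + fib(n - 2); // solve each subproblem once
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025, returned instantly
    }
}
```

Without the memo, fib(50) would recompute the same values billions of times; with it, the call returns immediately.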

Core idea of dynamic programming

The core idea of dynamic programming is to break the problem into subproblems, remember the past, and avoid repeated computation.

Let’s take a look at a popular example on the Internet:

  • A: “1+1+1+1+1+1+1+1 = ?”
  • A: “What is the value of the expression above?”
  • B: (after counting) “8.”
  • A: “Now write ‘1+’ on the left of that expression. What is the value now?”
  • B: (instantly) “9.”
  • A: “How did you know the answer so quickly?”
  • B: “Just add one to the eight.”
  • A: “Right — you didn’t have to recalculate, because you remembered that the first expression equals 8! Dynamic programming can likewise be described as ‘remembering answers to save time’.”

An example to take you into dynamic programming: the frog jumping stairs problem

Brute-force recursion

A frog can jump up one step or two steps at a time. How many ways can the frog jump up a staircase of 10 steps?

Some of you may be a little confused the first time you see this problem and not know how to approach it. In fact, think about it this way:

  • To get to the tenth step, the frog either jumps to the ninth step and then jumps one step, or jumps to the eighth step and then jumps two steps.
  • Similarly, to get to the ninth step, it either jumps to the eighth step and then one step, or jumps to the seventh step and then two steps.
  • To get to the eighth step, it either jumps to the seventh step and then one step, or jumps to the sixth step and then two steps.

If we define f(n) as the number of ways to jump up to the nth step, the following formulas clearly hold:

f(10) = f(9) + f(8)
f(9) = f(8) + f(7)
f(8) = f(7) + f(6)
...
f(3) = f(2) + f(1)

The general formula is: f(n) = f(n-1) + f(n-2)

So what are f(2) and f(1)?

  • When there are only two steps, there are two ways to jump: the first is to jump two steps directly, the second is to jump one step and then another step. So f(2) = 2.
  • When there is only one step, there is only one way to jump, so f(1) = 1.

So we can recursively solve this problem:

```java
class Solution {
    public int numWays(int n) {
        if (n == 1) {
            return 1;
        }
        if (n == 2) {
            return 2;
        }
        return numWays(n - 1) + numWays(n - 2);
    }
}
```

Submit it to LeetCode, and we hit a problem: Time Limit Exceeded.

Why does it time out? Where does the recursion spend its time? Let's draw the recursion tree:

  • To compute the original problem f(10), we first need to compute the subproblems f(9) and f(8).
  • To compute f(9), we first need to compute the subproblems f(8) and f(7), and so on.
  • The recursion tree ends when we reach f(2) and f(1).

Let’s first look at the time complexity of this recursion:

Time complexity of recursion = time to solve one subproblem × number of subproblems
  • Solving one subproblem, f(n-1) + f(n-2), is a single addition, so it takes O(1).
  • The number of subproblems equals the total number of nodes in the recursion tree, which is roughly 2^n − 1, so it is O(2^n).

So, for the frog jumping problem, the time complexity of the recursive solution is O(1) * O(2^n) = O(2^n) — exponential blowup. If n is large, a timeout is to be expected.
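To make the blowup concrete, here is a small illustrative experiment (the counter class is ours, not part of the solution) that counts how many recursive calls the naive version makes:

```java
// Count the number of calls made by the naive recursion, to observe the
// roughly-exponential growth directly.
public class CallCounter {
    static long calls = 0;

    static int numWays(int n) {
        calls++;
        if (n == 1) return 1;
        if (n == 2) return 2;
        return numWays(n - 1) + numWays(n - 2);
    }

    public static void main(String[] args) {
        for (int n = 10; n <= 30; n += 10) {
            calls = 0;
            numWays(n);
            System.out.println("n = " + n + " -> " + calls + " calls");
        }
    }
}
```

For n = 10 the naive version already makes 109 calls, and each extra step multiplies the count by roughly the golden ratio — exactly the O(2^n)-style growth analyzed above.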

If you go back and look at the recursion tree, you'll see a lot of repeated computation: f(8) is computed twice, f(7) three times, and so on. What makes this recursive algorithm inefficient is precisely this large amount of repeated computation!

Since there is so much repeated computation, we can save each computed answer in a memo. The next time an answer is needed, we check the memo first: if it's there, we use it directly; if not, we compute it and store it. That saves the time spent on repeated computation. This is the recursive solution with a memo.

Recursive solution with memo (top down)

You typically use an array or hash map as the memo.

  • In the first step, f(10) = f(9) + f(8): both f(9) and f(8) need to be computed and then added to the memo, as follows:

  • In the second step, f(9) = f(8) + f(7) and f(8) = f(7) + f(6): since f(8) is already in the memo, it can be skipped; f(7) and f(6) still need to be computed and added to the memo.

  • In the third step, f(8) = f(7) + f(6): f(8), f(7), and f(6) are all already in the memo, so those branches can be pruned.

So, with the memo, the recursion tree is pruned down to a bare trunk, as follows:

For the recursive algorithm with a memo, the number of subproblems = the number of tree nodes = n, and solving one subproblem is still O(1), so the time complexity is O(n). Next, let's use a memoized recursion to fix the frog jumping timeout; the code is as follows:

```java
import java.util.HashMap;
import java.util.Map;

public class Solution {
    // Use a HashMap as the memo
    Map<Integer, Integer> tempMap = new HashMap<>();

    public int numWays(int n) {
        // n = 0 is also counted as 1 way
        if (n == 0) {
            return 1;
        }
        if (n <= 2) {
            return n;
        }
        // If the answer is already in the memo, return it directly
        if (tempMap.containsKey(n)) {
            return tempMap.get(n);
        } else {
            // Not in the memo yet: compute recursively, take the result
            // mod 1000000007, and save it to the memo
            tempMap.put(n, (numWays(n - 1) + numWays(n - 2)) % 1000000007);
            return tempMap.get(n);
        }
    }
}
```

Submit it to LeetCode again; as shown in the figure, this time it passes comfortably:

And actually, you can solve this problem with dynamic programming.

Bottom-up dynamic programming

The basic idea of bottom-up dynamic programming is the same as that of memoized recursion — reduce repeated computation — and the time complexity is similar. But:

  • Memoized recursion expands from f(10) down to f(1); it is called the top-down solution.
  • Dynamic programming builds the solutions of larger problems from the solutions of smaller, overlapping subproblems. It pushes up from f(1) to f(10), so it is called the bottom-up solution.

Dynamic programming has several typical characteristics: optimal substructure, state transition equation, boundary and overlapping subproblem. In the frog step problem:

  • f(n-1) and f(n-2) are called the optimal substructure of f(n).
  • f(n) = f(n-1) + f(n-2) is called the state transition equation.
  • f(1) = 1 and f(2) = 2 are the boundary.
  • In f(10) = f(9) + f(8) and f(9) = f(8) + f(7), f(8) is an overlapping subproblem.

Let's look at the bottom-up solution: going from f(1) up to f(10), a simple for loop will do the trick, as follows:

The memoized recursion needs O(n) space for the memo, but as the figure above shows, f(n) only depends on the previous two values, so two variables a and b are enough to store them, making the space complexity O(1).

The dynamic programming code is as follows:

```java
public class Solution {
    public int numWays(int n) {
        if (n <= 1) {
            return 1;
        }
        if (n == 2) {
            return 2;
        }
        int a = 1;
        int b = 2;
        int temp = 0;
        for (int i = 3; i <= n; i++) {
            temp = (a + b) % 1000000007;
            a = b;
            b = temp;
        }
        return temp;
    }
}
```
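As a quick sanity check (an illustration only, with our own class name), the O(1)-space loop can be compared against the plain recursive definition for small n — both should produce the same answers:

```java
// Cross-check: the bottom-up O(1)-space loop must agree with the naive
// recursive definition f(n) = f(n-1) + f(n-2), f(1) = 1, f(2) = 2.
public class CrossCheck {
    static int recursive(int n) {
        if (n <= 2) return n;
        return recursive(n - 1) + recursive(n - 2);
    }

    static int bottomUp(int n) {
        if (n <= 2) return n;
        int a = 1, b = 2, temp = 0;
        for (int i = 3; i <= n; i++) {
            temp = (a + b) % 1000000007; // same modulus as the solution above
            a = b;
            b = temp;
        }
        return temp;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 20; n++) {
            if (recursive(n) != bottomUp(n)) {
                throw new AssertionError("mismatch at n = " + n);
            }
        }
        System.out.println("all values from 1 to 20 match");
    }
}
```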

Problem solving routines of dynamic programming

What kind of problems can you consider using dynamic programming to solve?

If all possible answers to a problem can be enumerated exhaustively, and overlapping subproblems appear during that enumeration, you can consider dynamic programming.

For example, optimization problems such as the longest increasing subsequence, minimum edit distance, the knapsack problem, and the coin change problem are classic applications of dynamic programming.

The dynamic programming solution approach

The core idea of dynamic programming is to break the problem into subproblems, remember the past, and avoid repeated computation; and dynamic programming generally works bottom-up. Based on the frog jumping problem, here is the dynamic programming thought process I've summarized:

  • Exhaustive analysis
  • Determine the boundary
  • Find the pattern and determine the optimal substructure
  • Write the state transition equation

1. Exhaustive analysis

  • When there is 1 step, there is one way to jump: f(1) = 1.
  • When there are only two steps, there are two ways to jump: jump two steps directly, or jump one step and then another step, so f(2) = 2.
  • When there are three steps, to get to the third step, either jump to the second step and then jump one step, or jump to the first step and then jump two steps. So f(3) = f(2) + f(1) = 3.
  • When there are four steps, to get to the fourth step, either jump to the third step and then jump one step, or jump to the second step and then jump two steps. So f(4) = f(3) + f(2) = 5.
  • When the steps are 5……
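The exhaustive analysis above can be double-checked with a tiny brute-force enumeration (an illustrative sketch; the names are ours) that counts every valid jump sequence directly:

```java
// Brute force: try "jump 1" and "jump 2" at every position and count the
// sequences that land exactly on the top step.
public class Enumerate {
    static int countWays(int remaining) {
        if (remaining == 0) return 1; // landed exactly on the top: one valid sequence
        if (remaining < 0) return 0;  // overshot: not a valid sequence
        return countWays(remaining - 1) + countWays(remaining - 2);
    }

    public static void main(String[] args) {
        System.out.println(countWays(3)); // 3, matching f(3) = 3
        System.out.println(countWays(4)); // 5, matching f(4) = 5
    }
}
```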

2. Determine the boundary

Through exhaustive analysis, we find that when the number of steps is 1 or 2, the number of ways is immediately clear: f(1) = 1 and f(2) = 2. When n >= 3, the pattern f(3) = f(2) + f(1) = 3 already appears, so f(1) = 1 and f(2) = 2 form the boundary of the frog jumping problem.

3. Find the pattern and determine the optimal substructure

When n >= 3, the pattern f(n) = f(n-1) + f(n-2) holds, so f(n-1) and f(n-2) are called the optimal substructure of f(n). What is optimal substructure? Here's an explanation:

A dynamic programming problem is really a recurrence problem. Assuming the result of the current decision is f(n), the optimal substructure means making each f(n-k) optimal. The optimal-substructure property means the state reached at n is optimal regardless of later decisions — that is, later decisions can safely reuse the earlier locally optimal solutions.

4. Write the state transition equation

Through the previous three steps — exhaustive analysis, determining the boundary, and finding the optimal substructure — we can get the state transition equation: f(n) = f(n-1) + f(n-2) for n >= 3, with f(1) = 1 and f(2) = 2.

5. Code implementation

When implementing the code, we work bottom-up, take care of the boundary cases, and consider the space complexity — that's about it. Dynamic programming has a general framework you can refer to when implementing:

```
dp[0][0][...] = base case
for (state1 : all values of state1) {
    for (state2 : all values of state2) {
        for (...) {
            // state transition equation
            dp[state1][state2][...] = optimum(of the choices);
        }
    }
}
```
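As one concrete instance of this framework, here is a sketch of the coin change problem mentioned earlier (assuming the common formulation: the minimum number of coins to make up an amount). The state is the amount, the choices are the coins, and the transition takes the minimum:

```java
import java.util.Arrays;

// dp[a] = minimum number of coins needed to make amount a.
public class CoinChange {
    static int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1); // amount + 1 acts as "infinity"
        dp[0] = 0;                   // boundary: amount 0 needs 0 coins
        for (int a = 1; a <= amount; a++) {  // state: the amount
            for (int coin : coins) {         // choices: which coin is used last
                if (coin <= a) {
                    // state transition equation
                    dp[a] = Math.min(dp[a], dp[a - coin] + 1);
                }
            }
        }
        return dp[amount] > amount ? -1 : dp[amount];
    }

    public static void main(String[] args) {
        System.out.println(coinChange(new int[]{1, 2, 5}, 11)); // 3 (5 + 5 + 1)
    }
}
```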

Leetcode case study

Let’s analyze a classic Leetcode problem

Given an integer array nums, find the length of the longest strictly increasing subsequence.

Example 1:

Input: nums = [10,9,2,5,3,7,101,18]
Output: 4
Explanation: the longest increasing subsequence is [2,3,7,101], so the length is 4.

Example 2:

Input: nums = [0,1,0,3,2,3]
Output: 4

Following the dynamic programming approach summarized above:

  • Exhaustive analysis
  • Determine the boundary
  • Find the pattern and determine the optimal substructure
  • Write the state transition equation

1. Exhaustive analysis

Since the core idea of dynamic programming is to break into subproblems, remember the past, and avoid repeated computation, we ask: is the longest increasing subsequence of nums[0..i] related to the longest increasing subsequence of the subarray nums[0..i-1]?

From the bottom up

The two are clearly related, so following the bottom-up principle of dynamic programming, let's start from a single element of the array, using the data from Example 1.

  • When nums has only one element, 10, the longest increasing subsequence is [10], of length 1.
  • When nums adds the element 9, the longest increasing subsequence is [10] or [9], of length 1.
  • When nums adds the element 2, the longest increasing subsequence is [10], [9], or [2], of length 1.
  • When nums adds the element 5, the longest increasing subsequence is [2,5], of length 2.
  • When nums adds the element 3, the longest increasing subsequence is [2,5] or [2,3], of length 2.
  • When nums adds the element 7, the longest increasing subsequence is [2,5,7] or [2,3,7], of length 3.
  • When nums adds the element 101, the longest increasing subsequence is [2,5,7,101] or [2,3,7,101], of length 4.
  • When nums adds the element 18, the longest increasing subsequence is [2,5,7,101], [2,3,7,101], [2,5,7,18], or [2,3,7,18], of length 4.
  • If nums then added another element 7, the longest increasing subsequence would still be [2,5,7,101], [2,3,7,101], [2,5,7,18], or [2,3,7,18], of length 4.

Find the pattern and break into subproblems

Through the above analysis, we can find a pattern:

When a new element nums[i] is added, the longest increasing subsequence is either a subsequence ending with nums[i], or the longest increasing subsequence of nums[0..i-1]. The longest increasing subsequence of nums[0..i] is now connected to the subproblem: the longest increasing subsequence of nums[0..i-1].

Longest increasing subsequence of nums[0..i] = max(longest increasing subsequence of the subproblem nums[0..i-1], longest increasing subsequence ending with nums[i])

Does it feel like half the battle is won? But how do we turn the increasing subsequence ending with nums[i] into a subproblem? If only the increasing subsequence ending with nums[i] were also related to the increasing subsequences ending with nums[j] (0 <= j < i)…

Actually, it is. Define dp[i] as the length of the longest increasing subsequence that ends with nums[i].

Traverse j over [0, i). Whenever nums[j] < nums[i], nums[i] can be appended to the increasing subsequence ending with nums[j] to form a longer increasing subsequence. Clearly many new subsequences can be formed this way; we take the longest of them, and its length is dp[i].

  • nums[3] = 5: the longest increasing subsequence ending with 5 is [2,5], because traversing indices 0 to 2 we only find the subsequence [2] smaller than 5, so it is [2] + [5], i.e., dp[3] = 2.
  • nums[4] = 3: the longest increasing subsequence ending with 3 is [2,3], because traversing indices 0 to 3 we only find the subsequence [2] smaller than 3, so it is [2] + [3], i.e., dp[4] = 2.
  • nums[5] = 7: the longest increasing subsequences ending with 7 are [2,5,7] and [2,3,7], because traversing indices 0 to 4 we find 2, 5, and 3 all smaller than 7, giving candidates [2,7], [5,7], [3,7], [2,5,7], and [2,3,7]. The longest of these, [2,5,7] and [2,3,7], are exactly the longest increasing subsequences ending with 5 and with 3, extended by [7]. So dp[5] = 3 = dp[3] + 1 = dp[4] + 1.

An obvious pattern emerges for the subsequence ending with nums[i]:

  • If nums[i] > nums[j] (for j in [0, i)), then dp[i] = max(dp[j]) + 1.
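This pattern can be traced on the data of Example 1 (an illustrative sketch; `lisTable` is our own helper name). Filling dp[] from left to right gives [1, 1, 1, 2, 2, 3, 4, 4], whose maximum is 4:

```java
import java.util.Arrays;

// dp[i] = length of the longest increasing subsequence ending with nums[i].
public class DpTrace {
    static int[] lisTable(int[] nums) {
        int[] dp = new int[nums.length];
        for (int i = 0; i < nums.length; i++) {
            dp[i] = 1; // a single element is an increasing subsequence of length 1
            for (int j = 0; j < i; j++) {
                if (nums[j] < nums[i]) {
                    dp[i] = Math.max(dp[i], dp[j] + 1); // extend the best chain ending at j
                }
            }
        }
        return dp;
    }

    public static void main(String[] args) {
        int[] nums = {10, 9, 2, 5, 3, 7, 101, 18};
        System.out.println(Arrays.toString(lisTable(nums))); // [1, 1, 1, 2, 2, 3, 4, 4]
    }
}
```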

The simplest boundary case

When the nums array has only one element, the length of the longest increasing subsequence, dp(1), is 1. When the nums array has two elements, dp(2) is either 2 or 1. So the boundary is dp(1) = 1.

Determine the optimal substructure

From the exhaustive analysis, we can get the following optimal substructure:

dp(i) = max(dp(j)) + 1, where j ∈ [0, i) and nums[i] > nums[j]

The max(dp(j)) here is the optimal substructure.

State transition equation

From the previous analysis, we can get the state transition equation: dp(i) = max(dp(j)) + 1, for j ∈ [0, i) with nums[i] > nums[j].

So the length of the longest increasing subsequence of nums is:

Length of the longest increasing subsequence = max(dp(i))

Code implementation

```java
class Solution {
    public int lengthOfLIS(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int[] dp = new int[nums.length];
        // Boundary: the subsequence ending at the first element has length 1
        dp[0] = 1;
        int maxans = 1;
        for (int i = 1; i < nums.length; i++) {
            dp[i] = 1;
            for (int j = 0; j < i; j++) {
                // If nums[i] can extend the increasing subsequence ending at nums[j],
                // take the largest dp[j] + 1
                if (nums[j] < nums[i]) {
                    dp[i] = Math.max(dp[i], dp[j] + 1);
                }
            }
            // The answer is the maximum over all dp[i]
            maxans = Math.max(maxans, dp[i]);
        }
        return maxans;
    }
}
```

Reference and thanks

  • Leetcode website
  • Labuladong Algorithm Cheat Sheet