1. Introduction

Recently, I found some interesting things while learning algorithms, so I want to write them down and share them with you. I'm going to start with dynamic programming (DP)…

2. DP

2.1 Starting from cutting a steel bar

2.1.1 Problem Description

Background: the Serling company buys long steel bars and cuts them into shorter bars for sale; the cutting process itself costs nothing. Company management wants to know the best cutting plan. Suppose we know that Serling sells a bar i inches long for p_i dollars (i = 1, 2, 3, …). Bar lengths are always a whole number of inches.

Length i    1   2   3   4   5   6   7   8   9   10
Price p_i   1   5   8   9   10  17  17  20  24  30

Problem statement: given a bar of length n inches and the price table p_i (i = 1, 2, 3, …, n), find a cutting plan that maximizes the sales revenue r_n. For the worked example below, take the first four entries of the table, i.e. n = 4.
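As a quick check with the prices above, the distinct ways to split a 4-inch bar and their revenues are:

  • no cut: 9
  • 1 + 3: 1 + 8 = 9
  • 2 + 2: 5 + 5 = 10
  • 1 + 1 + 2: 1 + 1 + 5 = 7
  • 1 + 1 + 1 + 1: 4

so the best plan for n = 4 is to cut the bar into two 2-inch pieces, for a revenue of 10.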

2.1.2 Problem Analysis

When I first saw the problem, my immediate thought was to solve it by brute force: enumerate all the cutting schemes and pick the most profitable one. If you look closely, the set of possible cuts is just a tree; walk the tree, keep track of the maximum along the way, and the problem is solved. Easy. Of course, what I just said is nonsense.

However, things are not that simple. If you could just walk through everything, what would you need DP for? So what went wrong? Let's analyze it:

The starting point of the analysis has to be how the bar can be cut: a steel bar of length n can be modeled as an array of length n-1 that stores True/False values. What is the use of modeling it this way?

A bar of length n can be cut in exactly n-1 positions (primary-school math), and each position can hold only one of two values: cut or don't cut (1/0). From this it is not hard to see that a bar of length n has 2^(n-1) possible cutting schemes, so enumerating them all takes exponential time. Too slow!
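To make the counting concrete, here is a minimal brute-force sketch (my own illustration, not from the original post; cutRodBruteForce is a hypothetical name). Each bit of mask marks one of the n-1 positions, and a set bit means "cut here":

function cutRodBruteForce(p, n){
    let best = 0
    for(let mask = 0; mask < (1 << (n - 1)); mask++){   // all 2^(n-1) cut patterns
        let revenue = 0
        let pieceLen = 1
        for(let pos = 0; pos < n - 1; pos++){
            if(mask & (1 << pos)){          // cut right after the current piece
                revenue += p[pieceLen - 1]
                pieceLen = 1
            } else {
                pieceLen++
            }
        }
        revenue += p[pieceLen - 1]          // the last piece
        best = Math.max(best, revenue)
    }
    return best
}

Even for a modest n of around 30 this is already on the order of a billion patterns, which is exactly why brute force breaks down.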

Therefore, when the problem size becomes large, brute force does not work, and we need to find a better approach.

2.1.3 Solving it with brute-force recursion

Better algorithms grow out of brute force, so the first step is to implement the brute-force recursive algorithm. To implement a recursive algorithm, you have to work out two things:

  1. What kind of expression is the body of the recursion?
  2. What is the boundary of recursion?

First, the answer to the first question: the body of the recursion comes from the mathematical model we build of the actual problem, and that model was established in the analysis in section 2.1.2. For ease of notation, call the array of cuttable positions index[n-1]. Suppose index[i] is true and every index[j] with j < i is false: this means the first cut is made at position i, and we do not yet know whether any position after i gets cut. The maximum revenue in this case is "p_i plus the maximum revenue of the remaining bar of length n-i after the first cut". If r_n denotes the maximum revenue for a bar of length n, the expression for r_n is

    r_n = max(p_1 + r_(n-1), p_2 + r_(n-2), …, p_n + r_0),   with r_0 = 0

From this recursive expression, we can derive the body of the recursion:

   var q = -1  // Initialize to save the current maximum value
        for(let i = 0; i < n; i++){    // Try every possible length for the first piece
            q = q > p[i] + cutRob(p, n-i-1) ? q : p[i] + cutRob(p, n-i-1)
        }

The next thing to determine is the boundary of the recursion, which is when the recursion ends. Obviously the recursive boundary here is that the length of the bar is zero. In this case, no matter how you cut it, the maximum payoff is always 0. Thus, the final recursive algorithm is obtained:

function cutRob(p, n){
    if(n === 0) {  // Recursive boundary
        var q = 0
    } else {       // Recursive body
        var q = -1
        for(let i = 0; i < n; i++){
            q = q > p[i] + cutRob(p, n-i-1) ? q : p[i] + cutRob(p, n-i-1)
        }
    }
    return q
}
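As a quick check (an illustrative call of my own, not from the original post), the recursive version reproduces the value worked out by hand in section 2.1.1:

const p = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   // the price table from above
console.log(cutRob(p, 4))   // 10  (cut the bar into 2 + 2)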

2.1.4 Overlap found in the recursion tree

Now we can recursively calculate the maximum revenue for n = 4 and draw the recursion tree. From that tree it is not hard to see that the later (red) subtrees repeat the earlier (black) parts. Obviously, as n gets bigger there is more and more overlap; in other words, with the exception of the first subtree, everything else is redundant work. The best way to improve the algorithm is therefore to avoid these repeated calculations.

So, the question is, how do you avoid double counting? The most straightforward way to do this is to save the computed value. That way, you can just grab it and use it when you need it. Without further ado, post code!

function cutRodMemoized(p, n){
    let r = []   // Use an array to hold the computed results
    for(let i = 0; i <= n; i++){
        r[i] = -1
    }

    return cutRobMemoizedAux(p, n, r)
}

function cutRobMemoizedAux(p, n, r){
    if(r[n]>=0) {   // Check whether it has been calculated before
        return r[n]
    } 
    if(n === 0) {
        var q = 0
    } else {
        var q = -1
        for(let i = 0; i < n; i++){
            q = q > p[i]+cutRobMemoizedAux(p,n-i-1,r) ? q : p[i]+cutRobMemoizedAux(p,n-i-1,r)
        }
    }
    
    r[n] = q
    return q
}

2.1.5 Top-down VS bottom-up

Drawing the recursion tree again gives the same tree as before, but the part that is actually computed is only the first (black) part. Redrawing just that part, the amount of work is n + (n-1) + (n-2) + … + 2 + 1, which is O(n^2). The recursion follows the direction of the arrows, from n down to zero: this is the top-down method with memoization.

But for solving a concrete problem, recursing backwards like this always feels a bit awkward. So how do we turn this backwards recursion the right way around?

You can see from the graph that four arrows point out of node 4 and four arrows point into node 0; similarly, 1 and 3 have the same correspondence, and node 2 has as many arrows pointing in as pointing out. This means that recursive calls from the largest problem down to the smallest and a loop from the smallest up to the largest are equivalent, so in principle the recursion can be replaced by a loop.

To replace the recursion with a loop, we need to answer one question: where do we find the maximum revenue for length n-i? The answer comes from looking at what the memo stores in the recursive algorithm: obviously, r[i] holds the maximum revenue for a bar of length i. So we can use a bottom-up method instead of the top-down recursive algorithm with a memo.

It’s time to post code again:

function cutRodBottomUP(p, n){
    let r = []
    r[0] = 0    // a bar of length 0 brings no revenue

    for(let i = 1; i <= n; i++){    // solve the subproblems from the smallest length up
        let q = 0
        for(var j = 0; j < i; j++) {
            q = q > p[j] + r[i-j-1] ? q : p[j] + r[i-j-1]
        }
        r[i] = q
    }
    return r[n]
}
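All three versions agree on the full table from section 2.1.1; a small illustrative check (not from the original post):

const p = [1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
console.log(cutRob(p, 10))           // 30
console.log(cutRodMemoized(p, 10))   // 30
console.log(cutRodBottomUP(p, 10))   // 30

The difference is purely in running time: the plain recursion is exponential, while both the memoized and the bottom-up versions are O(n^2).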

2.2 How DP solves problems

In “Introduction to Algorithms”, the four steps for designing dynamic programming algorithms are:

  1. Describe the structural characteristics of an optimal solution
  2. Recursively define the value of the optimal solution
  3. Calculate the value of the optimal solution by using a bottom-up approach
  4. An optimal solution is constructed using the calculated information

Corresponding to these four steps, look back at how the steel bar cutting problem was solved. In 2.1.3 "Solving it with brute-force recursion", we obtained the optimal substructure of the problem. In 2.1.4 "Overlap found in the recursion tree", the value of the optimal solution was computed by recursion; in 2.1.5 "Top-down VS bottom-up", a bottom-up method was used to compute the value of the optimal solution. For step 4, you only need to record, in the step-3 code, the size of the first piece chosen for each length, and then print those choices in order to obtain an optimal solution (a small sketch follows below). Note that it is the last three sections that correspond to the dynamic programming design steps. So what role does 2.1.2 "Problem Analysis" play? That part describes the general way of modeling a DP problem: dynamic programming is a process of dynamic selection, there is always an evaluation criterion, and every choice has a consequence. In this problem, for example, each cut we choose contributes the price p_i of one piece plus the maximum value of the rest of the bar.
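To make step 4 concrete, here is a minimal sketch of my own (based on the bottom-up code above, with hypothetical names cutRodExtended and s) that records the size of the first piece chosen for each length and then reconstructs an optimal cut list:

function cutRodExtended(p, n){
    let r = [0]    // r[i] = maximum revenue for a bar of length i
    let s = [0]    // s[i] = size of the first piece in an optimal cut of length i

    for(let i = 1; i <= n; i++){
        let q = -1
        for(let j = 0; j < i; j++){
            if(q < p[j] + r[i-j-1]){
                q = p[j] + r[i-j-1]
                s[i] = j + 1    // remember the first-piece length that achieved the maximum
            }
        }
        r[i] = q
    }

    // step 4: walk the recorded choices to reconstruct an optimal solution
    let pieces = []
    let rest = n
    while(rest > 0){
        pieces.push(s[rest])
        rest -= s[rest]
    }
    return { revenue: r[n], pieces }   // e.g. n = 4 gives { revenue: 10, pieces: [2, 2] }
}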

2.3 Another example: the longest common subsequence

2.3.1 Problem Description

  • Preliminary definitions:
    1. Subsequence: given a sequence X = <x_1, x_2, …, x_m> and another sequence Z = <z_1, z_2, …, z_k>, Z is called a subsequence of X if there exists a strictly increasing sequence of indices <i_1, i_2, …, i_k> of X such that x_(i_j) = z_j for all j = 1, 2, …, k.
    2. Common subsequence: given two sequences X and Y, Z is said to be a common subsequence of X and Y if Z is both a subsequence of X and a subsequence of Y.
  • Longest common subsequence (LCS) problem: given two sequences X = <x_1, x_2, …, x_m> and Y = <y_1, y_2, …, y_n>, find a common subsequence of X and Y of maximum length.
  • Example: x = ['A', 'B', 'C', 'B', 'D', 'A', 'B'], y = ['B', 'D', 'C', 'A', 'B', 'A']

2.3.2 Using DP to analyze problems

First, model the problem. For the string x, there is x.length = m, and for the string y, there is y.length = n. Therefore, we need to create two arrays of length m and length n to hold the selected cases.

Each position in the array stores a Boolean value indicating whether the element at that index is selected. If you used brute force, you would have to write out all the subsequences of both x and y and compare them for equality, which takes far too long. So we change the way of thinking and focus instead on the rule for selecting elements. Obviously, a good idea is to compare one element from each of the two sequences: if they are equal, select that element; otherwise skip it.

Putting this simple idea into mathematical language: let c[i][j] be the length of the longest common subsequence of the first i elements of x and the first j elements of y. Then

    c[i][j] = 0                               if i = 0 or j = 0
    c[i][j] = c[i-1][j-1] + 1                 if i, j > 0 and x[i] = y[j]
    c[i][j] = max(c[i-1][j], c[i][j-1])       if i, j > 0 and x[i] ≠ y[j]

If the corresponding elements x[i] and y[j] are equal, the length of the previous common subsequence is extended by one; otherwise we keep the larger of the two neighbouring values. When we reach c[m][n] we have the length of the longest common subsequence of the two strings. That is the dynamic programming idea.

2.3.3 post code

function lcs(x, y){
    const m = x.length
    const n = y.length
    let c = []

    // Initialize the table that holds the LCS lengths
    for(let i = 0; i <= m; i++){
        c[i] = []
        for(let j = 0; j <= n; j++){
            c[i][j] = 0
        }
    }

    for(let i = 0; i <= m; i++){
        for(let j = 0; j <= n; j++){
            if(i === 0 || j === 0) {              // DP boundary
                c[i][j] = 0
            } else if(x[i-1] === y[j-1]) {
                c[i][j] = c[i-1][j-1] + 1         // extend the value up and to the left (diagonal)
            } else {
                c[i][j] = c[i-1][j] > c[i][j-1]   // take the larger of the two neighbours
                        ? c[i-1][j]               // from above
                        : c[i][j-1]               // from the left
            }
        }
    }
    return c[m][n]
}
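Running it on the example from section 2.3.1 (an illustrative check of my own, not from the original post):

const x = ['A', 'B', 'C', 'B', 'D', 'A', 'B']
const y = ['B', 'D', 'C', 'A', 'B', 'A']
console.log(lcs(x, y))   // 4, e.g. the common subsequence B, C, B, A

Note that this function returns only the length; reconstructing an actual subsequence would require walking the table back from c[m][n].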

3. Greedy

3.1 Rumor has it: behind every greedy algorithm there is a complex DP?

Behind every greedy algorithm there is a DP, and to see why, start from the design steps of the greedy algorithm:

  1. Determine the optimal substructure of the problem
  2. Design a recursive algorithm
  3. Prove that if we make a greedy choice, there is only one subproblem left
  4. Prove that greedy choice is always safe
  5. Design a recursive algorithm to achieve the greedy strategy
  6. Convert a recursive algorithm to an iterative algorithm

Carefully comparing the design steps of greedy algorithms and of DP, it is not hard to see that the first step (determine the optimal substructure of the problem) and the second step (design a recursive algorithm) coincide. For DP, the top-down recursive algorithm with a memo and the bottom-up iterative algorithm are then designed directly from the optimal substructure; for greedy, we instead keep taking the locally optimal choice.

3.2 For example: activity selection

3.2.1 Problem Description

Suppose there is a set S = {a_1, a_2, …, a_n} of activities that all use the same resource (for example, a lecture theatre), which can serve only one activity at a time. Each activity a_i has a start time s_i and a finish time f_i; if selected, activity a_i occupies the half-open time interval [s_i, f_i). Two activities a_i and a_j are said to be compatible if [s_i, f_i) and [s_j, f_j) do not overlap.

In the activity-selection problem, we want to pick a maximum-size set of mutually compatible activities. Assume the activities have already been sorted in monotonically increasing order of finish time.

3.2.2 Solution

Like the previous problem, this is also a process of dynamic selection, and the cost of each selection is that no other activity can use the resource during that time; we want to arrange as many activities as possible. Let S_ij be the set of activities that start after a_i finishes and finish before a_j starts, and let c[i, j] be the size of a maximum set of mutually compatible activities in S_ij. The recurrence is

    c[i, j] = 0                                                 if S_ij is empty
    c[i, j] = max over a_k in S_ij of (c[i, k] + c[k, j] + 1)   otherwise

and adding a_i and a_j themselves then gives the final result. From this recurrence we get a DP algorithm for the activity-selection problem. However, this problem also satisfies the property that making the greedy choice at every step still leads to an optimal final result, so it can be solved with a greedy algorithm. For activity selection, the activity that finishes earliest obviously leaves the most room for the remaining activities, so the greedy choice at each step is simply to take the compatible activity with the earliest finish time.

3.2.3 Without further ado, post code

  • DP code:
function activitySelect(start, final){
    let len = (start.length === final.length) ? start.length : null
    // dp has indices 0 .. len+1; indices 0 and len+1 play the role of the fictitious
    // sentinel activities a_0 and a_(n+1), so activity a_k maps to start[k-1] / final[k-1]
    const s = (k) => (k === len + 1 ? Infinity : start[k - 1])    // a_(n+1) starts at +Infinity
    const f = (k) => (k === 0 ? -Infinity : final[k - 1])         // a_0 finishes at -Infinity
    var dp = []
    let max = 0

    for(let i = 0; i <= len + 1; i++){
        dp[i] = new Array(len + 2).fill(0)
    }

    for(let j = 1; j <= len + 1; j++){
        for(let i = 0; i < j; i++){                  // i < j
            for(let k = i + 1; k < j; k++){
                if(f(i) <= s(k) && f(k) <= s(j)){    // a_k fits between a_i and a_j
                    const res = dp[i][k] + dp[k][j] + 1
                    max = max > res ? max : res
                }
            }
            dp[i][j] = max
            max = 0
        }
    }
    return dp[0][len + 1]
}
  • Greedy code:
function greedyActivitySelector(s, f){
    let n = s.length
    let selectRes = [`a1`]   // the first activity has the earliest finish time, so it is always chosen
    let k = 0                // index of the most recently selected activity

    for(let i = 1; i < n; i++){
        if(s[i] >= f[k]){    // compatible with the last selected activity
            selectRes.push(`a${i + 1}`)
            k = i
        }
    }

    return selectRes
}
  • Note: in the DP formulation, c[i][j] only counts the activities strictly between a_i and a_j; a_i and a_j themselves are the two missing elements that have to be added back. The DP code above sidesteps this by treating indices 0 and len+1 as fictitious sentinel activities, so dp[0][len+1] already covers all the real activities. A small usage sketch of both functions follows below.
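Here is a small usage sketch with hypothetical sample data of my own (six activities, already sorted by finish time), not from the original post:

const start = [1, 3, 0, 5, 8, 5]
const final = [2, 4, 6, 7, 9, 9]

console.log(activitySelect(start, final))          // 4
console.log(greedyActivitySelector(start, final))  // [ 'a1', 'a2', 'a4', 'a5' ]

Both approaches agree on the size of the answer; the greedy version additionally reports which activities were picked.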

3.3 Greedy? DP?

Both DP and greedy algorithms deal with dynamic-selection problems, but greedy takes the currently optimal choice at each step, while DP has to go through the subproblems and select the best among them. So any problem that can be solved greedily can also be solved with DP, but a problem solvable with DP cannot necessarily be solved greedily. For greed to work, the globally optimal solution must be obtainable as the combination of the locally optimal choices, i.e. the problem must satisfy the greedy-choice property.