Recently, while trying to help my friends understand dynamic programming, I searched the Internet for a long time. There is plenty of related material, but most of it quotes the Wikipedia definition of dynamic programming and then jumps straight into code for specific problems. I think that focusing only on the code implementation does not convey the underlying ideas to other learners very well.

To make dynamic programming easier to understand, I want to record some of my own learning process and understanding of it. This article is the first in a series on dynamic programming, and in it I will try to explain the nature of dynamic programming and the problems it solves.

Fibonacci numbers

There are many problems that dynamic programming can be applied to, but I think the classic one, and the one that gives you a feel for it most quickly, is computing the Fibonacci sequence. For those of you who may not be familiar with it, the Fibonacci sequence simply begins with 0 and 1, and every number after that is the sum of the two numbers before it. For example: 0, 1, 1, 2, 3, 5, 8, 13, 21…

Let’s write it out using the formula:

Fib(n) = Fib(n-1) + Fib(n-2), for n > 1
Fib(0) = 0, Fib(1) = 1

We can easily convert this formula into code:

public int fib(int n) {
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}

We can recursively get the answer we want: to compute Fib(n) we first compute the two previous values Fib(n-1) and Fib(n-2), and then combine them. But there is a problem: the time complexity is O(2^n) and the space complexity is O(n) (the depth of the recursion stack), so as the input grows, the wait for a result explodes, and most of that time is wasted on repeated computation. Assuming our input is n = 5, consider the call tree: fib(5) calls fib(4) and fib(3), fib(4) calls fib(3) and fib(2) again, and so on, so fib(3) is computed twice, fib(2) three times, and the smaller the argument, the more often it is recomputed.
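To see how much is wasted concretely, here is a minimal sketch (the FibCallCounter class and its counts array are my own additions, purely for illustration) that counts how many times each fib(k) is evaluated while computing fib(5):

public class FibCallCounter {
    static int[] counts; // counts[k] = how many times fib(k) was called

    static int fib(int n) {
        counts[n]++; // record this invocation before recursing
        if (n < 2)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int n = 5;
        counts = new int[n + 1];
        fib(n);
        for (int k = 0; k <= n; k++)
            System.out.println("fib(" + k + ") called " + counts[k] + " time(s)");
    }
}

For n = 5 this already reports fib(1) called 5 times and fib(0) called 3 times; the call counts themselves grow like Fibonacci numbers, which is exactly the exponential blowup described above.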

Idea 1:

Engineering experience tells us that when a previously computed result is likely to be needed again, and the same input always produces the same result, we can cache it. Then, whenever the cache already holds the result for an input, we skip the computation and return that result directly; otherwise we compute it first and put the result in the cache for the next time it is needed. The code is as follows:

public int fib(int n) {
    int[] dp = new int[n + 1]; // use an array to cache results
    return fibRecursive(dp, n);
}

public int fibRecursive(int[] dp, int n) {
    if (n < 2)
        return n;
    if (dp[n] == 0) // 0 means "not computed yet"; every cached value here is positive
        dp[n] = fibRecursive(dp, n - 1) + fibRecursive(dp, n - 2);
    return dp[n];
}

At this point we have optimized our code to O(n) time and space complexity, which is quite a breakthrough in terms of results.
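As an aside, the cache above uses dp[n] == 0 to mean "not computed yet", which is safe here because every cached Fibonacci value for n >= 1 is positive. For problems where 0 is a legitimate answer, an explicit cache such as a HashMap avoids the ambiguity. A minimal sketch of the same top-down memoization (the MemoFib class name is my own, purely illustrative):

import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    // Explicit cache: a missing key, not a sentinel value, means "not computed yet"
    private final Map<Integer, Integer> cache = new HashMap<>();

    public int fib(int n) {
        if (n < 2)
            return n;
        Integer cached = cache.get(n);
        if (cached != null)
            return cached; // cache hit: skip the computation
        int result = fib(n - 1) + fib(n - 2);
        cache.put(n, result); // cache miss: compute, then store
        return result;
    }
}

Functionally this is identical to Idea 1; only the representation of the cache changes.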

Could the time and space complexity be lower? Let’s try it!

Idea 2:

Let’s go back and take a closer look at this sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21… Each number depends only on the two numbers immediately before it, and the two smallest values, Fib(0) = 0 and Fib(1) = 1, are known up front. So instead of starting from n and recursing downward, we can start from the bottom and fill in the results one by one: compute dp[2] from dp[0] and dp[1], then dp[3] from dp[1] and dp[2], and so on up to dp[n]. With that, our implementation is clear:

public int fib(int n) {
    if (n < 2)
        return n; // guard: also prevents dp[1] from indexing out of bounds when n == 0
    int[] dp = new int[n + 1];
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2]; // build bottom-up from the base cases
    return dp[n];
}

At this point the time and space complexity are both O(n). The attentive among you may have noticed that to compute each number we need, and only need, the previous two; once an earlier result has been used, keeping it cached serves no purpose. (That is, by the time Fib(4) is being computed, the cached Fib(0) and Fib(1) are no longer needed and only waste space.) So instead of maintaining a whole cache array, we can use two variables to hold the previous two numbers, reducing the space complexity to a constant O(1):

public int fib(int n) {
    if (n < 2)
        return n;
    int n1 = 0, n2 = 1, temp; // n1 and n2 hold the previous two numbers
    for (int i = 2; i <= n; i++) {
        temp = n1 + n2;
        n1 = n2;
        n2 = temp;
    }
    return n2;
}

So that’s the end of the Fibonacci example. A lot of people might wonder: is it really as simple as that? To summarize a little: dynamic programming is essentially a way to optimize recursion, and a dynamic programming problem has two distinctive features:

  1. It has many overlapping subproblems (the repeated computation we saw in the naive recursion);
  2. The result of the original problem can be derived from the results of its subproblems.

Based on these two features, we derived two corresponding optimization ideas:

  1. While solving the current problem, recursively solve the subproblems it contains and cache each result as it is obtained. (Idea 1, often called top-down memoization)
  2. Solve the smallest subproblems first, then directly combine their results to produce the result of the current problem. (Idea 2, often called bottom-up tabulation)

Well, I believe you now have a general feel for dynamic programming. Many people used to be very afraid of it and assumed that hitting such a question in an interview meant doom. Don't panic: take the dynamic programming problem, break it down into smaller, simpler subproblems, and then apply the two ideas above. I know the hardest part is determining whether a problem is a dynamic programming problem at all, and the subproblems of some dynamic programming problems are not as easy to identify as the Fibonacci sequence. In future articles, I will list some common types of dynamic programming problems to help you understand them better.