Today, I’m going to walk you through some techniques that come up again and again when solving algorithm problems. If you haven’t used these techniques before, consider trying them out in practice to optimize your solutions. Either way, I’m sure you’ll pick something up.

1. Clever array subscripts

An array subscript is an implicit and very useful index, especially for counting values or checking whether an integer is present. For example, if you are given a list of letters and asked to count how many times each letter appears, you can use the letters themselves as subscripts: while traversing, whenever you see the letter a, do arr[a]++;

By using subscripts in this clever way, we don’t have to re-scan the list once for each letter.
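As a concrete sketch of this idea (the class and method names are my own, and it assumes the input contains only lowercase ASCII letters), counting letters with subscripts looks like:

```java
// Count letter occurrences using the characters themselves as array indices.
class LetterCount {
    public static int[] count(String s) {
        int[] freq = new int[26];  // one slot per lowercase letter
        for (char c : s.toCharArray()) {
            freq[c - 'a']++;       // the letter itself is the index
        }
        return freq;
    }
}
```

One pass over the input fills the whole frequency table, so `count("abca")` gives 2 for `'a'` and 1 each for `'b'` and `'c'`.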

Let me give you another example:

Problem: you are given an array arr of n unordered ints with values ranging from 0 to 20, and are asked to print the n numbers in order from smallest to largest in O(n) time.

For this problem, if you sort the n numbers first and then print them out, you can’t do it in O(n) time. But the values only range from 0 to 20, so we can use array subscripts in a clever way: take each number as an array subscript, and increment that slot whenever the number appears. The code is as follows:

public void f(int[] arr) {
    int[] temp = new int[21];
    for (int i = 0; i < arr.length; i++) {
        temp[arr[i]]++;
    }
    // Print in order
    for (int i = 0; i < 21; i++) {
        for (int j = 0; j < temp[i]; j++) {
            System.out.println(i);
        }
    }
}

There are a lot of other applications that use array subscripts, and you can consider whether you can use array subscripts to optimize when you encounter some problems in the future.

2. Clever use of the remainder (mod) operation

Sometimes when we traverse a set of elements we need an out-of-bounds check: when the index is about to run past the end, we reset it to 0 and keep going. This comes up especially with circular arrays, such as array-based circular queues. Code like this is often written:

for (int i = 0; i < N; i++) {
    if (pos < N) {
        // not out of bounds: use arr[pos]
    } else {
        pos = 0; // wrap around, then use arr[pos]
    }
    pos++;
}

We can actually simplify this code with the mod operation:

for (int i = 0; i < N; i++) {
    // use arr[pos] (we assume pos < N to begin with)
    pos = (pos + 1) % N;
}
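Put together as a runnable sketch (the `sumFrom` method and its `start` parameter are illustrative names of my own), the mod trick lets us start anywhere in a circular array and visit every element exactly once with no branch:

```java
// Traverse a circular array starting at an arbitrary position,
// wrapping around with mod instead of an out-of-bounds check.
class CircularTraverse {
    public static int sumFrom(int[] arr, int start) {
        int n = arr.length;
        int pos = start % n;
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += arr[pos];     // visit arr[pos]
            pos = (pos + 1) % n; // wrap around without an if-branch
        }
        return sum;
    }
}
```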

3. Use two pointers wisely

The two-pointer technique is particularly useful for problems about singly linked lists, such as “determine whether a singly linked list has a cycle”, “find the middle node of a list in one traversal”, and “find the k-th node from the end of a singly linked list”. For this kind of problem, two pointers make things much more convenient. Let me show you how to solve these three problems with two pointers.

For example, for the first question

We can set a slow pointer and a fast pointer to traverse the list. The slow pointer moves one node at a time, while the fast pointer moves two nodes at a time. If the list has no cycle, the fast pointer reaches the end first; if there is a cycle, the fast pointer eventually catches up with and meets the slow pointer inside the cycle.
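Here is a minimal sketch of that fast/slow check (the `ListNode` class is defined inline purely for illustration):

```java
// Floyd's fast/slow pointer cycle check on a hand-rolled singly linked list.
class CycleCheck {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    public static boolean hasCycle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;      // one step
            fast = fast.next.next; // two steps
            if (slow == fast) {
                return true;       // pointers met: there is a cycle
            }
        }
        return false;              // fast fell off the end: no cycle
    }
}
```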

On the second question

Again, set a fast pointer and a slow pointer. The slow pointer moves one node at a time, the fast pointer two. When the fast pointer reaches the end of the list, the slow pointer has just reached the midpoint.
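A minimal sketch of the middle-node version (again with an illustrative inline `ListNode`; note that for an even-length list this variant lands on the second of the two middle nodes):

```java
// Find the middle of a singly linked list in one pass with two pointers.
class MiddleNode {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    public static ListNode middle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;      // one step
            fast = fast.next.next; // two steps
        }
        return slow;               // fast hit the end, slow is at the middle
    }
}
```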

On the third question

Set two pointers, and move the first one forward k nodes first. Then move both pointers at the same speed. When the first pointer runs off the end of the list, the second pointer is at the k-th node from the end.
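And a sketch of the k-th-from-the-end version (illustrative inline `ListNode` again; it assumes 1 <= k <= length of the list):

```java
// Find the k-th node from the end in one pass with two pointers.
class KthFromEnd {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    public static ListNode kthFromEnd(ListNode head, int k) {
        ListNode first = head, second = head;
        for (int i = 0; i < k; i++) {
            first = first.next;    // give first a k-node head start
        }
        while (first != null) {    // advance both at the same speed
            first = first.next;
            second = second.next;
        }
        return second;             // first ran off the end; second is k-th from the end
    }
}
```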

You see, two pointers make all of this much more convenient. So in the future, when dealing with linked-list problems, consider reaching for two pointers.

4. Clever shift operations

Sometimes when we divide or multiply by powers of two, such as n / 2, n / 4, n / 8, we can do it with shift operations instead, which is faster.

Such as:

n / 2 is equivalent to n >> 1

n / 4 is equivalent to n >> 2

n / 8 is equivalent to n >> 3
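A quick sanity check of those equivalences (one caveat worth knowing: for negative numbers, `>>` rounds toward negative infinity while Java’s integer division rounds toward zero, so the two agree only for n >= 0; the class and method names here are illustrative):

```java
// Verify that right shifts match division by powers of two for n >= 0.
class ShiftDemo {
    public static boolean matches(int n) {
        return (n / 2 == (n >> 1))
            && (n / 4 == (n >> 2))
            && (n / 8 == (n >> 3));
    }
}
```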

Written this way, the shift version runs faster, and you also get to look like a real pro, ha ha.

There are also the & (and) and | (or) bitwise operators, which can likewise speed up computation. For example, to determine whether a number is odd, you might write:

if(n % 2 == 1){
 dosomething();
}

But it’s faster if we use the and operation: AND n with 1, and if the result is 1, n is odd, otherwise it isn’t. Note the parentheses below — in Java, == binds tighter than &, so they are required. That is:

if ((n & 1) == 1) {
    dosomething();
}

These bit tricks take practice: use them in real problems, and over time they will become second nature.

5. Set sentinels

In linked-list problems, we often set a head node that holds no valid data and exists purely to make operations easier. This dummy head node is called a sentinel.

For example, suppose we want to delete the first node of a list. Without a sentinel, that operation is different from deleting the second node; with a sentinel, deleting the first node works exactly like deleting the second, with no extra special-case check. The same goes for inserting nodes.
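A minimal sketch of deletion with a sentinel (illustrative inline `ListNode`; this variant deletes every node whose value equals `target`): because every real node has a predecessor, the first node needs no special case.

```java
// Delete nodes by value using a sentinel (dummy) head node.
class SentinelDelete {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    public static ListNode deleteValue(ListNode head, int target) {
        ListNode dummy = new ListNode(0); // sentinel: holds no real data
        dummy.next = head;
        ListNode prev = dummy;
        while (prev.next != null) {
            if (prev.next.val == target) {
                prev.next = prev.next.next; // same code path for every position
            } else {
                prev = prev.next;
            }
        }
        return dummy.next; // new head, even if the old head was deleted
    }
}
```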

Sometimes we can set a sentinel when manipulating arrays, too, using arr[0] as the sentinel slot. Then, when scanning backwards and comparing arr[i] with arr[i-1], we don’t have to worry about going out of bounds at i = 0.

Of course, this is just one example; there are many more specific applications, such as insertion sort, circular linked lists, and so on.
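As one concrete sketch of the insertion-sort case (my own illustrative code; it assumes the real data is stored starting at index 1, leaving arr[0] free for the sentinel):

```java
// Insertion sort with arr[0] as a sentinel slot: the value being inserted is
// parked in arr[0], so the inner loop is guaranteed to stop and needs no
// separate j > 0 bound check. Real data lives in arr[1..arr.length-1].
class SentinelInsertionSort {
    public static void sort(int[] arr) {
        for (int i = 2; i < arr.length; i++) {
            arr[0] = arr[i];          // sentinel: guarantees the loop terminates
            int j = i - 1;
            while (arr[j] > arr[0]) { // no need to also test j > 0
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = arr[0];
        }
    }
}
```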

6. Optimizations related to recursion

(1). Consider state saving for problems solved recursively

When we use recursion to solve a problem, it is easy to repeatedly compute the same subproblem. When that happens, we should consider saving state to prevent the double computation. For example, here is a problem I did a while back:

Question: A frog can jump up one step or two at a time. How many ways can the frog jump up n steps?

This problem is a natural fit for recursion. Let f(n) denote the number of ways to jump up n steps; then

f(n) = f(n - 1) + f(n - 2)

The base case for the recursion is f(n) = n when 0 <= n <= 2. So we can write the recursive code very easily:

public int f(int n) {
    if (n <= 2) {
        return n;
    } else {
        return f(n - 1) + f(n - 2);
    }
}

But for problems that can be solved recursively, it’s important to consider whether there’s a lot of repeated computation. Clearly there is for the recursion f(n) = f(n-1) + f(n-2): for example, expanding f(6) computes f(4) twice and f(3) three times.

With so much double counting, we have to think about saving state. You can use a HashMap, or an array, just as we said above: if arr[n] == 0, f(n) has not been computed yet; if arr[n] != 0, f(n) has already been computed, and we can return the saved value directly. The state-saving code looks like this:

int[] arr = new int[1000]; // saved results: arr[n] == 0 means f(n) not yet computed

public int f(int n) {
    if (n <= 2) {
        return n;
    } else {
        if (arr[n] != 0) {
            return arr[n]; // already computed, return it
        } else {
            arr[n] = f(n - 1) + f(n - 2);
            return arr[n];
        }
    }
}

This greatly improves the efficiency of the algorithm. Some people call this kind of state saving memoization.

(2). Think bottom-up

For recursion problems, we usually recurse from the top down until we get to the bottom, and then we return the value layer by layer.

However, when n is large, say n = 10000, you have to recurse 10000 levels down until n <= 2 before the results slowly come back up, and if n is too large you may run out of stack space.

In this case, we can actually think about the bottom-up approach. For example, I know

f(1) = 1;

f(2) = 2;

Then f(3) = f(2) + f(1) = 3, and from f(3) we can derive f(4), f(5), all the way up to f(n). So we can consider using the bottom-up approach.

The code is as follows:

public int f(int n) {
       if(n <= 2)
           return n;

       int f1 = 1;
       int f2 = 2;
       int sum = 0;

       for (int i = 3; i <= n; i++) {
           sum = f1 + f2;
           f1 = f2;
           f2 = sum;
       }
       return sum;
   }

This bottom-up approach is also called iteration (as opposed to recursion).

To summarize

There are two questions to consider when using recursion to solve a problem

(1). Is the same state being computed repeatedly? If so, can memoization optimize it away?

(2). Can the problem be solved bottom-up (iteratively) instead, to avoid the cost of deep recursion?

If you find this article inspiring, you can:

1. Like it, so that more people can see this content (bookmarking without liking is just freeloading -_-).

2. Follow me, and let’s build a long-term relationship.

3. Follow my public account “Helpless and painful code Farmer”, which has more than 100 original articles. I also share lots of video and book resources and development tools there. You’re welcome to follow it and be the first to read my articles.