16.1 Introduction

When solving a new problem, a common strategy is to look for similarities between it and problems that have already been solved, so that the solution to the new problem can be found more easily.

This chapter classifies algorithms in several different ways; the following three chapters then introduce three algorithm design techniques (greedy, divide-and-conquer, and dynamic programming) in turn.

16.2 Classification

There are many ways to classify algorithms, some of which are listed below:

● Implementation method

● Design method

● Other classification methods

16.3 Classification by implementation method

1. Recursion or iteration

A recursive algorithm is an algorithm that calls itself repeatedly until some base condition is met. Recursion is central to functional programming languages and is also supported in languages such as C and C++. Iterative algorithms typically use loop constructs, and sometimes auxiliary data structures such as stacks and queues, to solve problems.

Some problems are better suited to a recursive implementation, others to an iterative one. For example, the recursive solution to the Tower of Hanoi problem is very easy to understand. Every recursive implementation can be converted to an iterative one, and vice versa.
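As a sketch of the two styles in Python (the function names here are my own): the Tower of Hanoi solved recursively, and a factorial computed with a plain loop.

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from source to target, using spare as scratch."""
    if n == 0:                                    # base condition: nothing to move
        return
    hanoi(n - 1, source, spare, target, moves)    # move n-1 disks out of the way
    moves.append((source, target))                # move the largest disk
    hanoi(n - 1, spare, target, source, moves)    # move the n-1 disks back on top

def factorial_iterative(n):
    """The same kind of computation expressed as a loop instead of recursion."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))                # 2^3 - 1 = 7 moves
print(factorial_iterative(5))    # 120
```

The recursive version mirrors the problem statement directly, which is why Tower of Hanoi is the classic example of a problem where recursion is the natural fit.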

2. Procedural or declarative (not procedural)

In a declarative programming language, you only specify the goal you want to achieve, not the implementation details. A procedural programming language requires you to spell out the concrete steps that produce the desired result. For example, SQL is more declarative than procedural, because a query states what data is wanted without giving the steps for retrieving it. C, PHP, Perl, and so on are examples of procedural languages.

3. Serial or parallel or distributed

When we talk about algorithms, we generally think of a computer executing one instruction at a time; such a program is called serial. A parallel algorithm takes advantage of the computer architecture's ability to process several instructions at once: it divides the original problem into subproblems, which are then distributed to multiple processors or threads to be solved separately. Many iterative algorithms can be parallelized in this way. When a parallel algorithm is distributed across different machines, it is called a distributed algorithm.
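The divide-and-distribute idea can be sketched in Python with a thread pool (the function name is my own; note that in CPython the global interpreter lock means this illustrates the decomposition pattern rather than a real CPU speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split the input into chunks and sum each chunk on its own worker thread."""
    chunk = (len(data) + workers - 1) // workers          # ceiling division
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, parts)   # each subproblem is solved independently
    return sum(partials)                  # combine the partial results

print(parallel_sum(list(range(1, 101))))  # 5050
```

The same structure (split, solve independently, combine) carries over to process pools and to distributed settings, where the workers live on different machines.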

4. Deterministic or non-deterministic

Deterministic algorithms solve a problem by following a predefined sequence of steps, while non-deterministic algorithms make a guess, typically guided by some heuristic rule, at each step toward the solution.

5. Exact or approximate

For many problems it is infeasible to compute an optimal solution. Algorithms that are guaranteed to return the optimal solution are called exact algorithms. In computer science, approximation algorithms are used for problems whose exact solutions cannot be computed efficiently; they are often applied to NP-hard problems (see Chapter 20 for a detailed description).
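As a concrete illustration, here is a minimal Python sketch of a classic approximation algorithm (the function name is my own): the matching-based 2-approximation for minimum vertex cover, which returns a cover at most twice the optimal size.

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for minimum vertex cover:
    repeatedly take both endpoints of any edge not yet covered."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)   # taking both endpoints guarantees the 2x bound
            cover.add(v)
    return cover

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
# every edge has at least one endpoint in the cover
print(all(u in cover or v in cover for u, v in edges))  # True
```

For this graph the optimal cover {0, 3} has size 2, and the algorithm returns a cover of size at most 4, in keeping with the factor-2 guarantee.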

16.4 Classification according to design method

In addition to implementation methods, algorithms can be classified according to design methods.

1. The greedy method

Greedy algorithms divide a problem into stages. At each stage, the decision that is best in the current state is chosen, without regard for its effect on later decisions; in other words, the algorithm commits to locally optimal choices as it runs. The greedy method assumes that a globally optimal solution can be reached through these locally optimal choices.
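A minimal Python sketch of the greedy idea (the function name is my own): making change with the fewest coins by always taking the largest coin that still fits.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """At each stage, take the largest coin that fits -- the locally optimal choice."""
    result = []
    for coin in coins:          # coins must be given in decreasing order
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```

This greedy strategy happens to be optimal for the coin system above, but not for every coin system (with coins {1, 3, 4} and amount 6, greedy gives 4+1+1 while the optimum is 3+3) -- a concrete reminder that the local-optimal assumption does not always yield a global optimum.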

2. The divide-and-conquer method

The divide-and-conquer method solves the problem in the following three steps:

1) Divide: split the original problem into several subproblems, which are smaller instances of the same type as the original problem.

2) Recurse: solve the subproblems recursively.

3) Combine: combine the solutions of the subproblems into a solution to the original problem.

Examples: merge sort and binary search algorithms.
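The three steps can be seen directly in a short Python sketch of merge sort:

```python
def merge_sort(a):
    if len(a) <= 1:                  # a list of 0 or 1 elements is already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # Divide and Recurse on each half
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Because each level of recursion halves the input and merging is linear, the whole algorithm runs in O(n log n) time.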

3. Dynamic programming method

Dynamic programming (DP) combines recursion with memoization. The difference between dynamic programming and divide-and-conquer is that the subproblems of divide-and-conquer are independent of one another, while the subproblems of DP overlap. By using memoization (a table that stores the answers to already-solved subproblems), dynamic programming reduces the complexity of many problems from exponential to polynomial (O(n²), O(n³), etc.). What distinguishes dynamic programming from plain recursion is precisely this memoization of recursive calls. When the subproblems are independent of each other and never recomputed, memoization does nothing to reduce complexity, so dynamic programming is not a one-size-fits-all approach.
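A minimal Python sketch of memoization: the Fibonacci numbers, whose plain recursive definition recomputes the same overlapping subproblems exponentially many times, become linear once each answer is cached.

```python
from functools import lru_cache

@lru_cache(maxsize=None)     # the memo table: each subproblem is solved only once
def fib(n):
    """Fibonacci via recursion plus memoization."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- plain recursion would need on the order of 2^50 calls
```

Without the cache, fib(50) is infeasible; with it, each of the 51 subproblems is computed exactly once, which is the essence of the exponential-to-polynomial reduction described above.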

4. Linear programming method

In linear programming, a linear function of the input variables is maximized (or minimized) subject to linear inequality constraints. Many problems (for example, maximum flow in a directed graph) can be solved with linear programming.

5. Reduction (transformation and solution)

The idea of this method is to transform a difficult problem into a known problem for which an efficient algorithm already exists. The objective is to find a reduction such that the complexity of the resulting algorithm is no higher than that of solving the known problem. For example, one algorithm for finding the median of a list first sorts the list and then takes the middle element of the sorted list. These two steps are the transformation and the solution, respectively.
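The median-by-sorting example above can be sketched in a few lines of Python (the function name is my own):

```python
def median(values):
    """Reduce the median problem to sorting: transform (sort), then solve (index)."""
    ordered = sorted(values)              # transformation step: O(n log n)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]            # solution step: pick the middle element
    return (ordered[n // 2 - 1] + ordered[n // 2]) / 2

print(median([7, 1, 3, 9, 5]))  # 5
```

The reduction costs O(n log n) because of the sort; a specialized selection algorithm can find the median in O(n), so a reduction is convenient but not always asymptotically best.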

16.5 Other classification methods

1. Classification by research field

In computer science, every field has its own problems and needs efficient algorithms. Examples include search algorithms, sorting algorithms, merging algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometry algorithms, combinatorial algorithms, machine learning, cryptography, parallel algorithms, data compression algorithms, and so on.

2. Classification by complexity

Algorithms can be classified by their running time relative to the input size. Some algorithms take linear time (O(n)), others take exponential time, and some never terminate. Note that a single problem may admit several algorithms of different complexity.

3. Randomized algorithms

Some algorithms make random choices. For certain problems, the fastest known solutions involve randomness; quicksort with a randomly chosen pivot is a classic example.
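A minimal Python sketch of the quicksort example, with the random choice made explicit:

```python
import random

def quicksort(a):
    """Quicksort with a randomly chosen pivot, giving expected O(n log n) time."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                      # the random selection step
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]
```

Choosing the pivot at random makes the expected running time O(n log n) on every input, whereas a fixed pivot choice (say, always the first element) degrades to O(n²) on already-sorted inputs.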

4. Branch and bound, enumeration, and backtracking

These methods are used in the field of artificial intelligence and will not be described here. For the backtracking method, refer to Chapter 2.

Note: The next three chapters discuss the greedy, divide-and-conquer, and dynamic programming methods. We focus on these three techniques because they solve more problems than the other techniques.