Abstract: Intelligent optimization algorithms, also known as modern heuristic algorithms, are algorithms with global optimization capability and strong universality that are well suited to parallel processing. This paper mainly introduces the genetic algorithm and the ant colony algorithm in detail.

This article is shared from the Huawei Cloud community post "Intelligent Optimization Algorithm (1) — Genetic Algorithm", original author: I am a big watermelon.

Intelligent optimization algorithms, also known as modern heuristic algorithms, are a class of algorithms with global optimization capability and strong universality that are well suited to parallel processing. These algorithms generally have a rigorous theoretical basis rather than relying solely on expert experience, and in theory they can find the optimal solution or a near-optimal solution within a certain amount of time. Commonly used intelligent optimization algorithms include the genetic algorithm, simulated annealing, tabu search, particle swarm optimization, and the ant colony algorithm.

This paper mainly introduces the genetic algorithm and the ant colony algorithm in detail.

1. Genetic algorithm

The Genetic Algorithm (GA) is a global optimization algorithm that simulates genetics and evolution of organisms in the natural environment: it adapts by adjusting its genetic parameters and keeps searching for the global optimum through random mutation. Its basic idea is "survival of the fittest", and it is one of the most widely used and most effective intelligent optimization algorithms.

1.1 Coding method

The algorithm encodes individuals (also called solutions) using binary coding (01 coding), natural-number coding, real-number coding, or tree coding. When computing an individual's fitness, decoding is needed to convert between the problem's solution space and the algorithm's search space.
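As a concrete illustration (a sketch under assumptions, not part of the original article), a real-valued variable x in [lo, hi] can be binary-encoded as a fixed-length 01 string and decoded back when the fitness is computed:

```python
# Minimal sketch: binary (01) encoding and decoding of a real variable.
# Assumes x lies in [lo, hi] and is represented by n_bits bits.

def encode(x, lo, hi, n_bits):
    """Map a real value x in [lo, hi] to a binary string of length n_bits."""
    scale = (2 ** n_bits - 1) / (hi - lo)
    return format(round((x - lo) * scale), f"0{n_bits}b")

def decode(bits, lo, hi):
    """Map a binary string back to a real value in [lo, hi]."""
    n_bits = len(bits)
    return lo + int(bits, 2) * (hi - lo) / (2 ** n_bits - 1)

print(encode(0.5, 0.0, 1.0, 8))     # e.g. '10000000'
print(decode("10000000", 0.0, 1.0)) # roughly 0.502
```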

1.2 Fitness function

Each individual has a fitness value, which quantitatively evaluates how good the individual is. The fitness function is the basis on which the algorithm implements "survival of the fittest". The fitness function needs to be set according to the objective function: let g(x) denote the objective function and G(x) the fitness function; the process of mapping the objective function g(x) to the fitness function G(x) is called calibration.

For a maximization problem, g(x) can be used directly as the fitness function, i.e., G(x) = g(x); for a minimization problem, G(x) = -g(x). Genetic algorithms require the fitness function to be non-negative, which neither of these two formulas guarantees, so a minimum or maximum bound and a piecewise definition need to be introduced.
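As a small illustration of one common piecewise calibration (the bound c_max and the function names below are assumptions for this sketch, not fixed by the article), a minimization objective can be shifted by an upper bound and clipped so the fitness stays non-negative:

```python
# Sketch: calibrating an objective g(x) into a non-negative fitness G(x).
# c_max is an assumed upper bound on g(x) over the search space.

def fitness_for_max(g_value):
    """Maximization: use the objective directly, clipped at zero."""
    return max(g_value, 0.0)

def fitness_for_min(g_value, c_max):
    """Minimization: G(x) = c_max - g(x) when positive, otherwise 0."""
    return max(c_max - g_value, 0.0)
```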

1.3 Selection Operation

Selection means picking individuals with large fitness values from the current population; these good individuals are likely (though not guaranteed) to become parents of the next generation. The larger the fitness value, the larger the probability of being selected as a parent.

There are many selection algorithms; the most basic one is the roulette-wheel algorithm:

P_i = \frac{F_i}{\sum_{i=1}^{N} F_i}

where P_i is the probability that individual i is selected, F_i is the fitness value of individual i, and N is the population size.

According to the selection probabilities P_i, the roulette wheel is divided into N sectors, and the central angle of the i-th sector is 2\pi P_i. A random number r is generated uniformly between 0 and 1; if r falls in the i-th sector, i.e., Q_{i-1} < r \le Q_i, where Q_i = \sum_{j=1}^{i} P_j is the cumulative probability, then individual i is selected. Repeating this N times selects N individuals.
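A minimal sketch of roulette-wheel selection following these formulas (the function name and the use of Python's random module are illustrative choices):

```python
import random

def roulette_select(population, fitness, n):
    """Select n individuals with probability proportional to their fitness F_i."""
    total = sum(fitness)
    cum, running = [], 0.0
    for f in fitness:
        running += f / total              # cumulative probability Q_i
        cum.append(running)
    selected = []
    for _ in range(n):
        r = random.random()               # random number r in [0, 1)
        for ind, q in zip(population, cum):
            if r <= q:                    # r falls in the i-th sector
                selected.append(ind)
                break
        else:
            selected.append(population[-1])  # guard against float round-off
    return selected
```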

1.4 Crossover Operation

Crossover recombines two individuals by exchanging parts of their chromosomes, thereby creating new individuals, i.e., new solutions. Individuals must be randomly paired before crossover.

In general, binary-encoded individuals use point crossover: one or more crossover points are randomly chosen on the two paired strings, and the substrings between them are exchanged to produce new strings.

Whether two individuals undergo crossover is determined by the crossover probability. A larger crossover probability lets the genetic algorithm generate more new individuals, maintains population diversity, and helps prevent premature convergence; but if the crossover probability is too large, the algorithm searches too many unnecessary solutions and consumes too much computing time. A typical value is around 0.9.
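The point crossover described above might look as follows for binary strings (a sketch; the default pc = 0.9 follows the typical value mentioned above):

```python
import random

def single_point_crossover(parent1, parent2, pc=0.9):
    """Exchange the tails of two equal-length 01 strings after a random cut point."""
    if random.random() > pc or len(parent1) < 2:
        return parent1, parent2                 # no crossover this time
    point = random.randint(1, len(parent1) - 1) # crossover point
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```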

1.5 Mutation Operation

In biological evolution, some chromosomes mutate and produce new chromosomes, so mutation is another important way to generate new solutions. Crossover corresponds to global exploration, while mutation corresponds to local exploitation; together they form the two essential search capabilities of an intelligent optimization algorithm.

Whether an individual mutates depends on the mutation probability. If the probability is too low, some useful genes cannot enter the chromosomes and the quality of the solution cannot be improved; if it is too high, the search becomes essentially random and the algorithm loses the ability to learn from past search experience. In general, the mutation probability is about 0.005.
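A corresponding sketch of bit-flip mutation (the default pm = 0.005 follows the typical value above; the function name is illustrative):

```python
import random

def bit_flip_mutation(individual, pm=0.005):
    """Flip each bit of a 01 string independently with mutation probability pm."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < pm else bit
        for bit in individual
    )
```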

It is worth noting that Rudolph proved, using Markov chain theory, that a genetic algorithm using only the three operations of selection, crossover and mutation cannot converge to the global optimal solution, whereas a genetic algorithm that uses an elite-retention strategy is globally convergent.

The overall process of the algorithm is shown in the figure below:
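As a stand-in for the flow chart, the self-contained toy sketch below strings the operators together on the OneMax problem (maximize the number of 1-bits). The problem, parameter values and helper names are illustrative assumptions, and the loop keeps the elite individual each generation in line with Rudolph's result above:

```python
import random

# Toy end-to-end GA on OneMax (maximize the number of 1-bits in a 01 string).
# All parameter values are illustrative, not prescribed by the article.
N, L_BITS, PC, PM, GENERATIONS = 30, 20, 0.9, 0.005, 100

def fitness(ind):
    return ind.count("1")

def select(pop):                           # roulette-wheel selection
    total = sum(fitness(i) for i in pop) or 1
    r = random.uniform(0, total)
    s = 0.0
    for ind in pop:
        s += fitness(ind)
        if r <= s:
            return ind
    return pop[-1]

def crossover(a, b):                       # single-point crossover
    if random.random() < PC:
        p = random.randint(1, L_BITS - 1)
        return a[:p] + b[p:], b[:p] + a[p:]
    return a, b

def mutate(ind):                           # bit-flip mutation
    return "".join(c if random.random() >= PM else ("1" if c == "0" else "0")
                   for c in ind)

pop = ["".join(random.choice("01") for _ in range(L_BITS)) for _ in range(N)]
for _ in range(GENERATIONS):
    elite = max(pop, key=fitness)          # elite retention
    nxt = [elite]
    while len(nxt) < N:
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:N]
print(max(pop, key=fitness))
```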

1.6 Algorithm Analysis

The key to a good intelligent algorithm lies in the balance between global exploration and local exploitation. The purpose of global exploration is to explore the solution space more comprehensively, while the main purpose of local exploitation is to search known regions more finely, hoping to obtain new solutions of better quality.

The genetic algorithm can balance global exploration and local exploitation by setting the selection pressure. In the initial stage, a lower selection pressure gives the algorithm better global exploration ability and allows a wide-area search; in the later stage, a higher selection pressure makes the algorithm perform a relatively fine local search.

The selection pressure can be adjusted through fitness function calibration and the selection strategy.

Fitness function calibration: in the early stage of the algorithm, the differences in individual fitness should be narrowed to reduce the elimination rate; in the final stage, the differences should be enlarged so that the algorithm concentrates its search on the solution regions of high-fitness individuals and converges faster. Common methods, illustrated in the sketch after the list, are:

  • Linear scaling: H = aF + b

  • \sigma truncation: H = F + (\hat{F} - c\sigma)

  • Power-law scaling: H = F^{\alpha}
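The three calibration formulas above might be implemented as follows (a sketch only; the constants a, b, c and alpha are problem-dependent, and the default values here are placeholders, not recommendations from the article):

```python
import statistics

def linear_scaling(F, a=1.5, b=0.0):
    """H = aF + b applied to a list of fitness values."""
    return [a * f + b for f in F]

def sigma_truncation(F, c=2.0):
    """H = F + (F_hat - c*sigma), truncated at zero to stay non-negative."""
    mean, sigma = statistics.mean(F), statistics.pstdev(F)
    return [max(f + (mean - c * sigma), 0.0) for f in F]

def power_scaling(F, alpha=1.005):
    """H = F ** alpha."""
    return [f ** alpha for f in F]
```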

Selection strategy: a low selection pressure selects many kinds of individuals, strengthens the search of unknown solution regions and prevents the algorithm from falling into a local extremum, but the optimization speed is slow; a high selection pressure selects excellent individuals and speeds up optimization, but population diversity decreases, which reduces the probability of finding the global optimum. Besides the roulette-wheel algorithm, selection strategies include:

  • Rank-based (graded) selection

  • Tournament selection

  • Boltzmann selection

2. Ant colony algorithm

2.1 Ant colony optimization algorithm

Ant Colony Optimization (ACO) is a simulation algorithm that originated from nature; its idea draws on the behavior of ant colonies during foraging. The ant colony algorithm performs well on the TSP, the quadratic assignment problem, the graph coloring problem, the vehicle scheduling problem, and the load-balancing problem in communication networks.

Ants in nature are nearly blind and rely on pheromones left in the environment to decide where to go. An isolated ant moves along the pheromone trail left by its companions and at the same time releases its own pheromone, thereby reinforcing the amount of pheromone on that route. As more and more ants pass along a route, a better route emerges (this path is not necessarily the shortest, but it is good enough for NP-hard problems).

2.1.1 Algorithm model

Take the Traveling Salesman Problem (TSP) as an example; in graph theory it is known as the minimum Hamiltonian cycle problem.

Write G = (V, E) for a weighted graph, where V = \{1, 2, 3, \dots, n\} is the vertex set and E is the edge set; the distance d_{ij} between each pair of vertices is known (d_{ij} > 0, d_{ii} = \infty, i, j \in V). Let x_{ij} = 1 if edge (i, j) is on the optimal tour, and x_{ij} = 0 otherwise.

Then the classic TSP problem can be expressed as follows:

\min Z = \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} x_{ij}

Obey the following constraints:

  • Constraint 1: \sum_{j=1}^{n} x_{ij} = 1, \quad i \in V

  • Constraint 2: \sum_{i=1}^{n} x_{ij} = 1, \quad j \in V

  • Constraint 3: \sum_{i \in S} \sum_{j \in S} x_{ij} \le |S| - 1, \quad \forall S \subset V

  • Constraint 4: x_{ij} \in \{0, 1\}

Here |S| denotes the number of vertices contained in the set S. Constraints 1 and 2 require each vertex to have exactly one outgoing edge and one incoming edge, i.e., each vertex is entered and left exactly once; constraint 3 guarantees that no sub-tours are created.

When d_{ij} = d_{ji}, the problem is called symmetric TSP. If the inequality d_{ij} + d_{jk} \ge d_{ik} holds for all 1 \le i, j, k \le n, the problem is said to satisfy the triangle inequality and is denoted \Delta TSP. The triangle inequality is satisfied in many cases; even when it is not, the instance can be converted to its closure, from which the TSP optimal solution can be obtained.

Basic model of ant colony optimization algorithm:

1. The ant colony always looks for the feasible solution of minimum cost

2. Ants have memory: they store the information of the path constructed so far, use it to build feasible solutions, evaluate the quality of solutions, and retrace the path

3. An ant in its current state can move to any point in the feasible domain

4. Each ant is assigned an initial state and one or more termination conditions

5. Each ant constructs its solution incrementally, moving from the initial state through feasible states; the construction ends when the ant satisfies at least one termination condition

6. Ants move to neighboring nodes according to a probabilistic decision rule (written out after this list)

7. The pheromone trail is updated after each move; this is called "one-step online pheromone update"

8. Once a solution has been constructed, the ant retraces its path and updates the pheromone trail; this is called "online delayed pheromone update"
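For reference, the probabilistic decision rule in item 6 is usually written as follows in the classic Ant System (this is standard background, not a formula stated explicitly in the original article): \tau_{ij} is the pheromone amount on edge (i, j), \eta_{ij} = 1 / d_{ij} is the heuristic visibility, allowed_k is the set of vertices ant k may still visit, and \alpha and \beta weight the two factors:

p_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{s \in allowed_k} \tau_{is}^{\alpha}\,\eta_{is}^{\beta}}, \quad j \in allowed_k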

2.1.2 Algorithm analysis

The complexity of the algorithm is O(nc \cdot n^2 \cdot m), where m is the number of ants, nc is the number of iterations (searches), and n is the number of vertices. The behavior of the algorithm is affected by parameters such as \alpha and \beta. The pheromone persistence coefficient (commonly denoted \rho) reflects how long the pheromone trail lasts: if its value is too small, the information disappears too fast; if it is too large, the algorithm easily falls into a local optimum, so it is usually set to about 0.7.

The basic ant colony optimization algorithm can be combined with other heuristic algorithms; the most typical combination is an embedded local search. After each ant has constructed its own route, a local adjustment method (2-opt, 3-opt) is used to improve it. Combinations with genetic algorithms, simulated annealing and tabu search are also effective to some degree.

The main steps of the hybrid ant colony optimization algorithm are as follows (a code sketch is given after the list):

1. Begin

2. Initialize the ants;

3. LOOP:

4.     Construct ant paths;

5.     Apply the local search algorithm to each ant's solution;

6.     Update the pheromone trails;

7.     If the iteration limit has not been reached, go to LOOP;

8. Output the current best solution;

9. End
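A compact Python sketch of this hybrid loop on a small random TSP instance (all parameter values, the 2-opt pass and the function names are assumptions for illustration, not the article's reference implementation):

```python
import random

# Hybrid ACO sketch on a small random TSP instance (illustrative parameters).
n, m, nc = 10, 10, 50                     # vertices, ants, iterations
alpha, beta, rho, Q = 1.0, 2.0, 0.7, 1.0  # weights, persistence, deposit constant

pts = [(random.random(), random.random()) for _ in range(n)]
d = [[((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2) ** 0.5
      for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]       # pheromone trails

def tour_len(t):
    return sum(d[t[i]][t[(i + 1) % n]] for i in range(n))

def build_tour():
    """Ant path construction using the probabilistic decision rule."""
    tour = [random.randrange(n)]
    unvisited = set(range(n)) - {tour[0]}
    while unvisited:
        i, cand = tour[-1], list(unvisited)
        w = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
        nxt = random.choices(cand, weights=w)[0]
        tour.append(nxt)
        unvisited.discard(nxt)
    return tour

def two_opt(t):
    """One pass of 2-opt local search: reverse segments that shorten the tour."""
    best = t[:]
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_len(cand) < tour_len(best):
                best = cand
    return best

best = None
for _ in range(nc):
    tours = [two_opt(build_tour()) for _ in range(m)]              # construct + local search
    tau = [[rho * tau[i][j] for j in range(n)] for i in range(n)]  # evaporation
    for t in tours:                                                # pheromone deposit
        length = tour_len(t)
        for i in range(n):
            a, b = t[i], t[(i + 1) % n]
            tau[a][b] += Q / length
            tau[b][a] += Q / length
    gen_best = min(tours, key=tour_len)
    if best is None or tour_len(gen_best) < tour_len(best):
        best = gen_best

print(round(tour_len(best), 3), best)
```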
