One, theoretical basis

1 Introduction

Under the action of heredity, selection and variation, natural organisms survive through the survival of the fittest and constantly evolve from lower to higher forms. It has long been noted that this evolutionary process can be modeled to form optimization algorithms, and in recent years evolutionary computing has attracted wide attention. Differential Evolution (DE) is an evolutionary computing technique [1]. Originally conceived to solve Chebyshev polynomial fitting problems, it was later found to be effective for complex optimization problems in general. DE is an optimization algorithm based on swarm intelligence theory: the search is driven by cooperation and competition among the individuals of a population. Like other evolutionary methods, it retains a population-based global search strategy, but it adopts real-number coding, a simple difference-based mutation operation, and a one-to-one competitive survival strategy, which reduces the complexity of the genetic operations. At the same time, DE has a memory ability that lets it dynamically track the current search state and adjust its strategy accordingly. It has strong global convergence ability and robustness, and it does not require characteristic information about the problem, which makes it suitable for complex optimization problems that are difficult or even impossible to solve with conventional mathematical programming methods [2-5]. As an efficient parallel search algorithm, DE therefore has important academic significance and engineering value, both in theoretical and in applied research.
At present, differential evolution algorithms have been applied in many fields, such as artificial neural networks, electric power, mechanical design, robotics, signal processing, bioinformatics, economics, modern agriculture and operations research. However, although DE has been widely studied, its research results remain rather scattered and unsystematic compared with those of other evolutionary algorithms, and no major breakthrough has yet been made on the theoretical side.

Differential evolution is a stochastic heuristic search algorithm that is easy to use and has strong robustness and global optimization ability. It is a random search algorithm from a mathematical point of view and an adaptive iterative optimization process from an engineering point of view. Besides good convergence, DE is very easy to understand and implement: it contains only a few control parameters, and their values can remain unchanged throughout the iteration process. DE is a self-organizing minimization method that requires very little input from the user. Its key idea differs from traditional evolutionary methods: whereas traditional methods determine vector perturbations with a predetermined probability distribution function, DE's self-organizing procedure perturbs an existing vector using two randomly selected, distinct vectors from the population, and every vector in the population is perturbed in this way. Because these random perturbations of population vectors can be performed independently, the scheme is inherently parallel. If the objective function value of a new vector is lower than that of its predecessor, the new vector replaces it. Like other evolutionary algorithms, DE operates on a population of candidate solutions, but its population propagation scheme is different: it generates a new parameter vector by adding the weighted difference between two population members to a third member, an operation called "mutation". The parameters of the mutant vector are then mixed with those of a predetermined target vector according to certain rules to generate a trial vector, an operation called "crossover".
Finally, if the cost function value of the trial vector is lower than that of the target vector, the trial vector replaces the target vector in the next generation; this operation is called "selection". Every member of the population serves once as a target vector, so the next generation has the same number of individuals. The best parameter vector of each generation is evaluated during evolution to record the progress of the minimization. In this way, generating new individuals through random difference perturbations yields very good convergence and guides the search toward the global optimal solution [6-7].
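As a sketch of the mutation, crossover and selection operations described above (written in Python/NumPy rather than the MATLAB used later in this article; all function and variable names are illustrative):

```python
import numpy as np

def de_generation(pop, fitness, func, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (a sketch, not a fixed API).

    pop: (NP, D) array of candidate solutions.
    fitness: (NP,) array of objective values for pop.
    func: objective function to minimize.
    """
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    for i in range(NP):
        # Mutation: add the weighted difference of two random vectors
        # to a third random vector, all distinct from the target i.
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Crossover ("bin"): mix mutant and target parameters; the j_rand
        # index guarantees at least one component comes from the mutant.
        j_rand = rng.integers(D)
        mask = rng.random(D) <= CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # Selection: one-to-one competition with the target vector.
        f_trial = func(trial)
        if f_trial < fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```

Note that selection is greedy and one-to-one, so the best objective value in the population can never get worse from one generation to the next.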

2 Characteristics of the differential evolution algorithm

In the roughly 20 years since it was proposed, the differential evolution algorithm has been widely studied and successfully applied. The algorithm has the following characteristics:
(1) Simple structure and ease of use. DE mainly relies on the differential mutation operator for its genetic operations; since this operator involves only vector addition and subtraction, it is easy to implement. The algorithm adopts probabilistic transition rules and needs no deterministic rules. In addition, DE has few control parameters, the influence of these parameters on algorithm performance has been studied to some extent, and some guiding suggestions are available, so users can conveniently choose good parameter settings for their problem.
(2) Superior performance. DE has good reliability, high efficiency and robustness. For large-scale, nonlinear and non-differentiable continuous problems, its solving efficiency is better than that of other evolutionary methods, and many researchers continue to improve the algorithm to raise its performance further.
(3) Adaptivity. The differential mutation operator of DE can be either a fixed constant or an adaptive mutation step and search direction that is automatically adjusted according to the objective function, thereby improving search quality.
(4) Inherent parallelism. DE can perform cooperative search and can use both individual local information and population global information to guide further search. Under the same accuracy requirement, DE converges faster.
(5) Generality. The algorithm can operate directly on structured objects without relying on problem-specific information and imposes no restrictions on the objective function. DE is very simple to operate and easy to program, and it is especially suitable for solving high-dimensional function optimization problems.

3 Types of differential evolution algorithms

3.1 Basic differential evolution algorithm

The operation procedure of the basic differential evolution algorithm is as follows [8]: (1) initialization; (2) mutation; (3) crossover; (4) selection; (5) boundary condition processing.

Initialization. The differential evolution algorithm uses NP real-valued parameter vectors of dimension D as the population of each generation; each individual can be expressed as x_i(0), i = 1, 2, ..., NP, with each component initialized randomly within the search bounds. For boundary condition processing, one method is to regenerate any component that violates the boundary constraints randomly within the bounds; another method is boundary absorption, in which an individual value exceeding the boundary constraint is set to the adjacent boundary value.
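A minimal Python/NumPy sketch of the initialization step and the two boundary-handling options just mentioned (function names are illustrative assumptions, not from the text):

```python
import numpy as np

def init_population(NP, D, lower, upper, rng=None):
    # Uniform random initialization of NP individuals of dimension D
    # inside the box [lower, upper].
    rng = rng or np.random.default_rng()
    return lower + rng.random((NP, D)) * (upper - lower)

def repair_random(pop, lower, upper, rng=None):
    # Option 1: out-of-range components are regenerated uniformly
    # at random inside the bounds.
    rng = rng or np.random.default_rng()
    bad = (pop < lower) | (pop > upper)
    pop[bad] = lower + rng.random(int(bad.sum())) * (upper - lower)
    return pop

def repair_absorb(pop, lower, upper):
    # Option 2 (boundary absorption): out-of-range values are set
    # to the adjacent boundary value.
    return np.clip(pop, lower, upper)
```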

3.2 Other forms of the differential evolution algorithm

The above is the most basic differential evolution procedure. Practical applications have also developed several variant forms, distinguished with the notation DE/x/y/z, where x specifies whether the base vector is chosen at "random" or as the "best" individual; y is the number of difference vectors used; and z indicates how the crossover operates. The crossover scheme described earlier is denoted "bin". Using this notation, the basic differential evolution strategy is DE/rand/1/bin; other forms also exist [5].

3.3 Improved differential evolution algorithm: adaptive differential evolution

As an efficient parallel optimization method, differential evolution has developed rapidly, and many improved versions have appeared. One example is a differential evolution algorithm with an adaptive mutation operator [9].
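The DE/x/y/z variants and the adaptive operator can be sketched as follows (Python/NumPy; the dispatcher and its names are illustrative, while the adaptive schedule mirrors the one used in the MATLAB listing later in this article, which decays F from 2·F0 at the first generation toward F0 at the last):

```python
import numpy as np

def mutate(pop, best, i, strategy, F, rng):
    """Hypothetical dispatcher for a few common DE/x/y mutation strategies."""
    NP = len(pop)
    # Five mutually distinct indices, all different from the target i.
    r1, r2, r3, r4, r5 = rng.choice([j for j in range(NP) if j != i], 5, replace=False)
    if strategy == "DE/rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "DE/best/1":
        return best + F * (pop[r1] - pop[r2])
    if strategy == "DE/rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    if strategy == "DE/best/2":
        return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    raise ValueError(strategy)

def adaptive_F(F0, G, gen):
    # Adaptive mutation operator: large F early (wide global search),
    # decaying toward F0 in late generations (fine local refinement).
    lam = np.exp(1 - G / (G + 1 - gen))
    return F0 * 2 ** lam
```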

4 Process of the differential evolution algorithm

The differential evolution algorithm adopts real-number coding, a simple difference-based mutation operation and a one-to-one competitive survival strategy. The specific steps are as follows: (1) Determine the control parameters of the algorithm and the specific strategy to be adopted; the control parameters include the population size, mutation operator, crossover operator, maximum number of generations, termination condition, and so on. (2) Randomly generate the initial population and set the generation counter k = 1. (3) Evaluate the initial population, that is, calculate the objective function value of each individual. (4) Check whether the termination condition or the maximum number of generations has been reached: if so, stop and output the best individual found as the solution; otherwise, go to the next step. (5) Perform mutation and crossover, and process boundary conditions to obtain a temporary population. (6) Evaluate the temporary population by calculating the objective function value of each of its individuals. (7) Perform one-to-one selection between the individuals of the temporary population and the corresponding individuals of the original population to obtain the new population. (8) Set k = k + 1 and go to step (4). The operation process of the differential evolution algorithm is shown in Figure 3.1.
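The steps above can be sketched end to end as a compact Python/NumPy analogue of the MATLAB listing in the next section (the function name and default parameter values are illustrative assumptions):

```python
import numpy as np

def differential_evolution(func, D, NP=50, F=0.5, CR=0.1, G=200,
                           lower=-20.0, upper=20.0, tol=1e-6, seed=0):
    """Full DE/rand/1/bin flow: initialize -> evaluate -> per generation
    (mutate, cross over, repair bounds, select) -> stop on tolerance
    or on the maximum number of generations."""
    rng = np.random.default_rng(seed)
    x = lower + rng.random((NP, D)) * (upper - lower)   # step (2): init
    fit = np.array([func(v) for v in x])                # step (3): evaluate
    for gen in range(G):                                # step (4): termination
        if fit.min() < tol:
            break
        for i in range(NP):                             # steps (5)-(7)
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = x[r1] + F * (x[r2] - x[r3])
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True                # keep >= 1 mutant gene
            trial = np.where(mask, mutant, x[i])
            trial = np.clip(trial, lower, upper)        # boundary handling
            f_trial = func(trial)
            if f_trial < fit[i]:                        # one-to-one selection
                x[i], fit[i] = trial, f_trial
    best = int(fit.argmin())
    return x[best], fit[best]
```

A usage example: minimizing the sphere function sum(x^2), the same fitness used in the MATLAB code below, should drive the objective close to zero.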

Control parameters have a great influence on any global optimization algorithm, and there are some empirical rules for selecting the control parameters of the differential evolution algorithm.

Population size NP. In general, the larger the population size NP, the more individuals there are, the better the diversity of the population, and the stronger the optimization ability; however, the computational cost also increases, so NP cannot be made arbitrarily large. According to experience, a reasonable choice of NP is between 5D and 10D, and NP must be at least 4 to ensure that the differential evolution algorithm has enough distinct mutation vectors.

Mutation operator F. The mutation operator F ∈ [0, 2] is a real constant factor that determines the amplification of the difference vector. If F is small, the algorithm may converge prematurely. As F increases, the ability to avoid getting trapped in local optima increases, but when F > 1 it becomes very difficult for the algorithm to converge quickly to the optimum, because when the perturbation of the difference vector is greater than the distance between two individuals, the convergence of the population becomes very poor. Current research shows that values of F below 0.4 and above 1 are only occasionally effective; F = 0.5 is usually a good starting choice. If the population converges prematurely, F or NP should be increased.

Crossover operator CR. The crossover operator CR is a real number in the range [0, 1] that controls the probability that a trial vector parameter comes from the randomly chosen mutant vector rather than from the original vector. The larger CR is, the more likely crossover is to occur. A good choice for CR is 0.1, but a larger CR usually accelerates convergence; to see whether a quick solution is possible, try CR = 0.9 or CR = 1.0 first.

Maximum number of generations G. The maximum number of generations G is a parameter representing the termination condition of the differential evolution algorithm: after running for the specified number of generations, the algorithm stops and outputs the best individual in the current population as the optimal solution of the problem. Generally, G is set between 100 and 500.

In addition to the maximum number of generations, other criteria can be added to the termination condition of the differential evolution algorithm. Commonly, the program terminates when the objective function value falls below a threshold, usually 10^-6. Among the above parameters, F and CR, like NP, are constants during the search. In general, F and CR affect the convergence speed and robustness of the search; their optimal values depend not only on the characteristics of the objective function but also on NP. Suitable values of F, CR and NP can usually be found by trial and error, experimenting with different values and observing the resulting errors.
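The trial-and-error tuning just described can be automated as a toy grid search over (F, CR) pairs. This sketch assumes a DE solver callable with keyword parameters F, CR, G and seed that returns (best vector, best objective value); that signature is an illustrative assumption, not a fixed API:

```python
import itertools

def tune_parameters(de_solver, func, D, Fs=(0.4, 0.5, 0.9), CRs=(0.1, 0.5, 0.9)):
    """Run the solver once per (F, CR) pair with a fixed seed and
    keep the setting that yields the best final objective value.
    A real study would average several independent runs per setting."""
    results = {}
    for F, CR in itertools.product(Fs, CRs):
        _, fbest = de_solver(func, D, F=F, CR=CR, G=100, seed=42)
        results[(F, CR)] = fbest
    best_setting = min(results, key=results.get)
    return best_setting, results
```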

Two, case and complete source code

1 Case

2 Complete code

%%%%%%%%%%%%%%%%%%% Differential evolution algorithm for function extremum %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% Initialization %%%%%%%%%%%%%%%%%%%
clear all;                  % clear all variables
close all;                  % close all figures
clc;                        % clear the command window
NP=50;                      % number of individuals
D=10;                       % dimension of the variables
G=200;                      % maximum number of generations
F0=0.4;                     % initial mutation operator
CR=0.1;                     % crossover operator
Xs=20;                      % upper bound
Xx=-20;                     % lower bound
yz=10^-6;                   % threshold
%%%%%%%%%%%%%%%%%%% Initialize population %%%%%%%%%%%%%%%%%%%
x=zeros(D,NP);              % initial population
v=zeros(D,NP);              % mutant population
u=zeros(D,NP);              % trial population
x=rand(D,NP)*(Xs-Xx)+Xx;    % random initialization within bounds
%%%%%%%%%%%%%%%%%%% Calculate objective function %%%%%%%%%%%%%%%%%%%
for m=1:NP
    Ob(m)=func1(x(:,m));
end
trace(1)=min(Ob);
%%%%%%%%%%%%%%%%%%% Differential evolution loop %%%%%%%%%%%%%%%%%%%
for gen=1:G
    %%%%%%%%%%%%%%%%%%% Adaptive mutation operator %%%%%%%%%%%%%%%%%%%
    lamda=exp(1-G/(G+1-gen));
    F=F0*2^lamda;
    %%%%%%%%%%%%%%%%%%% r1, r2, r3 and m mutually distinct %%%%%%%%%%%%%%%%%%%
    for m=1:NP
        r1=randi([1,NP],1,1);
        while (r1==m)
            r1=randi([1,NP],1,1);
        end
        r2=randi([1,NP],1,1);
        while (r2==m)||(r2==r1)
            r2=randi([1,NP],1,1);
        end
        r3=randi([1,NP],1,1);
        while (r3==m)||(r3==r1)||(r3==r2)
            r3=randi([1,NP],1,1);
        end
        v(:,m)=x(:,r1)+F*(x(:,r2)-x(:,r3));
    end
    %%%%%%%%%%%%%%%%%%% Crossover operation %%%%%%%%%%%%%%%%%%%
    r=randi([1,D],1,1);
    for n=1:D
        cr=rand(1);
        if (cr<=CR)||(n==r)
            u(n,:)=v(n,:);
        else
            u(n,:)=x(n,:);
        end
    end
    %%%%%%%%%%%%%%%%%%% Boundary condition processing %%%%%%%%%%%%%%%%%%%
    for n=1:D
        for m=1:NP
            if (u(n,m)<Xx)||(u(n,m)>Xs)
                u(n,m)=rand*(Xs-Xx)+Xx;
            end
        end
    end
    %%%%%%%%%%%%%%%%%%% Selection operation %%%%%%%%%%%%%%%%%%%
    for m=1:NP
        Ob1(m)=func1(u(:,m));
    end
    for m=1:NP
        if Ob1(m)<Ob(m)
            x(:,m)=u(:,m);
        end
    end  
    for m=1:NP
        Ob(m)=func1(x(:,m));
    end
    trace(gen+1)=min(Ob);
    if min(Ob)<yz
        break
    end
end
[SortOb,Index]=sort(Ob);
x=x(:,Index);
X=x(:,1);                   % optimal variable
Y=min(Ob);                  % optimal value
%%%%%%%%%%%%%%%%%%% Plotting %%%%%%%%%%%%%%%%%%%
figure
plot(trace);
xlabel('Number of iterations')
ylabel('Objective function value')
title('Fitness evolution curve')

%%%%%%%%%%%%%%%%%%% Fitness function (saved as func1.m) %%%%%%%%%%%%%%%%%%%
function result=func1(x)
summ=sum(x.^2);
result=summ;

Three, problem solving process and operation results

1 problem solving process

2 Operation Results

Four, Matlab version and references

1 Matlab version

2014a

2 References

Intelligent Optimization Algorithms and MATLAB Examples (2nd Edition), by Bao Ziyang, Yu Jizhou and Yang Shan, Publishing House of Electronics Industry.