Particle swarm optimization

Particle swarm optimization (PSO) was proposed in 1995 by Eberhart and Kennedy, inspired by a study of the foraging behaviour of bird flocks. Its core idea is to exploit information sharing among the individuals of a group so that the movement of the whole group evolves from disorder to order in the solution space, eventually arriving at the optimal solution of the problem. Consider this scene: a flock of birds is foraging, and there is a cornfield in the distance. None of the birds knows exactly where the field is, but each knows how far away it is. The simplest and most effective strategy for finding the cornfield is therefore to search the area around the bird currently nearest to it.

In PSO, each candidate solution of the optimization problem is a bird in the search space, called a “particle”, and the optimal solution of the problem corresponds to the “cornfield” the flock is looking for. Every particle has a position vector (its location in the solution space) and a velocity vector (which determines the direction and speed of its next flight), and the fitness of its current position can be computed from the objective function; this fitness can be understood as the distance to the “cornfield”. In each iteration, the particles in the population learn not only from their own experience (their historical best positions) but also from the “experience” of the best particle in the population, and use both to decide how to adjust their direction and speed in the next iteration. In this way, the whole population of particles gradually approaches the optimal solution.

The above explanation may seem abstract, so a simple example illustrates it.

Suppose two people, each in a boat on a lake, can communicate with each other and can measure the depth at their own position; the goal is to find the deepest point of the lake. Starting from the initial positions shown in the picture above, since it is deeper on the right, the person on the left moves their boat to the right.

Now it is deeper on the left, so the person on the right moves their boat a little to the left.

The process is repeated until the two boats meet,

at which point a locally optimal solution has been obtained.

Each individual is represented as a particle. At time t, each individual has a position x(t) and a velocity (direction) v(t).

Let p(t) be the best solution found by the individual itself up to time t, and g(t) the best solution found by the whole population up to time t. The next position is determined by x, p and g through the classic update:

v(t+1) = v(t) + c1*r1*(p(t) − x(t)) + c2*r2*(g(t) − x(t))
x(t+1) = x(t) + v(t+1)

By continuously learning from their own historical information and that of the population, the particles can find the optimal solution of the problem.

However, subsequent studies showed a problem with the original formula above: the update of v is too random, so the PSO algorithm has strong global optimization ability but poor local search ability. In practice we want PSO to have strong global optimization ability in the early iterations, while in the later stage the population should have stronger local search ability. To address this drawback, Shi and Eberhart modified the formula by introducing an inertia weight, proposing the inertia-weight model of PSO:

v(t+1) = w*v(t) + c1*r1*(p(t) − x(t)) + c2*r2*(g(t) − x(t))
x(t+1) = x(t) + v(t+1)

The components of the formula are as follows.

w is the PSO inertia weight, taking values in the interval [0, 1]. Applications generally use a decreasing schedule: at the beginning w = 0.9, which gives PSO strong global optimization ability; as the iterations deepen, w is gradually reduced, giving PSO strong local optimization ability; at the end of the run, w = 0.1. The parameters c1 and c2 are called learning factors and are generally set to 1.49445. r1 and r2 are random values uniformly distributed in [0, 1].
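As a sketch, the inertia-weight update and the decreasing w schedule described above can be written in Python (the function names and defaults here are illustrative, not from the original post):

```python
import numpy as np

def inertia_weight(gen, max_gen, w_start=0.9, w_end=0.1):
    """Linearly decrease w from w_start to w_end over the run (assumed schedule)."""
    return w_start - (w_start - w_end) * gen / max_gen

def update_particle(x, v, pbest, gbest, w, c1=1.49445, c2=1.49445, rng=None):
    """One inertia-weight PSO velocity/position update for a single particle."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # random factors in [0, 1], one per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

With pbest = gbest = x and zero velocity, the particle stays put, matching the intuition that a particle already at both bests has no pull on it.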

The overall framework of the particle swarm optimization algorithm is as follows:

Step 1 Population initialization: initialize randomly, or design a specific initialization method for the problem being optimized; then compute each individual's fitness value and select the local optimal position vector of each individual and the global optimal position vector of the population.

Step 2 Set iteration: set the maximum number of iterations and set the current iteration counter to 1.

Step 3 Velocity update: update the velocity vector of each individual.

Step 4 Position update: update the position vector of each individual.

Step 5 Update local and global position vectors: update the local optimal solution of each individual and the global optimal solution of the population.

Step 6 Termination check: determine whether the maximum number of iterations has been reached. If so, output the global optimal solution; otherwise, jump to Step 3 and continue iterating.
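A minimal Python sketch of Steps 1–6 for a minimization objective (all names, bounds, and defaults here are illustrative assumptions, not the MATLAB code shown later in this post):

```python
import numpy as np

def pso(f, dim, n_particles=30, max_iter=200, lb=-5.12, ub=5.12,
        c1=1.49445, c2=1.49445, w_start=0.9, w_end=0.1, seed=0):
    """Minimal inertia-weight PSO; f maps an array of shape (dim,) to a scalar."""
    rng = np.random.default_rng(seed)
    # Step 1: random initialization, evaluate fitness, pick pbest/gbest
    x = rng.uniform(lb, ub, (n_particles, dim))
    vmax = 0.2 * (ub - lb)                      # common velocity clamp choice
    v = rng.uniform(-vmax, vmax, (n_particles, dim))
    fit = np.apply_along_axis(f, 1, x)
    pbest, pbest_val = x.copy(), fit.copy()
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    # Step 2: iterate up to max_iter generations
    for it in range(max_iter):
        w = w_start - (w_start - w_end) * it / max_iter
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step 3: velocity update, clamped to [-vmax, vmax]
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        # Step 4: position update, kept inside the search bounds
        x = np.clip(x + v, lb, ub)
        # Step 5: update local and global optima
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_val
        pbest[better], pbest_val[better] = x[better], fit[better]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    # Step 6: maximum iterations reached; return the global best found
    return gbest, gbest_val
```

On a simple 2-D sphere function this converges quickly to a point near the origin.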

Applying particle swarm optimization mainly comes down to designing the iteration operators for the velocity and position vectors. The effectiveness of these operators determines the performance of the whole PSO algorithm, so designing them is the focus, and the difficulty, of applying PSO.

2. Comprehensive Learning Particle Swarm Optimization (CLPSO)

CLPSO improves only the velocity-update term of the classic particle swarm optimization algorithm: the historical-optimum (pbest) term is replaced with a new comprehensive learning exemplar, and the global-optimum (gbest) term is removed.
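A sketch of this modified velocity update in Python, assuming the per-dimension comprehensive-learning exemplar has already been built (names are illustrative):

```python
import numpy as np

def clpso_velocity(v, x, exemplar, w, c=1.49445, rng=None):
    """CLPSO velocity update: classic PSO's pbest and gbest terms are replaced
    by a single per-dimension comprehensive-learning exemplar."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random(x.shape)  # random factor in [0, 1] per dimension
    return w * v + c * r * (exemplar - x)
```

When the exemplar coincides with the particle's position, only the inertia term w*v survives, which is exactly the "no pull" situation the exemplar is designed to avoid.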



So next, we focus on analyzing its comprehensive learning exemplar. First, look at how it is generated:

for k=1:ps
    ar=randperm(D);
    ai(k,ar(1:m(k)))=1;
    fi1=ceil(ps*rand(1,D));
    fi2=ceil(ps*rand(1,D));
    fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;
    bi=ceil(rand(1,D)-1+Pc(k));
    if bi==zeros(1,D), rc=randperm(D); bi(rc(1))=1; end
    f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);
end

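For readers outside MATLAB, here is a hedged Python translation of the exemplar-construction loop above (the function name and interface are my own; the per-dimension tournament and crossover logic follow the snippet):

```python
import numpy as np

def comprehensive_exemplar(k, pbest, pbest_val, pc_k, rng=None):
    """Build particle k's comprehensive-learning exemplar.

    Per dimension: tournament between two random particles' pbest values; the
    winner is used with probability pc_k, otherwise the particle learns from
    its own pbest in that dimension. At least one dimension is forced to learn
    from another particle.
    """
    if rng is None:
        rng = np.random.default_rng()
    ps, D = pbest.shape
    f1 = rng.integers(0, ps, D)                                # first random particle per dimension
    f2 = rng.integers(0, ps, D)                                # second random particle per dimension
    winner = np.where(pbest_val[f1] < pbest_val[f2], f1, f2)   # keep the fitter of the two
    learn = rng.random(D) < pc_k                               # crossover mask with probability Pc(k)
    if not learn.any():                                        # force at least one learned dimension
        learn[rng.integers(0, D)] = True
    idx = np.where(learn, winner, k)                           # fall back to own pbest elsewhere
    exemplar = pbest[idx, np.arange(D)]
    return exemplar, idx
```

Each dimension of the exemplar is thus copied from some particle's pbest, which is what makes the exemplar a probabilistic mix of the particle's own best and random learning targets.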

In brief: for each particle, two particles are randomly selected from the whole population per dimension; their fitness values are compared and the better one is kept. Then, according to the crossover probability Pc defined in the paper, each dimension is taken either from the kept particle or from the particle's own historical best, producing the comprehensive learning exemplar pbest_f. During iteration, the number of generations in which a particle's pbest does not improve is recorded (note that in the source code this stagnation count is not reset to zero when the particle does improve). When a particle's stagnation count exceeds the threshold, a new comprehensive learning exemplar is generated for it. Its Pc is defined as follows:

t=0:1/(ps-1):1;
t=5.*t;
Pc=0.0+(0.5-0.0).*(exp(t)-exp(t(1)))./(exp(t(ps))-exp(t(1)));

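The same Pc schedule in Python (an illustrative translation of the MATLAB lines above):

```python
import numpy as np

def clpso_pc(ps):
    """Per-particle learning probability Pc: an exponential ramp from 0.0
    (first particle) to 0.5 (last particle), fixed for the whole run."""
    t = 5.0 * np.linspace(0.0, 1.0, ps)
    return 0.0 + (0.5 - 0.0) * (np.exp(t) - np.exp(t[0])) / (np.exp(t[-1]) - np.exp(t[0]))
```

The resulting values are strictly increasing across the population, so different particles explore with different intensities.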

It is easy to see that a monotonically increasing Pc (in [0, 0.5]) is assigned across the population, and that Pc does not vary with iteration. In fact, all the improvements in the algorithm are simple and easy to understand, yet effective.

Here is my understanding of the improvement. First, why remove the global optimum term? Many CLPSO variants deliberately add the global optimum term back to enhance exploitation; in my opinion HCLPSO [2] is a variant with better effect: it introduces a multi-population topology, keeping one comprehensive-learning subpopulation and creating a new subpopulation whose update includes the global term. But this destroys CLPSO's own topology, and experiments show HCLPSO is inferior to CLPSO on multimodal problems. Note that the comprehensive learning exemplar itself already selects the better of two random particles, so in each iteration there is a high probability that the global optimum is chosen as the learning target by one or more particles in the population. Second, a global-optimum term would substantially offset the exploration ability of the comprehensive learning exemplar (the oscillation problem described next), because the exemplar is by nature a probabilistic mix of the particle's own best and randomly chosen learning targets.

The original paper explains why the comprehensive learning exemplar works on multimodal problems: the oscillation inherent in the evolution of particle swarm optimization itself. From the velocity update of the classic PSO, apart from the inertia term w*v, the remaining two terms control the direction of the particle's evolution and determine the balance between exploration and exploitation in the search. When the two terms (pbest − x) and (gbest − x) point in opposite directions, or in the same direction across many dimensions, one of them is largely cancelled or even counterproductive; in mechanical terms, the two forces are aligned or completely opposed. Particles trapped in this state over many generations inevitably lose their search ability, wasting evaluations and easily falling into premature convergence. CLPSO avoids this phenomenon by mixing the particle's own historical best with randomly selected targets and removing the global optimum term, and this proves effective.

3. Partial code

function [gbest,gbestval,fitcount]=CLPSO_new_func(fhd,Max_Gen,Max_FES,Particle_Number,Dimension,VRmin,VRmax,varargin)
% [gbest,gbestval,fitcount]=CLPSO_new_func('f8',3500,200000,30,30,-5.12,5.12)
rand('state',sum(100*clock));
me=Max_Gen;
ps=Particle_Number;
D=Dimension;
cc=[1 1];   % acceleration constants
t=0:1/(ps-1):1;
t=5.*t;
Pc=0.0+(0.5-0.0).*(exp(t)-exp(t(1)))./(exp(t(ps))-exp(t(1)));
% Pc=0.5.*ones(1,ps);
m=0.*ones(ps,1);
iwt=0.9-(1:me)*(0.7/me);
% iwt=0.729-(1:me)*(0.0/me);
cc=[1.49445 1.49445];
if length(VRmin)==1
    VRmin=repmat(VRmin,1,D);
    VRmax=repmat(VRmax,1,D);
end
mv=0.2.*(VRmax-VRmin);
VRmin=repmat(VRmin,ps,1);
VRmax=repmat(VRmax,ps,1);
Vmin=repmat(-mv,ps,1);
Vmax=-Vmin;
pos=VRmin+(VRmax-VRmin).*rand(ps,D);

for i=1:ps
    e(i,1)=feval(fhd,pos(i,:),varargin{:});
end

fitcount=ps;
vel=Vmin+2.*Vmax.*rand(ps,D);      % initialize the velocity of the particles
pbest=pos;
pbestval=e;                        % initialize the pbest and the pbest's fitness value
[gbestval,gbestid]=min(pbestval);
gbest=pbest(gbestid,:);            % initialize the gbest and the gbest's fitness value
gbestrep=repmat(gbest,ps,1);
stay_num=zeros(ps,1);

ai=zeros(ps,D);
f_pbest=1:ps;
f_pbest=repmat(f_pbest',1,D);
for k=1:ps
    ar=randperm(D);
    ai(k,ar(1:m(k)))=1;
    fi1=ceil(ps*rand(1,D));
    fi2=ceil(ps*rand(1,D));
    fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;
    bi=ceil(rand(1,D)-1+Pc(k));
    if bi==zeros(1,D), rc=randperm(D); bi(rc(1))=1; end
    f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);
end

stop_num=0;
i=1;

while i<=me & fitcount<=Max_FES
    i=i+1;
    for k=1:ps
        if stay_num(k)>=5
        % if round(i/10)==i/10 %|stay_num(k)>=5
            stay_num(k)=0;
            ai(k,:)=zeros(1,D);
            f_pbest(k,:)=k.*ones(1,D);
            ar=randperm(D);
            ai(k,ar(1:m(k)))=1;
            fi1=ceil(ps*rand(1,D));
            fi2=ceil(ps*rand(1,D));
            fi=(pbestval(fi1)<pbestval(fi2))'.*fi1+(pbestval(fi1)>=pbestval(fi2))'.*fi2;
            bi=ceil(rand(1,D)-1+Pc(k));
            if bi==zeros(1,D), rc=randperm(D); bi(rc(1))=1; end
            f_pbest(k,:)=bi.*fi+(1-bi).*f_pbest(k,:);
        end

        for dimcnt=1:D
            pbest_f(k,dimcnt)=pbest(f_pbest(k,dimcnt),dimcnt);
        end

        aa(k,:)=cc(1).*(1-ai(k,:)).*rand(1,D).*(pbest_f(k,:)-pos(k,:))+cc(2).*ai(k,:).*rand(1,D).*(gbestrep(k,:)-pos(k,:));
        vel(k,:)=iwt(i).*vel(k,:)+aa(k,:);
        vel(k,:)=(vel(k,:)>mv).*mv+(vel(k,:)<=mv).*vel(k,:);
        vel(k,:)=(vel(k,:)<(-mv)).*(-mv)+(vel(k,:)>=(-mv)).*vel(k,:);
        pos(k,:)=pos(k,:)+vel(k,:);

        if (sum(pos(k,:)>VRmax(k,:))+sum(pos(k,:)<VRmin(k,:)))==0
            e(k,1)=feval(fhd,pos(k,:),varargin{:});
            fitcount=fitcount+1;
            tmp=(pbestval(k)<=e(k));
            if tmp==1
                stay_num(k)=stay_num(k)+1;
            end
            temp=repmat(tmp,1,D);
            pbest(k,:)=temp.*pbest(k,:)+(1-temp).*pos(k,:);
            pbestval(k)=tmp.*pbestval(k)+(1-tmp).*e(k);    % update the pbest
            if pbestval(k)<gbestval
                gbest=pbest(k,:);
                gbestval=pbestval(k);
                gbestrep=repmat(gbest,ps,1);               % update the gbest
            end
        end
    end

    % if round(i/100)==i/100
    %     plot(pos(:,D-1),pos(:,D),'b*'); hold on;
    %     for k=1:floor(D/2)
    %         plot(gbest(:,2*k-1),gbest(:,2*k),'r*');
    %     end
    %     hold off
    %     title(['PSO: ',num2str(i),' generations, Gbestval=',num2str(gbestval)]);
    %     axis([VRmin(1,D-1),VRmax(1,D-1),VRmin(1,D),VRmax(1,D)])
    %     drawnow
    % end

    if fitcount>=Max_FES
        break;
    end
    if (i==me) & (fitcount<Max_FES)
        i=i-1;
    end
end
gbestval

4. Simulation results

5. References (for the complete code, private-message the blogger)

[1] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. 10 (3) (2006) 281–295.
[2] N. Lynn, P.N. Suganthan, Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput. 24 (2015) 11–24.