1. Introduction to particle swarm optimization

Particle swarm optimization (PSO) was proposed in 1995 by Kennedy and Eberhart, based on a study of the foraging behaviour of bird flocks. Its core idea is to use information sharing among the individuals in a group so that the motion of the whole group evolves from disorder to order in the problem's solution space, thereby finding the optimal solution. Consider this scene: a flock of birds is foraging, and there is a cornfield in the distance. None of the birds knows exactly where the field is, but each knows how far away it is. The simplest and most effective strategy for finding the cornfield is therefore to search the area around the bird that is currently nearest to it.

In PSO, each candidate solution of the optimization problem is a bird in the search space, called a “particle”, and the optimal solution of the problem corresponds to the “cornfield” the flock is seeking. Every particle has a position vector (its location in the solution space) and a velocity vector (which determines the direction and speed of its next move), and the fitness value of its current position can be calculated from the objective function; this fitness can be understood as the distance to the “cornfield”. In each iteration, the particles in the population learn not only from their own experience (their historical best positions) but also from the experience of the best particle in the population, and use this to decide how to adjust their direction and speed in the next iteration. In this way, the whole population gradually approaches the optimal solution.

The above explanation may seem abstract, so a simple example illustrates it.

Two people are in boats on a lake; they can communicate with each other, and each can detect the depth at their own position. The initial positions are shown in the picture above. Since it is deeper on the right, the person on the left will move their boat to the right.

Now it is deeper on the left, so the person on the right will move their boat a little to the left.


This process is repeated until the two boats meet, at which point a locally optimal solution has been obtained.


Each individual is represented as a particle. The position of an individual at time t is x(t), and its velocity is v(t).

The original (basic) PSO update rules are:

v(t+1) = v(t) + c1*r1*(P(t) - x(t)) + c2*r2*(g(t) - x(t))

x(t+1) = x(t) + v(t+1)

P(t) is the best position found so far by the individual (its personal best) at time t, g(t) is the best position found by the whole population (the global best) at time t, v(t) is the velocity of the individual at time t, and x(t) is the position of the individual at time t.


The next position is determined jointly by x(t), P(t) and g(t).


The particles in the population can find the optimal solution of the problem by continuously learning from their own historical information and from that of the population.

However, subsequent studies showed a problem with the original formula above: the update of v is too random, so that the PSO algorithm has strong global search ability but poor local search ability. In practice, we want PSO to have strong global search ability in the early iterations, while in the later iterations the population should have stronger local search ability. To address this drawback, Shi and Eberhart modified the formula by introducing an inertia weight, thus proposing the inertia-weight model of PSO:

v(t+1) = w*v(t) + c1*r1*(P(t) - x(t)) + c2*r2*(g(t) - x(t))

x(t+1) = x(t) + v(t+1)

Written component-wise for each dimension d, the update is:

v_d(t+1) = w*v_d(t) + c1*r1*(P_d(t) - x_d(t)) + c2*r2*(g_d(t) - x_d(t))

x_d(t+1) = x_d(t) + v_d(t+1)

w is the inertia weight of PSO, taking values in the interval [0, 1]. Applications generally use a decreasing schedule: at the start w = 0.9, which gives PSO strong global search ability; as the iterations proceed, w is gradually decreased so that PSO gains strong local search ability; by the final iteration, w = 0.1. The parameters c1 and c2 are called learning factors and are generally set to 1.4961. r1 and r2 are random values uniformly drawn from [0, 1].
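As a concrete sketch, the inertia-weight update formula can be written in Python as follows. This is a minimal illustration, not the author's MATLAB code; the function names and the list-based vectors are choices of this sketch.

```python
import random

def update_velocity(v, x, pbest, gbest, w, c1=1.4961, c2=1.4961):
    """Inertia-weight PSO velocity update for one particle.

    v, x, pbest, gbest are lists of equal length (one entry per dimension);
    w is the inertia weight, c1/c2 the learning factors.
    """
    return [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])
        + c2 * random.random() * (gbest[d] - x[d])
        for d in range(len(v))
    ]

def update_position(x, v):
    """Position update: x(t+1) = x(t) + v(t+1)."""
    return [x[d] + v[d] for d in range(len(x))]
```

Note that when a particle sits exactly at both its personal best and the global best, the attraction terms vanish and the velocity is simply damped by w, which is what drives the late-stage local search.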

The algorithm framework of the whole particle swarm optimization algorithm is as follows:

Step 1 Population initialization: initialize randomly, or design a problem-specific initialization method; then compute each individual's fitness value and select each individual's personal best position vector and the population's global best position vector.

Step 2 Iteration setup: set the maximum number of iterations and set the current iteration counter to 1.

Step 3 Velocity update: update the velocity vector of each individual.

Step 4 Position update: update the position vector of each individual.

Step 5 Personal and global best update: update the personal best solution of each individual and the global best solution of the population.

Step 6 Termination check: determine whether the maximum number of iterations has been reached. If so, output the global best solution; otherwise, increment the counter and jump to Step 3.
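The six steps above can be sketched as a minimal, self-contained PSO for continuous minimization. This is an illustration only: the search bounds, the linearly decreasing inertia-weight schedule from 0.9 to 0.1, and all names are choices of this sketch, not the VRP solver discussed later.

```python
import random

def pso(objective, dim, n_particles=30, max_gen=200,
        w_start=0.9, w_end=0.1, c1=1.4961, c2=1.4961, lo=-5.0, hi=5.0):
    """Minimal inertia-weight PSO following Steps 1-6 above.

    `objective` maps a position (list of floats) to a value to minimize.
    """
    # Step 1: random initialization, then evaluate and pick pbest/gbest
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for gen in range(max_gen):                       # Steps 2 and 6
        # linearly decreasing inertia weight (an assumed schedule)
        w = w_start - (w_start - w_end) * gen / max(max_gen - 1, 1)
        for i in range(n_particles):
            for d in range(dim):
                # Step 3: velocity update
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                # Step 4: position update
                X[i][d] += V[i][d]
            # Step 5: update personal and global bests
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

For example, minimizing the sphere function `lambda x: sum(xi * xi for xi in x)` in two dimensions should drive the best objective value close to zero within the default 200 generations.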

Applying particle swarm optimization mainly comes down to designing the velocity and position update operators. The effectiveness of these operators determines the performance of the whole PSO algorithm, so designing them is the key difficulty in applying PSO.

2. VRP model

(1) Introduction to vehicle path planning

After 60 years of research and development, vehicle routing has grown from a simple vehicle scheduling problem into a complex system problem. The initial vehicle routing problem can be described as follows: given a starting point and several customer points, with the geographical location and demand of each point known, plan the optimal route that serves every customer point and finally returns to the starting point, subject to various constraints. Different kinds of vehicle routing problems can be derived by imposing different constraints and changing the optimization objective. The vehicle routing problem is a typical NP-hard problem: exact algorithms can only solve instances of very small scale, so heuristic algorithms have become a research focus.

(2) Introduction to VRPTW

VRPTW (Vehicle Routing Problem with Time Windows) is a vehicle routing problem that adds a time-window constraint to each demand point. That is, for each demand point, an earliest and a latest service start time are set, and the vehicle is required to begin serving the customer within that window.

The time-window constraints on demand points come in two types. The first is the hard time window, which requires the vehicle to begin service within the window: if the vehicle arrives early it must wait, and if it arrives late it is rejected. The second is the soft time window, which allows service to begin outside the window, but starting service outside the window incurs a penalty. Replacing waiting with a penalty is the biggest difference between the soft and hard time windows.
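The hard/soft distinction can be made concrete with two small helper functions. This is a hedged sketch: the function names and the earliness/lateness penalty rates `alpha` and `beta` are assumptions of this illustration, not part of any standard formulation.

```python
def hard_window_start(arrival, earliest, latest):
    """Hard time window: wait if early, infeasible if late.

    Returns (service_start_time, feasible).
    """
    if arrival > latest:
        return arrival, False          # a late arrival is rejected
    return max(arrival, earliest), True  # early arrivals wait

def soft_window_penalty(arrival, earliest, latest, alpha=1.0, beta=1.0):
    """Soft time window: service may start outside the window, at a cost.

    alpha/beta are illustrative linear penalty rates for earliness/lateness.
    """
    if arrival < earliest:
        return alpha * (earliest - arrival)  # penalty instead of waiting
    if arrival > latest:
        return beta * (arrival - latest)     # penalty instead of rejection
    return 0.0                               # inside the window: no cost
```

For instance, with a window of [8, 12], arriving at 5 means waiting until 8 under a hard window, but a penalty of 3 units (with alpha = 1) under a soft window.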

The mathematical model of VRPTW is as follows:

(2.1) The objective function minimizes the total travel distance.

(2.2) ensures that each customer is visited exactly once.

(2.3) ensures that the cargo loaded does not exceed the vehicle capacity.

(2.4)-(2.6) ensure that each vehicle starts from the depot and returns to the depot.

(2.7)-(2.8) ensure that service starts within the time window.
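As an illustration of how constraints (2.3) and (2.7)-(2.8) can be checked for a single route, here is a hedged Python sketch. The data layout, the function name, and the omission of per-customer service times are assumptions of this example, not part of the model above.

```python
def route_feasible(route, demand, travel_time, windows, capacity, depot=0):
    """Check capacity (2.3) and time-window (2.7)-(2.8) feasibility of one route.

    route: customer indices in visiting order (depot excluded).
    demand, windows: dicts keyed by customer index; windows[c] = (earliest, latest).
    travel_time(a, b): travel time between two points.
    """
    # (2.3): total load on the route must not exceed vehicle capacity
    if sum(demand[c] for c in route) > capacity:
        return False
    t, prev = 0.0, depot
    for c in route:
        t += travel_time(prev, c)
        earliest, latest = windows[c]
        if t > latest:                 # (2.8): arriving after the window closes
            return False
        t = max(t, earliest)           # (2.7): wait until the window opens
        prev = c
    return True
```

With uniform travel times of 5 and windows (0, 10) and (0, 20), the route [1, 2] arrives at times 5 and 10, so it is feasible whenever the capacity covers the total demand.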

3. Code

%% GA-PSO algorithm for the CDVRP
clc        % clear the command window
clear      % remove all variables from the workspace
close all  % close all open figure windows
tic        % start the timer

% Input:
%   City       latitude/longitude of each demand point
%   Distance   distance matrix
%   Demand     demand at each point
%   Travelcon  travel-distance constraint
%   Capacity   vehicle capacity constraint
%   NIND       population size
%   MAXGEN     the program stops after MAXGEN generations
% Output:
%   Gbest          shortest path
%   GbestDistance  shortest path length

%% Load data
load('../test_data/City.mat')       % demand point coordinates
load('../test_data/Distance.mat')   % distance matrix
load('../test_data/Demand.mat')     % demand at each point
load('../test_data/Travelcon.mat')  % travel constraint
load('../test_data/Capacity.mat')   % vehicle capacity

CityNum = size(City,1) - 1;  % number of demand points

%% Initialize algorithm parameters
NIND = 60;     % number of particles
MAXGEN = 100;  % maximum number of iterations

%% Pre-allocate matrices
Population = zeros(NIND, CityNum*2+1);
PopDistance = zeros(NIND, 1);
GbestDisByGen = zeros(MAXGEN, 1);

%% Initialize the particles
for i = 1:NIND
    % randomly generate a feasible path, then compute its length
    Population(i,:) = InitPop(CityNum, Distance, Demand, Travelcon, Capacity);
    PopDistance(i) = CalcDis(Population(i,:), Distance, Demand, Travelcon, Capacity);
end

%% Store the Pbest data
Pbest = Population;           % initialize Pbest to the current particle set
PbestDistance = PopDistance;  % and its objective function values

%% Store the Gbest data
[mindis, index] = min(PbestDistance);
Gbest = Pbest(index,:);   % initial Gbest particle
GbestDistance = mindis;   % initial Gbest objective function value

%% Start iterating
gen = 1;
while gen <= MAXGEN
    for i = 1:NIND
        % cross the particle with its Pbest
        Population(i,2:end-1) = Crossover(Population(i,2:end-1), Pbest(i,2:end-1));
        % if the new path is shorter, record it in Pbest
        PopDistance(i) = CalcDis(Population(i,:), Distance, Demand, Travelcon, Capacity);
        if PopDistance(i) < PbestDistance(i)
            Pbest(i,:) = Population(i,:);
            PbestDistance(i) = PopDistance(i);
        end
    end
    % update Gbest and record it for the convergence plot
    [mindis, index] = min(PbestDistance);
    if mindis < GbestDistance
        Gbest = Pbest(index,:);
        GbestDistance = mindis;
    end
    GbestDisByGen(gen) = GbestDistance;
    gen = gen + 1;  % next iteration
end

%% Delete consecutive duplicate depot visits from Gbest
for i = 1:length(Gbest)-1
    if Gbest(i) == Gbest(i+1)
        Gbest(i) = 0;
    end
end
Gbest(Gbest==0) = [];  % delete the redundant zero elements
Gbest = Gbest - 1;     % shift every code down by 1 to match the encoding used in the text

%% Output the results to the command line
disp('-------------------------------------------------------------')
toc  % show the running time
fprintf('Total Distance = %s km\n', num2str(GbestDistance))
TextOutput(Gbest, Distance, Demand, Capacity)  % show the optimal route
disp('-------------------------------------------------------------')

%% Convergence figure
figure
plot(GbestDisByGen, 'LineWidth', 2)
xlim([1 gen-1])  % set the x-axis range
set(gca, 'LineWidth', 1)
xlabel('Iterations')
ylabel('Min Distance (km)')
title('HPSO Process')

%% Draw the optimal route
DrawPath(Gbest, City)

The code can be downloaded at www.cnblogs.com/ttmatlab/p/…

4. Results