I. Introduction
Particle swarm optimization (PSO) is an evolutionary computation technique that originated from the study of the predation behavior of bird flocks. Its basic idea is to find the optimal solution through cooperation and information sharing among the individuals in a group. The advantages of PSO are that it is simple, easy to implement, and has few parameters to adjust. It has been widely used in function optimization, neural network training, fuzzy system control, and other areas where genetic algorithms are applied.
2. Analysis of particle swarm optimization
2.1 Basic Ideas
Particle swarm optimization (PSO) simulates a bird in a flock by designing a massless particle with only two properties: velocity and position. Velocity represents how fast the bird moves, and position represents the direction of its movement. Each particle searches for the optimal solution independently in the search space and records it as its current individual extremum (pbest). The individual extrema are shared across the whole swarm, and the best of them becomes the swarm's current global optimum (gbest). All particles then adjust their velocities and positions according to their own current individual extremum and the current global optimum shared by the whole swarm.
(Animated illustration of the PSO search process from the original post omitted.)
2.2 Update Rules
PSO is initialized with a group of random particles (random solutions) and then searches for the optimal solution iteratively. In each iteration, every particle updates itself by tracking two "extreme values": its individual best (pbest) and the global best (gbest). After finding these two values, the particle updates its velocity and position using the formulas below.
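The formulas themselves are missing from the source; reconstructed below is the standard PSO velocity update matching the term-by-term description in the next paragraph, where rand() is a uniform random number in (0,1), c_1 and c_2 are learning factors, pbest_i is particle i's best position, and gbest is the swarm's best position:

$$ v_i = v_i + c_1 \,\mathrm{rand}()\,(pbest_i - x_i) + c_2 \,\mathrm{rand}()\,(gbest - x_i) \tag{1} $$

The position is then updated as $x_i = x_i + v_i$.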
The first part of formula (1) is the "memory" term, which represents the influence of the magnitude and direction of the previous velocity. The second part is the "self-cognition" term, a vector pointing from the current point to the particle's own best point, indicating that the particle's movement stems from its own experience. The third part is the "group-cognition" term, a vector from the current point to the best point of the population, reflecting cooperation and knowledge sharing among particles: each particle is guided by its own experience and by the best experience of its companions. Based on the above two formulas, the standard form of PSO is obtained.
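Introducing an inertia weight that scales the memory term in formula (1) gives the standard form (again reconstructed, since the source omits the formulas):

$$ v_i = \omega\, v_i + c_1 \,\mathrm{rand}()\,(pbest_i - x_i) + c_2 \,\mathrm{rand}()\,(gbest - x_i) \tag{2} $$
$$ x_i = x_i + v_i \tag{3} $$

In the source code below, c_1 plays the role of the inertia weight ω, while c_2 and c_3 are the self-learning and group-learning factors.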
Formulas (2) and (3) together constitute the standard PSO algorithm.
3. Process and pseudocode of PSO algorithm
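The flowchart and pseudocode images from the original post are not reproduced here; the following minimal MATLAB sketch of the generic process is an illustration, not the author's exact code (f, xmin, and xmax are assumed placeholders for the fitness function and position bounds):

function gbest = pso_sketch(f, N, d, ger, xmin, xmax)
% Minimal PSO skeleton: N particles in d dimensions, ger iterations.
w = 0.8; c1 = 0.5; c2 = 0.5;                 % inertia weight and learning factors
x = xmin + (xmax - xmin) .* rand(N, d);      % random initial positions
v = rand(N, d);                              % random initial velocities
pbest = x;        fp = inf(N, 1);            % individual best positions/fitness
gbest = x(1, :);  fg = inf;                  % global best position/fitness
for iter = 1:ger
    for i = 1:N
        fxi = f(x(i, :));                    % evaluate fitness
        if fxi < fp(i), fp(i) = fxi; pbest(i, :) = x(i, :); end  % update pbest
        if fxi < fg,    fg = fxi;    gbest = x(i, :);       end  % update gbest
    end
    % formula (2): velocity update; formula (3): position update
    v = w*v + c1*rand(N, d).*(pbest - x) + c2*rand(N, d).*(repmat(gbest, N, 1) - x);
    x = x + v;
end
end

For example, pso_sketch(@(z) sum(z.^2), 20, 2, 100, -5, 5) minimizes the two-dimensional sphere function.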
II. Source code
%% Optimize an LSTM that predicts a single sequence
clc; clear all; close all;
% Load the data, reconstructed as row vectors (assign your own data to "data")
data = xlsread('Typhoon Day Data 2', 'Sheet1', 'B2:E481');
%% Use the first 80% of the sequence for training and the remainder for testing
numTimeStepsTrain = round(0.8 * size(data, 1));
dataTrain = data(1:numTimeStepsTrain, :)';
dataTest  = data(numTimeStepsTrain+1:end-1, :)';
numTimeStepsTest = size(data, 1) - numTimeStepsTrain - 1;
% Normalize to [0, 1]
[dataTrainStandardized, ps_input]  = mapminmax(dataTrain, 0, 1);
[dataTestStandardized,  ps_output] = mapminmax(dataTest,  0, 1);
% Input the LSTM time series shifted by one time step
XTrain = dataTrainStandardized(2:end, :);
YTrain = dataTrainStandardized(1, :);
[dataTrainStandardized_zong, ps_input_zong] = mapminmax(data', 0, 1);
XTest_zong = dataTrainStandardized_zong(2:end, end);   % test set input
YTest_zong = dataTest(1, 1)';                          % test set output
XTest = dataTestStandardized(2:end, :);                % test set input
YTest = dataTest(1, :)';                               % test set output
%% Create the LSTM regression network, specifying the hidden units in the LSTM layer
numFeatures = 3;         % number of input features
numResponses = 1;        % number of outputs
numHiddenUnits = 20*3;   % hidden units
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];
%% Initialize the population
N = 5;                       % initial population size
d = 1;                       % search-space dimension (the learning rate)
ger = 100;                   % maximum number of iterations
limit = [0.001, 0.01];       % position (learning-rate) bounds; the matrix can be multi-dimensional
vlimit = [-0.005, 0.005];    % velocity bounds (signs lost in the source; assumed symmetric)
c_1 = 0.8;                   % inertia weight
c_2 = 0.5;                   % self-learning factor
c_3 = 0.5;                   % group-learning factor
for i = 1:d
    x(:, i) = limit(i, 1) + (limit(i, 2) - limit(i, 1)) * rand(N, 1);   % initial population positions
end
v = 0.005 * rand(N, d);     % initial population velocities
xm = x;                     % historical best position of each individual
ym = zeros(1, d);           % historical best position of the population
fxm = 1000 * ones(N, 1);    % historical best fitness of each individual
fym = 1000;                 % historical best fitness of the population
%% Particle swarm iteration
iter = 1;
times = 1;
record = zeros(ger, 1);     % recorder
while iter <= ger
    iter
    for i = 1:N
        % Training options: Adam solver, 250 epochs, gradient threshold 1.
        % The initial learning rate is this particle's position x(i,:); after
        % 125 epochs it is multiplied by the drop factor to reduce the rate.
        options = trainingOptions('adam', 'MaxEpochs', 250, 'GradientThreshold', 1, ...
            'InitialLearnRate', x(i, :), 'LearnRateSchedule', 'piecewise', ...
            'LearnRateDropPeriod', 125, 'LearnRateDropFactor', 0.1, 'Verbose', 0);
        % Train the LSTM
        net = trainNetwork(XTrain, YTrain, layers, options);
        % net = resetState(net);
        net = predictAndUpdateState(net, XTrain);
        YPred1 = [];
        for mm = 1:numTimeStepsTest
            [net, YPred1(:, mm)] = predictAndUpdateState(net, XTest(:, mm), 'ExecutionEnvironment', 'cpu');
        end
        % De-standardize the forecast using the previously computed parameters
        mint = ps_output.xmin(1);
        maxt = ps_output.xmax(1);
        YPred = postmnmx(YPred1, mint, maxt);   % inverse normalization
        for mm = 1:numTimeStepsTest
            if dataTest(2, mm) > 25
                YPred(mm) = 0;
            end
        end
        % Root mean square error (RMSE) serves as the particle's fitness
        rmse = sqrt(mean((YPred - YTest').^2));
        fx(i) = rmse;   % current fitness of this individual
    end
    for i = 1:N
        if fxm(i) > fx(i)
            fxm(i) = fx(i);       % update the individual's best historical fitness
            xm(i, :) = x(i, :);   % update the individual's best historical position
            YPred_best1 = YPred;
        end
    end
    if fym > min(fxm)
        [fym, nmax] = min(fxm);
        ym = xm(nmax, :);         % update the population's best historical position
        YPred_best = YPred_best1;
    end
    % Velocity and position update, formulas (2) and (3); this step is truncated
    % in the source and is reconstructed here from the parameters defined above
    v = c_1 * v + c_2 * rand(N, d) .* (xm - x) + c_3 * rand(N, d) .* (repmat(ym, N, 1) - x);
    v(v > vlimit(2)) = vlimit(2);   % clamp velocities to vlimit
    v(v < vlimit(1)) = vlimit(1);
    x = x + v;
    x(x > limit(2)) = limit(2);     % clamp positions to limit
    x(x < limit(1)) = limit(1);
    record(iter) = fym;             % record the best fitness
    iter = iter + 1;
    times = times + 1;
end
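The source listing breaks off at this point; presumably the network is finally retrained with the optimal learning rate ym found by the swarm. A minimal sketch under that assumption, reusing layers, XTrain, and YTrain from the code above:

% Hypothetical final step (not present in the truncated source): retrain once
% with the swarm's best learning rate ym.
bestOptions = trainingOptions('adam', ...
    'MaxEpochs', 250, ...
    'GradientThreshold', 1, ...
    'InitialLearnRate', ym, ...            % optimal learning rate found by PSO
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropPeriod', 125, ...
    'LearnRateDropFactor', 0.1, ...
    'Verbose', 0);
net_best = trainNetwork(XTrain, YTrain, layers, bestOptions);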
III. Operation results
IV. Remarks
MATLAB version: 2019b. For the complete code or custom development, add 1564658423.