I. Introduction to LSSVM

1. Basic principle of the least squares support vector machine (LSSVM)

The least squares support vector machine (LSSVM) is an improvement on the standard support vector machine: the inequality constraints of the traditional SVM are replaced by equality constraints, and the sum-of-squared-errors loss function is used as the empirical loss over the training set. As a result, solving a quadratic programming problem is reduced to solving a system of linear equations, which improves both the speed and the convergence accuracy of the solution.
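Concretely, the optimization problem behind LS-SVM regression (a standard formulation, added here for reference) is

\min_{w,b,e} J(w,e) = \frac{1}{2} w^\top w + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^2
\quad \text{s.t.} \quad y_i = w^\top \varphi(x_i) + b + e_i, \quad i = 1, \dots, N.

Eliminating w and e_i through the KKT conditions leaves a linear system in the bias b and the support values \alpha:

\begin{bmatrix} 0 & \mathbf{1}^\top \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
= \begin{bmatrix} 0 \\ y \end{bmatrix},
\qquad \Omega_{ij} = K(x_i, x_j),

which is the system the toolbox's training routine solves. The regularization parameter \gamma (gam) and the kernel parameter (sig2 for the RBF kernel) are the two quantities that must be tuned.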

Common kernel types (the four shipped with LS-SVMlab):

- 'lin_kernel': linear kernel
- 'poly_kernel': polynomial kernel
- 'RBF_kernel': radial basis function (Gaussian) kernel, the one used throughout this post
- 'MLP_kernel': multilayer perceptron (sigmoid) kernel

2. How to use the LSSVM toolbox

2.1 The LS-SVMlab MATLAB toolbox can be downloaded from: www.esat.kuleuven.be/sista/lssvm…

2.2 Add the LS-SVM folder to the MATLAB search path, after which the toolbox functions can be called directly.
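For example (a minimal sketch; 'LSSVMlabv1_8' stands in for whatever folder name the downloaded archive unpacks to):

addpath(genpath('LSSVMlabv1_8'));   % put the toolbox and its subfolders on the MATLAB path
savepath;                           % optional: make the change persist across sessions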

Specific use steps:

1. Import the training data: load reads .mat and ASCII files; xlsread reads .xls files; csvread reads .csv files.
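For instance (the file names are placeholders):

S = load('traindata.mat');        % .mat file -> struct of saved variables
A = xlsread('traindata.xls');     % .xls spreadsheet -> numeric matrix
B = csvread('traindata.csv');     % .csv file -> numeric matrix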

2. Data preprocessing: its purpose is to speed up training.

The common methods are:

- Normalization: rescale each variable to a value between -1 and +1; the functions involved are premnmx, postmnmx and tramnmx.
- Standardization: transform each variable to zero mean and unit variance; the functions involved are prestd, poststd and trastd.
- Principal component analysis: orthogonalize and reduce the dimension of the input data; the functions involved are prepca and trapca.
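A minimal sketch of the two normalization routes (these are the deprecated Neural Network Toolbox functions that the R2014a-era code below relies on; P, T and P_test are placeholder matrices with one sample per column):

[Pn, minp, maxp, Tn, mint, maxt] = premnmx(P, T);   % rescale each row to [-1, +1]
Pn_test = tramnmx(P_test, minp, maxp);              % apply the same scaling to new data
[Ps, meanp, stdp, Ts, meant, stdt] = prestd(P, T);  % zero mean, unit variance per row
[Ppca, transMat] = prepca(Ps, 0.001);               % PCA: drop components below 0.1% variance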

3. For function regression, LS-SVMlab mainly uses three functions: the trainlssvm function trains the model, the simlssvm function evaluates it on new data, and the plotlssvm function is the toolbox's dedicated plotting function.

4. Parameter description:

A = csvread('traindata.csv');
Ptrain0 = A(:, 1:13); Ttrain0 = A(:, 14:16);
[Ptrain, meanptrain, stdptrain] = prestd(Ptrain0');
[Ttrain, meant, stdt] = prestd(Ttrain0');

prestd() is the data standardization function; meanptrain is the mean vector of the data before standardization and stdptrain is its standard deviation vector.

gam = 10; sig2 = 0.5; type = 'function estimation';

LS-SVM requires two parameters to be tuned. gam and sig2 are the parameters of the least squares support vector machine: gam is the regularization parameter, which balances minimizing the fitting error against the smoothness of the solution, and sig2 is the parameter of the RBF kernel. The toolbox contains a function, gridsearch, that can be used to search for good parameter values within a given range. type takes one of two values: 'classification' and 'function estimation'.

[alpha, b] = trainlssvm({Ptrain', Ttrain', type, gam, sig2, 'RBF_kernel', 'preprocess'});

alpha holds the support values and b is the bias. 'preprocess' tells the toolbox to normalize the data internally, while 'original' leaves the data as-is; the default is 'preprocess'.

plotlssvm({P, T, type, gam, sig2, 'RBF_kernel', 'preprocess'}, {alpha, b}) behaves much like the built-in plot function. simlssvm is the other important function of the LS-SVM toolbox; its parameters are the same as above, and it plays a role similar to the sim function of the neural network toolbox. The calls to trainlssvm and simlssvm show that the least squares support vector machine and a neural network have a lot in common structurally. Compared with LS-SVM, a neural network may fit the training data more closely, but in prediction LS-SVM is superior: it generalizes better and trains faster than a neural network.
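To make the call pattern concrete, here is a minimal, self-contained regression sketch using the three toolbox functions on synthetic 1-D data (the data and the gam/sig2 values are illustrative, not the author's):

X = (-3:0.1:3)';                                   % toy 1-D inputs, one sample per row
Y = sin(3*X) + 0.1*randn(size(X));                 % noisy targets
type = 'function estimation'; gam = 10; sig2 = 0.5;
[alpha, b] = trainlssvm({X, Y, type, gam, sig2, 'RBF_kernel', 'preprocess'});
Yp = simlssvm({X, Y, type, gam, sig2, 'RBF_kernel', 'preprocess'}, {alpha, b}, X);
plotlssvm({X, Y, type, gam, sig2, 'RBF_kernel', 'preprocess'}, {alpha, b});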

II. Source code

%% =====================================================================
%  Initialization
clc; close all; clear; format long; tic;

%% Import data
data  = xlsread('value.xlsx', 'Sheet1', 'A2:E41');   % training data
data1 = xlsread('value.xlsx', 'Sheet1', 'G2:J31');   % test data
[row, col] = size(data);
train_x = data(:, 1:col-1);
train_y = data(:, col);
test_x  = data(:, 1:col-1);
% test_y = data(:, col);
 
train_x=train_x';
train_y=train_y';
test_x=test_x';
% test_y = test_y';

%% Data normalization
[train_x, minx, maxx, train_yy, miny, maxy] = premnmx(train_x, train_y);
test_x  = tramnmx(test_x, minx, maxx);
train_x = train_x';
train_yy=train_yy';
train_y=train_y';
test_x=test_x';
% test_y = test_y';

%% Parameter initialization
eps = 10^(-6);

%% Define the LSSVM parameters
type = 'f';   % 'f' = function estimation
kernel = 'RBF_kernel';
preprocess = 'preprocess';   % let the toolbox normalize the data internally
lb = [0.01 0.02];      % lower limits of parameters c and g
ub = [1000 100];       % upper limits of parameters c and g
dim = 2;               % dimension, i.e. the number of optimized parameters
SearchAgents_no = 20;  % number of search agents
Max_iter = 50;  % maximum number of iterations
% initialize position vector and score for the leader
Leader_pos=zeros(1,dim);
Leader_score=inf; %change this to -inf for maximization problems
%Initialize the positions of search agents
% Positions=initialization(SearchAgents_no,dim,ub,lb);
Positions(:,1) =ceil(rand(SearchAgents_no,1).*(ub(1)-lb(1))+lb(1));
Positions(:,2) =ceil(rand(SearchAgents_no,1).*(ub(2)-lb(2))+lb(2));
Convergence_curve=zeros(1,Max_iter);
t = 0;  % loop counter

% Main loop (woa1 is the accompanying whale-optimization script; it fills in
% Leader_pos, Leader_score and Convergence_curve)
woa1;
plot(Convergence_curve, 'LineWidth', 2);
title(['Fitness curve of the whale optimization algorithm (c1 = ', num2str(Leader_pos(1)), ...
       ', c2 = ', num2str(Leader_pos(2)), ', max iterations = ', num2str(Max_iter), ')'], 'FontSize', 13);
xlabel('Generation'); ylabel('Error fitness');
 
bestc = Leader_pos(1);
bestg = Leader_pos(2);
gam  = bestc;
sig2 = bestg;
model = initlssvm(train_x, train_yy, type, gam, sig2, kernel, preprocess);
model = trainlssvm(model);
[train_predict_y, zt, model] = simlssvm(model, train_x);
[test_predict_y,  zt, model] = simlssvm(model, test_x);
train_predict = postmnmx(train_predict_y, miny, maxy);   % undo the premnmx scaling
% test_predict = postmnmx(test_predict_y, miny, maxy);
figure
plot(train_predict,':og')
hold on
plot(train_y,'- *')
legend('Predicted output', 'Expected output')
title('Prediction output of the whale-optimized LSSVM', 'fontsize', 12)
ylabel('Function output', 'fontsize', 12)
xlabel('Samples', 'fontsize', 12)
disp('Predicted output')
YPred_best   % best prediction found during the WOA search (set inside woa1)
toc          % computation time

III. Operation results



IV. Remarks

MATLAB version: R2014a