One, neural network – support vector machine

Support Vector Machine (SVM) was first proposed by Cortes and Vapnik in 1995. It shows many unique advantages in solving small-sample, nonlinear, and high-dimensional pattern recognition problems, and it can be generalized to other machine learning problems such as function fitting.
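As a minimal illustration of the idea (not from the original text), the sketch below trains a nonlinear SVM classifier in MATLAB. It assumes the Statistics and Machine Learning Toolbox's fitcsvm is available; the toy data and the RBF kernel choice are assumptions made purely for demonstration.

% Minimal SVM classification sketch (assumes the Statistics and
% Machine Learning Toolbox; data and kernel choice are illustrative).
rng(1);                                 % fix the seed for repeatability
X = [randn(50,2)+1; randn(50,2)-1];     % 100 samples, 2 features, 2 clusters
Y = [ones(50,1); -ones(50,1)];          % class labels +1 / -1

% Train a nonlinear SVM with an RBF (Gaussian) kernel.
model = fitcsvm(X, Y, 'KernelFunction', 'rbf', 'Standardize', true);

% Evaluate on the training data.
pred = predict(model, X);
fprintf('Training accuracy: %.2f%%\n', 100*mean(pred == Y));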

Two, intelligent algorithm – whale algorithm

1. Inspiration

The Whale Optimization Algorithm (WOA) is a swarm intelligence optimization algorithm proposed in 2016 by Mirjalili et al. of Griffith University in Australia. Its advantages are simple operation, few parameters, and a strong ability to escape local optima.

Figure 1. Hunting and feeding behavior of humpback whales

2. Encircling the prey

Humpback whales can recognize the location of prey and encircle it. Since the position of the optimum in the search space is not known in advance, the WOA algorithm assumes that the current best candidate solution is the target prey or is close to it. Once the best search agent is defined, the other search agents try to move toward it and update their positions accordingly. This behavior is represented by the following equations:
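(The equations are not reproduced in the source; the forms below are reconstructed from Eqs. (2.1)–(2.4) as referenced in the code in Section Three, using standard WOA notation where $\vec{X}^*(t)$ is the best solution found so far at iteration $t$.)

$$\vec{D} = \left| \vec{C} \cdot \vec{X}^*(t) - \vec{X}(t) \right| \qquad (2.1)$$

$$\vec{X}(t+1) = \vec{X}^*(t) - \vec{A} \cdot \vec{D} \qquad (2.2)$$

with the coefficient vectors

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \qquad (2.3)$$

$$\vec{C} = 2\vec{r}_2 \qquad (2.4)$$

where the components of $\vec{a}$ decrease linearly from 2 to 0 over the iterations and $\vec{r}_1, \vec{r}_2$ are random vectors in $[0,1]$.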

3. Hunting behavior

According to the hunting behavior of humpback whales, a whale swims toward its prey along a spiral path while attacking, so the mathematical model of the hunting behavior is as follows:
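(Reconstructed from Eqs. (2.5) and (2.6) as referenced in the code in Section Three.)

$$\vec{X}(t+1) = \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(t) \qquad (2.5)$$

where $\vec{D}' = \left| \vec{X}^*(t) - \vec{X}(t) \right|$ is the distance from the whale to the prey (the best solution so far), $b$ is a constant defining the shape of the logarithmic spiral, and $l$ is a random number in $[-1,1]$. Since whales encircle the prey and swim the spiral at the same time, WOA switches between the two mechanisms with a probability $p$ drawn uniformly from $[0,1]$:

$$\vec{X}(t+1) = \begin{cases} \vec{X}^*(t) - \vec{A} \cdot \vec{D}, & p < 0.5 \\ \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(t), & p \ge 0.5 \end{cases} \qquad (2.6)$$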

4. Searching for prey

In the exploration phase, a search agent moves relative to a randomly chosen whale instead of the best one whenever $|\vec{A}| \ge 1$, which forces the population to search globally. The mathematical model is as follows:
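(Reconstructed from Eqs. (2.7) and (2.8) as referenced in the code in Section Three.)

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_{\mathrm{rand}} - \vec{X} \right| \qquad (2.7)$$

$$\vec{X}(t+1) = \vec{X}_{\mathrm{rand}} - \vec{A} \cdot \vec{D} \qquad (2.8)$$

where $\vec{X}_{\mathrm{rand}}$ is the position of a whale chosen at random from the current population.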

Three, code

% The Whale Optimization Algorithm
function [Leader_score,Leader_pos,Convergence_curve]=WOA(SearchAgents_no,Max_iter,lb,ub,dim,fobj)

% Initialize position vector and score for the leader
Leader_pos=zeros(1,dim);
Leader_score=inf; % change this to -inf for maximization problems

% Initialize the positions of search agents
% Positions=initialization(SearchAgents_no,dim,ub,lb);
Positions=ceil(rand(SearchAgents_no,dim).*(ub-lb)+lb);

Convergence_curve=zeros(1,Max_iter);

t=0; % Loop counter

% Main loop
while t<Max_iter
    for i=1:size(Positions,1)

        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;

        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));

        % Update the leader
        if fitness<Leader_score % Change this to > for maximization problems
            Leader_score=fitness; % Update alpha
            Leader_pos=Positions(i,:);
        end
    end

    a=2-t*((2)/Max_iter);    % a decreases linearly from 2 to 0, used in Eq. (2.3)
    a2=-1+t*((-1)/Max_iter); % a2 decreases linearly from -1 to -2, used to compute l

    % Update the position of search agents
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]

        A=2*a*r1-a; % Eq. (2.3) in the paper
        C=2*r2;     % Eq. (2.4) in the paper

        b=1;             % parameter in Eq. (2.5)
        l=(a2-1)*rand+1; % parameter in Eq. (2.5)

        p=rand(); % p in Eq. (2.6)

        for j=1:size(Positions,2)
            if p<0.5
                if abs(A)>=1
                    % Exploration: move relative to a randomly chosen whale
                    rand_leader_index=floor(SearchAgents_no*rand()+1);
                    X_rand=Positions(rand_leader_index,:);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j)); % Eq. (2.7)
                    Positions(i,j)=X_rand(j)-A*D_X_rand;      % Eq. (2.8)
                elseif abs(A)<1
                    % Exploitation: encircle the leader (best solution so far)
                    D_Leader=abs(C*Leader_pos(j)-Positions(i,j)); % Eq. (2.1)
                    Positions(i,j)=Leader_pos(j)-A*D_Leader;      % Eq. (2.2)
                end
            elseif p>=0.5
                % Spiral update toward the leader
                distance2Leader=abs(Leader_pos(j)-Positions(i,j)); % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Leader_pos(j);
            end
        end
    end

    t=t+1;
    Convergence_curve(t)=Leader_score;
    % [t Leader_score]
end
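A hypothetical driver for the function above (the objective, bounds, population size, and iteration budget are illustrative assumptions, not values from the original post):

% Illustrative driver script for WOA (all values are assumptions).
fobj = @(x) sum(x.^2);   % sphere test function as a simple objective
SearchAgents_no = 30;    % population size
Max_iter = 500;          % iteration budget
lb = -100; ub = 100;     % lower/upper bounds of the search space
dim = 30;                % problem dimension

[best_score,best_pos,curve] = WOA(SearchAgents_no,Max_iter,lb,ub,dim,fobj);

semilogy(curve);         % plot convergence on a log scale
xlabel('Iteration'); ylabel('Best score');
fprintf('Best score found: %g\n', best_score);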

Four, references:

The book “MATLAB Neural Network 43 Case Analysis”