I. Introduction to the whale optimization algorithm and LSTM

Introduction to the Whale Optimization Algorithm (WOA), which simulates the social behavior of humpback whales and models their bubble-net hunting strategy.

1.1 Inspiration

Whales are considered the largest mammals in the world. An adult whale can be up to 30 meters long and weigh 180 tons. There are seven main species of these giant mammals, including killer whales, minke whales, sei whales, humpback whales, right whales, fin whales and blue whales. Whales are often thought of as carnivores that never sleep, because they must come up to the ocean surface to breathe; in fact, only half of their brain sleeps at a time. Whales have human-like cells in certain areas of the brain, called spindle cells. In humans these cells are responsible for judgment, emotion and social behavior; in other words, spindle cells are part of what sets us apart from other living things. Whales have twice as many of these cells as adult humans, which is a main reason for their high intelligence and emotional capacity. Whales have been shown to think, learn, judge, communicate and even become emotional like humans, although at a much lower level of smartness. Whales (mainly killer whales) have also been observed to develop their own dialects. Another interesting point is the social behavior of whales: they can live alone or in groups, but they are mostly observed in groups. Some of them (killer whales, for example) live in the same family for their entire life cycle.

One of the largest baleen whales is the humpback whale; an adult is almost as big as a school bus. Their favorite prey are krill and small fish. Figure 1 shows this mammal. The most interesting thing about humpback whales is their special hunting method, known as bubble-net feeding. Humpback whales like to hunt krill or small fish close to the surface. This foraging is done by creating distinctive bubbles along a circular or "9"-shaped path, as shown in Figure 1. Before 2011, this behavior had only been investigated from observations at the sea surface; researchers have since studied it using tag sensors, capturing 300 tag-derived bubble-net feeding events from nine humpback whales. They identified two bubble-related maneuvers, which they named upward-spirals and double-loops. In the former, the humpback dives to a depth of about 12 m, then starts creating spiral-shaped bubbles around the prey and swims up toward the surface. The latter consists of three distinct stages: the coral loop, lobtailing (slapping the tail against the water) and the capture loop; I won't go into the details here. The key point is that bubble-net feeding is a behavior unique to humpback whales, and the whale optimization algorithm mimics the spiral bubble-net feeding maneuver to perform optimization.

1.2 Mathematical Model and Optimization Algorithm

1.2.1 Encircling prey

Humpback whales can recognize the location of prey and encircle it. Since the position of the optimum in the search space is not known a priori, the WOA algorithm assumes that the current best candidate solution is the target prey or is close to the optimum. Once the best search agent is defined, the other search agents try to update their positions towards it. This behavior is expressed by the following equations:

$$\vec{D} = \left|\vec{C}\cdot\vec{X}^{*}(t) - \vec{X}(t)\right| \tag{1}$$

$$\vec{X}(t+1) = \vec{X}^{*}(t) - \vec{A}\cdot\vec{D} \tag{2}$$

where $t$ is the current iteration, $\vec{X}^{*}$ is the position vector of the best solution obtained so far, $\vec{X}$ is the position vector of a whale, and the coefficient vectors $\vec{A}$ and $\vec{C}$ are

$$\vec{A} = 2\vec{a}\cdot\vec{r} - \vec{a} \tag{3}$$

$$\vec{C} = 2\vec{r} \tag{4}$$

where the components of $\vec{a}$ decrease linearly from 2 to 0 over the course of the iterations and $\vec{r}$ is a random vector in [0, 1].

Figure 2(a) illustrates the rationale behind equation (2) for a 2D problem. The position (X, Y) of a search agent can be updated according to the position (X*, Y*) of the current best solution. By adjusting the values of the vectors A and C, different places around the best agent can be reached relative to the current position. The possible updated positions of an agent in 3D space are shown in Figure 2(b). By defining the random vector r, it is possible to reach any position in the search space between the key points shown in Figure 2, so equation (2) allows any search agent to update its position in the neighborhood of the current best solution, thus simulating how whales encircle their prey. The same concept extends to an n-dimensional search space. Note that both plots in Figure 2 assume a = 1 and C = 1.

Figure 2: 2D and 3D position vectors and their possible next positions.

1.2.2 Bubble-net attacking method (exploitation phase)

Two approaches model the bubble-net behavior of humpback whales:

Shrinking encircling mechanism. This is achieved by decreasing the value of a in equation (3). Note that the fluctuation range of A also shrinks with a: A is a random value in the interval [-a, a], and a decreases from 2 to 0 as the iterations progress. When A is a random value in [-1, 1], the new position of a search agent can be anywhere between its original position and the position of the current best agent. Figure 3(a) shows the possible positions from (X, Y) towards (X*, Y*) in 2D space when 0 ≤ A ≤ 1. This mechanism essentially models the shrinking encirclement of the prey.

Spiral updating position. As shown in Figure 3(b), this approach first computes the distance between the whale at (X, Y) and the prey at (X*, Y*), and then a spiral equation between the positions of whale and prey mimics the helix-shaped movement of humpback whales:

$$\vec{X}(t+1) = \vec{D}'\cdot e^{bl}\cdot\cos(2\pi l) + \vec{X}^{*}(t), \qquad \vec{D}' = \left|\vec{X}^{*}(t) - \vec{X}(t)\right| \tag{5}$$

where D' is the distance between the whale and the best solution obtained so far, b is a constant defining the shape of the logarithmic spiral, and l is a random number in [-1, 1].

Figure 3: The bubble-net search mechanism implemented in WOA: (a) shrinking encircling mechanism; (b) spiral updating position.

It is worth noting that the humpback whale swims around the prey within a shrinking circle and along a spiral path simultaneously. To model this simultaneous behavior, a 50% probability is assumed for choosing between the shrinking encircling mechanism and the spiral model when updating a whale's position during optimization:

$$\vec{X}(t+1) = \begin{cases}\vec{X}^{*}(t) - \vec{A}\cdot\vec{D} & \text{if } p < 0.5 \\[4pt] \vec{D}'\cdot e^{bl}\cdot\cos(2\pi l) + \vec{X}^{*}(t) & \text{if } p \ge 0.5\end{cases} \tag{6}$$

where p is a random number in [0, 1].

1.2.3 Search for prey (exploration phase)

Besides the bubble-net method, humpback whales also search for prey randomly, again controlled by the vector A. In fact, humpback whales search randomly according to each other's positions, so A with a random value greater than 1 or less than -1 is used to force a search agent to move away from a reference whale.

In contrast to the exploitation phase, the position of a search agent in the exploration phase is updated according to a randomly chosen search agent rather than the best search agent found so far. This mechanism, together with |A| > 1, emphasizes exploration and allows the WOA algorithm to perform a global search. The mathematical model is:

$$\vec{D} = \left|\vec{C}\cdot\vec{X}_{\mathrm{rand}} - \vec{X}\right| \tag{7}$$

$$\vec{X}(t+1) = \vec{X}_{\mathrm{rand}} - \vec{A}\cdot\vec{D} \tag{8}$$

where X_rand is a random position vector chosen from the current population (representing a random whale). Some of the possible positions around a particular solution with |A| > 1 are shown in Figure 4.

Figure 4: The exploration mechanism in WOA (X* is a randomly chosen search agent).

The WOA algorithm starts from a set of randomly initialized solutions. In each iteration, search agents update their positions with respect to either a randomly chosen search agent or the best solution obtained so far. The parameter a is decreased from 2 to 0 over the iterations so as to shift gradually from exploration to exploitation. A random search agent is chosen when |A| > 1, while the best solution is chosen when |A| < 1 to update the agents' positions. Depending on the value of p, WOA switches between the circular and the spiral movement. Finally, the algorithm terminates when the termination criterion is satisfied. The pseudocode of WOA is shown in Figure 5.

Figure 5: Pseudocode of the WOA algorithm.

1.3 Code Analysis

Once the principle above is understood, the code itself holds no real difficulty. Below we walk through how the key pieces are implemented; the problem being optimized is minimizing the sum of squares of thirty numbers.

(1) Parameter initialization. At the start, only the number of search agents and the maximum number of iterations need to be set. The remaining algorithm parameters depend on the current iteration number, so they are set inside the main loop.

```matlab
SearchAgents_no=30;   % Number of search agents
Max_iteration=500;    % Maximum number of iterations
```

(2) Population initialization. Randomly initialize the position of every agent in each dimension and make sure the values fall within the bounds [lb, ub]:

```matlab
Positions=rand(SearchAgents_no,dim).*(ub-lb)+lb;
```
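Note that the full source in section II below delegates this step to an `initialization` helper (called as `initialization(SearchAgents_no,dim,ub,lb)`) that is not reproduced in this post. A minimal sketch of such a helper (e.g. saved as its own `initialization.m`), assuming `lb`/`ub` are either scalars or 1×dim vectors of per-dimension bounds, could look like this:

```matlab
function Positions = initialization(SearchAgents_no, dim, ub, lb)
% Minimal sketch of the initialization helper (not the original file):
% scatter SearchAgents_no agents uniformly at random inside [lb, ub].
if isscalar(ub)                       % a single bound shared by every dimension
    Positions = rand(SearchAgents_no, dim).*(ub - lb) + lb;
else                                  % per-dimension bounds given as 1-by-dim vectors
    Positions = zeros(SearchAgents_no, dim);
    for d = 1:dim
        Positions(:, d) = rand(SearchAgents_no, 1).*(ub(d) - lb(d)) + lb(d);
    end
end
end
```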

(3) Population evaluation. Evaluate the objective value of each agent in the population; if an agent is better than the current best solution, set it as the new best solution.

```matlab
for i=1:size(Positions,1)
    % Calculate the objective value of each agent
    fitness=fobj(Positions(i,:));
    % Update the best solution (for a maximization problem use ">" here)
    if fitness<Leader_score
        Leader_score=fitness;
        Leader_pos=Positions(i,:);
    end
end
```
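The evaluation above assumes an objective handle `fobj` plus the problem dimension and bounds. For the toy problem this walkthrough optimizes, the sum of squares of thirty numbers (the sphere function, F1 in the benchmark set used below), a minimal setup could look like the following; the [-100, 100] range is the customary F1 setting and is assumed here:

```matlab
dim  = 30;                % thirty decision variables
lb   = -100;              % assumed lower bound (typical F1 setting)
ub   = 100;               % assumed upper bound (typical F1 setting)
fobj = @(x) sum(x.^2);    % sphere function: the sum of squares to be minimized
```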

(4) Set the algorithm parameters that depend on the current iteration number.

```matlab
a=2-t*((2)/Max_iter);      % equation (3): a decreases linearly from 2 to 0 over the iterations
a2=-1+t*((-1)/Max_iter);   % a2 decreases linearly from -1 to -2; it is used when computing l
```
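For reference, the two schedules implemented above can be written as functions of the iteration counter $t$ and the maximum iteration count $T_{\max}$ (Max_iter):

$$a(t) = 2 - \frac{2t}{T_{\max}}, \qquad a_2(t) = -1 - \frac{t}{T_{\max}}$$

so $a$ falls linearly from 2 to 0 and $a_2$ from $-1$ to $-2$. Since $l=(a_2-1)\,\mathrm{rand}+1$ with $\mathrm{rand}\in[0,1]$, the spiral parameter $l$ is drawn uniformly from $[a_2,\,1]$, a range that widens from $[-1,1]$ to $[-2,1]$ as the run progresses.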

(5) Update every dimension of each agent's position.

```matlab
% Update the position of the search agents
for i=1:size(Positions,1)
    r1=rand();                % r1 is a random number in [0,1]
    r2=rand();                % r2 is a random number in [0,1]
    A=2*a*r1-a;               % equation (3)
    C=2*r2;                   % equation (4)
    b=1;                      % constant b in equation (5)
    l=(a2-1)*rand+1;          % random number l in equation (5)
    p=rand();                 % probability p in equation (6)
    for j=1:size(Positions,2)
        if p<0.5
            if abs(A)>=1
                rand_leader_index = floor(SearchAgents_no*rand()+1);
                X_rand = Positions(rand_leader_index, :);
                D_X_rand=abs(C*X_rand(j)-Positions(i,j));       % equation (7)
                Positions(i,j)=X_rand(j)-A*D_X_rand;            % equation (8)
            elseif abs(A)<1
                D_Leader=abs(C*Leader_pos(j)-Positions(i,j));   % equation (1)
                Positions(i,j)=Leader_pos(j)-A*D_Leader;        % equation (2)
            end
        elseif p>=0.5
            distance2Leader=abs(Leader_pos(j)-Positions(i,j));  % distance to the leader
            Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Leader_pos(j);  % equation (5)
        end
    end
end
```
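One step that the walkthrough above skips, but that appears at the top of the main loop in the full source below, is boundary handling: before an agent is evaluated, any dimension that has left [lb, ub] is clamped back to the violated bound. Condensed from that source:

```matlab
% Return agents that left the search space back to the boundaries
Flag4ub = Positions(i,:) > ub;    % dimensions above the upper bound
Flag4lb = Positions(i,:) < lb;    % dimensions below the lower bound
Positions(i,:) = Positions(i,:).*(~(Flag4ub+Flag4lb)) + ub.*Flag4ub + lb.*Flag4lb;
```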

II. Source code

```matlab
clear all
clc

SearchAgents_no=30; % Number of search agents

Function_name='F1'; % Name of the test function, from F1 to F23 (Tables 1, 2 and 3 in the paper)

Max_iteration=500; % Maximum number of iterations

% Load details of the selected benchmark function
[lb,ub,dim,fobj]=Get_Functions_details(Function_name);

[Best_score,Best_pos,WOA_cg_curve]=WOA(SearchAgents_no,Max_iteration,lb,ub,dim,fobj);

figure('Position',[269 240 660 290])
%Draw search space
subplot(1,2,1);
func_plot(Function_name);
title('Parameter space')
xlabel('x_1');
ylabel('x_2');
zlabel([Function_name,'( x_1 , x_2 )'])

%Draw objective space
subplot(1,2,2);
semilogy(WOA_cg_curve,'Color','r')
title('Objective space')
xlabel('Iteration');
ylabel('Best score obtained so far');

axis tight
grid on
box on
legend('WOA')

display(['The best solution obtained by WOA is : ', num2str(Best_pos)]);
display(['The best optimal value of the objective function found by WOA is : ', num2str(Best_score)]);

% The Whale Optimization Algorithm (WOA.m in the original distribution)
function [Leader_score,Leader_pos,Convergence_curve]=WOA(SearchAgents_no,Max_iter,lb,ub,dim,fobj)

% initialize position vector and score for the leader
Leader_pos=zeros(1,dim);
Leader_score=inf; %change this to -inf for maximization problems


%Initialize the positions of search agents
Positions=initialization(SearchAgents_no,dim,ub,lb);

Convergence_curve=zeros(1,Max_iter);

t=0; % Loop counter

% Main loop
while t<Max_iter
    for i=1:size(Positions,1)
        
        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        
        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));
        
        % Update the leader
        if fitness<Leader_score % Change this to > for maximization problem
            Leader_score=fitness; % Update alpha
            Leader_pos=Positions(i,:);
        end
        
    end
    
    a=2-t*((2)/Max_iter); % a decreases linearly from 2 to 0 in Eq. (2.3)
    
    % a2 linearly decreases from -1 to -2; it is used to compute l in Eq. (2.5)
    a2=-1+t*((-1)/Max_iter);
    
    % Update the Position of search agents 
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]
        
        A=2*a*r1-a;       % Eq. (2.3) in the paper
        C=2*r2;           % Eq. (2.4) in the paper
        
        b=1;              % constant b in Eq. (2.5)
        l=(a2-1)*rand+1;  % random number l in Eq. (2.5)
        
        p = rand();       % p in Eq. (2.6)
        
        for j=1:size(Positions,2)
            
            if p<0.5   
                if abs(A)>=1
                    rand_leader_index = floor(SearchAgents_no*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j)); % Eq. (2.7)
                    Positions(i,j)=X_rand(j)-A*D_X_rand;      % Eq. (2.8)
                    
                elseif abs(A)<1
                    D_Leader=abs(C*Leader_pos(j)-Positions(i,j)); % Eq. (2.1)
                    Positions(i,j)=Leader_pos(j)-A*D_Leader;      % Eq. (2.2)
                end
                
            elseif p>=0.5
              
              
                distance2Leader=abs(Leader_pos(j)-Positions(i,j));  % distance to the leader
                % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Leader_pos(j);
                
            end
            
        end
    end
    
    t=t+1;
    Convergence_curve(t)=Leader_score;  % record the best score found so far
    
end  % while t<Max_iter
end  % function WOA
```
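The script also relies on two more helper files that are not reproduced in this post: `Get_Functions_details` (benchmark definitions) and `func_plot` (surface plotting). To run the optimizer itself without the plotting, a minimal stand-in for `Get_Functions_details` covering only F1, consistent with the sphere objective used earlier, might look like this (the [-100, 100] range and dim = 30 are the usual F1 settings and are assumed here):

```matlab
function [lb, ub, dim, fobj] = Get_Functions_details(F)
% Minimal stand-in covering only F1 (sphere); the original file defines F1-F23.
switch F
    case 'F1'
        fobj = @(x) sum(x.^2);   % sphere function
        lb   = -100;             % assumed lower bound
        ub   = 100;              % assumed upper bound
        dim  = 30;               % number of decision variables
    otherwise
        error('Only F1 is sketched here; see the original Get_Functions_details.m for F2-F23.');
end
end
```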

III. Operation results

MATLAB version

MATLAB R2014a
