
1 Introduction:

The Grey Wolf Optimizer (GWO) is a population-based swarm intelligence algorithm proposed in 2014 by Mirjalili et al. of Griffith University, Australia. The algorithm is inspired by the predatory behaviour of grey wolf packs, and it offers strong convergence, few parameters, and easy implementation. In recent years it has attracted wide attention from researchers and has been successfully applied to shop scheduling, parameter optimization, image classification, and other fields.

2 Algorithm principle:

Grey wolves are gregarious canids at the top of the food chain, and they adhere to a rigid social dominance hierarchy, as shown in the figure:



First layer of the social hierarchy: the leading wolf of the pack, denoted \alpha. The \alpha wolf is responsible for decisions about activities such as hunting, choice of habitat, and the sleep schedule. Because the other wolves must obey its orders, the \alpha wolf is also called the dominant wolf. The \alpha wolf is not necessarily the strongest wolf in the pack, but in terms of management ability it is certainly the best.

Second layer of the social hierarchy: the \beta wolf, which is subordinate to the \alpha wolf and assists it in making decisions. After the \alpha wolf dies or grows old, the \beta wolf becomes the best candidate to replace it. Although the \beta wolf obeys the \alpha wolf, it has control over the wolves in the lower layers of the hierarchy.

Third layer of the social hierarchy: the \delta wolf, which obeys the \alpha and \beta wolves and dominates the remaining layer. \delta wolves are generally composed of pups, sentinels, hunters, old wolves, and caretakers.

Fourth layer of the social hierarchy: the \omega wolf, which must obey the wolves in all other layers. Although \omega wolves seem to play only a small role in the pack, without them the pack would suffer internal problems such as cannibalism.

The GWO optimization process comprises the social stratification, tracking, encircling, and attacking behaviours of grey wolves, as described below.

1) Social hierarchy

When designing GWO, the grey wolf social hierarchy model is constructed first. The fitness of each individual in the population is calculated, and the three wolves with the best fitness are labelled \alpha, \beta, and \delta; the remaining wolves are labelled \omega. In other words, the social hierarchy of the pack, from high to low, is \alpha, \beta, \delta, \omega. The optimization process of GWO is guided mainly by the best three solutions (i.e. \alpha, \beta, \delta) in each generation of the population.
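As a minimal sketch (in Python, whereas the source code later in this article is MATLAB), the hierarchy step amounts to sorting the population by fitness and keeping the three best positions; the function name and toy data are illustrative, not part of the original code:

```python
import numpy as np

def rank_wolves(positions, fitness):
    """Return the alpha, beta and delta positions (the three lowest fitness values)."""
    order = np.argsort(fitness)          # ascending: best (minimum) first
    alpha, beta, delta = positions[order[:3]]
    return alpha, beta, delta

# toy population of 5 wolves in 2 dimensions
pos = np.array([[1.0, 2.0], [0.5, 0.5], [3.0, 1.0], [0.1, 0.2], [2.0, 2.0]])
fit = np.array([5.0, 0.5, 9.0, 0.05, 8.0])
alpha, beta, delta = rank_wolves(pos, fit)
# alpha is the wolf with fitness 0.05, beta with 0.5, delta with 5.0
```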

2) Encircling prey

When grey wolves search for prey, they gradually approach and encircle it. The mathematical model of this behaviour is as follows:



D = | C \circ X_p(t) - X(t) |

X(t+1) = X_p(t) - A \circ D

A = 2a \circ r_1 - a

C = 2 r_2

where t is the current iteration number; \circ denotes the Hadamard (element-wise) product; A and C are coefficient vectors; X_p is the position vector of the prey; X(t) is the position vector of the current grey wolf; a decreases linearly from 2 to 0 over the course of the iterations; and r_1 and r_2 are random vectors in [0, 1].
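The encircling step can be sketched as follows (Python; `encircle` and the toy vectors are illustrative names under a minimization setting, not part of the original code):

```python
import numpy as np

rng = np.random.default_rng(0)

def encircle(X, X_p, a):
    """One encircling step: D = |C*X_p - X|, X_next = X_p - A*D."""
    r1 = rng.random(X.shape)          # r1, r2 ~ U[0, 1]
    r2 = rng.random(X.shape)
    A = 2 * a * r1 - a                # A lies in [-a, a]
    C = 2 * r2                        # C lies in [0, 2]
    D = np.abs(C * X_p - X)           # element-wise (Hadamard) products
    return X_p - A * D

X = np.array([4.0, -3.0])             # current wolf position
X_p = np.array([1.0, 1.0])            # prey (best solution so far)
X_next = encircle(X, X_p, a=2.0)      # new position, pulled toward the prey
```

Note that with a = 0 the step collapses onto the prey exactly, which is the late-iteration exploitation limit.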

3) Hunting

Grey wolves can identify the location of potential prey (the optimal solution), and the search process is guided mainly by the \alpha, \beta, and \delta wolves. However, the solution-space structure of many problems is unknown, so the wolves cannot determine the exact location of the prey. To simulate the search behaviour of the grey wolves (candidate solutions), it is assumed that \alpha, \beta, and \delta are best able to identify the location of potential prey. Therefore, during each iteration the best three wolves of the current population (\alpha, \beta, \delta) are retained, and the positions of the other search agents (including \omega) are updated based on their position information. The mathematical model of this behaviour can be expressed as follows:



D_\alpha = | C_1 \circ X_\alpha - X |
D_\beta = | C_2 \circ X_\beta - X |
D_\delta = | C_3 \circ X_\delta - X |

X_1 = X_\alpha - A_1 \circ D_\alpha
X_2 = X_\beta - A_2 \circ D_\beta
X_3 = X_\delta - A_3 \circ D_\delta

X(t+1) = (X_1 + X_2 + X_3) / 3

where X_\alpha, X_\beta, and X_\delta are the position vectors of \alpha, \beta, and \delta in the current population; X is the position vector of the grey wolf; and D_\alpha, D_\beta, and D_\delta are the distances between the current candidate wolf and the best three wolves. When |A| > 1, the grey wolves disperse across different regions to search for prey; when |A| < 1, they concentrate their search on one or a few regions of the prey.
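A sketch of this position update, reusing the coefficient formulas from 2) (Python; `hunt_step` and the toy data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def hunt_step(X, leaders, a):
    """Update one wolf from the alpha/beta/delta leaders: average of three guided moves."""
    candidates = []
    for X_l in leaders:                   # leaders = (alpha, beta, delta) positions
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        A = 2 * a * r1 - a                # A in [-a, a]
        C = 2 * r2                        # C in [0, 2]
        D = np.abs(C * X_l - X)           # distance to this leader
        candidates.append(X_l - A * D)    # X1, X2, X3
    return np.mean(candidates, axis=0)    # X(t+1) = (X1 + X2 + X3) / 3

X = np.array([5.0, 5.0])
leaders = (np.array([0.0, 0.0]), np.array([0.5, 0.5]), np.array([1.0, 1.0]))
X_new = hunt_step(X, leaders, a=1.0)
```

With a = 0 the update reduces to the centroid of the three leaders, which shows why the pack converges on the region they define.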



As can be seen from the figure, the final position of a candidate solution falls within a random region defined by the positions of \alpha, \beta, and \delta. In general, \alpha, \beta, and \delta first estimate the approximate location of the prey (the optimal solution), and then the other candidate wolves randomly update their positions in its vicinity, guided by the current best three wolves.

4) Attacking prey

In the attacking-prey model, the decrease of a shrinks the range of fluctuation of A, according to the formulas in 2). In other words, A is a random vector in the interval [-a, a] (note: it was given as [-2a, 2a] in the author's first paper and corrected to [-a, a] in a later paper), where a decreases linearly over the iterations. When A lies in the range [-1, 1], the next position of the search agent can be anywhere between the current grey wolf and the prey.

5) Searching for prey

Grey wolves rely mainly on the information of \alpha, \beta, and \delta to find prey. They first disperse to search for information about the location of the prey, and then converge to attack it. In the decentralized model, a search agent is driven away from the prey when |A| > 1; this mechanism enables GWO to perform a global search. The other search coefficient in GWO is C. As the formulas in 2) show, C is a vector of random values in the interval [0, 2]; it provides random weights for the prey, stochastically emphasizing (C > 1) or de-emphasizing (C < 1) the prey's influence on the distance. This gives GWO random search behaviour during optimization and helps it avoid falling into local optima. Notably, C does not decrease linearly: it remains random throughout the iterations, which helps the algorithm escape local regions, and this becomes especially important in the later stages of the iteration.
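The linear decay of a, and the resulting shrinking range of A, can be checked numerically with the formula a = 2 - t(2/T) used in the source code:

```python
T = 100                                # maximum number of iterations
for t in (0, 25, 50, 75, 99):
    a = 2 - t * (2 / T)                # a decreases linearly from 2 toward 0
    # A = 2*a*r1 - a with r1 in [0, 1], so A ranges over [-a, a]
    print(f"t={t:3d}  a={a:.2f}  A in [{-a:.2f}, {a:.2f}]")
```

Early on (a > 1) the range of A allows |A| > 1, i.e. exploration; once a drops below 1, |A| < 1 always holds and the pack commits to exploitation.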

3 VRP problem description:

Suppose that, in a supply-and-demand system, vehicles pick up goods from a source and deliver them to several corresponding distribution points. Each vehicle has a maximum cargo capacity, and deliveries may be subject to time limits. The task is to arrange the pickup times reasonably and organize suitable driving routes so that users' demands are satisfied while a cost function, such as total working time or total path length, is minimized.

It can be seen that the TSP is a simple special case of the VRP; therefore, the VRP is also NP-hard.
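As an illustration of the kind of cost function a VRP solver minimizes, here is a minimal sketch of the travel cost of a single capacity-constrained route (Python; `route_cost` and the coordinates are hypothetical, not from the article's code):

```python
import math

def route_cost(depot, stops, demands, capacity):
    """Total travel distance of depot -> stops -> depot, or None if infeasible."""
    if sum(demands) > capacity:
        return None                    # demand exceeds the vehicle's capacity
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    path = [depot] + stops + [depot]
    return sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))

# one vehicle serving two stops: 3 up, 4 across, 5 back along the diagonal
cost = route_cost(depot=(0, 0), stops=[(0, 3), (4, 3)], demands=[2, 3], capacity=10)
```

A full VRP objective sums such route costs over all vehicles, which is the fitness a metaheuristic like GWO would evaluate per candidate solution.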

II. Source code

Multi-threshold segmentation based on the minimum cross entropy of the grey wolf algorithm:

clear all; clc;
rng('default');
I = imread('lena.jpg');          % read image
SearchAgents_no = 50;            % population size
Max_iteration = 100;             % maximum number of iterations
dim = 4;                         % number of thresholds
lb = ones(1,dim);                % lower boundary (1)
ub = 255.*ones(1,dim);           % upper boundary (255)
fobj = @(thresh) fun(I,thresh);  % fitness function
[Best_score,Best_pos,GWO_cg_curve] = GWO(SearchAgents_no,Max_iteration,lb,ub,dim,fobj);

% Grey Wolf Optimizer
function [Alpha_score,Alpha_pos,Convergence_curve]=GWO(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
% initialize alpha, beta, and delta positions
Alpha_pos=zeros(1,dim);
Alpha_score=inf; %change this to -inf for maximization problems

Beta_pos=zeros(1,dim);
Beta_score=inf; %change this to -inf for maximization problems

Delta_pos=zeros(1,dim);
Delta_score=inf; %change this to -inf for maximization problems

%Initialize the positions of search agents
Positions=initialization(SearchAgents_no,dim,ub,lb);

Convergence_curve=zeros(1,Max_iter);

l=0; % Loop counter

% Main loop
while l<Max_iter
    for i=1:size(Positions,1)  
        
       % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;               
        
        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));
        
        % Update Alpha, Beta, and Delta
        if fitness<Alpha_score 
            Alpha_score=fitness; % Update alpha
            Alpha_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness<Beta_score 
            Beta_score=fitness; % Update beta
            Beta_pos=Positions(i,:);
        end
        
        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score 
            Delta_score=fitness; % Update delta
            Delta_pos=Positions(i,:);
        end
    end
    
    
    a=2-l*((2)/Max_iter); % a decreases linearly from 2 to 0
    
    % Update the Position of search agents including omegas
    for i=1:size(Positions,1)
        for j=1:size(Positions,2)     
                       
            r1=rand(); % r1 is a random number in [0,1]
            r2=rand(); % r2 is a random number in [0,1]
            
            A1=2*a*r1-a; % Equation (3.3)
            C1=2*r2; % Equation (3.4)
            
            D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j)); % Equation (3.5)-part 1
            X1=Alpha_pos(j)-A1*D_alpha; % Equation (3.6)-part 1
                       
            r1=rand();
            r2=rand();
            
            A2=2*a*r1-a; % Equation (3.3)
            C2=2*r2; % Equation (3.4)
            
            D_beta=abs(C2*Beta_pos(j)-Positions(i,j)); % Equation (3.5)-part 2
            X2=Beta_pos(j)-A2*D_beta; % Equation (3.6)-part 2       
            
            r1=rand();
            r2=rand(); 
            
            A3=2*a*r1-a; % Equation (3.3)
            C3=2*r2; % Equation (3.4)
            
            D_delta=abs(C3*Delta_pos(j)-Positions(i,j)); % Equation (3.5)-part 3
            X3=Delta_pos(j)-A3*D_delta; % Equation (3.6)-part 3
            
            Positions(i,j)=(X1+X2+X3)/3; % Equation (3.7)
            
        end
    end

    l=l+1;
    Convergence_curve(l)=Alpha_score;
end
end

III. Operation results





IV. Note

MATLAB version: R2014a