1. Mathematical representation of fault information

In the figure above, K stands for a circuit breaker. Each circuit breaker is equipped with an FTU device that can report whether overcurrent flows through the breaker. I_j denotes the fault information uploaded for switch j, and it reflects whether fault current flows through that switch (1 if fault current flows through it, 0 otherwise). That is:

$$I_j = \begin{cases} 1, & \text{fault current flows through switch } K_j \\ 0, & \text{no fault current flows through switch } K_j \end{cases}$$

Since the information uploaded by the FTUs falls into two types, overcurrent (fault) information and no-overcurrent (fault-free) information, and each feeder section can likewise only be faulted or fault-free, binary coding can be used to build the mathematical model for distribution network fault location. Taking the radial distribution network shown in the figure above as an example, the system has 12 section switches. A string of 12 binary bits can therefore represent the information uploaded by the FTUs and serves as the input of the program, where 1 indicates that the corresponding switch has reported overcurrent and 0 indicates that it has not. Another string of 12 binary bits serves as the output of the program, where 1 means a fault has occurred in the corresponding feeder section and 0 means no fault.
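As a rough illustration of this coding, here is a minimal MATLAB sketch. The vector names, the simple "downstream" switch function and the fitness expression are illustrative assumptions only, not the exact model implemented later by fitness_4geDG:

% Minimal sketch of the binary coding (illustrative values, single-source radial feeder
% with switches numbered outward from the source).
ftuInfo    = [1 1 1 0 0 0 0 0 0 0 0 0];   % input: 12-bit FTU upload, 1 = overcurrent reported
faultState = [0 0 1 0 0 0 0 0 0 0 0 0];   % candidate output: 1 = section assumed faulted (here section 3)

% Expected overcurrent pattern under this hypothesis: switch j carries fault current
% iff the assumed fault lies at or downstream of it (assumed switch function).
expected = double(cumsum(faultState, 'reverse') > 0);

% A simple fitness: how well the hypothesis explains the uploaded information
% (smaller is better), plus a small penalty against assuming unnecessary faults.
fitness = sum(abs(ftuInfo - expected)) + 0.5*sum(faultState);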

Traditional distribution network optimization mainly involves adjusting generator terminal voltage, transformer taps and capacitor capacity. Once distributed generation and energy storage devices are connected, the optimization also includes the control of these devices. The objective of distribution network operation optimization is to minimize the active power loss of the system and to reduce the operating cost of the equipment. The optimization variables include continuous variables, namely the active and reactive power of distributed generation and energy storage devices, and discrete variables, namely transformer tap positions, the number of capacitor banks switched in, and the location and capacity of the connected equipment. The main constraints are: 1. upper and lower limits on generator terminal voltage; 2. limits on transformer tap position and on capacitor capacity; 3. active and reactive power constraints of distributed generation and energy storage devices. Considering the objective function, variables and constraints together, the optimization problem can be regarded as a multi-objective, multi-variable mixed-integer nonlinear programming problem.

At present there are two main approaches to solving the distribution network optimization problem: traditional mathematical optimization methods and artificial intelligence methods. Traditional mathematical optimization methods mainly include linear/nonlinear programming and dynamic programming, while artificial intelligence methods mainly include genetic algorithms, simulated annealing and particle swarm optimization. Traditional optimization algorithms treat the whole problem globally, have a rigorous theoretical basis and short computation times, but place high demands on the form of the objective function and on the initial values of the optimization variables. Artificial intelligence algorithms have low requirements on the objective function and initial values and can handle high-dimensional optimization problems; their disadvantages are that they easily fall into local optima and that computation times are long.

In summary, the main research directions in distribution network optimization are: (1) optimizing the operating mode of distribution networks containing distributed generation and energy storage devices; (2) optimizing the installation location and capacity of distributed generation and energy storage devices connected to the distribution network; (3) joint operation and planning optimization that also considers the selection of distributed generation and energy storage devices.
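Collecting the objective, variables and constraints listed above, one simplified way to write the resulting mixed-integer nonlinear program is sketched below. The symbols are illustrative, and the network power-flow equations are omitted for brevity:

$$\begin{aligned}
\min_{P_{DG},\,Q_{DG},\,T,\,C}\ \ & P_{loss} + C_{op}\\
\text{s.t.}\ \ & V_{G,\min} \le V_G \le V_{G,\max}\\
& T_{\min} \le T \le T_{\max},\quad 0 \le C \le C_{\max},\quad T,\,C \in \mathbb{Z}\\
& P_{DG,\min} \le P_{DG} \le P_{DG,\max},\quad Q_{DG,\min} \le Q_{DG} \le Q_{DG,\max}
\end{aligned}$$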

2. The concept of particle swarm optimization

Particle Swarm Optimization (PSO) is a kind of evolutionary computation that originated from the study of the predatory behavior of bird flocks. The basic idea of the algorithm is to find the optimal solution through cooperation and information sharing among the individuals in a swarm. The advantages of PSO are that it is simple, easy to implement and has few parameters to adjust. It has been widely used in function optimization, neural network training, fuzzy system control and other fields where genetic algorithms are also applied.

1. Basic ideas

Particle swarm optimization simulates a bird in a flock by designing a massless particle that has only two attributes: velocity and position. Velocity represents how fast the particle moves, and position represents the direction in which it moves. Each particle searches for the optimal solution separately in the search space and records it as its current individual extremum; the individual extrema are shared with the other particles in the swarm, and the best of them is taken as the current global optimum of the whole swarm. All particles then adjust their velocity and position according to their own current individual extremum and the current global optimum shared by the whole swarm. The following GIF vividly shows the process of the PSO algorithm:

2. Update rules

PSO is initialized with a group of random particles (random solutions) and then searches for the optimal solution iteratively. In each iteration, every particle updates itself by tracking two "extreme values" (pBest and gBest). After finding these two best values, the particle updates its velocity and position using the formulas below.
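For reference, the velocity and position updates referred to below as formulas (1) and (2) are the standard PSO update rules, where w is the inertia weight, c1 and c2 are the acceleration coefficients, and r1, r2 are random numbers in [0, 1]:

$$v_i^{k+1} = w\,v_i^{k} + c_1 r_1\,(pBest_i - x_i^{k}) + c_2 r_2\,(gBest - x_i^{k}) \qquad (1)$$

$$x_i^{k+1} = x_i^{k} + v_i^{k+1} \qquad (2)$$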

The first part of formula (1) is called the [memory term]; it represents the influence of the magnitude and direction of the previous velocity. The second part of formula (1) is called the [self-cognition term]; it is a vector pointing from the current position to the particle's own best position, indicating that the particle's motion is driven by its own experience. The third part of formula (1) is called the [group-cognition term]; it is a vector pointing from the current position to the best position found by the swarm, reflecting cooperation and knowledge sharing among particles. The particle thus decides its next move based on its own experience and the best experience of its companions. Formulas (1) and (2) together form the standard PSO.
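As a concrete illustration of these update rules, here is a minimal, self-contained MATLAB sketch of standard PSO minimizing a simple test function. The objective, dimensions and parameter values are arbitrary examples, not the fault-location fitness used later:

% Minimal standard PSO sketch (illustrative parameters).
fobj = @(x) sum(x.^2, 2);            % example objective: sphere function
nVar = 5;  nPop = 30;  maxIter = 100;
lb = -10;  ub = 10;                  % search range
w = 0.7;  c1 = 1.5;  c2 = 1.5;       % inertia weight and acceleration coefficients

X = lb + (ub - lb)*rand(nPop, nVar); % positions
V = zeros(nPop, nVar);               % velocities
pBest = X;  pBestVal = fobj(X);      % individual extrema
[gBestVal, idx] = min(pBestVal);  gBest = X(idx, :);

for it = 1:maxIter
    r1 = rand(nPop, nVar);  r2 = rand(nPop, nVar);
    % formula (1): memory term + self-cognition term + group-cognition term
    V = w*V + c1*r1.*(pBest - X) + c2*r2.*(gBest - X);
    % formula (2): move the particles and keep them inside the bounds
    X = min(max(X + V, lb), ub);
    val = fobj(X);
    improved = val < pBestVal;                 % update individual extrema
    pBest(improved, :) = X(improved, :);
    pBestVal(improved) = val(improved);
    [curBest, idx] = min(pBestVal);            % update global extremum
    if curBest < gBestVal
        gBestVal = curBest;  gBest = pBest(idx, :);
    end
end
gBestVal   % best fitness found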

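The main script below (cleaned up from the original listing) configures the optimizer options and runs the fault-location optimization with a QPSO variant; get_psoOptions, QPSO, fitness_4geDG and FBM are assumed to be helper functions provided elsewhere in the project.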
%function main()
clear; clc; tic;
psoOptions = get_psoOptions;
psoOptions.Vars.ErrGoal = 1e-6;              % minimum error
LL = 5;
% Parameters common across all functions
psoOptions.SParams.c1 = 0.02;                % boundary parameter
psoOptions.SParams.w_beta = 0.5;
psoOptions.Obj.f2eval = 'fitness_4geDG';
psoOptions.Obj.lb = ones(1,LL);              % psoOptions.Obj.lb = ones(1,32);
psoOptions.Obj.ub = [10 7 15 21 11];         % psoOptions.Obj.ub = 20*ones(1,32);
% psoOptions.Obj.ub(1,1,5) = 4;
psoOptions.SParams.Xmax = psoOptions.Obj.ub;
DimIters = [5; ...                           % Dimensions
            300];                            % Corresponding iterations
x = DimIters;
psoOptions.Vars.Dim = x(1,:);
psoOptions.Vars.Iterations = x(2,:);
SwarmSize = 50;                              % population size
psoOptions.Vars.SwarmSize = SwarmSize;
disp(sprintf('This experiment will optimize %s function', psoOptions.Obj.f2eval));
disp(sprintf('Population Size: %d\t\tDimensions: %d.', psoOptions.Vars.SwarmSize, psoOptions.Vars.Dim));
temp = 5e6; fVal = 0;
% run QPSO
[TFxmin, xmin, PBest, fPBest, tHistory] = QPSO(psoOptions);
fVal = TFxmin;
if temp > TFxmin
    temp = TFxmin;
    Record = tHistory;
end
toc;
disp(sprintf('\nMinimum fitness fmin = %2.10g\t\t', temp));   % optimal fitness value
xmin      % optimized switch combination
fPBest    % fitness values of the alternative switch combinations
PBest     % alternative switch combinations (alternative schemes for each possible switch failure)
A = FBM(xmin);
figure(1)
plot(Record(:,2))
xlabel('iteration number')
ylabel('fitness value')