# [Optimization prediction] GA-optimized BP regression prediction in MATLAB (with a comparison against the unoptimized BP)

## I. Overview

**1. Introduction to the BP neural network prediction principle**

A BP neural network is a multi-layer feed-forward neural network, commonly built as a three-layer structure of input layer, single hidden layer, and output layer, as shown in the figure below. The main idea of BP training: the input feature data are first mapped to the hidden layer (via an activation function) and then mapped to the output layer (by default a linear transfer function) to obtain the predicted output value. The predicted output is compared with the actual measured value, the error function J is computed, the error is back-propagated, and the weights and thresholds of the network are adjusted by an algorithm such as gradient descent. This process is repeated until the target error or the maximum number of iterations is reached and training stops.

Use the following example to understand what each layer does.

1) Input layer: analogous to the human senses, which capture external information; it corresponds to the input ports of the network receiving the input data.

2) Hidden layer: analogous to the human brain, which analyzes the data passed on by the senses. The hidden layer maps the data x from the input layer, which can be written simply as hiddenLayer_output = F(w*x + b), where w and b are the weight and threshold parameters, F() is the mapping rule (also called the activation function), and hiddenLayer_output is the hidden layer's output for the transmitted data. In other words, the hidden layer maps the input factor data x and produces the mapped value.

3) Output layer: analogous to the human limbs. After the brain processes the information from the senses (the hidden-layer mapping), it directs the limbs to act (respond to the outside world). Similarly, the output layer of the BP network maps hiddenLayer_output again: outputLayer_output = w*hiddenLayer_output + b, where w and b are the weight and threshold parameters and outputLayer_output is the network's output (also called the simulated or predicted value).

4) Gradient descent: compute the deviation between outputLayer_output and the target value y supplied to the model, and use the algorithm to adjust the weights, thresholds, and other parameters accordingly. Think of a baby slapping at a table: it misses, adjusts its body according to how far it missed, and each repeated swing of the arm gets closer until it hits the table.
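
The two mappings described in steps 2) and 3) can be sketched as a single forward pass. The article's code is MATLAB; the following is an illustrative Python sketch (all names and the sigmoid choice of F() are assumptions, matching the 8-input, 1-output data used later with a hypothetical 5-node hidden layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # F(): the hidden-layer activation

# Randomly initialized weights (w) and thresholds (b), as in step 4)'s starting point
w1 = rng.uniform(-1, 1, size=(5, 8)); b1 = rng.uniform(-1, 1, size=5)
w2 = rng.uniform(-1, 1, size=(1, 5)); b2 = rng.uniform(-1, 1, size=1)

x = rng.random(8)                          # one 8-dimensional input sample
hidden_out = sigmoid(w1 @ x + b1)          # hiddenLayer_output = F(w*x + b)
output = w2 @ hidden_out + b2              # linear output layer
print(output.shape)                        # (1,)
```

Training then repeats this pass, measures the deviation from y, and adjusts w1, b1, w2, b2 by gradient descent.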

Function of BP neural network

"Even if you used up all the stars, you could not exhaust all the positions in Go." Go embodies the ways of nature; when AlphaGo defeated a human Go champion, it used algorithms to find Go's patterns and realized a man-machine match. The result of BP training is the law relating the multidimensional data X and Y, i.e. an approximate mapping from X to Y. Whether the trained BP model is reliable depends on whether it can output accurate predictions for data it was not trained on. Therefore, after training, the test factors X1 must be fed into the trained BP network to obtain the corresponding output (predicted) values predict1, and indicators such as MSE, MAPE, and R-square are computed (and plots drawn) to compare how close predict1 is to Y1. This tells you whether the model is accurate; it is the testing, i.e. prediction, step of the BP model.
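
The evaluation indicators named above (MSE, MAPE, R-square) can be computed directly. A minimal Python sketch (the y_true/y_pred sample values are illustrative, not from the article's data set):

```python
import numpy as np

def mse(y_true, y_pred):
    # mean squared error
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    # mean absolute percentage error, in percent
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def r_square(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

y_true = np.array([3.0, 5.0, 2.5, 7.0])   # measured values Y1 (illustrative)
y_pred = np.array([2.8, 5.3, 2.4, 6.8])   # BP outputs predict1 (illustrative)
print(round(mse(y_true, y_pred), 4))      # 0.045
```

The closer MSE and MAPE are to 0 and R-square is to 1, the closer predict1 is to Y1.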

In summary, a BP neural network realizes: a) a model is trained on the training-set data; b) the reliability and accuracy of the model are checked on the test set (data distinct from the training samples) by comparing predictions with the actual values; c) only inputs are given and predicted values are obtained (this can be understood as the test set with the measured values removed; it is essentially the same: inputs go into the BP network and outputs come out). Since case c) has no measured output, it is pure prediction whose accuracy cannot be verified; it is of little significance when writing a paper, so that case is not implemented here.

**2. Principle of optimizing the BP neural network with the genetic algorithm (GA)**

During BP training, the weights and thresholds are updated by forward-propagating the data and back-propagating the error. On one hand, the weights and thresholds used in the first forward pass must somehow be determined, i.e. initialized; the common deep-learning practice is to initialize them randomly. On the other hand, once the initial parameters are chosen, the gradient-descent algorithm takes them as the starting point and iteratively updates them.

In the development of optimization algorithms there are two types: deterministic and heuristic. A deterministic algorithm solves the optimization problem by analytical methods; the result it finds depends on the starting point of the derivation and is generally a definite value. A heuristic algorithm is inspired by the laws of biological evolution in nature; its main idea is to approach the optimum iteratively, and the result is a value that meets the engineering-precision requirement (arbitrarily close to the theoretical optimum).

In the process above, the convergence of gradient descent, a deterministic algorithm, is provable, but the value it converges to is not necessarily the global optimum; it depends on the initial parameters (the starting point of gradient descent). Since the random initial parameters are not necessarily a good starting point (one yielding both accurate training and reliable prediction), the reliability and stability of the trained model are strongly affected by them. The genetic algorithm (GA), a heuristic algorithm with good global search ability, is introduced to solve this problem.

The main idea is to take the parameters as the decision variables of the problem and the accuracy of the model as its objective function. The flow chart of the GA-optimized BP neural network is as follows:

**3. Establishment of the GA-BP model**

3.1 Model and data introduction

The following takes the official chemical-sensor data set provided by MATLAB as an example for modeling. Data description: data were collected during a chemical experiment; the samples of eight sensors are taken as the input (x) and the samples of the ninth sensor as the output (y). The data format is as follows. Read the data:

```
data = xlsread('data.xlsx','Sheet1','A1:I498');
input  = data(:,1:end-1);
output = data(:,end);
N = length(output);        % total number of samples
testNum  = 100;            % number of test samples
trainNum = N - testNum;    % number of training samples
```

3.2 Parameter settings for GA and BP

1) BP parameter settings

The parameters related to weights and thresholds are determined as follows: a) the numbers of input-layer and output-layer nodes are obtained directly with the size function. [M,N] = size(A) returns the number of rows M and the number of columns N of A; size(A,2) returns only the second value, the number of columns. In this data set the input has 8 dimensions and the output 1, so the input layer has 8 nodes and the output layer has 1.

```
inputnum  = size(input,2);   % number of input-layer neurons
outputnum = size(output,2);  % number of output-layer neurons
```

b) The number of hidden-layer nodes is determined by looping over candidate values within a range and comparing the training errors. Because we search for the minimum error, MSE is initialized with a large value before the loop so that the optimal hidden-node count can be selected inside it.

```
% Determine the number of hidden layer nodes
% Empirical formula: hiddennum = sqrt(m+n) + a, where m is the number of input
% nodes, n the number of output nodes, and a an integer between 1 and 10
MSE = 1e+5;   % initialize the minimum error with a large value
for hiddennum = fix(sqrt(inputnum+outputnum))+1 : fix(sqrt(inputnum+outputnum))+10
```
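
The node-selection loop above can be sketched end to end. The following Python sketch (the article's code is MATLAB; the synthetic data, learning rate, and tiny training loop are assumptions) trains a small network briefly for each candidate hidden size and keeps the size with the smallest training MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 8))                      # 8 input indicators (synthetic)
y = X.sum(axis=1, keepdims=True)              # 1 output indicator (synthetic)

inputnum, outputnum = 8, 1
base = int(np.sqrt(inputnum + outputnum))     # empirical formula: sqrt(m+n)
best_mse, best_hidden = 1e5, None             # MSE initialized with a large value

for hiddennum in range(base + 1, base + 11):  # candidates sqrt(m+n)+1 .. +10
    w1 = rng.uniform(-1, 1, (inputnum, hiddennum)); b1 = np.zeros(hiddennum)
    w2 = rng.uniform(-1, 1, (hiddennum, outputnum)); b2 = np.zeros(outputnum)
    lr = 0.05
    for _ in range(200):                      # a few gradient-descent steps
        h = np.tanh(X @ w1 + b1)              # hidden-layer output
        pred = h @ w2 + b2                    # linear output layer
        err = pred - y
        grad_w2 = h.T @ err / len(X); grad_b2 = err.mean(axis=0)
        dh = (err @ w2.T) * (1 - h**2)        # back-propagated error
        grad_w1 = X.T @ dh / len(X); grad_b1 = dh.mean(axis=0)
        w2 -= lr * grad_w2; b2 -= lr * grad_b2
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
    train_mse = float(np.mean(err**2))        # training MSE of the last pass
    if train_mse < best_mse:
        best_mse, best_hidden = train_mse, hiddennum

print(best_hidden)
```

The hidden size with the lowest error (here between 4 and 13, since sqrt(8+1)=3) is then used as hiddennum_best.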

c) Other BP parameters: learning rate, number of training epochs, target training error, etc.

```
% Network training parameters
net.trainParam.epochs = 1000;      % number of training epochs
net.trainParam.lr     = 0.01;      % learning rate
net.trainParam.goal   = 0.000001;  % target (minimum) training error
```

2) GA parameter settings

```
% Initialize the GA parameters
PopulationSize_Data    = 30;    % initial population size
MaxGenerations_Data    = 50;    % maximum number of generations
CrossoverFraction_Data = 0.8;   % crossover probability
MigrationFraction_Data = 0.2;   % mutation probability
```

When a genetic algorithm is used to solve an optimization problem, there are three ways to encode the decision (optimization) variables: binary encoding, vector encoding, and matrix encoding.

Inside the BP network structure (net), the weights and thresholds exist as m×n matrices and vectors. To make each element easy to optimize, the elements are extracted one by one and placed, in extraction order, into a vector (the chromosome), which completes the encoding. The empirical range of weights and thresholds is [-1,1]; the optimization range can be widened appropriately to [-3,3]. The number of optimization variables (elements) is computed as follows:

```
nvars = inputnum*hiddennum_best + hiddennum_best + hiddennum_best*outputnum + outputnum;  % number of variables
lb = repmat(-3, nvars, 1);   % lower bound of the variables
% repmat builds an nvars×1 vector whose every element is -3, i.e. the lower bound
ub = repmat( 3, nvars, 1);   % upper bound of the variables
```
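
The encoding described above (flatten all weight and threshold elements into one chromosome, then restore them) can be sketched in Python. The sizes below (8 inputs, a hypothetical 5 hidden nodes, 1 output) and the decode helper are illustrative assumptions:

```python
import numpy as np

inputnum, hiddennum, outputnum = 8, 5, 1
# nvars = input-to-hidden weights + hidden thresholds
#       + hidden-to-output weights + output thresholds
nvars = inputnum*hiddennum + hiddennum + hiddennum*outputnum + outputnum  # 51

def decode(chrom):
    """Split a chromosome back into (w1, b1, w2, b2) in extraction order."""
    i = 0
    w1 = chrom[i:i + inputnum*hiddennum].reshape(hiddennum, inputnum)
    i += inputnum*hiddennum
    b1 = chrom[i:i + hiddennum]; i += hiddennum
    w2 = chrom[i:i + hiddennum*outputnum].reshape(outputnum, hiddennum)
    i += hiddennum*outputnum
    b2 = chrom[i:i + outputnum]
    return w1, b1, w2, b2

rng = np.random.default_rng(2)
chrom = rng.uniform(-3, 3, nvars)   # one individual within the bounds [-3, 3]
w1, b1, w2, b2 = decode(chrom)
print(nvars, w1.shape, w2.shape)    # 51 (5, 8) (1, 5)
```

After the GA finishes, the best chromosome is decoded this way and the pieces are written back into the network before the optimized BP training.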

2) Design of the fitness function. The fitness value is computed with the formula below, where TrainingSet and TestingSet are the training and test samples respectively. Higher prediction accuracy means lower error, so the formula is designed as a mean squared error to be minimized: the smaller the fitness value found by the genetic algorithm, the more accurate the training and the better the model's prediction accuracy.

3) Algorithm design. The genetic algorithm is treated as a "black box" optimizer: once the optimization variables and the target fitness function are fixed, the black box outputs the minimum error (the best accuracy) and the optimal variables, which are then assigned to the corresponding positions of the weight matrices and threshold vectors of the BP network for the optimized BP training and testing. Inside the black-box solver, the algorithm operations are selection, crossover, and mutation.
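
The three black-box operations named above can be sketched as a minimal real-coded GA. This Python sketch is generic, not the article's MATLAB ga call: a simple quadratic stands in for the BP-network MSE fitness, and the operator choices (tournament selection, arithmetic crossover, random-reset mutation) are assumptions; the population size, generations, and probabilities mirror the GA parameters set earlier.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # stand-in for the BP training/test MSE; smaller is better
    return float(np.sum(x**2))

pop_size, max_gen, nvars = 30, 50, 5
pc, pm = 0.8, 0.2                            # crossover / mutation probabilities
pop = rng.uniform(-3, 3, (pop_size, nvars))  # initial population within bounds

best_f, best_x = float('inf'), None
for gen in range(max_gen):
    fit = np.array([fitness(ind) for ind in pop])
    i_best = int(np.argmin(fit))
    if fit[i_best] < best_f:                 # record the best-so-far solution
        best_f, best_x = float(fit[i_best]), pop[i_best].copy()
    # Selection: binary tournament (lower fitness wins)
    idx = rng.integers(0, pop_size, (pop_size, 2))
    winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    # Crossover: arithmetic blend of consecutive parent pairs
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        if rng.random() < pc:
            a = rng.random()
            children[i]     = a*parents[i]   + (1-a)*parents[i+1]
            children[i + 1] = a*parents[i+1] + (1-a)*parents[i]
    # Mutation: random reset of a few genes within the bounds [-3, 3]
    mask = rng.random(children.shape) < pm / nvars
    children[mask] = rng.uniform(-3, 3, mask.sum())
    pop = children

print(round(best_f, 4))
```

In the actual GA-BP model, fitness(x) would decode x into the network's weights and thresholds (as in the encoding step above) and return the network's error; best_x is then written back into the network.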

## II. Source code

**a) main.m is the main program, containing both the plain BP prediction and the GA-optimized BP prediction. Replace the data in the external Excel file with your own data set, set the corresponding reading range in the MATLAB program, and run to get the results. The code carries clear Chinese comments.**

**b) The data set is in Excel format. When changing the data, set the corresponding Excel reading range in the MATLAB program.**

**c) The number of hidden-layer nodes is determined with the empirical formula and a loop, and the numbers of neurons in the input, hidden, and output layers are all provided by the process.**

Data description: the code is implemented with the chemical-sensor data set commonly used in deep learning, in Excel format, and runs on the data directly. The number of input indicators is unlimited and the output is single. Replace the program's external Excel data file with your own data set:

```
% read the data
data = xlsread('data.xlsx','Sheet1','A1:I498');
input  = data(:,1:end-1);
output = data(:,end);   % the last column of data is the output indicator
```

## III. Operation result

## IV. Note

MATLAB version: R2014a