Backpropagation (BP), short for "backward propagation of errors," is a common method used together with optimization methods such as gradient descent to train artificial neural networks. The method computes the gradient of the loss function with respect to every weight in the network. This gradient is then fed to the optimization method, which uses it to update the weights so as to minimize the loss function.
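For reference, the update rule that gradient descent applies with this gradient has the standard form below (this is general background, not taken verbatim from the original post; $\eta$ denotes the learning rate and $E_{total}$ the loss):

$$
w \leftarrow w - \eta \, \frac{\partial E_{total}}{\partial w}
$$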

When learning about deep neural networks, many students find it difficult to understand the details of backpropagation. There is an English technical blog post that derives it very clearly through a worked example. We have translated it into Chinese and provide the accompanying code. If you're interested, take a look.

The relevant code

The original post

Suppose you have a network structured like this:

Now we assign initial values to the parameters, as shown below:
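For concreteness, here is a minimal Python sketch of such a network's parameters. The variable names (i1, i2, h1, h2, o1, o2, w1 through w8, b1, b2) follow the referenced English example, and the numbers are illustrative assumptions; they may differ from the figure in the original post.

```python
# Illustrative initial values (assumed for this sketch; the figure in the
# original post may use different numbers).
i1, i2 = 0.05, 0.10                        # input values
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30    # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55    # hidden -> output weights
b1, b2 = 0.35, 0.60                        # hidden-layer and output-layer biases
target_o1, target_o2 = 0.01, 0.99          # desired outputs
```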

Forward propagation process

1. Input layer -> hidden layer:

2. Hidden layer -> output layer:
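Putting the two steps together, a sketch of the forward pass might look like this. It continues from the variables above and assumes the logistic (sigmoid) activation used in the referenced example for both layers.

```python
import math

def sigmoid(x):
    # logistic activation, assumed for both the hidden and output layers
    return 1.0 / (1.0 + math.exp(-x))

# Input layer -> hidden layer
net_h1 = w1 * i1 + w2 * i2 + b1
net_h2 = w3 * i1 + w4 * i2 + b1
out_h1 = sigmoid(net_h1)
out_h2 = sigmoid(net_h2)

# Hidden layer -> output layer
net_o1 = w5 * out_h1 + w6 * out_h2 + b2
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1 = sigmoid(net_o1)
out_o2 = sigmoid(net_o2)
```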

Backpropagation process

Next, we can work through the backpropagation step by step.

1. Calculate the total error
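The total error here is the sum of squared errors over the output neurons (assuming the squared-error loss used in the referenced example):

$$
E_{total} = \sum \tfrac{1}{2}(target - output)^2
          = \tfrac{1}{2}(target_{o1} - out_{o1})^2 + \tfrac{1}{2}(target_{o2} - out_{o2})^2
$$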

2. Weight update: hidden layer -> output layer:

The following figure gives a more intuitive view of how the error is propagated backward:

We calculate the value of each factor separately:
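Taking w5 (a hidden -> output weight) as an example, the chain rule splits the gradient into three factors. The symbolic forms below assume the squared-error loss and sigmoid activation introduced above:

$$
\frac{\partial E_{total}}{\partial w_5}
  = \frac{\partial E_{total}}{\partial out_{o1}} \cdot
    \frac{\partial out_{o1}}{\partial net_{o1}} \cdot
    \frac{\partial net_{o1}}{\partial w_5}
$$

$$
\frac{\partial E_{total}}{\partial out_{o1}} = -(target_{o1} - out_{o1}), \qquad
\frac{\partial out_{o1}}{\partial net_{o1}} = out_{o1}(1 - out_{o1}), \qquad
\frac{\partial net_{o1}}{\partial w_5} = out_{h1}
$$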

Then we multiply the three factors together.

Looking back at the formula above, we find that the first two factors can be collected into a single output-layer error term, often written as a delta, and the weight is then updated by gradient descent.
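A sketch of this computation in Python, continuing from the forward-pass variables above. The learning rate eta is an assumption for illustration and is not necessarily the value used in the original post.

```python
eta = 0.5  # learning rate (illustrative assumption)

d_E_d_out_o1 = -(target_o1 - out_o1)         # dE_total / d out_o1
d_out_o1_d_net_o1 = out_o1 * (1 - out_o1)    # sigmoid derivative at o1
d_net_o1_d_w5 = out_h1                       # d net_o1 / d w5

delta_o1 = d_E_d_out_o1 * d_out_o1_d_net_o1  # output-layer error term for o1
d_E_d_w5 = delta_o1 * d_net_o1_d_w5          # product of the three factors

w5_new = w5 - eta * d_E_d_w5                 # gradient-descent update of w5
```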

3. Weight update: input layer -> hidden layer:

Similarly, we can work out the corresponding terms for the input -> hidden weights.

Adding the error contributions from both output neurons together gives the total:
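In symbols (again assuming the loss and activation above), the error reaching out_h1 is the sum of the contributions from both outputs, and the gradient for a weight such as w1 then follows the same three-factor chain rule:

$$
\frac{\partial E_{total}}{\partial out_{h1}}
  = \frac{\partial E_{o1}}{\partial out_{h1}} + \frac{\partial E_{o2}}{\partial out_{h1}}
$$

$$
\frac{\partial E_{total}}{\partial w_1}
  = \frac{\partial E_{total}}{\partial out_{h1}} \cdot
    \frac{\partial out_{h1}}{\partial net_{h1}} \cdot
    \frac{\partial net_{h1}}{\partial w_1}
$$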

Finally, we multiply the three factors together.
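The same pattern in Python, continuing from the variables and delta_o1 computed above (again a sketch under the assumptions already stated):

```python
# Error term for output o2, same form as delta_o1
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)

# Error reaching out_h1 is the sum of the contributions from both outputs
d_E_o1_d_out_h1 = delta_o1 * w5              # error from o1 flowing back through w5
d_E_o2_d_out_h1 = delta_o2 * w7              # error from o2 flowing back through w7
d_E_d_out_h1 = d_E_o1_d_out_h1 + d_E_o2_d_out_h1

d_out_h1_d_net_h1 = out_h1 * (1 - out_h1)    # sigmoid derivative at h1
d_net_h1_d_w1 = i1                           # d net_h1 / d w1

d_E_d_w1 = d_E_d_out_h1 * d_out_h1_d_net_h1 * d_net_h1_d_w1
w1_new = w1 - eta * d_E_d_w1                 # gradient-descent update of w1
```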

This completes one pass of error backpropagation. We then recompute the outputs with the updated weights and iterate this process over and over.
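For completeness, here is a compact, self-contained sketch of the whole procedure iterated many times. It is not the linked code: it vectorizes the same 2-2-2 sigmoid network with NumPy, uses the illustrative numbers assumed above, and keeps the biases fixed for simplicity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative values (assumed; may differ from the figure in the original post)
x = np.array([0.05, 0.10])               # inputs i1, i2
t = np.array([0.01, 0.99])               # target outputs for o1, o2
W1 = np.array([[0.15, 0.20],             # input -> hidden weights (w1 w2 / w3 w4)
               [0.25, 0.30]])
W2 = np.array([[0.40, 0.45],             # hidden -> output weights (w5 w6 / w7 w8)
               [0.50, 0.55]])
b1, b2 = 0.35, 0.60                      # biases (kept fixed below for simplicity)
eta = 0.5                                # learning rate (illustrative)

for step in range(10000):
    # forward pass
    out_h = sigmoid(W1 @ x + b1)
    out_o = sigmoid(W2 @ out_h + b2)
    # backward pass: output-layer and hidden-layer error terms (deltas)
    delta_o = -(t - out_o) * out_o * (1 - out_o)
    delta_h = (W2.T @ delta_o) * out_h * (1 - out_h)
    # gradient-descent updates of the weight matrices
    W2 -= eta * np.outer(delta_o, out_h)
    W1 -= eta * np.outer(delta_h, x)

out_o = sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)
print("total error after training:", np.sum(0.5 * (t - out_o) ** 2))
```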

View the complete code on a PC.

Mo (momodel.cn) is a platform that supports Python-based AI modeling and can help you rapidly develop and deploy AI applications.