
Neural networks can be used to fit regression problems; here we build what is essentially a single-input, single-output neural network model.

1. Import the relevant modules

numpy: used to generate the training data

pyplot: used to draw scatter plots and visualize the fitting result

Sequential: a model built by stacking layers in sequence

Dense: a fully connected layer

import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense

2. Generate the data

Generate 500 random points, add random noise to them, and draw the resulting data as a scatter graph.

# Use numpy to generate 500 random points
x_data = np.random.rand(500)
# Add noise
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = x_data*0.1 + 0.2 + noise

# Draw a scatter diagram
plt.scatter(x_data,y_data)
plt.show()
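As a quick sanity check (not part of the original tutorial), an ordinary least-squares fit with np.polyfit should recover coefficients close to the true slope 0.1 and intercept 0.2 used to generate the data:

```python
import numpy as np

# Regenerate the same kind of data as above
x_data = np.random.rand(500)
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = x_data * 0.1 + 0.2 + noise

# Closed-form least squares: fit a degree-1 polynomial
slope, intercept = np.polyfit(x_data, y_data, 1)
print(slope, intercept)  # both should be close to 0.1 and 0.2
```

This is the baseline the neural network below should match after training.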

3. Build the model

Create a Sequential model and add layers to it with model.add. Here a single Dense (fully connected) layer is added; it takes two parameters: input_dim, the dimension of the input data, and units, the number of neurons, i.e. the output dimension. If further layers were added, they would not need input_dim, because by default each layer takes the output of the previous layer as its input. Then model.compile sets the loss function and optimizer: the loss is MSE (mean squared error), and the optimizer is SGD (stochastic gradient descent).

# Build a neural network model
model = Sequential()
# Add a fully connected layer to the model
model.add(Dense(units=1,input_dim=1))

# Select the Loss function and optimizer
# MSE :Mean Squared Error
# SGD: Stochastic Gradient Descent
model.compile(loss='mse', optimizer='sgd')
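To make the explanation above concrete, here is a minimal numpy sketch (an illustration, not Keras internals) of what a Dense(units=1, input_dim=1) layer computes and how the mse loss measures its error; the weight values W and b are hypothetical:

```python
import numpy as np

# A Dense layer with one input and one unit is just y = W*x + b
W, b = 0.05, 0.1           # hypothetical current weights
x = np.array([0.0, 0.5, 1.0])
y_true = 0.1 * x + 0.2     # targets from the true line

y_pred = W * x + b                       # forward pass of the layer
mse = np.mean((y_true - y_pred) ** 2)    # the 'mse' loss
print(mse)
```

Training adjusts W and b to drive this mse toward zero.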

4. Train the model

Train on x_data and y_data in batches with model.train_on_batch, which returns the cost; print the result every 500 steps. Finally, print the trained weights and bias.

# Training process
print('Training -----------')
# Train for 5001 batches
for step in range(5001):
    cost = model.train_on_batch(x_data, y_data)
    if step % 500 == 0:
        print("After %d trainings, the cost: %f" % (step, cost))

# Print the weights and bias values
W, b = model.layers[0].get_weights()
print('Weights=', W,'\nbiases=', b)
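The training loop above can be mirrored in plain numpy to show what gradient descent on the mse loss is doing; this is a simplified full-batch sketch (the learning rate and random seed are arbitrary choices for the illustration, not what Keras's 'sgd' uses):

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(500)
y = x * 0.1 + 0.2 + np.random.normal(0, 0.01, x.shape)

W, b = 0.0, 0.0            # start from zero weights
lr = 0.1                   # hypothetical learning rate
for step in range(5001):
    err = W * x + b - y    # prediction error
    # Gradients of the mean squared error w.r.t. W and b
    W -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(W, b)  # should approach 0.1 and 0.2
```

After enough steps, W and b converge to the slope and intercept the data was generated from, which is why the printed weights above end up near 0.1 and 0.2.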

5. Visualize the training results

Finally, plot the model's predictions and compare them with the training data.

# Scatter plot of the training data
plt.scatter(x_data, y_data)
# Model predictions on the training inputs
y_pred = model.predict(x_data)
# Draw the fitted line in red with line width 3
plt.plot(x_data, y_pred, 'r', lw=3)
plt.show()