
Looking back on my own path to learning PyTorch, I tried all kinds of material at the beginning but never felt a sense of progress. Some tutorials start by writing a CNN right away; although the content is simple, it is hard to find the key points and easy to learn in a fog. Others only skim the surface: just as you reach the threshold and feel you still haven't gotten in the door, the tutorial is over.

Therefore, I decided to retrace my original learning route, partly to further consolidate my own PyTorch foundation, and partly to put together a from-scratch tutorial that starts with zero PyTorch experience and works up to reproducing parts of recent papers. It is organized around my own study notes, and you are welcome to use it as a reference.

In this first note, we'll start with a simple classifier. The main process is divided into the following three parts:

1. Generate a customized training set, specifically some points on the two-dimensional plane that can be divided into two classes;

2. Build a shallow neural network to fit the features, mainly to understand how to build a network structure in PyTorch;

3. Complete the training and testing steps to become familiar with how PyTorch trains and tests a network.

1. Generate a customized dataset

import torch
import matplotlib.pyplot as plt

n_data = torch.ones(100, 2)         # base tensor: 100 rows, 2 columns, all ones
x0 = torch.normal(2 * n_data, 1)    # class 0: points around (2, 2), std 1
y0 = torch.zeros(100)               # class 0 labels: all 0
x1 = torch.normal(-2 * n_data, 1)   # class 1: points around (-2, -2), std 1
y1 = torch.ones(100)                # class 1 labels: all 1

x = torch.cat((x0, x1)).type(torch.FloatTensor)  # 200 x 2 data tensor
y = torch.cat((y0, y1)).type(torch.LongTensor)   # 200 integer labels

In this article, we first implement classification on a simple, self-defined dataset. This makes it easy to understand how to build a neural network model with PyTorch.

If you are familiar with NumPy, you can probably guess what this code is about; the difference is that where NumPy works with arrays, PyTorch works with tensors. Below I briefly walk through what these lines do, for anyone who needs to straighten out the idea.
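As a quick aside (a minimal example of my own, not from the original), moving between the two representations is a one-liner in each direction:

import numpy as np
import torch

arr = np.ones((2, 3))        # a NumPy array
t = torch.from_numpy(arr)    # to a tensor (shares the underlying memory)
back = t.numpy()             # and back to a NumPy array
print(type(arr), type(t))    # <class 'numpy.ndarray'> <class 'torch.Tensor'>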

First of all, n_data is the base tensor from which everything else is generated: a tensor with 100 rows and 2 columns, every value equal to 1. x0 holds the coordinates of the first class of data and is generated from this n_data.

Specifically, it is generated with the function torch.normal(), whose first argument is the mean and whose second is the standard deviation (std). The returned x0 therefore has the same shape as n_data, but its values are drawn from a normal distribution with mean 2 and standard deviation 1. y0 is a 100-element tensor whose values are all zero.

Taken together, x0 is a 100-row, 2-column tensor of points normally distributed around the center (2, 2), and the label for each of those points is the corresponding entry of y0, i.e. 0. By contrast, x1 is centered at (-2, -2) and labeled by y1, meaning each of its points gets the label 1.

Finally, torch.cat() combines everything into x and y: x0 and x1 are concatenated into the data tensor x, and y0 and y1 into the label tensor y.
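A quick sanity check (my own addition) confirms the shapes and labels we just described:

print(x.shape, x.dtype)   # torch.Size([200, 2]) torch.float32
print(y.shape, y.dtype)   # torch.Size([200]) torch.int64
print(y[:3], y[-3:])      # tensor([0, 0, 0]) tensor([1, 1, 1])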

2. Construct a shallow neural network

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        # input layer -> hidden layer
        self.n_hidden = torch.nn.Linear(n_feature, n_hidden)
        # hidden layer -> output layer
        self.out = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x_layer):
        x_layer = torch.relu(self.n_hidden(x_layer))
        x_layer = self.out(x_layer)
        # note: CrossEntropyLoss below applies log-softmax internally, so many
        # implementations return raw logits here; dim=1 normalizes each row
        x_layer = torch.nn.functional.softmax(x_layer, dim=1)
        return x_layer




# 2 input features, 10 hidden neurons, 2 output classes
net = Net(n_feature=2, n_hidden=10, n_output=2)
# print(net)

optimizer = torch.optim.SGD(net.parameters(), lr=0.02)  # stochastic gradient descent
loss_func = torch.nn.CrossEntropyLoss()                 # cross-entropy loss for classification

The Net() class above shows how a neural network is built. It is a simple enough example for anyone writing a neural network in PyTorch for the first time. It consists of two parts: the __init__() function and the forward() function.

__init__() defines the network structure: which layers there are, and what each layer does. In this function, self.n_hidden defines a linear mapping, i.e. a fully connected layer, that projects into the hidden layer: its input size is n_feature and its output size is the number of hidden neurons, n_hidden. self.out is another fully connected layer, whose input is the number of hidden neurons, n_hidden, and whose output is the final output size, n_output.
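To see this mapping in isolation, here is a minimal sketch of my own: a torch.nn.Linear layer with the same shapes as self.n_hidden simply maps the last dimension from 2 to 10:

fc = torch.nn.Linear(2, 10)   # same shape as self.n_hidden: 2 features in, 10 out
demo = torch.randn(5, 2)      # a batch of 5 two-dimensional points
print(fc(demo).shape)         # torch.Size([5, 10])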

Next comes the forward() function, which defines the order in which our network executes. You can see that the input x_layer first passes through the hidden layer, i.e. the first fully connected layer self.n_hidden(), and the relu activation function is applied to its output. The same pattern repeats through the output layer self.out() to get the final output, and the resulting x_layer is returned.
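Note that we never call forward() ourselves: nn.Module's __call__ dispatches to it. As a small sketch of my own (using the net instantiated above and a made-up batch called dummy):

dummy = torch.randn(4, 2)   # a made-up batch: 4 points, 2 features each
probs = net(dummy)          # nn.Module.__call__ dispatches to forward()
print(probs.shape)          # torch.Size([4, 2])
print(probs.sum(dim=1))     # each row sums to 1 because of the softmax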

The optimizer is also defined here, where lr is the learning-rate parameter. For the loss function we chose cross-entropy, which is the last line of code above. Optimization algorithms and loss functions can be selected directly from the corresponding PyTorch APIs, following the fixed form above.
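As an aside of my own (the demo_scores, demo_labels, and alt_optimizer names below are made up for illustration), CrossEntropyLoss expects scores of shape (N, C) and integer labels of shape (N,), and swapping the optimizer is a one-line change:

demo_scores = torch.randn(3, 2)        # scores for 3 samples over 2 classes
demo_labels = torch.tensor([0, 1, 1])  # integer class labels, shape (3,)
print(loss_func(demo_scores, demo_labels))   # a scalar loss tensor

# e.g. Adam instead of SGD: same parameters, different update rule
alt_optimizer = torch.optim.Adam(net.parameters(), lr=0.02)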

3. Complete training and testing

for i in range(100):
    out = net(x)              # forward pass: predictions for all 200 points
    # print(out.shape, y.shape)
    loss = loss_func(out, y)  # cross-entropy between predictions and labels

    optimizer.zero_grad()     # clear gradients left over from the previous step
    loss.backward()           # backpropagate to compute fresh gradients
    optimizer.step()          # update the parameters with SGD

Let's take a look at the training process. net is an object instantiated from the Net() class, so calling net() directly runs the model. out is the model's prediction, and loss is the error between the prediction and the true labels, computed with the cross-entropy loss function.

The next three lines of code are the standard form for backpropagating gradients. At this point our training is complete, and the network can now be used directly to predict and classify test data.
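If you want to watch the loss fall as training runs, a small variation of the loop above (my own addition, not in the original post) prints it every 20 steps:

for i in range(100):
    out = net(x)
    loss = loss_func(out, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if i % 20 == 0:
        print(f'step {i}: loss = {loss.item():.4f}')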

# train result
train_result = net(x)
# print(train_result.shape)
train_predict = torch.max(train_result, 1)[1]   # index of the max score = predicted class
plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=train_predict.data.numpy(), s=100, lw=0, cmap='RdYlGn')
plt.show()

To help you better understand what the model has learned, here we do some visualization work to see the learning effect. This can be done with matplotlib, a data-visualization library that is very common in Python; its specific usage is not covered here.

The plot shows the trained model's classification of the training set itself, which can be understood as the training error.
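If you prefer a number to a picture, training accuracy is one comparison away (a sketch of mine, using the train_predict and y defined above):

train_accuracy = (train_predict == y).float().mean().item()
print(f'training accuracy: {train_accuracy:.2%}')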

# test
t_data = torch.zeros(100, 2)                # 100 means of (0, 0)
test_data = torch.normal(t_data, 5)         # test points around the origin, std 5
test_result = net(test_data)
prediction = torch.max(test_result, 1)[1]   # predicted class for each test point

plt.scatter(test_data.data.numpy()[:, 0], test_data.data.numpy()[:, 1], s=100, c=prediction.data.numpy(), lw=0, cmap='RdYlGn')
plt.show()

Then we randomly generate some test data centered at 0 with a larger spread (standard deviation 5) to see how the model classifies these points.

Although we have not drawn the learned dividing line, we can still see that the model has learned a decision boundary that splits the data into two classes.
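If you do want to see that dividing line, one common approach is to evaluate the network on a dense grid and shade the predicted regions. The following is a sketch of my own (assuming the trained net, x, y, and the imports above), not code from the original post:

import numpy as np

# evaluate the trained net on a dense grid covering the plane
xs = np.linspace(-6, 6, 200)
grid_x, grid_y = np.meshgrid(xs, xs)
grid = torch.tensor(np.stack([grid_x.ravel(), grid_y.ravel()], axis=1), dtype=torch.float32)

with torch.no_grad():                       # no gradients needed for plotting
    grid_pred = torch.max(net(grid), 1)[1]

plt.contourf(grid_x, grid_y, grid_pred.numpy().reshape(grid_x.shape), alpha=0.3, cmap='RdYlGn')   # predicted regions
plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=y.data.numpy(), s=100, lw=0, cmap='RdYlGn')   # training points
plt.show()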

