This is the first day of my participation in the November Gengwen Challenge. For event details, see: The Last Gengwen Challenge of 2021.

import random
import torch
from d2l import torch as d2l

d2l is a companion package prepared by Mu Li's team for the book; you need to install it yourself (for example, via pip install d2l).

# generate y = Xw + b + noise
def synthetic_data(w, b, num_examples):
    X = torch.normal(0, 1, (num_examples, len(w)))  # torch.normal(mean, std, size)
    # num_examples: number of samples
    # len(w): number of features per sample, which must match the number of weights
    y = torch.mv(X, w) + b
    # Add Gaussian noise
    y += torch.normal(0, 0.01, y.shape)
    return X, y.reshape((-1, 1))
    
    
true_w = torch.tensor([2, -3.4])
true_b = 4.2

features, labels = synthetic_data(true_w, true_b, 1000)
  • This step generates a data set by hand: since there is no existing data set, we create our own from y = Xw + b.

  • torch.normal(mean, std, size) generates a tensor of the given size filled with random numbers drawn from a normal distribution with mean mean and standard deviation std.

  • y = Xw + b is a one-dimensional tensor, so we use y.reshape to turn it into a two-dimensional tensor: it goes from size (m,) to size (m, 1), where m is the number of samples.

  • We manually set the true w and b and then generate features and labels, which play the role of the x and y we would normally observe (a quick sanity check follows below).
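
A minimal sanity check on the generated data (a sketch, assuming the code above has been run; the printed values will vary because of the random noise):

print(features.shape)          # torch.Size([1000, 2])
print(labels.shape)            # torch.Size([1000, 1])
print(features[0], labels[0])  # one sample and its noisy label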

# Mini-batch: the size of each batch that is read at a time
batch_size=10

def data_iter(batch_size, features, labels):
    # Get the number of samples (the length of features)
    num_examples = len(features)
    # Generate an index for each sample
    indices = list(range(num_examples))
    # Shuffle the indices so samples are read in random order
    random.shuffle(indices)
    
    for i in range(0, num_examples, batch_size):
        batch_indices = torch.tensor(indices[i: min(i + batch_size, num_examples)])
        yield features[batch_indices], labels[batch_indices]

  • Note that data_iter is a generator function, not an ordinary function: calling data_iter returns a generator object instead of executing the function body immediately (see the small example after this list).

  • [Python iterators and generators](Python iterators and generators – Nuggets (juejin.cn))
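
A small illustration of the generator behaviour (a sketch, assuming features, labels, and batch_size from above):

batches = data_iter(batch_size, features, labels)  # nothing runs yet; we only get a generator object
X_batch, y_batch = next(batches)                   # pulls the first mini-batch out of the generator
print(X_batch.shape, y_batch.shape)                # torch.Size([10, 2]) torch.Size([10, 1])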

# Randomly initialize w, b
w = torch.normal(0, 0.01, size=(2, 1), requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# Linear regression model
def linear_regression(X, w, b):
    return torch.matmul(X, w) + b

# Loss function: squared loss
def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

# Define the optimization algorithm
def sgd(params, lr, batch_size):
    """Mini-batch stochastic gradient descent."""
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()
  • This step defines the usual mini-batch gradient descent update and should need little explanation; note that because the loss is summed over the mini-batch, the gradient is divided by batch_size to take the average (the update rule is written out below).
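
For reference, the update that sgd implements can be written as follows (a standard formulation, not quoted from the original post), where $\eta$ is the learning rate lr and $\mathcal{B}$ is the current mini-batch:

$$(\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \partial_{(\mathbf{w}, b)}\, l^{(i)}(\mathbf{w}, b)$$
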
lr = 0.003
num_epochs = 3
loss = squared_loss
  • Set the learning rate

  • Set the number of epochs, i.e. how many passes of gradient descent over the data to run

  • Give the loss function a short alias, loss

# num_epochs is the number of passes of gradient descent over the whole data set
for epoch in range(num_epochs):
    # Take one mini-batch at a time from the generator
    for X, y in data_iter(batch_size, features, labels):
        # Calculate the predicted value of y using linear regression
        y_pred = linear_regression(X, w, b)
        # Compute the loss; loss() returns a vector of length batch_size,
        # so sum it into a scalar before backpropagating
        l = loss(y_pred, y).sum()
        # Backpropagation
        l.backward()
        # Update the parameters using their gradients
        sgd([w, b], lr, batch_size)
        
    with torch.no_grad():
        train_l = loss(linear_regression(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')

# print(f'error in estimating w: {true_w - w.reshape(true_w.shape)}')
# print(f'error in estimating b: {true_b - b}')
  • In the final with torch.no_grad() block, the w and b obtained after each epoch are applied to the whole sample set to see how training is going.

  • with torch.no_grad() (see: PyTorch Autograd process analysis and a few small things encountered)

  • The last two commented-out lines show how far the w and b found by gradient descent end up from the values we set manually (a runnable sketch of that check follows below).
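
A minimal sketch of that comparison (assuming training above has finished; w.reshape(true_w.shape) just lines the shapes up for the subtraction):

with torch.no_grad():
    print(f'error in estimating w: {true_w - w.reshape(true_w.shape)}')
    print(f'error in estimating b: {true_b - b}')

Both errors should be close to zero if training has converged.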