Try to understand deep learning concepts and techniques from code snippets and the PyTorch source code

Return to the main table of contents article

These video notes are updated over time; newer entries appear further down the page.

Most of the videos are 5-8 minutes long; very few run longer than 10 minutes.


How to use NumPy with PyTorch

How to understand PyTorch's Variable

How does PyTorch take a derivative

How does PyTorch compute the L1 and L2 norms

How to print a function's formula in code

How to plot a function quickly

Thoughts on building and dissecting the source code

Understand the L2 norm in context

How to browse the PyTorch source code from Python down to C

A surface-level look at the structure of the PyTorch code base


PyTorch official tutorials

tutorials 01-03

How to build a model with PyTorch

  • nn.Module, nn.functional, forward, num_flat_features, inherit, override
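
A sketch for reference: the tutorial's Net (as in the official 60-minute blitz) shows all of these pieces in one place:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()    # run nn.Module's __init__ first
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):              # override forward
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):    # new helper Net writes for itself
        size = x.size()[1:]            # all dimensions except the batch
        num_features = 1
        for s in size:
            num_features *= s
        return num_features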

How do I complete a forward and backward pass with PyTorch

  • net.parameters, loss.grad_fn.next_functions[0][0], net.zero_grad
  • criterion = nn.MSELoss(), loss = criterion(output, target)
  • optimizer = optim.SGD(net.parameters(), lr=0.01), optimizer.zero_grad, optimizer.step
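
Putting those calls together, a minimal sketch of one training step, reusing the Net above (the random input and target are stand-ins):

import torch
import torch.optim as optim

net = Net()                          # the Net sketched earlier
input = torch.randn(1, 1, 32, 32)    # this Net expects 32x32 images
target = torch.randn(1, 10)          # dummy target, same shape as the output

criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

optimizer.zero_grad()                # clear gradients left over from the last step
output = net(input)                  # forward pass
loss = criterion(output, target)
loss.backward()                      # backward pass fills each parameter's .grad
optimizer.step()                     # update: p = p - lr * p.grad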

net.zero_grad() has the same effect as optimizer.zero_grad()

  • net.parameters() is equivalent in content to optimizer.param_groups[0]['params']

net and optimizer access the parameters in different ways

  • net.parameters() returns a generator; visit all parameters with a for loop
  • optimizer.param_groups[0] is a dict, and optimizer.param_groups[0]['params'] fetches all parameters
  • lst = list(net.parameters()) converts the generator to a list, but the result must be assigned to a variable
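
A quick check (assuming the net and optimizer from the sketches above) that both paths reach the same parameter objects:

# net.parameters() is a generator: iterate it, or materialize it with list()
for p in net.parameters():
    print(p.size())

# the optimizer keeps the same tensors in a dict under the 'params' key
params_list = optimizer.param_groups[0]['params']
assert all(a is b for a, b in zip(net.parameters(), params_list))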

How does optimizer.step update parameters

  • p.data.add_(-group['lr'], d_p)
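
A simplified sketch of what step() boils down to for vanilla SGD (momentum, dampening, and weight decay omitted; the bullet above shows the old two-argument add_ form used in the source at the time):

def sgd_step(optimizer):
    for group in optimizer.param_groups:
        for p in group['params']:
            if p.grad is None:
                continue
            d_p = p.grad.data
            # in-place update: p = p - lr * grad
            p.data.add_(d_p, alpha=-group['lr'])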

How to access methods and attributes inside net.conv1

  • net.conv1.weight, net.conv2.bias.grad, net.fc1.zero_grad

What happens when optim.SGD is constructed

  • optimizer.param_groups[0].keys()
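
Printing those keys shows what the constructor packed into each group (the exact key set depends on the PyTorch version):

optimizer = optim.SGD(net.parameters(), lr=0.01)
print(optimizer.param_groups[0].keys())
# e.g. params, lr, momentum, dampening, weight_decay, nesterov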

How PyTorch uses THNN to calculate MSELoss, and how to verify that THNN does the work

  • nn.MSELoss, nn._Loss, nn.modules.Module, THNN
  • pytorch/torch/lib/THNN/generic/MSECriterion.c, THNN_(MSECriterion_updateOutput)

What does your model Net inherit from nn.Module

  • `super(Net, self).__init__()` inherits all methods while running the superclass's init
  • Net overrides `__init__()` and `forward()`, and writes a new function `num_flat_features()` for itself

nn.Module's __dir__ and __repr__ functions

  • self._modules.keys(), self.__dict__.keys()
  • lst = list(self._buffers.keys()), sorted(keys)
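
A simplified sketch of how nn.Module's __dir__ merges those sources into one sorted list:

def __dir__(self):
    module_attrs = dir(self.__class__)
    attrs = list(self.__dict__.keys())
    parameters = list(self._parameters.keys())
    modules = list(self._modules.keys())
    buffers = list(self._buffers.keys())
    return sorted(module_attrs + attrs + parameters + modules + buffers)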

What is built inside nn.Conv2d

  • nn.Conv2d -> nn._ConvNd -> nn.Module
  • nn._ConvNd: __init__, reset_parameters

What happens in F.conv2d?

  • ConvNd = torch._C._functions.ConvNd

self.conv1(x) runs __getattr__ and __call__
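
A simplified sketch of the two dunder methods involved (the real versions also check _parameters and _buffers, and run hooks):

def __getattr__(self, name):
    # self.conv1 is not in __dict__, so Python falls through to here
    if name in self._modules:
        return self._modules[name]
    raise AttributeError(name)

def __call__(self, *input, **kwargs):
    # the real __call__ also fires pre-forward, forward, and backward hooks
    return self.forward(*input, **kwargs)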

How to install GDB to debug Python all the way down into C/C++

  • Help with the installation
  • A partial solution to the basic C/C++ GDB problem
  • The "no module named libpython" issue seems to be solved
  • The remaining "can't read symbols" warnings are not really solved

nn.MSELoss parsing 01

  • MSELoss -> _Loss -> Module
  • Contains __init__, forward, and the forward-pre, forward, and backward hooks
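
A simplified sketch of that chain; the stand-in forward here computes the mean (or sum) of squared errors directly, where the real class dispatches to the THNN backend:

import torch.nn as nn

class _Loss(nn.Module):
    def __init__(self, size_average=True):
        super(_Loss, self).__init__()
        self.size_average = size_average

class MSELoss(_Loss):
    def forward(self, input, target):
        diff = (input - target) ** 2
        return diff.mean() if self.size_average else diff.sum()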

nn.MSELoss parsing 02

  • _functions.thnn.MSELoss.apply(input, target, size_average) calls into torch._C to compute the MSELoss
  • ctx is a _ContextMethodMixin
  • Try to understand the generality of this approach

optim.SGD parsing

  • __init__: repackages params and defaults (including dict arguments) into self.param_groups
  • This makes zero_grad and step straightforward
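
A simplified sketch of the repackaging (in the real source this logic lives in the Optimizer base class):

def sgd_init(params, lr, momentum=0, weight_decay=0):
    defaults = dict(lr=lr, momentum=momentum, weight_decay=weight_decay)
    param_groups = list(params)
    if not isinstance(param_groups[0], dict):
        param_groups = [{'params': param_groups}]   # bare tensors become one group
    for group in param_groups:
        for name, default in defaults.items():
            group.setdefault(name, default)         # per-group dict args win over defaults
    return param_groups                             # stored as self.param_groups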

A full walkthrough of common PyTorch modeling code

  • part1, part2, part3, part4, part5, part6, part7, part8, part9
  • Backend 1: from PyTorch's ConvNd through torch/csrc/autograd/functions to ConvForward
  • From PyTorch's relu through the backend to torch's Threshold
  • From PyTorch's max_pool2d through the backend to torch's SpatialDilatedMaxPooling
  • From PyTorch's MSELoss through the backend to torch's MSELoss

A full walkthrough of PyTorch multi-class classification modeling code

  • part1, part2, part3

Notes on the loss setup for binary classification problems: code 3

  • If using BCEWithLogitsLoss:
  • the targets' type should be torch.FloatTensor
  • the targets' size should be (-1, 1)
  • If using CrossEntropyLoss:
  • the targets' type must be torch.LongTensor
  • Fumbling process: how the errors were actually found and the solutions discovered
  • part1, part2, part3
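
A minimal sketch of the two target conventions (the shapes and dtypes are the point; the logits are stand-ins):

import torch
import torch.nn as nn

# BCEWithLogitsLoss: float targets shaped (-1, 1), matching the raw logits
logits = torch.randn(8, 1)
targets_f = torch.randint(0, 2, (8, 1)).float()     # FloatTensor
loss1 = nn.BCEWithLogitsLoss()(logits, targets_f)

# CrossEntropyLoss: one logit column per class, long targets shaped (N,)
logits2 = torch.randn(8, 2)
targets_l = torch.randint(0, 2, (8,))               # LongTensor
loss2 = nn.CrossEntropyLoss()(logits2, targets_l)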

A lengthy exploratory read through the Keras internals

  • A look at the main modules inside Keras — 0:00-7:50
  • The internal structure of keras.models.Sequential — up to 13:38
  • keras.legacy.interfaces... the wrapper lets Keras 1 code talk to Keras 2 — 15:36
  • The add method of keras.models — 22:10

Why PyTorch is friendlier to beginners

  • It is easier to debug layer by layer; this video shows how difficult it is to debug Keras code

Build your own data class with PyTorch: code document, Part1, Part2, Part3, Part4, summary version

  • Store your own data; transform, batch, and shuffle it (a minimal sketch follows below)
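
A minimal sketch of such a class (the names and tensors here are hypothetical):

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, features, targets, transform=None):
        self.features = features          # store your own data
        self.targets = targets
        self.transform = transform

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        x, y = self.features[idx], self.targets[idx]
        if self.transform is not None:    # apply the transform per sample
            x = self.transform(x)
        return x, y

loader = DataLoader(MyDataset(torch.randn(100, 10), torch.randn(100, 1)),
                    batch_size=16, shuffle=True)   # batch and shuffle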

How to use Variable.backward?

y.backward()                             # works when y is a scalar
y.backward(torch.FloatTensor(x.size()))  # a non-scalar y needs an explicit gradient argument

How to show the input and output values of a middle layer during processing

net.conv2.register_forward_hook(printnorm)
net.conv2.register_backward_hook(printgradnorm)
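
The two callables take the module and its tensors; a minimal sketch of what they might look like (the bodies here are illustrative, not the tutorial's exact code):

def printnorm(module, input, output):
    # forward hook: runs after module.forward has produced output
    print('output norm:', output.data.norm())

def printgradnorm(module, grad_input, grad_output):
    # backward hook: runs once gradients w.r.t. the module are computed
    print('grad_input norm:', grad_input[0].norm())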

How to view a layer's parameters (code document)

conv2_param_list = list(self.parameters())  # self: conv2
conv2_param_list.__len__()                  # 2: weight and bias
conv2_param_list[0].size()                  # the weight tensor's shape

transfer_learning_tutorial

How to compose multiple image transforms (code document)

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomSizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

How ImageFolder converts a folder of images into a model-ready dataset

data_dir = '/Users/Natsume/Desktop/data/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}

How to encode a variable's information as color in a time-series plot

ax1 = plt.subplot2grid((2, 1), (0, 0), colspan=1, rowspan=1)
ax1.set_title("original close price with mv_avg_volume window %d" %vol_window)
# plot predictions(pct) as color into prices
for start, stop, col in zip(xy[:-1], xy[1:], color_data):
    x, y = zip(start, stop)
    ax1.plot(x, y, color=uniqueish_color3(col))

How to use DataLoader for batching and shuffling

dataloders = {x: torch.utils.data.DataLoader(
             image_datasets[x], batch_size=4, shuffle=True, num_workers=4)
                                             for x in ['train', 'val']}

Plotting a small batch of images

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

inputs, classes = next(iter(dataloders['train']))
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])

Call a well-known pretrained model and its parameters directly

model_ft = models.resnet18(pretrained=True)

Inspect the internals of a well-known model

model_ft = models.resnet18(pretrained=True)

Tailored modifications to a trained advanced model

num_ftrs = model_ft.fc.in_features    # size of the final layer's input
model_ft.fc = nn.Linear(num_ftrs, 2)  # replace the head with a 2-class layer

Debugging the structure of the optimizer's learning-rate schedule

exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

The structure of the train_model training function

for epoch in range(num_epochs):
    for phase in ['train', 'val']:
        for data in dataloders[phase]:
            ...

scheduler.step and model.train

def step(self, epoch=None):
    if epoch is None:
        epoch = self.last_epoch + 1
    self.last_epoch = epoch
    for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
        param_group['lr'] = lr

def get_lr(self):
    return [base_lr * self.gamma ** (self.last_epoch // self.step_size)
            for base_lr in self.base_lrs]

def train(self, mode=True):
    """Sets the module in training mode.

    This has any effect only on modules such as Dropout or BatchNorm.
    """
    self.training = mode
    for module in self.children():
        module.train(mode)
    return self

How to keep most parameters of the borrowed high-level model unchanged

param.requires_grad = False  # prevent gradients from being computed for this parameter

Plot a batch of images after training

for j in range(inputs.size()[0]):
    images_so_far += 1
    ax = plt.subplot(num_images//2, 2, images_so_far)
    ax.axis('off')
    ax.set_title('predicted: {}'.format(class_names[preds[j]]))
    imshow(inputs.cpu().data[j])

How to construct your dataset class

from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_features.data, train_targets.data)
train_loader = DataLoader(train_dataset, batch_size=64,
                          shuffle=True, num_workers=1)


YunJey | pytorch-tutorial

Making PyTorch work with TensorBoard (code document)

  • torchvision.datasets.MNIST()
  • iter(data_loader)
  • Plot curves: loss and accuracy are scalars
  • Plot histograms: params and grads as np.array
  • Plot images: from tensor to (m, h, w)
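
The video predates PyTorch's built-in integration and wires up a hand-rolled logger; on recent PyTorch the same three plot types can be sketched with torch.utils.tensorboard (the tensors below are stand-ins for values from a training loop):

import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/mnist')     # log directory, illustrative
step = 0
loss = torch.tensor(0.25)                # stand-in training loss
weights = torch.randn(500, 784)          # stand-in layer weights
images = torch.rand(16, 1, 28, 28)       # stand-in MNIST batch

writer.add_scalar('loss', loss.item(), step)                           # curves: scalars
writer.add_histogram('fc1.weight', weights, step)                      # histograms: params/grads
writer.add_image('inputs', torchvision.utils.make_grid(images), step)  # image grid
writer.close()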


AI Challenger stock

  • A lengthy interpretation of the data processing and preparation code document
  • Model 1: a verbose interpretation of the training code flow
  • Model 2: an interpretation of the prediction code