In PyTorch, nn is short for neural network. The nn package provides the Module class, which is the base class of all neural network modules; the layers and networks we define are subclasses of Module. Building a network takes three steps:

  • Define a class that inherits from nn.Module
  • Inside the class, define the layers as attributes
  • Implement the forward method
class Network:
  def __init__(self):
    self.layer = None  # placeholder for a layer

  def forward(self, t):
    t = self.layer(t)
    return t

nn.Module does a lot of work behind the scenes (tracking parameters, registering submodules, and more), so we make our class inherit from it and call super().__init__() in the constructor.

class Network(nn.Module):
  def __init__(self):
    super(Network, self).__init__()
    self.layer = None

  def forward(self, t):
    t = self.layer(t)
    return t
Now we replace the placeholder with real layers: two convolutional layers and three linear layers.

class Network(nn.Module):
  def __init__(self):
    super(Network, self).__init__()
    self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
    self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

    self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)
    self.fc2 = nn.Linear(in_features=120, out_features=60)
    self.out = nn.Linear(in_features=60, out_features=10)

  def forward(self, t):
    # forward pass to be implemented
    return t
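The forward method above is still a stub. As a minimal sketch (my own addition, assuming 28x28 single-channel inputs such as Fashion-MNIST, which is what the 12*4*4 in fc1 implies, and with torch.nn.functional imported as F), the forward pass might look like this:

def forward(self, t):
  # conv block 1: 1x28x28 -> 6x24x24 -> 6x12x12
  t = F.relu(self.conv1(t))
  t = F.max_pool2d(t, kernel_size=2, stride=2)

  # conv block 2: 6x12x12 -> 12x8x8 -> 12x4x4
  t = F.relu(self.conv2(t))
  t = F.max_pool2d(t, kernel_size=2, stride=2)

  # flatten, then the fully connected layers
  t = t.reshape(-1, 12 * 4 * 4)
  t = F.relu(self.fc1(t))
  t = F.relu(self.fc2(t))
  t = self.out(t)
  return t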

We can see the structure of the network by instantiating it and printing it:

network = Network()
network
Network(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=192, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=60, bias=True)
  (out): Linear(in_features=60, out_features=10, bias=True)
)
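This repr comes for free from nn.Module. So does parameter tracking: because every layer attribute is itself a Module, we can iterate over all learnable weights in one go (a quick illustration, not from the original post):

for name, param in network.named_parameters():
  print(name, param.shape)

This prints entries such as conv1.weight with shape torch.Size([6, 1, 5, 5]) and fc1.bias with shape torch.Size([120]).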
To see what registering a parameter looks like from the inside, here is a simplified version of how nn.Linear itself is implemented (imports added so the snippet runs on its own):

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Parameter

class CustomLinear(nn.Module):
  def __init__(self, in_features, out_features, bias=True):
    super(CustomLinear, self).__init__()
    self.in_features = in_features
    self.out_features = out_features

    # learnable weight; assigning a Parameter registers it with the module
    self.weight = Parameter(torch.Tensor(out_features, in_features))
    if bias:
      self.bias = Parameter(torch.Tensor(out_features))
    else:
      self.register_parameter('bias', None)

    self.reset_parameters()

  def reset_parameters(self):
    # fill the uninitialized tensors with starting values
    # (nn.Linear uses a Kaiming-based scheme; simplified here)
    nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
    if self.bias is not None:
      nn.init.zeros_(self.bias)

  def forward(self, input):
    return F.linear(input, self.weight, self.bias)

Parameter is a class that inherits from torch.Tensor, and self.weight is a variable the model learns to update during training. Assigning a Parameter as an attribute registers it with the module; self.register_parameter(name, value) is the explicit form used in the source code, and the way to register a name even when the value is None, as with the optional bias.
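As a quick sanity check (my own example), the custom layer behaves like nn.Linear, and its parameters show up under the names we registered:

fc = CustomLinear(in_features=4, out_features=2)
x = torch.randn(3, 4)
print(fc(x).shape)                             # torch.Size([3, 2])
print([n for n, _ in fc.named_parameters()])   # ['weight', 'bias']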

The difference between an argument and a parameter: when you call a function, the argument is the actual value you pass in, while the parameter is the name inside the function that receives it. Because the parameter is what the function body refers to, we can view it as a placeholder, and each call's arguments fill in those internal placeholders.
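In code, the terminology looks like this (a trivial illustration):

def scale(t, factor):       # t and factor are parameters (placeholders)
  return t * factor

scale(torch.ones(2), 3.0)   # torch.ones(2) and 3.0 are the arguments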

Look again at the arguments we passed when constructing the layers:

self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)

self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)
self.fc2 = nn.Linear(in_features=120, out_features=60)
self.out = nn.Linear(in_features=60, out_features=10)
These fall into two kinds:

  • Hyperparameters
  • Data-dependent hyperparameters

The parameters we pass to Conv2d are in_channels, out_channels, and kernel_size, while Linear takes only in_features and out_features. Hyperparameters are values we choose manually, though not at random: we tune them mainly based on validation results and error feedback. In our convolutional and fully connected layers, kernel_size, out_channels, and out_features are such manually chosen hyperparameters.

The in_channels of conv1 and the out_features of the out layer, by contrast, are determined by the data and the task: one input channel because the images are grayscale, and ten outputs because there are ten classes. These are data-dependent hyperparameters. The remaining in_channels and in_features simply have to match the output shape of the preceding layer.
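To make the distinction concrete, here is the same constructor with each argument annotated (my own annotation, assuming a 10-class grayscale dataset such as Fashion-MNIST, consistent with the network above):

self.conv1 = nn.Conv2d(
    in_channels=1,    # data-dependent: grayscale input has 1 channel
    out_channels=6,   # hyperparameter: chosen by us
    kernel_size=5,    # hyperparameter: chosen by us
)
self.out = nn.Linear(
    in_features=60,   # must match out_features of the previous layer
    out_features=10,  # data-dependent: number of classes in the task
)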