Tensor: Numpy in Pytorch Neural Networks

Tensor

A Tensor can be a zero-dimensional, one-dimensional, or multi-dimensional array. You can think of it as Numpy in the neural network world: the two are so similar that they can share memory and convert between each other very easily.
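Here is a minimal sketch of that shared-memory relationship (torch.from_numpy and Tensor.numpy are the standard PyTorch conversion APIs):

import torch
import numpy as np

nd = np.array([1, 2, 3])
t = torch.from_numpy(nd)  # t shares memory with nd
nd[0] = 100               # modifying the Ndarray...
print(t)                  # ...shows up in the Tensor: tensor([100, 2, 3])
print(t.numpy())          # numpy() converts back, again sharing memory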

But they’re not the same. The big difference is that Numpy’s Ndarray can only be accelerated on the CPU, while Torch’s Tensor can also be accelerated on the GPU.
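A minimal sketch of moving a Tensor onto the GPU (this assumes a CUDA-capable machine; the code falls back to the CPU otherwise):

import torch

t = torch.tensor([1.0, 2.0])
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)  # on a GPU machine, operations on t now run on the GPU
print(t.device)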

In terms of interface, Tensor operations can be roughly divided into two categories:

1. torch.function: e.g. torch.sum(x).

2. tensor.function: e.g. tensor.view(...).

In terms of whether an operation modifies the Tensor itself, operations can be divided into the following two categories:

1. Operations that do not modify their own data, e.g. x.add(y): x’s data is unchanged, and a new Tensor holding the result is returned.

2. Operations that modify their own data, e.g. x.add_(y): the result is stored in x, so x itself is modified.

The simple rule of thumb: a method whose name ends with an underscore modifies the Tensor in place, while one without the underscore returns a new Tensor.

Now, let’s add two tensors element-wise and see what happens:

import torch

x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
print(x + y)     # element-wise addition, returns a new Tensor
print(x.add(y))  # same, x is unchanged
print(x)
print(x.add_(y)) # in-place addition, the result is stored in x
print(x)

After running, the effect is as follows:
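x + y and x.add(y) both print tensor([4, 6]) and leave x unchanged as tensor([1, 2]); after x.add_(y), x itself has become tensor([4, 6]), so the last two prints both show tensor([4, 6]).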

Now, let’s formally explain how to use Tensor.

Create a Tensor

Tensor, like Numpy, can be created in many ways: you can generate one with built-in functions, convert from lists or Ndarrays, specify dimensions, and so on. The specific methods are shown in the following table (read array as tensor):

Tensor has an upper-case constructor (torch.Tensor) and a lower-case factory function (torch.tensor), and they behave differently. Let’s look at the code:

import torch

t1 = torch.Tensor(1)  # upper case: 1 is a size, the value is uninitialized
t2 = torch.tensor(1)  # lower case: 1 is the data, a 0-dimensional Tensor
print("value {0}, type {1}".format(t1, t1.type()))
print("value {0}, type {1}".format(t2, t2.type()))

After running, the effect is as follows:

Other examples are as follows:

import torch
import numpy as np

t1 = torch.zeros(1, 2)        # a 1x2 Tensor filled with zeros
print(t1)
t2 = torch.arange(4)          # 0, 1, 2, 3
print(t2)
t3 = torch.linspace(10, 5, 6) # 6 evenly spaced values from 10 down to 5
print(t3)
nd = np.array([1, 2, 3, 4])
t4 = torch.from_numpy(nd)     # convert an Ndarray to a Tensor
print(t4)

Other examples are basically similar to the above, so I won’t repeat them here.

Modify the Tensor dimension

Tensor has dimension modification functions, as shown in the table below:

The sample code looks like this:

import torch

t1 = torch.Tensor([[1, 2]])
print(t1)
print(t1.size())              # torch.Size([1, 2])
print(t1.dim())               # number of dimensions: 2
print(t1.view(2, 1))          # reshape to 2x1
print(t1.view(-1))            # flatten to 1 dimension
print(torch.unsqueeze(t1, 0)) # add a dimension of size 1 at index 0
print(t1.numel())             # total number of elements

After running, the effect is as follows:
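t1 prints as tensor([[1., 2.]]) with size torch.Size([1, 2]) and 2 dimensions; view(2, 1) reshapes it to tensor([[1.], [2.]]), view(-1) flattens it to tensor([1., 2.]), unsqueeze produces the (1, 1, 2)-shaped tensor([[[1., 2.]]]), and numel() returns 2.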

Selecting elements

Of course, we created the Tensor in order to use the data in it, so inevitably we need to get at its elements for processing. The functions for selecting elements from a Tensor are listed in the following table:

The sample code looks like this:

import torch

# set a random seed so the results are reproducible
torch.manual_seed(100)
t1 = torch.randn(2, 3)
print(t1)
print(t1[0, :])                        # the first row
print(torch.masked_select(t1, t1 > 0)) # all elements greater than 0
# gather: with dim=0, the value at position [i, j] is taken from
# row index[i, j] of column j of t1
index = torch.LongTensor([[0, 1, 1], [1, 1, 1]])
a = torch.gather(t1, 0, index)
print(a)
# scatter_ writes the values of a into z at the positions given by index
z = torch.zeros(2, 3)
print(z.scatter_(1, index, a))

After running, the effect is as follows:

To help you understand a = torch.gather(t1, 0, index), here is a diagram:

Of course, there is also a formula for computing the result directly, because tracing through that many rows of data is really not pleasant. Here the blogger lists the conversion formula for your reference:

when dim = 0: out[i, j] = input[index[i, j]][j]
when dim = 1: out[i, j] = input[i][index[i, j]]
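Here is a minimal sketch that checks the formula on a small hand-made tensor (the values are chosen for illustration, not taken from the run above):

import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
index = torch.tensor([[0, 1, 1], [1, 1, 1]])
# dim = 0: out[i, j] = t[index[i, j]][j]
print(torch.gather(t, 0, index))  # tensor([[1, 5, 6], [4, 5, 6]])
# dim = 1: out[i, j] = t[i][index[i, j]]
print(torch.gather(t, 1, index))  # tensor([[1, 2, 2], [5, 5, 5]])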

Simple math

Tensor, like Numpy, supports mathematical operations. Here, the blogger lists some commonly used mathematical operations for your reference:

Note that all of the functions in the table above create a new Tensor. If you don’t need a new Tensor, use the versions of these functions whose names end with an underscore “_”, which operate in place.
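For example, here is a minimal sketch using abs and its in-place counterpart abs_ (chosen just for illustration; any function in the table follows the same pattern):

import torch

t = torch.Tensor([[-1, 2]])
print(t.abs())  # returns a new Tensor; t is unchanged
print(t)        # tensor([[-1., 2.]])
t.abs_()        # the underscore version modifies t itself
print(t)        # tensor([[1., 2.]])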

The following is an example:

import torch

t = torch.Tensor([[1, 2]])
t1 = torch.Tensor([[3], [4]])
t2 = torch.Tensor([5, 6])
# t + 0.1 * (t1 / t2)
print(torch.addcdiv(t, t1, t2, value=0.1))
# t + 0.1 * (t1 * t2)
print(torch.addcmul(t, t1, t2, value=0.1))
print(torch.pow(t, 3))
print(torch.neg(t))

After running, the effect is as follows:

All of these functions are pretty easy to understand, except for one that I believe is not so easy to understand without a machine learning background: the sigmoid() activation function, whose formula is:

sigmoid(x) = 1 / (1 + e^(-x))
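A quick check of the formula against torch.sigmoid (a minimal sketch):

import torch

t = torch.Tensor([[1, 2]])
print(torch.sigmoid(t))        # tensor([[0.7311, 0.8808]])
print(1 / (1 + torch.exp(-t))) # the same values, computed from the formula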

Merge operation

The simple understanding is that these operations aggregate a Tensor, for example by summing. The input and output dimensions are generally not the same, and the input usually has more dimensions than the output. Tensor’s merge functions are shown in the table below:

The sample code looks like this:

import torch

t = torch.linspace(0, 10, 6)
a = t.view((2, 3))             # reshape to 2x3
print(a)
b = a.sum(dim=0)               # sum over rows; the result is 1-dimensional
print(b)
b = a.sum(dim=0, keepdim=True) # keep the reduced dimension, shape (1, 3)
print(b)

After running, the effect is as follows:

Note that after sum, the reduced dimension dim has only one element, so it is removed by default. To keep this dimension, set keepdim to True; it defaults to False.

Comparison operation

In quantitative trading, we often compare stock prices. Tensor also supports comparison operations, which are usually element-wise. The specific functions are shown in the following table:

The sample code looks like this:

import torch

t = torch.Tensor([[1, 2], [3, 4]])
t1 = torch.Tensor([[1, 1], [4, 4]])
print(torch.max(t))     # the largest element of t
print(torch.max(t, t1)) # element-wise maximum of t and t1
print(torch.eq(t, t1))  # element-wise equality test

After running, the output is as follows:

Matrix operations

There are a lot of matrix operations in machine learning and deep learning. As with Numpy, there are two kinds of multiplication: element-wise multiplication and matrix (dot-product) multiplication. The functions are shown in the following table:
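A minimal sketch of the difference between the two kinds (the values are chosen for illustration):

import torch

a = torch.Tensor([[1, 2], [3, 4]])
b = torch.Tensor([[5, 6], [7, 8]])
print(a * b)          # element-wise: tensor([[ 5., 12.], [21., 32.]])
print(torch.mm(a, b)) # matrix product: tensor([[19., 22.], [43., 50.]])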

The dot() function only works on 1-dimensional tensors, the mm() function only on 2-dimensional tensors, and bmm() only on 3-dimensional (batched) tensors. The following is an example:

import torch

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])
print(torch.dot(a, b))  # 1-dimensional dot product: tensor(11)

a = torch.randint(10, (2, 3)).float()  # converted to float so that
b = torch.randint(6, (3, 4)).float()   # mm/bmm work across versions
print(torch.mm(a, b))   # 2-dimensional matrix multiplication

a = torch.randint(10, (2, 2, 3)).float()
b = torch.randint(6, (2, 3, 4)).float()
print(torch.bmm(a, b))  # batched 3-dimensional matrix multiplication

After running, the output is as follows:
