Translator: bat67

The latest version of this translation is synchronized to the translator's repository first.

PyTorch is a Python-based scientific computing package that targets two audiences:

  • A replacement for NumPy that can use the power of GPUs
  • A deep learning research platform that provides maximum flexibility and speed

Getting started

Tensors

Tensors are similar to NumPy's ndarrays, but they can also be used on a GPU to accelerate computation.

from __future__ import print_function
import torch

Create an uninitialized 5x3 matrix (its entries are whatever values happened to be in the allocated memory):

x = torch.empty(5, 3)
print(x)

Output:

tensor([[2.2391e-19, 4.5869e-41, 1.4191e-17],
        [4.5869e-41, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00]])

Create a random initialization matrix:

x = torch.rand(5, 3)
print(x)

Output:

tensor([[0.5307, 0.9752, 0.5376],
        [0.2789, 0.7219, 0.1254],
        [0.6700, 0.6100, 0.3484],
        [0.0922, 0.0779, 0.2446],
        [0.2967, 0.9481, 0.1311]])

Construct a matrix filled with zeros and data type long:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)

Output:

tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]])
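
As a quick sanity check (a small addition, not part of the original tutorial), "long" here corresponds to 64-bit integers, which you can confirm by inspecting the dtype:

print(x.dtype)  # torch.int64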

Construct tensors directly from data:

x = torch.tensor([5.5, 3])
print(x)

Output:

tensor([5.5000, 3.0000])
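
As a small illustrative sketch (not in the original), torch.tensor also accepts nested lists and infers the dtype from the data unless you specify one:

data = [[1, 2], [3, 4]]
x_int = torch.tensor(data)                       # dtype inferred as torch.int64
x_float = torch.tensor(data, dtype=torch.float)  # dtype set explicitly
print(x_int.dtype, x_float.dtype)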

Or create a new tensor based on an existing one. These methods will reuse the properties of the input tensor (e.g. its dtype) unless new values are provided by the user:

x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods take in sizes
print(x)

x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)                                    # result has the same size

Output:

tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
tensor([[1.6040, 0.6769, 0.0555],
        [0.6273, 0.7683, 0.2838],
        [0.7159, 0.5566, 0.2020],
        [0.6266, 0.3566, 1.4497],
        [0.8092, 0.6741, 0.0406]])

Get its shape:

print(x.size())

Output:

torch.Size([5, 3])

Note:

torch.Size is essentially a tuple, so it supports all tuple operations.
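
For example (a brief sketch added here, not in the original), the shape can be unpacked and indexed just like any tuple:

size = x.size()            # torch.Size([5, 3])
rows, cols = size          # tuple-style unpacking
print(rows, cols)          # 5 3
print(size[0], len(size))  # indexing and len() also work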

Operations

There are multiple syntaxes for operations. In the following example, we will look at addition.

Addition: Form one

y = torch.rand(5, 3)
print(x + y)

Output:

tensor([[2.5541, 0.0943, 0.9835],
        [1.4911, 1.3117, 0.5220],
        [0.0078, 0.1161, 0.6687],
        [0.8176, 1.1179, 1.9194],
        [0.3251, 0.2236, 0.7653]])

Addition: Form two

print(torch.add(x, y))

Output:

tensor([[2.5541, 0.0943, 0.9835],
        [1.4911, 1.3117, 0.5220],
        [0.0078, 0.1161, 0.6687],
        [0.8176, 1.1179, 1.9194],
        [0.3251, 0.2236, 0.7653]])

Addition: Given an output tensor as a parameter

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)

Output:

tensor([[2.5541, 0.0943, 0.9835],
        [1.4911, 1.3117, 0.5220],
        [0.0078, 0.1161, 0.6687],
        [0.8176, 1.1179, 1.9194],
        [0.3251, 0.2236, 0.7653]])

Addition: in-place operation

# adds x to y
y.add_(x)
print(y)

Output:

tensor([[2.5541, 0.0943, 0.9835],
        [1.4911, 1.3117, 0.5220],
        [0.0078, 0.1161, 0.6687],
        [0.8176, 1.1179, 1.9194],
        [0.3251, 0.2236, 0.7653]])

Note:

Any operation that mutates a tensor in place is post-fixed with an _. For example, x.copy_(y) and x.t_() will change x.
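
A minimal sketch of this convention (added for illustration, not part of the original text):

a = torch.zeros(2, 3)
b = torch.ones(2, 3)
a.copy_(b)        # copies b into a, modifying a in place
print(a)
c = torch.rand(2, 3)
c.t_()            # transposes c in place
print(c.size())   # torch.Size([3, 2])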

You can also use standard NumPy-like indexing operations:

print(x[:, 1])

Output:

tensor([0.6769, 0.7683, 0.5566, 0.3566, 0.6741])
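
A few more NumPy-style indexing examples (a sketch added here, assuming x is still the 5x3 tensor from above):

print(x[0])       # first row
print(x[1:3])     # rows 1 and 2
print(x[-1, -1])  # last element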

Resizing: if you want to resize/reshape a tensor, use torch.view:

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())

Output:

torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
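
As a side note (not in the original tutorial), a view shares its underlying data with the original tensor, and the requested shape must contain the same total number of elements:

x = torch.randn(4, 4)
y = x.view(16)
y[0] = 100.0
print(x[0, 0])   # also 100.0, because x and y share the same storage
# x.view(5, 3)   # would raise an error: 15 elements != 16 elements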

If you have a one-element tensor, use .item() to get its value as a Python number:

x = torch.randn(1)
print(x)
print(x.item())

Output:

tensor([0.0445])
0.0445479191839695
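
A related note (an addition, not from the original): for tensors with more than one element, .tolist() converts the whole tensor to a (nested) Python list:

t = torch.randn(2, 2)
print(t.tolist())  # e.g. [[0.12, -0.53], [1.04, 0.77]]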

Further reading:

More than 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, and random numbers, are described in the official documentation.
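
To give a flavor of what is available, here is a small sketch (added for illustration) touching a few of those categories:

a = torch.randn(3, 4)
print(a.t())                          # transpose
print(a.mean(), a.sum())              # math reductions
b = torch.randn(4, 2)
print(torch.mm(a, b))                 # matrix multiplication (linear algebra)
print(torch.randint(0, 10, (2, 2)))   # random integers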

NumPy bridge

Converting a Torch tensor to a NumPy array is a snap, and vice versa.

The Torch tensor and the NumPy array share their underlying memory locations, so changing one will change the other.

Converting a Torch Tensor to a NumPy array

Input:

a = torch.ones(5)
print(a)

Output:

tensor([1., 1., 1., 1., 1.])

Input:

b = a.numpy()
print(b)

Output:

[1. 1. 1. 1. 1.]

See how the value of the NumPy array changes as well:

a.add_(1)
print(a)
print(b)

Output:

tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]

Convert the NumPy array to the Torch tensor

See how changing the NumPy array automatically changes the Torch tensor:

import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)

Output:

[2. 2. 2. 2. 2.]
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)

All tensors on the CPU (except CharTensor) support conversion to NumPy and back from NumPy.
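
For example (a sketch added here, not in the original), the round trip works for other CPU dtypes such as 32-bit integers:

t = torch.ones(3, dtype=torch.int32)
n = t.numpy()              # NumPy array with dtype int32
t2 = torch.from_numpy(n)   # back to a Torch tensor, still sharing memory
print(n.dtype, t2.dtype)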

CUDA tensors

Tensors can be moved to any device using the .to method:

# let us run this cell only if CUDA is available
if torch.cuda.is_available():
    device = torch.device("cuda")            # a CUDA device object
    y = torch.ones_like(x, device=device)    # directly create a tensor on the GPU
    x = x.to(device)                         # or simply use .to("cuda")
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))         # .to can also change the dtype while moving

Output:

tensor([1.0445], device='cuda:0')
tensor([1.0445], dtype=torch.float64)
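
A common device-agnostic pattern building on .to (a sketch added here, not part of the original tutorial):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 2, device=device)  # create the tensor directly on the chosen device
x = x.to(device)                      # .to returns the same tensor if nothing needs to change
print(x.device)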