This is the fifth day of my participation in the First Challenge 2022

Recently I find myself talking about my feelings every day; perhaps the older I get and the more I experience, the more I want to say. For years my focus has been the front end, the layer closest to the user. Front-end technology changes with each passing day, so I have to keep up with it. The flashy demo of each new technology always grabs my attention and keeps me up at night until I get hold of it, and then, before I have even applied it, the next one arrives.

Today's deep-learning papers are the same: I chase after them, a little more every day. Yet I feel I am still stuck at the application layer, the very front of the whole system. I hope I can stop being impatient, follow my heart, and rest my tired feet long enough to lay a foundation. After all, the low-level fundamentals, although time-consuming to learn, change little over the years and are worth more.

torch.Tensor and torch.tensor

import numpy as np
import torch

data = np.array([1, 2, 3])
  • torch.tensor is a factory function that returns an instance of the Tensor class
  • torch.Tensor is the class itself; calling it instantiates a Tensor
torch.tensor(data) # tensor([1, 2, 3])

OK, so how do you create a tensor from data of type numpy.ndarray? There are four ways:

t1 = torch.Tensor(data)
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)

tensor([1., 2., 3.]) 
tensor([1, 2, 3]) 
tensor([1, 2, 3]) 
tensor([1, 2, 3])

You have probably already seen from the output that the torch.Tensor result is floating point while the others are integer, but I'll print their dtypes just to prove it.

print(t1.dtype)
print(t2.dtype)
print(t3.dtype)
print(t4.dtype)
torch.float32 
torch.int64 
torch.int64 
torch.int64

That's because torch.Tensor uses the globally configured default data type as its dtype, and torch.get_default_dtype() is torch.float32.

You can also specify the dtype explicitly when creating a tensor, for example torch.tensor(np.array([1., 2., 3.]), dtype=torch.float64).
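To make this concrete, here is a small sketch (assuming PyTorch and NumPy are installed) showing the default dtype and how an explicit dtype= argument overrides the inferred one:

```python
import numpy as np
import torch

# torch.Tensor (the class constructor) falls back to the default dtype
print(torch.get_default_dtype())  # torch.float32

# torch.tensor normally infers the dtype from the data...
print(torch.tensor(np.array([1, 2, 3])).dtype)  # integer, e.g. torch.int64

# ...unless you override it explicitly
t = torch.tensor(np.array([1, 2, 3]), dtype=torch.float64)
print(t.dtype)  # torch.float64
```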

Tensor type

We distinguish tensors in two ways. The first is the tensor's data type, which should be easy to understand if you have some programming experience: a tensor is a container, so its type is determined by the type of the data you put into it. You also have to think about how much precision your data actually needs; enough is enough, and there is no point wasting it.

t1 = torch.tensor([1, 2, 3])
t2 = torch.tensor([1., 2., 3.])
print(t1.dtype)
print(t2.dtype)

torch.int64 
torch.float32

torch.Tensor and torch.tensor copy the data, while torch.as_tensor and torch.from_numpy share memory with the underlying numpy array object.

data = np.array([1, 2, 3])

t1 = torch.Tensor(data)
t2 = torch.tensor(data)
t3 = torch.as_tensor(data)
t4 = torch.from_numpy(data)

data[0] = 0
data[1] = 0
data[2] = 0

print(t1) # tensor([1., 2., 3.])
print(t2) # tensor([1, 2, 3])
print(t3) # tensor([0, 0, 0])
print(t4) # tensor([0, 0, 0])
  • Since a numpy.ndarray object is allocated on the CPU, if torch is using the GPU, calling torch.as_tensor still requires copying the data from the CPU to the GPU.
  • as_tensor does not share memory with Python's built-in data structures such as list; the sharing only applies to numpy arrays.
  • To use as_tensor well, developers need to stay aware of the shared memory; otherwise they may inadvertently modify the underlying data and affect multiple objects.
  • If you need to toggle between numpy.ndarray and tensor objects frequently, as_tensor() can improve performance; for a single load, its advantage is less obvious.
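A minimal sketch of that sharing pitfall, along with one way to opt out of it (tensor.clone() is a standard PyTorch method that forces a copy):

```python
import numpy as np
import torch

data = np.array([1, 2, 3])

shared = torch.as_tensor(data)            # shares memory with data
detached = torch.as_tensor(data).clone()  # clone() forces a separate copy

data[0] = 99  # mutate the underlying numpy array

print(shared)    # the shared tensor sees the change: tensor([99, 2, 3])
print(detached)  # the cloned tensor does not:        tensor([1, 2, 3])
```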

The other way to distinguish tensors is by the device they run on: a tensor on the CPU and a tensor on the GPU cannot take part in the same computation.

t1 + t2 # tensor([2., 4., 6.])

t1 = torch.tensor([1, 2, 3])
t2 = t1.cuda()
t1 + t2
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-9ac58c83af08> in <module>()
----> 1 t1 + t2
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
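A common way to avoid this error is to pick one device up front and move every tensor to it with .to(); the sketch below falls back to the CPU when CUDA is unavailable, so it runs either way:

```python
import torch

# Choose the device once, then move all tensors to it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t1 = torch.tensor([1, 2, 3]).to(device)
t2 = torch.tensor([1, 2, 3]).to(device)

print(t1 + t2)  # works: both tensors live on the same device
```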

The other thing to note here is reference versus copy when we take tensors out of a larger structure, for example when pulling a batch out of a DataLoader:

batch = next(iter(train_loader))
len(batch) # 2
type(batch) # list
images, labels = batch
len(images) # 10

grid = torchvision.utils.make_grid(images, nrow=10)
plt.figure(figsize=(15, 15))
plt.imshow(np.transpose(grid, (1, 2, 0)))
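The (1, 2, 0) transpose is needed because make_grid returns a tensor in PyTorch's channel-first (C, H, W) layout, while plt.imshow expects channel-last (H, W, C). A small sketch with a dummy image tensor (the shapes here are illustrative, not from the example above); torch's .permute does the same axis rearrangement as np.transpose:

```python
import torch

# A dummy grid in channel-first layout: 3 channels, 32x320 pixels
grid = torch.zeros(3, 32, 320)

# Rearrange to the channel-last layout that plt.imshow expects
img = grid.permute(1, 2, 0).numpy()
print(img.shape)  # (32, 320, 3)
```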