The following methods are covered: detach(), cpu(), numpy(), and item().

The PyTorch deep learning framework uses the GPU to speed up training.

from torch.utils.data import DataLoader

from dataset import DataSet   # self-defined dataset class (module name assumed)
from model.MyNet import MyNet # self-defined network class

train_dataset = DataSet()                      # self-defined dataset
train_loader = DataLoader(train_dataset, ...)  # wrap the dataset with a DataLoader (arguments elided)
model = MyNet()                                # self-defined network
model.cuda()                                   # move the model to the GPU

# ---------- start ----------
for idx, (img, label) in enumerate(train_loader):
    img = img.cuda()          # .cuda() moves the data (like the model) onto the GPU
    label = label.cuda()
    output = model(img)
    show(output)              # show() stands in for the author's display function
# ---------- end ----------

We can see that any further operation we perform on output is recorded in the computation graph, so that gradients can be computed for backpropagation. But here we only want to display the output, not backpropagate through it. This is where the detach() method comes in.
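You can watch this recording happen (a minimal sketch; the tensor values are arbitrary):

import torch

x = torch.ones(2, requires_grad=True)
y = x * 3               # this multiplication is recorded in the graph
print(y.grad_fn)        # <MulBackward0 ...>: the op was tracked for backprop
print(y.requires_grad)  # True: further ops on y will be tracked as well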

detach()

Purpose: blocks backpropagation by detaching the tensor from the computation graph. Return value: a Tensor; after detach() the tensor still resides on the GPU.

output = output.detach()  # detach from the graph to block backpropagation
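A minimal sketch of the behavior (assuming a CUDA device is available):

import torch

x = torch.randn(3, device="cuda", requires_grad=True)
y = (x * 2).detach()
print(y.requires_grad)  # False: y is cut out of the computation graph
print(y.device)         # cuda:0, since detach() alone does not move the tensor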

The problem is that output still lives in GPU memory (video memory), so host-side operations cannot reach it; show(output) would still fail. This is where cpu() comes in.

cpu()

Purpose: moves the tensor data to the CPU.

output = output.detach().cpu()  # the return value is a Tensor on the CPU
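To see why this step matters, here is a minimal sketch (again assuming a CUDA device): NumPy cannot read GPU memory, so converting a CUDA tensor fails until it has been copied to the CPU.

import torch

x = torch.randn(3, device="cuda")
try:
    x.numpy()          # fails: NumPy cannot access GPU memory
except TypeError as e:
    print(e)           # the message suggests calling Tensor.cpu() first
x_cpu = x.cpu()        # copy to host memory; now x_cpu.numpy() works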

numpy()

From there you can run a series of operations on the Tensor, including numpy(), which converts a CPU Tensor into NumPy data.

Return value: a numpy.ndarray.

output = output.detach().cpu().numpy()  # returns a numpy.ndarray
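One detail worth knowing (a minimal sketch): for a CPU tensor, numpy() returns an array that shares memory with the tensor, so in-place changes are visible on both sides.

import torch

t = torch.zeros(3)
a = t.numpy()   # a numpy.ndarray viewing the same memory as t
t[0] = 1.0
print(a[0])     # 1.0: the change to the tensor shows up in the array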

item()

You can see that item() is called on a torch.Tensor and returns its value as a plain Python number; for a floating-point tensor the return type is float. At this point, the brief introduction is complete.
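A minimal sketch of item() on a one-element tensor:

import torch

loss = torch.tensor(0.25)
v = loss.item()     # extracts the value as a plain Python number
print(type(v), v)   # <class 'float'> 0.25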