Reorganizing deep neural network concepts: what does the simplest neural network look like?

Gamwatcher.blog.csdn.net/article/det…

Is it too late for undergraduates to learn deep learning? Setting up the environment

Gamwatcher.blog.csdn.net/article/det…

Book giveaway in the comments!! NumPy as the foundation of deep learning, an easy NumPy introduction for beginners

Gamwatcher.blog.csdn.net/article/det…

Matplotlib as the foundation of deep learning, with a worked example for every chart; recommended to bookmark for future reference gamwatcher.blog.csdn.net/article/det…

Hello everyone, I am Coriander. Original writing is not easy, so likes and comments are welcome; let's learn together.

The most important concept in both PyTorch and TensorFlow is the tensor; TensorFlow is literally named after it. So the first lesson in deep learning is understanding what a tensor is, and that's what we're going to do today. OK, take off.

First, understand what a tensor is. It is basically the same idea as a NumPy array, a vector, or a matrix. The difference is that it is designed with the GPU in mind and can run on the GPU to speed up computation, so don't be intimidated.

In PyTorch, the Tensor is the basic unit of computation. Like the ndarray in NumPy, a tensor represents a multidimensional array. The difference is that a PyTorch Tensor can also run on the GPU, whereas NumPy's ndarray only runs on the CPU. Running on the GPU makes things a lot faster.

In a word: it’s just multidimensional data that can run on a GPU
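To make that concrete, here is a minimal sketch of moving a tensor onto the GPU when one is available (the CUDA availability check is the standard PyTorch pattern, not something specific to this article):

import torch

x = torch.ones(2, 3)                                       # lives on the CPU by default
device = "cuda" if torch.cuda.is_available() else "cpu"
x_gpu = x.to(device)                                       # move it to the GPU if one is present
print(x_gpu.device)                                        # cuda:0 on a GPU machine, otherwise cpu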

Create one with x = torch.zeros(5) and check what the object looks like in memory and what its attributes are.
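For example, a quick sketch of inspecting its common attributes; the printed values are what you would typically see on a default CPU setup:

x = torch.zeros(5)
print(x)          # tensor([0., 0., 0., 0., 0.])
print(x.shape)    # torch.Size([5])
print(x.dtype)    # torch.float32 (the default floating-point type)
print(x.device)   # cpu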

At its core, a tensor is just a data structure, a class. So how do you create one? There are a couple of ways to do that.

Create a tensor with the method PyTorch provides directly

torch.tensor(data, dtype=None, device=None, requires_grad=False)

data – can be a list, tuple, NumPy array, scalar, or other types

dtype – the desired data type of the returned tensor

device – the device on which the returned tensor is allocated

requires_grad – whether autograd should record operations on the returned tensor. Default is False
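A small usage sketch with these parameters (the data and dtype here are just example values):

t = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32, requires_grad=True)
print(t)                  # a 2x2 float tensor
print(t.dtype)            # torch.float32
print(t.requires_grad)    # True, so autograd will record operations on it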

Shortcut creation

t1 = torch.FloatTensor([[1, 2], [5, 6]])

So how do you turn your NumPy data into a tensor? PyTorch has an interface for that, which is very handy.

torch.from_numpy(ndarray) Note: the returned tensor shares its memory with the ndarray, so anything you do to the tensor will affect the ndarray and vice versa.
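A quick sketch of that shared memory in action:

import numpy as np

nd = np.ones(3)
t = torch.from_numpy(nd)
nd[0] = 100       # modify the ndarray...
print(t)          # ...and the tensor sees the change: tensor([100., 1., 1.], dtype=torch.float64)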

The built-in tensor creation methods:

torch.empty(size) returns an uninitialized tensor of the given size

torch.zeros(size) returns an all-zero tensor

torch.zeros_like(input) returns an all-zero tensor of the same size as input

torch.ones(size) returns an all-ones tensor

torch.ones_like(input) returns an all-ones tensor of the same size as input

torch.arange(start=0, end, step=1) returns a sequence from start to end, just like Python's range(); you can also pass only the end argument. PyTorch also has torch.range(), but it is being deprecated and replaced by arange.

torch.full(size, fill_value) is sometimes more convenient: it fills a tensor of the given shape with fill_value

torch.randn(5) generates a tensor of random values drawn from a standard normal distribution
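Here is a sketch that runs through these creation methods one by one (the shapes and fill values are arbitrary examples):

a = torch.empty(2, 3)         # uninitialized values
b = torch.zeros(2, 3)         # all zeros
c = torch.ones_like(b)        # all ones, same shape as b
d = torch.arange(0, 10, 2)    # tensor([0, 2, 4, 6, 8])
e = torch.full((2, 3), 7.0)   # 2x3 tensor filled with 7.0
f = torch.randn(5)            # 5 values from a standard normal distribution
print(a, b, c, d, e, f, sep="\n")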

Tensor conversions also come up in development, so let's look at the two conversions you will use most.

Convert a tensor to a NumPy array, and a tensor to a Python list:

a = torch.ones(5)
print(a)
b = a.numpy()
print(b)

# convert to a list
data = torch.zeros(3, 3)
data = data.tolist()
print(data)

4. Tensor operations: dimension promotion (broadcasting)

Broadcasting is how tensors with different shapes operate together. It works much like arithmetic between different data types: when you add an integer and a float, the values are first promoted to the more precise type. Broadcasting expands dimensions in a similar way.

The rule:

Walk through the dimensions starting from the trailing dimension. Each pair of corresponding dimensions must either be the same size, or one of them is 1, or one of them does not exist. Where a dimension is missing (or is 1), the data is expanded along that dimension, as the example below shows.

a = torch.zeros(2, 3)
b = torch.ones(3)
print(a)
print(b)
print(a + b)

Verify the result: the final output is a 2x3 tensor of all ones, because b was broadcast across every row of a.

Conclusion: just as adding different data types promotes the values to the more precise type, broadcasting promotes the dimensions.
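To make the rule concrete, here is a small sketch with a (2, 1) tensor and a (3,) tensor. Walking from the trailing dimension: 1 versus 3 broadcasts to 3, and the missing leading dimension of b is expanded to 2, so the result has shape (2, 3):

a = torch.zeros(2, 1)
b = torch.arange(3)    # shape (3,)
c = a + b              # a is expanded along dim 1, b along dim 0
print(c.shape)         # torch.Size([2, 3])
print(c)               # tensor([[0., 1., 2.], [0., 1., 2.]])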

Addition

import torch as t

y = t.rand(2, 3)    # matrix built from a uniform [0, 1] distribution
z = t.ones(2, 3)    # all-ones 2x3 matrix
# addition, method 1
print(y + z)
# addition, method 2
print(t.add(y, z))
# subtraction
a = t.randn(2, 1)
b = t.randn(2, 1)
print(a - b)

We learned matrix multiplication in college, so let's just review it quickly (row times column) and understand the principle, because matrix multiplication across multiple dimensions gets more complicated, and PyTorch provides support for it.

torch.mul(input, other, out=None): element-wise multiplication, e.g. multiplying a matrix by a number

torch.matmul(mat1, mat2, out=None): matrix multiplication

torch.mm(mat1, mat2, out=None): basically equivalent to matmul for 2-D matrices

a = torch.randn(2, 3)
b = torch.randn(3, 2)

Equivalent operations

print(torch.mm(a, b))      # mat x mat
print(torch.matmul(a, b))  # mat x mat

Equivalent operations

print(torch.mul(a, 3))
print(a * 3)

Other common element-wise operations:

torch.div(input, other, out=None): division

torch.abs(input, out=None): absolute value

torch.ceil(input, out=None): round each element up

torch.clamp(input, min, max, out=None): clamp values into the range [min, max]

torch.argmax(input, dim=None, keepdim=False): returns the index of the maximum value along the specified dimension
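A quick sketch exercising these operations (the input values are arbitrary):

x = torch.tensor([[-1.5, 2.7], [3.2, -0.4]])
print(torch.div(x, 2))                      # element-wise division by 2
print(torch.abs(x))                         # absolute value
print(torch.ceil(x))                        # round each element up
print(torch.clamp(x, min=-1.0, max=1.0))    # limit values to [-1, 1]
print(torch.argmax(x, dim=1))               # index of the max value in each row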

At the end of the day, the tensor is the foundation of deep learning; it's the gateway. You can just think of it as a multidimensional data structure with some special operations built in. Once you see it that way, it doesn't seem complicated; it's just routine operations. Hold on, don't panic, we can win; it's not that hard once you see through it.