I’ve been using PyTorch for a long time, and recently decided to take a closer look at TensorFlow with a new series: PyTorch and TensorFlow’s Love-Hate Relationship.

No matter which framework or programming language you are learning, the basic data types are the foundation, so let's look at them one by one.

PyTorch version: 0.4.1 (I plan to switch to 1.x later).

TensorFlow version: 1.15.0. Although TensorFlow 2.x has been released, it reportedly still has some bugs, so we stick with the current 1.x version.

1. Basic Python data types

Numbers: integer, floating point, Boolean, complex.

Non-numeric: string, list, tuple, dictionary.

Use type() to view the type of a variable: type(variable_name)
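For instance:

>>> a = 10
>>> type(a)
<class 'int'>
>>> type(1.5)
<class 'float'>
>>> type('hello')
<class 'str'>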

2. Data types in NumPy

Type        Description
bool_       Boolean (True or False)
int_        Default integer type (similar to C long; int32 or int64)
intc        Same as C int; usually int32 or int64
intp        Integer type used for indexing (similar to C ssize_t; usually int32 or int64)
int8        Byte (-128 to 127)
int16       Integer (-32768 to 32767)
int32       Integer (-2147483648 to 2147483647)
int64       Integer (-9223372036854775808 to 9223372036854775807)
uint8       Unsigned integer (0 to 255)
uint16      Unsigned integer (0 to 65535)
uint32      Unsigned integer (0 to 4294967295)
uint64      Unsigned integer (0 to 18446744073709551615)
float_      Shorthand for float64
float16     Half-precision float: 1 sign bit, 5 exponent bits, 10 mantissa bits
float32     Single-precision float: 1 sign bit, 8 exponent bits, 23 mantissa bits
float64     Double-precision float: 1 sign bit, 11 exponent bits, 52 mantissa bits
complex_    Shorthand for complex128, a 128-bit complex number
complex64   Complex number made of two 32-bit floats (real and imaginary parts)
complex128  Complex number made of two 64-bit floats (real and imaginary parts)

NumPy's numeric types are actually instances of dtype objects, and each corresponds to a unique character code; examples include np.bool_, np.int32, np.float32, and so on.

Here’s a quick example. In general, we define an array like this:
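For example:

>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> a
array([1, 2, 3])
>>> a.dtype
dtype('int64')    # may be dtype('int32') on some platforms, e.g. Windows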

Of course, we can also create the array by first specifying the data type of its elements:
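Something along these lines:

>>> dt = np.dtype(np.float32)            # build a dtype object first
>>> a = np.array([1, 2, 3], dtype=dt)    # then create the array with it
>>> a.dtype
dtype('float32')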

Why define the array this way? Isn't the first way simpler? It is, but specifying the dtype explicitly also lets us define our own data types:
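A minimal sketch using a structured dtype (the field name 'age' is just an illustrative choice):

>>> dt = np.dtype([('age', 'i1')])
>>> a = np.array([(10,), (20,), (30,)], dtype=dt)
>>> a
array([(10,), (20,), (30,)], dtype=[('age', 'i1')])
>>> a['age']
array([10, 20, 30], dtype=int8)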

'i1' here refers to int8.

Each built-in type has a character code that uniquely defines it, as follows:

Character   Corresponding type
b           Boolean
i           (signed) integer
u           Unsigned integer
f           Floating point
c           Complex floating point
m           Timedelta (time interval)
M           Datetime
O           (Python) object
S, a        (Byte) string
U           Unicode
V           Raw data (void)

With these character codes in mind, look at the following example:
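For example:

>>> np.dtype('i4')            # 'i' for signed integer, 4 bytes
dtype('int32')
>>> np.dtype('f8')            # 'f' for floating point, 8 bytes
dtype('float64')
>>> np.array([1, 2, 3], dtype='f')
array([1., 2., 3.], dtype=float32)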

When working with data types, we inevitably need to convert between them. The first idea that comes to mind is to modify the dtype attribute directly. However, this is problematic, as the following example shows:

>>> a = np.array([1.1, 1.2])
>>> a.dtype
dtype('float64')
>>> a.dtype = np.int16
>>> a.dtype
dtype('int16')
>>> a
array([-26214, -26215, -26215,  16369,  13107,  13107,  13107,  16371], dtype=int16)
# Reassigning dtype from float64 to int16 just reinterprets the raw bytes, corrupting the values

Instead, we use astype() to change the data type. Here's an example:

>>> a = np.array([1.1, 1.2])
>>> a.dtype
dtype('float64')
>>> a.astype(np.int16)
array([1, 1], dtype=int16)
>>> a.dtype
dtype('float64')              # astype() returns a new array; the original dtype is unchanged
>>> a = a.astype(np.int16)    # equivalently: a = a.astype('int16')
>>> a.dtype
dtype('int16')
>>> a
array([1, 1], dtype=int16)

Reference:

www.runoob.com/numpy/numpy…

blog.csdn.net/miao2009139…

3. Data types in PyTorch

Consider the following example: the default data type is torch.float32.
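For example:

>>> import torch
>>> a = torch.tensor([1.0, 2.0, 3.0])
>>> a
tensor([1., 2., 3.])
>>> a.dtype
torch.float32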

Of course, you can also specify the type of tensor to create:
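For example:

>>> a = torch.tensor([1, 2, 3], dtype=torch.float64)
>>> a.dtype
torch.float64
>>> b = torch.IntTensor([1, 2, 3])   # the type-specific constructors also work
>>> b.dtype
torch.int32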

In most cases, we will use PyTorch's built-in functions to create tensors, as shown in the following example:
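For example:

>>> torch.zeros(2, 3).dtype
torch.float32
>>> torch.ones(2, 3, dtype=torch.int64).dtype
torch.int64
>>> torch.randn(2, 3).dtype
torch.float32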

You can view the data types of tensors in two ways:
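For example, via the dtype attribute and the type() method:

>>> a = torch.randn(2, 3)
>>> a.dtype            # way 1: the dtype attribute
torch.float32
>>> a.type()           # way 2: type() returns the tensor class name as a string
'torch.FloatTensor'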

Next, we'll look at conversions between data types. There are three main kinds: conversions between tensors, conversions between tensors and NumPy arrays, and conversions between CUDA tensors and CPU tensors.

(1) Type conversion between different tensors

Call the type conversion methods directly, e.g. .float(), .long(), .int():
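For example:

>>> a = torch.randn(2, 3)     # torch.float32
>>> a.long().dtype            # to int64
torch.int64
>>> a.double().dtype          # to float64
torch.float64
>>> a.int().dtype             # to int32
torch.int32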

We can also use type() to convert:
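For example:

>>> a = torch.randn(2, 3)
>>> b = a.type(torch.IntTensor)      # or a.type('torch.IntTensor')
>>> b.dtype
torch.int32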

We can also use type_as() to convert the data type of one tensor to that of another tensor:
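For example:

>>> a = torch.randn(2, 3)            # torch.float32
>>> b = torch.tensor([1, 2, 3])      # torch.int64
>>> a.type_as(b).dtype               # cast a to b's dtype
torch.int64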

(2) Conversion between tensors and NumPy

Convert NumPy arrays to tensors: use from_numpy()

Convert tensors to NumPy arrays: use .numpy()
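For example (note that from_numpy() shares memory with the original array):

>>> import numpy as np
>>> n = np.array([1.0, 2.0])
>>> t = torch.from_numpy(n)          # NumPy -> tensor
>>> t
tensor([1., 2.], dtype=torch.float64)
>>> t.numpy()                        # tensor -> NumPy
array([1., 2.])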

(3) Conversion between CUDA type and CPU type

CPU type to CUDA type:

a.cuda() or a.to(device), where device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
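A sketch, assuming a CUDA-capable GPU is available:

>>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
>>> a = torch.randn(2, 3)
>>> a_gpu = a.to(device)     # moves to the GPU if one is available, otherwise stays on the CPU
>>> a_gpu = a.cuda()         # equivalent, but raises an error if no GPU is present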

CUDA type to CPU type:

a.cpu()

It should be noted that a CUDA tensor must first be converted to a CPU tensor before it can be converted to a NumPy array.
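A sketch, again assuming a GPU is available:

>>> a = torch.randn(2, 3).cuda()     # a CUDA tensor
>>> # a.numpy() would raise an error here: CUDA tensors cannot be converted directly
>>> a.cpu().numpy()                  # move to the CPU first, then convert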

4. Basic data types in TensorFlow

Define a tensor:
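For example (TensorFlow 1.x, graph mode):

>>> import tensorflow as tf
>>> a = tf.constant([1.0, 2.0], dtype=tf.float32)
>>> a
<tf.Tensor 'Const:0' shape=(2,) dtype=float32>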

Create a constant using tf.constant(). Note that constants are not updated by gradients.

(1) Type conversion between tensors: the tf.to_* functions (e.g. tf.to_float(), tf.to_int32()) or tf.cast() can be used, but the former are deprecated and will be removed, so tf.cast() is preferred.
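For example:

>>> a = tf.constant([1.8, 2.2], dtype=tf.float32)
>>> b = tf.cast(a, tf.int32)     # preferred over the deprecated tf.to_int32(a)
>>> b.dtype
tf.int32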

(2) Type conversion between tensors and NumPy

NumPy to tensor: use tf.convert_to_tensor()

Tensor to NumPy: anything returned by Session.run() or Tensor.eval() is a NumPy array.
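For example:

>>> import numpy as np
>>> n = np.array([1.0, 2.0])
>>> t = tf.convert_to_tensor(n)          # NumPy -> tensor
>>> with tf.Session() as sess:
...     out = sess.run(t)                # tensor -> NumPy
>>> type(out)
<class 'numpy.ndarray'>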

(3) TensorFlow does not appear to have separate GPU-tensor and CPU-tensor types the way PyTorch does.


If you spot any mistakes or omissions, please point them out and I will correct them.