I. Basic usage

To use TensorFlow, you need to understand how TensorFlow works:

You use graphs to represent computations, execute graphs in the context of a Session, represent data as tensors, maintain state with Variables, and use feeds and fetches to assign values to, or retrieve data from, arbitrary operations.

1. Graph

The computational graph is a basic tool in computational algebra. A given mathematical expression can be represented as a directed graph, and the structure of the graph makes it quick and convenient to differentiate with respect to the variables in the expression. A neural network is essentially a multilayer composite function, so it too can be expressed as a graph.
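As a rough, framework-free sketch of the idea (the Node class below is invented purely for illustration), an expression such as (2 + 3) × 4 can be represented as a directed graph of operation nodes and then evaluated by traversing it:

```python
# A toy computational graph: each node is an op with input nodes.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # "const", "add", or "mul"
        self.inputs = inputs  # input nodes (the edges of the directed graph)
        self.value = value    # payload for constant nodes

    def eval(self):
        # Evaluate the graph bottom-up, from the leaves to this node.
        if self.op == "const":
            return self.value
        vals = [n.eval() for n in self.inputs]
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]

# Express (2 + 3) * 4 as a directed graph and evaluate it.
a, b, c = Node("const", value=2), Node("const", value=3), Node("const", value=4)
s = Node("add", (a, b))
p = Node("mul", (s, c))
print(p.eval())  # 20
```

TensorFlow's graphs work on the same principle, with the evaluation handled by a session rather than a recursive call.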

This section summarizes the implementation of the computational graph. In this directed graph, each node, called an op (short for operation), represents a specific operation such as summation, product, vector product, squaring, and so on. For example, a summation expression can be represented as a directed graph (figure omitted).

TensorFlow programs are usually organized into a construction phase and an execution phase. In the construction phase, ops are assembled into a graph; in the execution phase, a session is used to execute the ops in the graph.

For example, it is common to create a graph during the construction phase to represent and train a neural network, and then repeatedly execute the training ops in that graph during the execution phase.

2. Building the graph

The first step in building a graph is to create source ops. A source op, such as a constant, requires no input; its output is passed to other ops for computation.

In the Python library, the return value of an op constructor represents the output of the constructed op, and these return values can be passed to other op constructors as inputs.

The TensorFlow Python library has a default graph to which op constructors add nodes. This default graph is sufficient for many programs.

Graph class API link: http://www.tensorfly.cn/tfdoc/api_docs/python/framework.html#Graph

import tensorflow as tf

# Create a constant op that produces a 1x2 matrix. The op is added
# as a node to the default graph.
#
# The return value of the constructor represents the output of the constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another constant op that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The return value 'product' represents the result of the matrix multiplication.
product = tf.matmul(matrix1, matrix2)

The default graph now has three nodes: two constant() ops and one matmul() op. To actually perform the matrix multiplication and get its result, you must launch the graph in a session.

3. Launching the graph in a session

After the construction phase is complete, the graph can be launched. The first step is to create a Session object; with no creation arguments, the Session constructor launches the default graph. Session class API link: http://www.tensorfly.cn/tfdoc/api_docs/python/client.html#session-management

# Launch the default graph.
sess = tf.Session()

# Call the session's 'run()' method to execute the matmul op, passing 'product'
# as an argument. As mentioned above, 'product' represents the output of the
# matmul op, and passing it in tells the method that we want to fetch the
# output of that op.
#
# The entire execution is automated: the session is responsible for passing in
# all the inputs each op requires. Ops are usually executed in parallel.
#
# The call 'run(product)' triggers the execution of the three ops in the
# graph (two constant ops and one matmul op).
#
# The return value 'result' is a numpy 'ndarray' object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]

# Task complete; close the session.
sess.close()

Results:

[[ 12.]]

The Session object should be closed after use to free resources. Instead of calling close() explicitly, you can use a "with" block to close the session automatically.

with tf.Session() as sess:
  result = sess.run([product])
  print (result)

Results:

[array([[12.]], dtype=float32)]

II. Constants and variables

(1) Constants

data1 = tf.constant(2, dtype=tf.int32)
print(data1)  # printing the op shows the Tensor's name, shape, and dtype
sess = tf.Session()
print(sess.run(data1))  # running it in a session returns the value

Results:

Tensor("Const_6:0", shape=(), dtype=int32)
2

(2) Variables

data2 = tf.Variable(10, name='var')
print(data2)
sess = tf.Session()
init = tf.global_variables_initializer()
# Variables must be initialized before use; running them uninitialized raises an error.
sess.run(init)
print(sess.run(data2))

Results:

<tf.Variable 'var_6:0' shape=() dtype=int32_ref>
10

III. The four arithmetic operations on constants and variables

1. Arithmetic with constants

data3 = tf.constant(6)
data4 = tf.constant(2)
dataAdd = tf.add(data3, data4)       # addition
dataSub = tf.subtract(data3, data4)  # subtraction
dataMul = tf.multiply(data3, data4)  # multiplication
dataDiv = tf.divide(data3, data4)    # division
with tf.Session() as sess:
    print(sess.run(dataAdd))
    print(sess.run(dataSub))
    print(sess.run(dataMul))
    print(sess.run(dataDiv))
print('End!')

Results:

8
4
12
3.0
End!

2. Arithmetic with variables

data5 = tf.constant(6)
data6 = tf.Variable(4)
dataAdd = tf.add(data5, data6)        # addition
dataCopy = tf.assign(data6, dataAdd)  # assign dataAdd to data6
dataSub = tf.subtract(data5, data6)   # subtraction
dataMul = tf.multiply(data5, data6)   # multiplication
dataDiv = tf.divide(data5, data6)     # division

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('add:', sess.run(dataAdd))
    print('sub:', sess.run(dataSub))
    print('mul:', sess.run(dataMul))
    print('div:', sess.run(dataDiv))
    print('dataCopy:', sess.run(dataCopy))
    # data6 = data5 + data6 = 6 + 4 = 10
    print('dataCopy.eval():', dataCopy.eval())
    # Tensor.eval() runs the op in the current default session.
    # data6 = data5 + data6 = 6 + 10 = 16
    print('tf.get_default_session():', tf.get_default_session().run(dataCopy))
    # data6 = data5 + data6 = 6 + 16 = 22
print('End!')

Results:

add: 10
sub: 2
mul: 24
div: 1.5
dataCopy: 10
dataCopy.eval(): 16
tf.get_default_session(): 22
End!
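In the run above, sess.run(dataAdd) and the other expressions only evaluate values, while each run of dataCopy executes tf.assign and stores data5 + data6 back into data6. The chain of values 10, 16, 22 can be traced in plain Python (the variable names below are illustrative, not TensorFlow code):

```python
const_val = 6  # plays the role of data5
var_val = 4    # plays the role of the variable data6

def run_data_copy():
    # Mimics sess.run(dataCopy): store const_val + var_val back into var_val.
    global var_val
    var_val = const_val + var_val
    return var_val

print(run_data_copy())  # 10  (6 + 4)
print(run_data_copy())  # 16  (6 + 10)
print(run_data_copy())  # 22  (6 + 16)
```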

IV. Matrix basics

1. Placeholders and matrix indexing

Example 1:

data1 = tf.placeholder(tf.float32)
data2 = tf.placeholder(tf.float32)
dataAdd = tf.add(data1, data2)
with tf.Session() as sess:
    # Placeholders receive their values at run time through feed_dict.
    print(sess.run(dataAdd, feed_dict={data1: 6, data2: 2}))
print('End!')

Results:

8.0
End!

Example 2:

data3 = tf.constant([[6, 6]])
data4 = tf.constant([[2],
                     [2]])
data5 = tf.constant([[3, 3]])
data6 = tf.constant([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]])
print('data6.shape:', data6.shape)  # dimensions
with tf.Session() as sess:
    print('data6:', sess.run(data6))              # print the whole matrix
    print('data6[0]:', sess.run(data6[0]))        # first row
    print('data6[1,:]:', sess.run(data6[1, :]))   # second row
    print('data6[:,1]:', sess.run(data6[:, 1]))   # second column
    print('data6[0,1]:', sess.run(data6[0, 1]))   # first row, second column

Results:

data6.shape: (3, 3)
data6: [[1 2 3]
 [4 5 6]
 [7 8 9]]
data6[0]: [1 2 3]
data6[1,:]: [4 5 6]
data6[:,1]: [2 5 8]
data6[0,1]: 2
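The slicing above follows NumPy semantics, so (assuming NumPy is installed) the same expressions work on a plain ndarray:

```python
import numpy as np

arr = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
print(arr[0])     # first row:      [1 2 3]
print(arr[1, :])  # second row:     [4 5 6]
print(arr[:, 1])  # second column:  [2 5 8]
print(arr[0, 1])  # row 0, column 1: 2
```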

2. Matrix operations

(1) Addition tf.add()

Matrix addition is defined for two matrices of the same size. The sum of two m×n matrices A and B, written A + B, is also an m×n matrix whose elements are the sums of the corresponding elements of A and B.

(2) Subtraction tf.subtract()

Matrices of the same size can also be subtracted. Each element of A − B is the difference of the corresponding elements, and the result has the same size as A and B.
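As a quick plain-Python check of these element-wise definitions (list-based, not TensorFlow code):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Element-wise sum and difference of two 2x2 matrices.
add = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
sub = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
print(add)  # [[6, 8], [10, 12]]
print(sub)  # [[-4, -4], [-4, -4]]
```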

(3) Matrix product tf.matmul()

If A is an m×p matrix and B is a p×n matrix, then the m×n matrix C is called the product of A and B, written C = AB, where the element in row i and column j of C is

c_ij = a_i1·b_1j + a_i2·b_2j + … + a_ip·b_pj
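The definition can be checked directly in plain Python, computing each c_ij as the sum over k of a_ik · b_kj:

```python
# Multiply a 2x3 matrix by a 3x2 matrix; the result is 2x2.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
C = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(C)  # [[58, 64], [139, 154]]
```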

(4) Hadamard (element-wise) product tf.multiply()

The Hadamard product of an m×n matrix A = [a_ij] and an m×n matrix B = [b_ij], written A∘B, is the m×n matrix whose elements are the products of the corresponding elements of the two matrices: (A∘B)_ij = a_ij·b_ij. For example:

data1 = tf.constant([[6, 6]])
data2 = tf.constant([[2],
                     [2]])
data3 = tf.constant([[3, 3],
                     [2, 2]])
data4 = tf.constant([[1, 2],
                     [3, 4],
                     [5, 6]])
data5 = tf.constant([[1, 1],
                     [2, 2]])

matMul = tf.matmul(data3, data5)     # matrix product
matMul2 = tf.multiply(data1, data2)  # element-wise product (broadcast to 2x2)
matAdd = tf.add(data1, data4)        # addition (broadcast to 3x2)

with tf.Session() as sess:
    print('matMul:', sess.run(matMul))
    print('matMul2:', sess.run(matMul2))
    print('matAdd:', sess.run(matAdd))

Results:

matMul: [[9 9]
 [6 6]]
matMul2: [[12 12]
 [12 12]]
matAdd: [[ 7  8]
 [ 9 10]
 [11 12]]