Preface

Section 8.1, Mnist_soft: building a TensorFlow regression model, mainly introduced the concept of the computation graph and how formulas are turned into one. Section 8.2 introduces several TensorFlow operators and how loss functions and optimizers are defined, and strings everything together with a KNN example.

Loading the data

As before, we use the MNIST handwritten digit dataset:

```python
# Imports used throughout this section
import numpy as np
import tensorflow as tf

# Import the MNIST data
from tensorflow.contrib.learn.python.learn.datasets import mnist

FLAGS = None
data_sets = mnist.read_data_sets('/home/fonttian/Data/MNIST_data/', one_hot=True)

# Limit the amount of MNIST data we use
Xtrain, Ytrain = data_sets.train.next_batch(5000)  # 5000 samples for training
Xtest, Ytest = data_sets.test.next_batch(200)      # 200 samples for testing
print('Xtrain.shape:', Xtrain.shape, 'Xtest.shape:', Xtest.shape)
print('Ytrain.shape:', Ytrain.shape, 'Ytest.shape:', Ytest.shape)
```

Implementing KNN

  1. Compute the L1 distance between the test sample and every training sample.
  2. Take the label of the training sample with the smallest distance as the predicted category.

What we implement here is a fairly primitive KNN (in effect, 1-NN), but that does not matter for a demo; you can make it more elaborate as an exercise, although in practice you would simply use scikit-learn. The core code is as follows:
```python
# Placeholders for the training set and a single test sample
# (their definitions are omitted in the original snippet; the shapes
# assume flattened 28x28 MNIST images)
xtrain = tf.placeholder(tf.float32, [None, 784])
xtest = tf.placeholder(tf.float32, [784])

# Compute the L1 distance between the test sample and all training samples
distance = tf.reduce_sum(tf.abs(tf.add(xtrain, tf.negative(xtest))), axis=1)
# Prediction: the index of the nearest neighbour
pred = tf.argmin(distance, 0)
# Evaluation: check whether the prediction for a given test sample is correct
```
  • tf.negative(): returns the element-wise negative of a tensor
  • tf.abs(): returns the element-wise absolute value
  • tf.argmin(): returns the index of the smallest value along an axis

All three are demonstrated in the short sketch below.
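A minimal sketch of these three operations in isolation (the toy values are my own, chosen for illustration):

```python
import tensorflow as tf

t = tf.constant([3.0, -1.0, 4.0])

neg = tf.negative(t)    # element-wise negation: [-3.0, 1.0, -4.0]
absv = tf.abs(t)        # element-wise absolute value: [3.0, 1.0, 4.0]
amin = tf.argmin(t, 0)  # index of the smallest element: 1

with tf.Session() as sess:
    print(sess.run([neg, absv, amin]))
```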

Start session and run

```python
# Accuracy counter
accuracy = 0.

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    Ntest = len(Xtest)
    # Loop over every test sample
    for i in range(Ntest):
        # Find the nearest neighbour of the current test sample
        nn_index = sess.run(pred, feed_dict={xtrain: Xtrain, xtest: Xtest[i, :]})
        # Use the nearest neighbour's label as the prediction and
        # compare it with the true class label
        pred_class_label = np.argmax(Ytrain[nn_index])
        true_class_label = np.argmax(Ytest[i])
        print('Test', i, 'Predicted Class Label:', pred_class_label,
              'True Class Label:', true_class_label)
        # Count correct predictions
        if pred_class_label == true_class_label:
            accuracy += 1
    print('Done!')
    accuracy /= Ntest
    print('Accuracy:', accuracy)
```
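As mentioned above, in practice you would reach for scikit-learn for an algorithm this simple. For comparison, here is a minimal sketch of the same 1-NN experiment in scikit-learn; it assumes the Xtrain/Ytrain/Xtest/Ytest arrays loaded earlier, and the 'manhattan' metric matches the L1 distance used above:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# 1-NN with L1 ('manhattan') distance, mirroring the graph above
knn = KNeighborsClassifier(n_neighbors=1, metric='manhattan')
knn.fit(Xtrain, np.argmax(Ytrain, axis=1))  # one-hot labels -> class indices
print('Accuracy:', knn.score(Xtest, np.argmax(Ytest, axis=1)))
```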

A digression

We have now implemented a TensorFlow version of KNN. But KNN is so simple that almost any library can implement it, so what does this example show about TensorFlow in particular? Two things are worth noting. First, TensorFlow overloads the basic arithmetic operators such as "+" and "-", so they can be used in place of the arithmetic functions tf.add() and tf.subtract(). Second, we actually use very few of TensorFlow's arithmetic functions in everyday work: besides the ones above, the most common is probably tf.matmul(), the matrix multiplication function, which appears in every fully connected layer of a neural network. The other functions see little use and are not covered here.
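A quick sketch of both points (the tensor values are made up for illustration):

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

s = a + b            # equivalent to tf.add(a, b)
d = a - b            # equivalent to tf.subtract(a, b)
m = tf.matmul(a, b)  # matrix product, as used in fully connected layers

with tf.Session() as sess:
    print(sess.run([s, d, m]))
```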

KNN doesn’t use

What sets KNN apart from many other algorithms is that it computes its result directly from the existing data, with no training step and no backpropagation to reduce the prediction error. In today's deep learning, however, backpropagation is unavoidable, and this is where one of TensorFlow's great strengths lies: automatic differentiation for backpropagation, built on the computation-graph mechanism. When we develop deep learning models with TensorFlow, we only need to define the loss function and attach an optimizer to it with a single line of code; TensorFlow then differentiates the loss automatically and trains the computation graph we built.
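To make the automatic differentiation concrete, here is a minimal sketch; the function y = x² + 3x is an arbitrary example of mine, not from the text:

```python
import tensorflow as tf

x = tf.Variable(2.0)
y = tf.square(x) + 3 * x   # y = x^2 + 3x

# TensorFlow walks the computation graph and builds dy/dx = 2x + 3 for us
grad = tf.gradients(y, x)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))  # [7.0] at x = 2.0
```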

Defining and optimizing the loss function

We start with the code from 8.1 as an example:

```python
# Define the loss and the optimizer
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# The following line attaches a gradient-descent optimizer to our loss function
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```
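For context, here is a minimal sketch of the graph these two lines attach to, following the standard MNIST softmax-regression setup of 8.1; the definitions of x, W, b, y and y_ are assumptions filled in here, since the snippet above omits them:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])   # flattened 28x28 images
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b                       # logits
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot true labels

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```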

Here is another example, taken from a BP neural network:

```python
# Construct the loss function: take the squared difference between the
# output layer's prediction and the true value, sum it per sample,
# then average over the batch
loss = tf.reduce_mean(
    tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
# Choose SGD (stochastic gradient descent) as the optimization algorithm
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```
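Notice how little code the optimizer takes: swapping the optimization algorithm is a one-line change. As an illustration (Adam is my substitution here, not part of the original example):

```python
# Same loss, different optimizer: replace SGD with Adam in one line
train_step = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
```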

At this point, you should have a basic grasp of TensorFlow through the examples given so far in chapter 8, TensorFlow Basics.

In 8.3, Construction of a TensorFlow BP neural network and selection of hyperparameters, I will give the complete code for the BP neural network above. In the meantime, you can already try to build your own neural network with TensorFlow. Good luck.