The goal

This article introduces TensorFlow through a practical example. We hope you will be familiar with the basic workflow of TensorFlow after reading it.

Simple classification model code

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST", one_hot=True)

batch_size = 64
n_batchs = mnist.train.num_examples // batch_size

x = tf.placeholder(dtype=tf.float32, shape=[None, 784], name='x')
y = tf.placeholder(dtype=tf.float32, shape=[None, 10], name='y')

# softmax((x * w) + b)
w = tf.Variable(tf.ones(shape=[784, 10]))
b = tf.Variable(tf.zeros(shape=[10]))
predict = tf.nn.softmax(tf.matmul(x, w) + b)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predict, labels=y))
opt = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()

correct = tf.equal(tf.argmax(predict, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    total_batch = 0
    last_batch = 0
    best = 0
    for epoch in range(100):
        for _ in range(n_batchs):
            xx, yy = mnist.train.next_batch(batch_size)
            sess.run(opt, feed_dict={x: xx, y: yy})
        loss_value, acc = sess.run([loss, accuracy],
                                   feed_dict={x: mnist.test.images, y: mnist.test.labels})
        # only print when the test accuracy reaches a new best
        if acc > best:
            best = acc
            last_batch = total_batch
            print('epoch:%d, loss:%f, acc:%f' % (epoch, loss_value, acc))
        if total_batch - last_batch > 5:
            print('when epoch-%d early stop train' % epoch)
            break
        total_batch += 1
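Two things are worth noting about this snippet. First, softmax_cross_entropy_with_logits is deprecated (the warning appears in the output below). Second, predict has already been passed through tf.nn.softmax, yet the op expects raw pre-softmax scores as its logits argument. A minimal sketch of the usual correction, assuming a TensorFlow 1.x release that provides softmax_cross_entropy_with_logits_v2 (the variable name logits is introduced here for the pre-softmax activations; the output log below was produced by the original version):

logits = tf.matmul(x, w) + b     # raw scores, before softmax
predict = tf.nn.softmax(logits)  # probabilities, still used for accuracy
# the v2 op replaces the deprecated one and should receive the raw logits
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))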

The output

Extracting MNIST/train-images-idx3-ubyte.gz
Extracting MNIST/train-labels-idx1-ubyte.gz
Extracting MNIST/t10k-images-idx3-ubyte.gz
Extracting MNIST/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From <ipython-input-18-ec0f1616d772>:16: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See softmax_cross_entropy_with_logits_v2.
epoch:0, loss:1.697559, acc:0.819900
epoch:1, loss:1.627650, acc:0.887500
epoch:2, loss:1.604011, acc:0.897200
epoch:3, loss:1.592221, acc:0.902300
epoch:4, loss:1.585058, acc:0.904700
epoch:5, loss:1.579867, acc:0.907700
epoch:6, loss:1.575740, acc:0.909700
epoch:7, loss:1.572829, acc:0.911200
epoch:8, loss:1.570307, acc:0.912600
epoch:9, loss:1.567902, acc:0.913100
epoch:10, loss:1.565990, acc:0.913900
epoch:11, loss:1.564570, acc:0.916100
epoch:13, loss:1.561729, acc:0.917800
epoch:14, loss:1.560736, acc:0.917900
epoch:15, loss:1.559514, acc:0.918600
epoch:17, loss:1.557875, acc:0.919600
epoch:18, loss:1.557073, acc:0.920100
epoch:21, loss:1.554998, acc:0.920500
epoch:22, loss:1.554592, acc:0.920700
epoch:23, loss:1.553998, acc:0.921500
epoch:24, loss:1.553378, acc:0.922100
epoch:28, loss:1.551517, acc:0.922400
epoch:29, loss:1.551527, acc:0.922700
epoch:31, loss:1.550692, acc:0.923000
epoch:32, loss:1.550284, acc:0.923200
epoch:33, loss:1.550164, acc:0.923300
epoch:34, loss:1.549571, acc:0.923600
epoch:35, loss:1.549563, acc:0.923700
epoch:38, loss:1.548744, acc:0.923800
epoch:39, loss:1.548406, acc:0.924700
epoch:41, loss:1.547895, acc:0.924800
epoch:45, loss:1.547032, acc:0.925300
epoch:49, loss:1.546252, acc:0.925900
epoch:51, loss:1.545930, acc:0.926400
epoch:56, loss:1.545088, acc:0.926700
epoch:59, loss:1.544781, acc:0.927400
epoch:65, loss:1.544077, acc:0.927500
epoch:66, loss:1.543733, acc:0.927800
epoch:70, loss:1.543496, acc:0.928100
epoch:76, loss:1.542884, acc:0.928300
epoch:80, loss:1.542315, acc:0.928600
when epoch-86 early stop train
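The skipped epoch numbers (12, 16, 19, ...) are epochs where the test accuracy did not improve, so nothing was printed. Training stops at epoch 86 because the best accuracy was last updated at epoch 80, exceeding the patience of 5 epochs. The bookkeeping with best, last_batch, and total_batch reduces to the following standalone sketch (acc_stream is a hypothetical per-epoch accuracy sequence, not part of the original script):

def train_with_early_stopping(acc_stream, patience=5):
    """Stop once the metric has not improved for more than `patience` epochs."""
    best, last_improved = 0.0, 0
    for epoch, acc in enumerate(acc_stream):
        if acc > best:  # new best: record it and reset the counter
            best, last_improved = acc, epoch
            print('epoch:%d, acc:%f' % (epoch, acc))
        if epoch - last_improved > patience:  # no improvement for too long
            print('when epoch-%d early stop train' % epoch)
            break
    return best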

Reference

Reference for this article: blog.csdn.net/qq_19672707…