Goal

This article introduces TensorFlow through a hands-on example. We hope that after working through it you will be familiar with basic TensorFlow operations.

Code for manually adjusting the learning rate

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST", one_hot=True)
batch_size = 64
n_batches = mnist.train.num_examples // batch_size

def variable_info(var):
    # record summary statistics for a variable so its dynamics show up in TensorBoard
    with tf.name_scope('summaries'):
        mean_value = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean_value)
        with tf.name_scope('stddev'):
            stddev_value = tf.sqrt(tf.reduce_mean(tf.square(var - mean_value)))
        tf.summary.scalar('stddev', stddev_value)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)

with tf.name_scope('input_layer'):
    x = tf.placeholder(tf.float32, [None, 784])
    y = tf.placeholder(tf.float32, [None, 10])
    keep_prob = tf.placeholder(tf.float32)  # defined and fed, but no dropout layer uses it here
    # not trainable: we update it manually with tf.assign once per epoch
    lr = tf.Variable(0.01, trainable=False, dtype=tf.float32)
    tf.summary.scalar('learning_rate', lr)  # store lr to watch its dynamics in TensorBoard

with tf.name_scope('network'):
    # build the network structure
    with tf.name_scope('weights'):
        w = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1), name='w')
        variable_info(w)
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros([10]) + 0.1, name='b')
        variable_info(b)
    with tf.name_scope('xw_plus_b'):
        a = tf.matmul(x, w) + b
    with tf.name_scope('softmax'):
        out = tf.nn.softmax(a)

with tf.name_scope('loss_train'):
    # compute the loss and define the optimizer
    # note: `out` has already been through softmax here, which is why the
    # printed loss bottoms out near 1.5 instead of approaching 0
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=y))
    train_step = tf.train.AdamOptimizer(lr).minimize(loss)
    tf.summary.scalar('loss', loss)  # store the loss

with tf.name_scope('eval'):
    correct = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1))
    with tf.name_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
        tf.summary.scalar('accuracy', accuracy)

init = tf.global_variables_initializer()
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    writer = tf.summary.FileWriter('tflogs/', sess.graph)
    for epoch in range(20):
        # manually decay the learning rate once per epoch
        sess.run(tf.assign(lr, 0.001 * (0.95 ** epoch)))
        for batch in range(n_batches):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            summary, _ = sess.run([merged, train_step],
                                  feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})
            writer.add_summary(summary, epoch * n_batches + batch)
        loss_value, acc, lr_value = sess.run([loss, accuracy, lr],
                                             feed_dict={x: mnist.test.images,
                                                        y: mnist.test.labels,
                                                        keep_prob: 1.0})
        print("Iter:", epoch, "Loss:", loss_value, "Acc:", acc, "LR:", lr_value)
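As a quick sanity check, the learning rate printed for each epoch should follow the schedule 0.001 * 0.95 ** epoch set by the tf.assign call. A minimal snippet that reproduces the LR column of the output below:

# expected learning rate per epoch, matching the tf.assign schedule above
for epoch in range(20):
    print("Iter:", epoch, "LR:", 0.001 * (0.95 ** epoch))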

Output

Iter: 0 Loss: 1.6054871 Acc: 0.895 LR: 0.001
Iter: 1 Loss: 1.5699576 Acc: 0.9148 LR: 0.00095
Iter: 2 Loss: 1.5598879 Acc: 0.9195 LR: 0.0009025
Iter: 3 Loss: 1.5546178 Acc: 0.9215 LR: 0.00085737504
Iter: 4 Loss: 1.5502373 Acc: 0.9254 LR: 0.00081450626
Iter: 5 Loss: 1.5473799 Acc: 0.9269 LR: 0.0007737809
Iter: 6 Loss: 1.5452079 Acc: 0.9277 LR: 0.0007350919
Iter: 7 Loss: 1.5434842 Acc: 0.9294 LR: 0.0006983373
Iter: 8 Loss: 1.5427189 Acc: 0.9278 LR: 0.0006634204
Iter: 9 Loss: 1.5417348 Acc: 0.9293 LR: 0.0006302494
Iter: 10 Loss: 1.540729 Acc: 0.9293 LR: 0.0005987369
Iter: 11 Loss: 1.5403976 Acc: 0.9298 LR: 0.0005688001
Iter: 12 Loss: 1.5395288 Acc: 0.9301 LR: 0.0005403601
Iter: 13 Loss: 1.5395651 Acc: 0.9298 LR: 0.0005133421
Iter: 14 Loss: 1.5387015 Acc: 0.9307 LR: 0.000487675
Iter: 15 Loss: 1.5383359 Acc: 0.9308 LR: 0.00046329122
Iter: 16 Loss: 1.5379355 Acc: 0.931 LR: 0.00044012666
Iter: 17 Loss: 1.5374689 Acc: 0.9314 LR: 0.00041812033
Iter: 18 Loss: 1.5376941 Acc: 0.9305 LR: 0.00039721432
Iter: 19 Loss: 1.5371386 Acc: 0.9308 LR: 0.0003773536

Point 1

If you want to see how the recorded values change over training as charts, follow these steps:

  • pip install tensorboard==1.10.0
  • Then run tensorboard --logdir= followed by the absolute path of the log directory (tflogs/ in the code above)
  • Open http://localhost:6006/ in the browser to watch the recorded parameters change; it is very convenient

Point 2

Adjusting the learning rate during training helps the model. At the beginning, a larger learning rate makes the model converge quickly; as it approaches the optimum, the learning rate should be reduced so the model does not oscillate around the optimal point. The schedule used here, lr = 0.001 * 0.95 ** epoch, is one of the most common forms of learning-rate decay: exponential decay.
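As an aside, TensorFlow 1.x also ships a built-in version of this schedule, tf.train.exponential_decay, which computes the decayed rate from a global step instead of a manual tf.assign each epoch. A minimal sketch, assuming loss and n_batches from the script above:

import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
# decayed lr = 0.001 * 0.95 ** (global_step / n_batches); staircase=True decays once per epoch
lr = tf.train.exponential_decay(0.001, global_step,
                                decay_steps=n_batches, decay_rate=0.95,
                                staircase=True)
# passing global_step makes minimize() increment it after every training step
train_step = tf.train.AdamOptimizer(lr).minimize(loss, global_step=global_step)

This produces the same 0.95-per-epoch curve as the manual loop, without a separate assign op.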

Reference

Reference for this article: blog.csdn.net/qq_19672707…