Goal

This article introduces the basics of TensorFlow through a practical example. After working through it, you should be comfortable with TensorFlow's basic operations.

Simple convolutional neural network implementation

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Requires TensorFlow 1.x (the tutorials module was removed in TF 2.x)
mnist = input_data.read_data_sets("MNIST", one_hot=True)

batch_size = 64
n_batches = mnist.train.num_examples // batch_size

def weight_variable(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.1))

def biases_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, w):
    # stride 1 in every dimension; SAME padding preserves the spatial size
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 pooling with stride 2 halves the spatial size: 28 -> 14 -> 7
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

# First convolution block: 5x5 kernels, 1 input channel, 32 feature maps
w1 = weight_variable([5, 5, 1, 32])
b1 = biases_variable([32])
h1 = tf.nn.relu(conv2d(x_image, w1) + b1)
p1 = max_pool_2x2(h1)

# Second convolution block: 5x5 kernels, 32 -> 64 feature maps
w2 = weight_variable([5, 5, 32, 64])
b2 = biases_variable([64])
h2 = tf.nn.relu(conv2d(p1, w2) + b2)
p2 = max_pool_2x2(h2)

# Fully connected layer on the flattened 7x7x64 feature maps
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = biases_variable([1024])
p2_flat = tf.reshape(p2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.tanh(tf.matmul(p2_flat, w_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer
w_fc2 = weight_variable([1024, 10])
b_fc2 = biases_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, w_fc2) + b_fc2)

# Note: softmax_cross_entropy_with_logits expects raw logits, but prediction
# has already been through a softmax; this double softmax is why the reported
# loss plateaus near 1.47 instead of approaching 0. The argmax, and hence the
# accuracy, is unaffected.
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
opt = tf.train.AdamOptimizer(0.001).minimize(loss)

correct = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(11):  # 11 epochs, matching the printed results (0-10)
        for _ in range(n_batches):
            xx, yy = mnist.train.next_batch(batch_size)
            sess.run(opt, feed_dict={x: xx, y: yy, keep_prob: 0.5})
        acc, l = sess.run([accuracy, loss],
                          feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        print(epoch, l, acc)

Results output (each line prints the epoch number, test loss, and test accuracy)

0 1.5783486 0.8818
1 1.4810778 0.9803
2 1.4758472 0.9855
3 1.472993 0.9884
4 1.4741219 0.9866
5 1.4728734 0.9882
6 1.4742823 0.9869
7 1.4712367 0.9898
8 1.4690293 0.9922
9 1.473154 0.988
10 1.4709185 0.9904
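One detail worth noting in these numbers: the loss flattens out around 1.47 instead of falling toward zero. The likely cause (my reading of the code above, not something stated in the original) is that prediction has already been through a softmax, while softmax_cross_entropy_with_logits applies its own softmax internally. For a perfectly confident correct prediction, the per-example loss then bottoms out near 1.46 rather than 0, which a quick calculation confirms:

import math

# Ideal case: the softmax output assigns probability ~1 to the true class,
# so the "logits" fed to the loss are approximately [1, 0, ..., 0] (10 classes).
# Applying softmax again gives the true class this probability:
p = math.e / (math.e + 9)   # e^1 / (e^1 + 9 * e^0)
print(-math.log(p))         # ~1.4612, matching the observed plateau near 1.47

The argmax is unchanged by the extra softmax, so the reported accuracy is still meaningful.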

Point 1

There is plenty of material online covering the basics of convolutional neural networks, including convolution kernels, stride, and padding.
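As a concrete illustration of how kernel size, stride, and padding together determine a layer's output size, here is a small sketch (my own example, not from the original article) using the formulas TensorFlow documents for its two padding modes:

import math

def conv_output_size(input_size, kernel_size, stride, padding):
    """Spatial output size of a 2-D convolution along one axis."""
    if padding == 'SAME':     # zero-pad so the output only shrinks by the stride
        return math.ceil(input_size / stride)
    elif padding == 'VALID':  # no padding: the kernel must fit entirely inside
        return math.ceil((input_size - kernel_size + 1) / stride)
    raise ValueError(padding)

# The network above: 28x28 input, 5x5 kernel, stride 1, SAME padding
print(conv_output_size(28, 5, 1, 'SAME'))   # 28 (size preserved)
# Each 2x2 max-pool with stride 2 then halves it: 28 -> 14 -> 7
print(conv_output_size(28, 2, 2, 'SAME'))   # 14

This is why, in the network above, the 28x28 input stays 28x28 through each SAME convolution and only shrinks at the pooling layers, ending at 7x7 before the fully connected layer.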

Point 2

Compared with the simple multi-hidden-layer network with Dropout from the previous article, which reached 97.8% accuracy, a convolutional neural network easily pushes accuracy to about 99%, showing that CNNs are naturally suited to image data. The trade-off is a more complex model that costs more resources and time to train.
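To make "more complex" concrete, here is a back-of-the-envelope parameter count for the CNN above (the layer shapes are taken from the code; the arithmetic is my own):

# Parameters of the CNN above, layer by layer (weights + biases)
conv1 = 5 * 5 * 1 * 32 + 32          #       832
conv2 = 5 * 5 * 32 * 64 + 64         #    51,264
fc1   = 7 * 7 * 64 * 1024 + 1024     # 3,212,288
fc2   = 1024 * 10 + 10               #    10,250
print(conv1 + conv2 + fc1 + fc2)     # 3,274,634 trainable parameters

Roughly 3.3 million trainable parameters, the vast majority in the first fully connected layer, which is the main reason training costs noticeably more than the plain multi-hidden-layer network.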

References

blog.csdn.net/qq_19672707…