Introduction: This tutorial runs on Colab, Google's online deep learning platform. To keep beginners from being burned out by the complex TensorFlow environment setup, we might as well use an online platform for learning and training.

Click here to open it (accessing Google may require a VPN in some regions; students who cannot reach it can use a domestic platform instead, and the results are no different).

MNIST is a dataset of handwritten digits, and we will use it to train a digit recognizer.

Without further ado, let’s get started (I have annotated each step in detail for better understanding).

One: Import the required packages

!pip install tensorflow keras numpy mnist matplotlib  # install the packages we need

# Import the data and modelling packages
import numpy as np
import mnist  # the MNIST dataset package
import matplotlib.pyplot as plt  # plotting
from keras.models import Sequential  # the sequential ANN model
from keras.layers import Dense  # fully connected layers in the ANN
import keras
from keras import utils as np_utils  # label utilities

Two: Import the data from the MNIST dataset

# Import the data
train_images = mnist.train_images()  # training images
train_labels = mnist.train_labels()  # training labels
test_images = mnist.test_images()    # test images
test_labels = mnist.test_labels()    # test labels

Note: MNIST already comes split into training and test sets, so when calling the package we only need to fetch the training data and the test data.
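By the way, if installing the standalone mnist package gives you trouble, Keras bundles the same dataset; a minimal alternative sketch:

from keras.datasets import mnist as keras_mnist

# Returns the same 28 x 28 images and integer labels, already split into train and test
(train_images, train_labels), (test_images, test_labels) = keras_mnist.load_data()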

Three: Process the data: normalize the image pixels and flatten each image into a vector

# Normalize the images: the raw pixel values are in [0, 255]
# For better training of the neural network, we shift them to [-0.5, 0.5]
train_images = (train_images / 255) - 0.5
test_images = (test_images / 255) - 0.5
# Flatten each 28 * 28 pixel image into a 28 * 28 = 784 dimensional vector
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))
# Print the shapes
print(train_images.shape)  # 60000 training images
print(test_images.shape)   # 10000 test images

The result is:

Four: Build the neural network model

Here we use the Keras Sequential model, which is friendly for beginners.

A sequential model is a linear stack of multiple network layers.

You can create a Sequential model by passing a list of network layer instances to the Sequential constructor, as shown in the sketch below.

(See the official Chinese documentation for more details.)
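For example, a minimal sketch of that list style, equivalent to the model.add calls used below:

from keras.models import Sequential
from keras.layers import Dense

# The same three-layer network, built by passing the layers as a list
model = Sequential([
    Dense(64, activation="relu", input_dim=784),
    Dense(64, activation="relu"),
    Dense(10, activation="softmax"),
])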

# Build the model
# 3 layers: two hidden layers of 64 neurons with the ReLU activation function,
# plus one output layer of 10 neurons with the normalized exponential function (softmax)
model = Sequential()
model.add(Dense(64, activation="relu", input_dim=784))
model.add(Dense(64, activation="relu"))
model.add(Dense(10, activation="softmax"))
print(model.summary())

A brief description of the model we printed:
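As a quick sanity check on that summary: the first Dense layer has 784 × 64 + 64 = 50,240 parameters, the second has 64 × 64 + 64 = 4,160, and the output layer has 64 × 10 + 10 = 650, for 55,050 trainable parameters in total.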

Five: Compile and train the model

# Compile the model
# The loss function measures how well the model does during training; the optimizer then tries to minimize it
model.compile(
    optimizer = 'adam',
    loss = "categorical_crossentropy",
    metrics = ["accuracy"])

# Train the model
from keras.utils.np_utils import to_categorical
history = model.fit(
    train_images,
    to_categorical(train_labels),  # convert the integer labels to one-hot vectors
    epochs = 5,       # number of passes over the entire training set
    batch_size = 32   # number of samples per gradient update
)

print(history.history.keys())
# print(plt.plot(history.history['loss']))
print(plt.plot(history.history['accuracy']))
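One note on the to_categorical call above: it converts each integer label into a one-hot vector, which is the format categorical_crossentropy expects. A minimal sketch:

from keras.utils.np_utils import to_categorical

# The label 5 becomes a 10-dimensional vector with a 1 in position 5
print(to_categorical([5], num_classes=10))
# [[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]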

If anything in my notes above is unclear, it all follows the official documentation, so go click through and have a look.

The result:

Look, our training accuracy reached about 96%. Of course the first epoch is comparatively poor, but the accuracy climbs as training goes on, and you can also increase the number of epochs to get higher accuracy!

Now that the training is complete, let’s do a little evaluation of the model

Six: Evaluate the model

# Evaluate the model on the test set
model.evaluate(
    test_images,
    to_categorical(test_labels)
)
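model.evaluate returns the loss followed by each metric passed to compile (here just accuracy), so you can also capture the numbers explicitly; a minimal sketch:

test_loss, test_acc = model.evaluate(
    test_images,
    to_categorical(test_labels)
)
print("test loss:", test_loss)      # categorical cross-entropy on the test set
print("test accuracy:", test_acc)   # fraction of test digits classified correctly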

The result:

Seven: Make predictions

# Predict the first five test images
predictions = model.predict(test_images[:5])
# Print the model's predictions and compare them with the ground-truth labels
print(np.argmax(predictions, axis = 1))
print(test_labels[:5])

It was exactly right! Amazing!

Eight: What do the images in MNIST look like?

for i in range(0, 5):
  first_image = test_images[i]
  first_image = np.array(first_image, dtype="float")
  pixels = first_image.reshape((28, 28))
  plt.imshow(pixels, cmap="gray")
  plt.show()
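If you also want to see the ground-truth label alongside each picture, a small variation of the loop above:

for i in range(0, 5):
  pixels = np.array(test_images[i], dtype="float").reshape((28, 28))
  plt.imshow(pixels, cmap="gray")
  plt.title("label: " + str(test_labels[i]))  # show the true digit above the image
  plt.show()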

Looks like it worked out pretty well.

Nine: Recognize your own handwriting?

First you need to establish a connection:

Since Colab notebooks run in the cloud, I need to mount my Google Drive first. Students running locally can skip this step.

import os
from google.colab import drive
drive.mount('/content/drive')  # mount Google Drive into the Colab filesystem

path = "/content/drive/My Drive/data"

os.chdir(path)    # switch to the data directory
os.listdir(path)  # list its contents

After that, we make a prediction with the model:

from PIL import Image
import numpy as np
import os

img = Image.open("test.jpg").convert("1")  # open the image and convert it to black-and-white
img = np.resize(img, (28, 28, 1))          # resize to 28 x 28
im2arr = np.array(img)
im2arr = im2arr.reshape(1, 784)            # flatten to the 784-dimensional input the model expects
y_pred = model.predict(im2arr)
print(np.argmax(y_pred, axis = 1))

Results:

It should be right. This is indeed my handwritten 5, hahaha!
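If the prediction comes out wrong for your own photo, one likely culprit is preprocessing: the network was trained on images scaled to [-0.5, 0.5] with light digits on a dark background, while a phone photo is usually dark ink on white paper. A sketch of preprocessing that mirrors the training data, assuming the same test.jpg as above:

from PIL import Image
import numpy as np

img = Image.open("test.jpg").convert("L").resize((28, 28))  # 8-bit grayscale, 28 x 28
arr = np.array(img, dtype="float32")
arr = 255 - arr          # invert: dark ink on white paper -> light digit on dark background
arr = (arr / 255) - 0.5  # same normalization as the training images
y_pred = model.predict(arr.reshape(1, 784))
print(np.argmax(y_pred, axis=1))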

The source code link is attached.