Petting cats with code! This article is taking part in the [Cat Essay Campaign].

Cat and dog classification with a LeNet network, based on PaddlePaddle 2.x

Image classification distinguishes images of different categories according to their semantic information; it is a fundamental problem in computer vision.

Cat vs. dog classification is a coarse-grained problem within image classification.

# Use PaddlePaddle framework version 2.0.0 or above
import paddle

print(paddle.__version__)
2.0.1

Start by importing the necessary packages

tarfile ————-> Python module for reading and extracting tar archives

os ————-> Python module for interacting with the operating system; provides rich methods for handling files and directories

PaddlePaddle –> the PaddlePaddle deep learning framework

numpy ———-> Python third-party library for scientific computing

PIL ————> Python Imaging Library, a third-party image processing library

matplotlib —–> Python plotting library; pyplot is its plotting framework

sys ————-> provides access to variables used or maintained by the interpreter, as well as functions that interact closely with the interpreter

pickle ———-> module that implements basic object serialization and deserialization

warnings.filterwarnings("ignore") ——> ignore all warnings

cpu_count ———-> obtains the number of CPU cores on the machine

# Import the required packages
import warnings
warnings.filterwarnings("ignore")

import tarfile
import paddle
import numpy as np
from PIL import Image
import sys
import pickle
from multiprocessing import cpu_count
import matplotlib.pyplot as plt
import os
from paddle.nn import MaxPool2D,Conv2D,BatchNorm
from paddle.nn import Linear
print("This tutorial is based on the Paddle version number:"+paddle.__version__)
This tutorial is based on the Paddle version number:2.0.1
# Parameter configuration
train_parameters = {
    "input_size": [1, 28, 28],                                # Input shape of the image
    "class_dim": 2,                                           # Number of classes
    "src_path": "data/data9154/cifar-10-python.tar.gz",       # Path to the original dataset
    "target_path": "/home/aistudio/data/",                    # Path to extract to
    "num_epochs": 10,                                         # Number of training epochs
    "train_batch_size": 100,                                  # Batch size during training
    "learning_strategy": {                                    # Optimizer-specific configuration
        "lr": 0.001                                           # Learning rate (hyperparameter)
    },
    'skip_steps': 5,                                          # Print the results every N batches
    'save_steps': 5,                                          # Save model parameters every N batches
    "checkpoints": "/home/aistudio/checkpoints"               # Save path
}

Step1: Prepare data

  • (1) Decompress the original data set
  • (2) Construct dataset and Dataloader

Introduction to the dataset

We use the CIFAR10 dataset. CIFAR10 contains 60,000 32×32 color images in 10 categories, with 6,000 images per category; 50,000 images are used as the training set and 10,000 as the validation set. This time we only predict two of the categories: cats and dogs.

PaddlePaddle has several commonly used datasets built in, and CIFAR10 can be imported directly:

from paddle.vision.datasets import Cifar10
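For reference, here is a minimal sketch of loading the dataset through the built-in class (assuming the Cifar10 class accepts mode='train'/'test' and returns (image, label) pairs); this tutorial instead unpacks the archive manually below so that the cat and dog classes can be filtered out:

# Sketch: load CIFAR-10 via the built-in dataset class (not used in the rest of this tutorial)
import numpy as np
from paddle.vision.datasets import Cifar10

cifar10_train = Cifar10(mode='train')   # downloads the archive on first use
cifar10_test = Cifar10(mode='test')
print(len(cifar10_train), len(cifar10_test))   # 50000 10000
image, label = cifar10_train[0]
print(np.array(image).shape, label)            # e.g. (32, 32, 3) and an integer label in 0-9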

1.1 Decompress the original data set

# Function to decompress the original dataset
def untar_data(src_path, target_path):
    """Extract the tar package from src_path to target_path."""
    if not os.path.isdir(target_path + "cifar-10-batches-py"):
        tar = tarfile.open(src_path)
        tar.extractall(path=target_path)
        tar.close()
        print('Data set decompression completed')
    else:
        print('File already exists')
# Parameter initialization
src_path = train_parameters['src_path']
target_path = train_parameters['target_path']

# Extract the raw data to the specified path
untar_data(src_path, target_path)
File already exists

1.2 Construct the dataset and DataLoader

Define train_dataset and eval_dataset

A custom dataset handles both the training set and the test set

In the legacy 1.x API, paddle.reader.shuffle() caches and shuffles BUF_SIZE data items at a time and paddle.batch() groups every BATCH_SIZE items into one batch; in Paddle 2.x the same is done through the shuffle and batch_size arguments of paddle.io.DataLoader, as used below.

def unpickle(file):
    # data: a 10000x3072 numpy array of uint8s.
    # The first 1024 entries contain the red channel values, the next 1024 the green,
    # and the final 1024 the blue. The image is stored in row-major order,
    # so that the first 32 entries of the array are the red channel values of the first row of the image.

    # labels: a list of 10000 numbers in the range 0-9.
    # The number at index i indicates the label of the ith image in the array data.
    fo = open(file, 'rb')
    data_dict = pickle.load(fo, encoding='bytes')
    train_labels = data_dict[b'labels']
    train_array = data_dict[b'data']
    train_array = train_array.tolist()
    fo.close()

    # Keep only cats (label 3 -> 0) and dogs (label 5 -> 1); drop all other classes
    data_len = len(train_labels)
    for i in range(data_len - 1, -1, -1):
        if train_labels[i] == 3:
            train_labels[i] = 0
        elif train_labels[i] == 5:
            train_labels[i] = 1
        else:
            train_labels.pop(i)
            train_array.pop(i)
    train_array = np.array(train_array)
    return train_labels, train_array
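As a side note, the row-major layout described in the comments above means each 3072-value row can be turned back into a viewable image by reshaping to channels-first and then moving the channel axis last. A minimal sketch, using the same extraction path as above:

# Sketch: decode one raw CIFAR-10 row (3072 uint8 values) into a 32x32 RGB image
labels, rows = unpickle(target_path + "cifar-10-batches-py/data_batch_1")
one_row = rows[0].astype('uint8')        # 1024 R + 1024 G + 1024 B values
img_chw = one_row.reshape(3, 32, 32)     # channels-first layout
img_hwc = img_chw.transpose(1, 2, 0)     # channels-last layout for display
plt.imshow(img_hwc)
plt.title('label: %d (0 = cat, 1 = dog)' % labels[0])
plt.show()
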
import paddle.vision.transforms as T
from paddle.vision.transforms import Compose, Normalize, Resize, Grayscale
from PIL import Image
from paddle.io import Dataset

# Custom dataset
class MyDataset(paddle.io.Dataset):
    """Step 1: inherit from paddle.io.Dataset."""

    def __init__(self, mode='train'):
        """Step 2: implement the constructor and load the data."""
        super(MyDataset, self).__init__()
        # Label data
        self.data = []
        # Image data
        self.img_datas = []

        # Temporary variables
        xs = []
        ys = []
        temp_labels = []
        temp_datas = []

        # Transform definition: resize to 28*28, convert to grayscale, normalize
        mean = [127.5]
        std = [127.5]
        self.transforms = Compose([Resize((28, 28)), Grayscale(), Normalize(mean, std, 'CHW')])

        if mode == 'train':
            # Read the training batches
            for i in range(1, 6):
                temp_label, temp_data = unpickle(target_path + "cifar-10-batches-py/data_batch_%d" % (i,))
                ys.append(temp_label)
                xs.append(temp_data)
            temp_labels = np.concatenate(ys)
            temp_datas = np.concatenate(xs)
        else:
            # Read the test batch
            temp_labels, temp_datas = unpickle(target_path + "cifar-10-batches-py/test_batch")
            temp_labels = np.array(temp_labels)
            temp_datas = np.array(temp_datas)

        # Convert to 3*32*32 image data
        temp_datas = temp_datas.reshape((-1, 3, 32, 32))
        self.data = temp_labels
        self.img_datas = temp_datas

    def __getitem__(self, index):
        """Step 3: implement __getitem__; given an index, return one sample (image, label)."""
        # Take a single image
        data_image = self.img_datas[index]
        # Load the image from the numpy array
        data_image = Image.fromarray(data_image, 'RGB')
        # Apply the transform: resize, grayscale, normalize
        t_data_image = self.transforms(data_image)
        # Take the corresponding label
        label = self.data[index]
        return t_data_image, np.array(label, dtype='int64')

    def __len__(self):
        """Step 4: implement __len__ and return the total number of samples."""
        return len(self.data)
# Test the custom datasets
train_dataset = MyDataset(mode='train')
eval_dataset = MyDataset(mode='val')
print('=============train_dataset =============')
# Print the shape and label of one sample
print('train_dataset.__getitem__(1)[0].shape', train_dataset.__getitem__(1)[0].shape)
print('train_dataset.__getitem__(1)[1]', train_dataset.__getitem__(1)[1])
# Print the length of the dataset
print('train_dataset.__len__()', train_dataset.__len__())
print('=============eval_dataset =============')
# Print the shape and label of one sample
print('eval_dataset.__getitem__(1)[0].shape', eval_dataset.__getitem__(1)[0].shape)
print('eval_dataset.__getitem__(1)[1]', eval_dataset.__getitem__(1)[1])
# Print the length of the dataset
print('eval_dataset.__len__()', eval_dataset.__len__())
=============train_dataset =============
train_dataset.__getitem__(1)[0].shape (1, 28, 28)
train_dataset.__getitem__(1)[1] 0
train_dataset.__len__() 10000
=============eval_dataset =============
eval_dataset.__getitem__(1)[0].shape (1, 28, 28)
eval_dataset.__getitem__(1)[1] 0
eval_dataset.__len__() 2000
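Before wrapping the datasets in DataLoaders, it can help to eyeball one transformed sample. A minimal sketch, assuming (as the shape check above suggests) that __getitem__ returns a (1, 28, 28) array normalized to roughly [-1, 1]:

# Sketch: display one transformed sample from the custom dataset
sample_img, sample_label = train_dataset[0]
img = np.squeeze(sample_img)                   # (28, 28)
plt.imshow(img * 127.5 + 127.5, cmap='gray')   # undo the Normalize for display
plt.title('label: %d (0 = cat, 1 = dog)' % int(sample_label))
plt.show()
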
# DataLoader for the training data
train_loader = paddle.io.DataLoader(train_dataset, 
                                    batch_size=train_parameters['train_batch_size'], 
                                    shuffle=True
                                    )
# DataLoader for the test data
eval_loader = paddle.io.DataLoader(eval_dataset,
                                   batch_size=train_parameters['train_batch_size'], 
                                   shuffle=False
                                   )
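As a quick check that the loaders are wired up correctly, one batch can be pulled and its shapes inspected (a small sketch; the expected shapes follow from the batch size of 100 and the 1x28x28 input configured above):

# Sketch: fetch a single batch from the training DataLoader and verify its shape
for batch_id, (images, labels) in enumerate(train_loader()):
    print(images.shape)   # expected [100, 1, 28, 28]
    print(labels.shape)   # expected [100]
    break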

Step2. Configure the network

(1) Network construction

CNN network model

A convolutional neural network can make better use of the structural information in images. Below we build LeNet, a relatively simple convolutional neural network that PaddlePaddle also ships as a built-in model. LeNet-5 is an early representative of convolutional neural network models, proposed by LeCun in 1998. The model adopts a sequential structure and consists of 7 layers (2 convolutional layers, 2 pooling layers and 3 fully connected layers), with the convolutional and pooling layers arranged alternately.
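Before looking at the code, it is worth tracing how a 1x28x28 input flows through this LeNet variant; the small sketch below uses the standard output-size formula and explains where the 400 input features of the first fully connected layer come from:

# Sketch: feature-map sizes through the LeNet variant defined below
#   Conv2D(1->6, kernel=3, stride=1, padding=1): 28 -> 28
#   MaxPool2D(2, 2):                             28 -> 14
#   Conv2D(6->16, kernel=5, stride=1):           14 -> 10
#   MaxPool2D(2, 2):                             10 -> 5
#   Flatten: 16 * 5 * 5 = 400 features
def out_size(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

s = out_size(28, 3, stride=1, padding=1)   # 28
s = out_size(s, 2, stride=2)               # 14
s = out_size(s, 5, stride=1)               # 10
s = out_size(s, 2, stride=2)               # 5
print(16 * s * s)                          # 400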

import paddle
import paddle.nn as nn

# Define LeNet
class LeNet(nn.Layer):

    def __init__(self, num_classes=10):
        # Number of classes, default 10
        super(LeNet, self).__init__()
        self.num_classes = num_classes
        self.features = nn.Sequential(
            nn.Conv2D(1, 6, 3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2D(2, 2),
            nn.Conv2D(6, 16, 5, stride=1, padding=0),
            nn.ReLU(),
            nn.MaxPool2D(2, 2))

        if num_classes > 0:
            self.fc = nn.Sequential(
                nn.Linear(400, 120),
                nn.Linear(120, 84),
                nn.Linear(84, num_classes))

    def forward(self, inputs):
        x = self.features(inputs)

        if self.num_classes > 0:
            x = paddle.flatten(x, 1)
            x = self.fc(x)
        return x

# Define the network
network = LeNet(num_classes=train_parameters['class_dim'])
# Assemble the model
model = paddle.Model(network)
# Print the network structure
model.summary((1, 1, 28, 28))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #
===========================================================================
   Conv2D-1      [[1, 1, 28, 28]]      [1, 6, 28, 28]           60
    ReLU-1       [[1, 6, 28, 28]]      [1, 6, 28, 28]            0
  MaxPool2D-1    [[1, 6, 28, 28]]      [1, 6, 14, 14]            0
   Conv2D-2      [[1, 6, 14, 14]]     [1, 16, 10, 10]          2,416
    ReLU-2      [[1, 16, 10, 10]]     [1, 16, 10, 10]            0
  MaxPool2D-2   [[1, 16, 10, 10]]      [1, 16, 5, 5]             0
   Linear-1        [[1, 400]]            [1, 120]             48,120
   Linear-2        [[1, 120]]             [1, 84]             10,164
   Linear-3         [[1, 84]]              [1, 2]               170
===========================================================================
Total params: 60,930
Trainable params: 60,930
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.11
Params size (MB): 0.23
Estimated Total Size (MB): 0.35
---------------------------------------------------------------------------

{'total_params': 60930, 'trainable_params': 60930}

Step3. Model training and Step4. Model evaluation

Use the paddle.optimizer.Adam optimizer for optimization

Use paddle.nn.CrossEntropyLoss to calculate the loss value
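CrossEntropyLoss works on the raw logits produced by the network: it applies softmax internally and expects integer class labels, which is why the LeNet defined above ends in a plain Linear layer. A minimal sketch:

# Sketch: CrossEntropyLoss takes logits of shape [N, num_classes] and int64 labels of shape [N, 1]
loss_fn = paddle.nn.CrossEntropyLoss()
logits = paddle.to_tensor([[2.0, 0.5]])          # one sample, two classes
label = paddle.to_tensor([[0]], dtype='int64')   # ground-truth class 0
print(loss_fn(logits, label).numpy())            # small loss, since class 0 already scores higher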

# Plot the training curves
def draw_process(title, color, iters, data, label):
    plt.title(title, fontsize=24)
    plt.xlabel("iter", fontsize=20)
    plt.ylabel(label, fontsize=20)
    plt.plot(iters, data, color=color, label=label)
    plt.legend()
    plt.grid()
    plt.show()
# Model training

# Initialize the LeNet model
model = LeNet(num_classes=train_parameters['class_dim'])
# Switch to training mode
model.train()
# Cross-entropy loss
cross_entropy = paddle.nn.CrossEntropyLoss()
# Optimizer
optimizer = paddle.optimizer.Adam(learning_rate=train_parameters['learning_strategy']['lr'],
                                  parameters=model.parameters())

# Variables for plotting the loss and accuracy curves
steps = 0
Iters, total_loss, total_acc = [], [], []

# Start training
for epo in range(train_parameters['num_epochs']):
    for _, data in enumerate(train_loader()):
        steps += 1
        x_data = data[0]
        x_data = paddle.to_tensor(x_data)
        y_data = paddle.to_tensor(data[1])
        y_data = paddle.unsqueeze(y_data, 1)

        predicts = model(x_data)
        # Compute the cross-entropy loss
        loss = cross_entropy(predicts, y_data)
        # Compute the accuracy
        acc = paddle.metric.accuracy(predicts, y_data)
        # Backpropagation
        loss.backward()
        optimizer.step()
        # Clear the gradients
        optimizer.clear_grad()
        # Print the intermediate results every skip_steps batches
        if steps % train_parameters["skip_steps"] == 0:
            Iters.append(steps)
            total_loss.append(loss.numpy()[0])
            total_acc.append(acc.numpy()[0])
            print('epo: {}, step: {}, loss is: {}, acc is: {}'.format(epo, steps, loss.numpy(), acc.numpy()))
        # Save the model parameters every save_steps batches
        if steps % train_parameters["save_steps"] == 0:
            save_path = train_parameters["checkpoints"] + "/" + "save_dir_" + str(steps) + '.pdparams'
            print('save model to: ' + save_path)
            paddle.save(model.state_dict(), save_path)

paddle.save(model.state_dict(), train_parameters["checkpoints"] + "/" + "save_dir_final.pdparams")
draw_process("training loss", "red", Iters, total_loss, "training loss")
draw_process("training acc", "green", Iters, total_acc, "training acc")
epo: 0, step: 5, loss is: [1.1097064], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_5.pdparams
epo: 0, step: 10, loss is: [0.8729222], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_10.pdparams
epo: 0, step: 15, loss is: [0.77851003], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_15.pdparams
epo: 0, step: 20, loss is: [0.68859595], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_20.pdparams
epo: 0, step: 25, loss is: [0.71485907], acc is: [0.48]
save model to: /home/aistudio/checkpoints/save_dir_25.pdparams
epo: 0, step: 30, loss is: [0.69014424], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_30.pdparams
epo: 0, step: 35, loss is: [0.7331408], acc is: [0.42]
save model to: /home/aistudio/checkpoints/save_dir_35.pdparams
epo: 0, step: 40, loss is: [0.6923569], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_40.pdparams
epo: 0, step: 45, loss is: [0.70091367], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_45.pdparams
epo: 0, step: 50, loss is: [0.69078857], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_50.pdparams
epo: 0, step: 55, loss is: [0.69088614], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_55.pdparams
epo: 0, step: 60, loss is: [0.7027031], acc is: [0.46]
save model to: /home/aistudio/checkpoints/save_dir_60.pdparams
epo: 0, step: 65, loss is: [0.6824346], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_65.pdparams
epo: 0, step: 70, loss is: [0.6795273], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_70.pdparams
epo: 0, step: 75, loss is: [0.6809163], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_75.pdparams
epo: 0, step: 80, loss is: [0.7107715], acc is: [0.43]
save model to: /home/aistudio/checkpoints/save_dir_80.pdparams
epo: 0, step: 85, loss is: [0.70901597], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_85.pdparams
epo: 0, step: 90, loss is: [0.7054188], acc is: [0.44]
save model to: /home/aistudio/checkpoints/save_dir_90.pdparams
epo: 0, step: 95, loss is: [0.6982265], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_95.pdparams
epo: 0, step: 100, loss is: [0.6998703], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_100.pdparams
epo: 1, step: 105, loss is: [0.70260566], acc is: [0.44]
save model to: /home/aistudio/checkpoints/save_dir_105.pdparams
epo: 1, step: 110, loss is: [0.67828727], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_110.pdparams
epo: 1, step: 115, loss is: [0.68608195], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_115.pdparams
epo: 1, step: 120, loss is: [0.697596], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_120.pdparams
epo: 1, step: 125, loss is: [0.7016902], acc is: [0.5]
save model to: /home/aistudio/checkpoints/save_dir_125.pdparams
epo: 1, step: 130, loss is: [0.6790494], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_130.pdparams
epo: 1, step: 135, loss is: [0.68013227], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_135.pdparams
epo: 1, step: 140, loss is: [0.70905924], acc is: [0.45]
save model to: /home/aistudio/checkpoints/save_dir_140.pdparams
epo: 1, step: 145, loss is: [0.6931264], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_145.pdparams
epo: 1, step: 150, loss is: [0.6971727], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_150.pdparams
epo: 1, step: 155, loss is: [0.67896414], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_155.pdparams
epo: 1, step: 160, loss is: [0.67097855], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_160.pdparams
epo: 1, step: 165, loss is: [0.69235575], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_165.pdparams
epo: 1, step: 170, loss is: [0.6894104], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_170.pdparams
epo: 1, step: 175, loss is: [0.70366347], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_175.pdparams
epo: 1, step: 180, loss is: [0.69162464], acc is: [0.48]
save model to: /home/aistudio/checkpoints/save_dir_180.pdparams
epo: 1, step: 185, loss is: [0.67835146], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_185.pdparams
epo: 1, step: 190, loss is: [0.6919897], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_190.pdparams
epo: 1, step: 195, loss is: [0.69632596], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_195.pdparams
epo: 1, step: 200, loss is: [0.70401454], acc is: [0.41]
save model to: /home/aistudio/checkpoints/save_dir_200.pdparams
epo: 2, step: 205, loss is: [0.72231257], acc is: [0.47]
save model to: /home/aistudio/checkpoints/save_dir_205.pdparams
epo: 2, step: 210, loss is: [0.6722144], acc is: [0.65]
save model to: /home/aistudio/checkpoints/save_dir_210.pdparams
epo: 2, step: 215, loss is: [0.7005479], acc is: [0.43]
save model to: /home/aistudio/checkpoints/save_dir_215.pdparams
epo: 2, step: 220, loss is: [0.68955404], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_220.pdparams
epo: 2, step: 225, loss is: [0.68503153], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_225.pdparams
epo: 2, step: 230, loss is: [0.6742158], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_230.pdparams
epo: 2, step: 235, loss is: [0.68807405], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_235.pdparams
epo: 2, step: 240, loss is: [0.7038729], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_240.pdparams
epo: 2, step: 245, loss is: [0.69256955], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_245.pdparams
epo: 2, step: 250, loss is: [0.6998977], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_250.pdparams
epo: 2, step: 255, loss is: [0.6635308], acc is: [0.64]
save model to: /home/aistudio/checkpoints/save_dir_255.pdparams
epo: 2, step: 260, loss is: [0.6831071], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_260.pdparams
epo: 2, step: 265, loss is: [0.6725425], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_265.pdparams
epo: 2, step: 270, loss is: [0.6881926], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_270.pdparams
epo: 2, step: 275, loss is: [0.69550765], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_275.pdparams
epo: 2, step: 280, loss is: [0.68708885], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_280.pdparams
epo: 2, step: 285, loss is: [0.68473077], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_285.pdparams
epo: 2, step: 290, loss is: [0.6903842], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_290.pdparams
epo: 2, step: 295, loss is: [0.7028897], acc is: [0.48]
save model to: /home/aistudio/checkpoints/save_dir_295.pdparams
epo: 2, step: 300, loss is: [0.6931243], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_300.pdparams
epo: 3, step: 305, loss is: [0.68098104], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_305.pdparams
epo: 3, step: 310, loss is: [0.6757507], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_310.pdparams
epo: 3, step: 315, loss is: [0.7027341], acc is: [0.44]
save model to: /home/aistudio/checkpoints/save_dir_315.pdparams
epo: 3, step: 320, loss is: [0.7009732], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_320.pdparams
epo: 3, step: 325, loss is: [0.7078163], acc is: [0.47]
save model to: /home/aistudio/checkpoints/save_dir_325.pdparams
epo: 3, step: 330, loss is: [0.6958405], acc is: [0.44]
save model to: /home/aistudio/checkpoints/save_dir_330.pdparams
epo: 3, step: 335, loss is: [0.69992703], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_335.pdparams
epo: 3, step: 340, loss is: [0.69363695], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_340.pdparams
epo: 3, step: 345, loss is: [0.6923307], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_345.pdparams
epo: 3, step: 350, loss is: [0.6739081], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_350.pdparams
epo: 3, step: 355, loss is: [0.68306243], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_355.pdparams
epo: 3, step: 360, loss is: [0.66385293], acc is: [0.64]
save model to: /home/aistudio/checkpoints/save_dir_360.pdparams
epo: 3, step: 365, loss is: [0.6816753], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_365.pdparams
epo: 3, step: 370, loss is: [0.6921282], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_370.pdparams
epo: 3, step: 375, loss is: [0.6865966], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_375.pdparams
epo: 3, step: 380, loss is: [0.69338584], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_380.pdparams
epo: 3, step: 385, loss is: [0.6800542], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_385.pdparams
epo: 3, step: 390, loss is: [0.6839569], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_390.pdparams
epo: 3, step: 395, loss is: [0.6774286], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_395.pdparams
epo: 3, step: 400, loss is: [0.7004008], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_400.pdparams
epo: 4, step: 405, loss is: [0.7059412], acc is: [0.44]
save model to: /home/aistudio/checkpoints/save_dir_405.pdparams
epo: 4, step: 410, loss is: [0.69455093], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_410.pdparams
epo: 4, step: 415, loss is: [0.6933525], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_415.pdparams
epo: 4, step: 420, loss is: [0.7079694], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_420.pdparams
epo: 4, step: 425, loss is: [0.6937676], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_425.pdparams
epo: 4, step: 430, loss is: [0.68947273], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_430.pdparams
epo: 4, step: 435, loss is: [0.6781409], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_435.pdparams
epo: 4, step: 440, loss is: [0.6878584], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_440.pdparams
epo: 4, step: 445, loss is: [0.66629666], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_445.pdparams
epo: 4, step: 450, loss is: [0.66666824], acc is: [0.62]
save model to: /home/aistudio/checkpoints/save_dir_450.pdparams
epo: 4, step: 455, loss is: [0.67528206], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_455.pdparams
epo: 4, step: 460, loss is: [0.7015488], acc is: [0.47]
save model to: /home/aistudio/checkpoints/save_dir_460.pdparams
epo: 4, step: 465, loss is: [0.6915476], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_465.pdparams
epo: 4, step: 470, loss is: [0.6868398], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_470.pdparams
epo: 4, step: 475, loss is: [0.69640535], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_475.pdparams
epo: 4, step: 480, loss is: [0.6844581], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_480.pdparams
epo: 4, step: 485, loss is: [0.678205], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_485.pdparams
epo: 4, step: 490, loss is: [0.6782288], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_490.pdparams
epo: 4, step: 495, loss is: [0.6809789], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_495.pdparams
epo: 4, step: 500, loss is: [0.6791268], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_500.pdparams
epo: 5, step: 505, loss is: [0.66857773], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_505.pdparams
epo: 5, step: 510, loss is: [0.68727225], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_510.pdparams
epo: 5, step: 515, loss is: [0.68932843], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_515.pdparams
epo: 5, step: 520, loss is: [0.68978363], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_520.pdparams
epo: 5, step: 525, loss is: [0.69064134], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_525.pdparams
epo: 5, step: 530, loss is: [0.682237], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_530.pdparams
epo: 5, step: 535, loss is: [0.68976945], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_535.pdparams
epo: 5, step: 540, loss is: [0.67902535], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_540.pdparams
epo: 5, step: 545, loss is: [0.67134506], acc is: [0.65]
save model to: /home/aistudio/checkpoints/save_dir_545.pdparams
epo: 5, step: 550, loss is: [0.6688429], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_550.pdparams
epo: 5, step: 555, loss is: [0.7254223], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_555.pdparams
epo: 5, step: 560, loss is: [0.69241136], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_560.pdparams
epo: 5, step: 565, loss is: [0.6801878], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_565.pdparams
epo: 5, step: 570, loss is: [0.6906636], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_570.pdparams
epo: 5, step: 575, loss is: [0.70213795], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_575.pdparams
epo: 5, step: 580, loss is: [0.69319504], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_580.pdparams
epo: 5, step: 585, loss is: [0.7011637], acc is: [0.48]
save model to: /home/aistudio/checkpoints/save_dir_585.pdparams
epo: 5, step: 590, loss is: [0.6848818], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_590.pdparams
epo: 5, step: 595, loss is: [0.67795885], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_595.pdparams
epo: 5, step: 600, loss is: [0.6833943], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_600.pdparams
epo: 6, step: 605, loss is: [0.6795752], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_605.pdparams
epo: 6, step: 610, loss is: [0.6964473], acc is: [0.5]
save model to: /home/aistudio/checkpoints/save_dir_610.pdparams
epo: 6, step: 615, loss is: [0.7281563], acc is: [0.43]
save model to: /home/aistudio/checkpoints/save_dir_615.pdparams
epo: 6, step: 620, loss is: [0.675564], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_620.pdparams
epo: 6, step: 625, loss is: [0.6895311], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_625.pdparams
epo: 6, step: 630, loss is: [0.67448664], acc is: [0.64]
save model to: /home/aistudio/checkpoints/save_dir_630.pdparams
epo: 6, step: 635, loss is: [0.6737503], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_635.pdparams
epo: 6, step: 640, loss is: [0.70881164], acc is: [0.46]
save model to: /home/aistudio/checkpoints/save_dir_640.pdparams
epo: 6, step: 645, loss is: [0.68261325], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_645.pdparams
epo: 6, step: 650, loss is: [0.6765045], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_650.pdparams
epo: 6, step: 655, loss is: [0.6759614], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_655.pdparams
epo: 6, step: 660, loss is: [0.6793112], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_660.pdparams
epo: 6, step: 665, loss is: [0.6845392], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_665.pdparams
epo: 6, step: 670, loss is: [0.6814833], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_670.pdparams
epo: 6, step: 675, loss is: [0.68463284], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_675.pdparams
epo: 6, step: 680, loss is: [0.6939957], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_680.pdparams
epo: 6, step: 685, loss is: [0.6949662], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_685.pdparams
epo: 6, step: 690, loss is: [0.6850964], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_690.pdparams
epo: 6, step: 695, loss is: [0.6783737], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_695.pdparams
epo: 6, step: 700, loss is: [0.6847418], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_700.pdparams
epo: 7, step: 705, loss is: [0.6696134], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_705.pdparams
epo: 7, step: 710, loss is: [0.699369], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_710.pdparams
epo: 7, step: 715, loss is: [0.6834408], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_715.pdparams
epo: 7, step: 720, loss is: [0.6834759], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_720.pdparams
epo: 7, step: 725, loss is: [0.68610823], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_725.pdparams
epo: 7, step: 730, loss is: [0.667547], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_730.pdparams
epo: 7, step: 735, loss is: [0.70002645], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_735.pdparams
epo: 7, step: 740, loss is: [0.6882743], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_740.pdparams
epo: 7, step: 745, loss is: [0.6829937], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_745.pdparams
epo: 7, step: 750, loss is: [0.6799063], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_750.pdparams
epo: 7, step: 755, loss is: [0.6759838], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_755.pdparams
epo: 7, step: 760, loss is: [0.7013712], acc is: [0.47]
save model to: /home/aistudio/checkpoints/save_dir_760.pdparams
epo: 7, step: 765, loss is: [0.67678285], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_765.pdparams
epo: 7, step: 770, loss is: [0.6903254], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_770.pdparams
epo: 7, step: 775, loss is: [0.71212935], acc is: [0.45]
save model to: /home/aistudio/checkpoints/save_dir_775.pdparams
epo: 7, step: 780, loss is: [0.66622734], acc is: [0.62]
save model to: /home/aistudio/checkpoints/save_dir_780.pdparams
epo: 7, step: 785, loss is: [0.6900478], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_785.pdparams
epo: 7, step: 790, loss is: [0.6736644], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_790.pdparams
epo: 7, step: 795, loss is: [0.70363], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_795.pdparams
epo: 7, step: 800, loss is: [0.6934418], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_800.pdparams
epo: 8, step: 805, loss is: [0.6693335], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_805.pdparams
epo: 8, step: 810, loss is: [0.694484], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_810.pdparams
epo: 8, step: 815, loss is: [0.7255772], acc is: [0.37]
save model to: /home/aistudio/checkpoints/save_dir_815.pdparams
epo: 8, step: 820, loss is: [0.6915439], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_820.pdparams
epo: 8, step: 825, loss is: [0.6881697], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_825.pdparams
epo: 8, step: 830, loss is: [0.68885416], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_830.pdparams
epo: 8, step: 835, loss is: [0.6819633], acc is: [0.52]
save model to: /home/aistudio/checkpoints/save_dir_835.pdparams
epo: 8, step: 840, loss is: [0.68416965], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_840.pdparams
epo: 8, step: 845, loss is: [0.6801962], acc is: [0.5]
save model to: /home/aistudio/checkpoints/save_dir_845.pdparams
epo: 8, step: 850, loss is: [0.67312473], acc is: [0.62]
save model to: /home/aistudio/checkpoints/save_dir_850.pdparams
epo: 8, step: 855, loss is: [0.6651606], acc is: [0.64]
save model to: /home/aistudio/checkpoints/save_dir_855.pdparams
epo: 8, step: 860, loss is: [0.66604716], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_860.pdparams
epo: 8, step: 865, loss is: [0.6775603], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_865.pdparams
epo: 8, step: 870, loss is: [0.6985699], acc is: [0.5]
save model to: /home/aistudio/checkpoints/save_dir_870.pdparams
epo: 8, step: 875, loss is: [0.6906618], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_875.pdparams
epo: 8, step: 880, loss is: [0.6838692], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_880.pdparams
epo: 8, step: 885, loss is: [0.6818925], acc is: [0.63]
save model to: /home/aistudio/checkpoints/save_dir_885.pdparams
epo: 8, step: 890, loss is: [0.7003258], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_890.pdparams
epo: 8, step: 895, loss is: [0.7080064], acc is: [0.5]
save model to: /home/aistudio/checkpoints/save_dir_895.pdparams
epo: 8, step: 900, loss is: [0.67341954], acc is: [0.61]
save model to: /home/aistudio/checkpoints/save_dir_900.pdparams
epo: 9, step: 905, loss is: [0.68930835], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_905.pdparams
epo: 9, step: 910, loss is: [0.7058289], acc is: [0.46]
save model to: /home/aistudio/checkpoints/save_dir_910.pdparams
epo: 9, step: 915, loss is: [0.67915636], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_915.pdparams
epo: 9, step: 920, loss is: [0.687831], acc is: [0.58]
save model to: /home/aistudio/checkpoints/save_dir_920.pdparams
epo: 9, step: 925, loss is: [0.6957987], acc is: [0.46]
save model to: /home/aistudio/checkpoints/save_dir_925.pdparams
epo: 9, step: 930, loss is: [0.6923476], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_930.pdparams
epo: 9, step: 935, loss is: [0.70298016], acc is: [0.47]
save model to: /home/aistudio/checkpoints/save_dir_935.pdparams
epo: 9, step: 940, loss is: [0.69297534], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_940.pdparams
epo: 9, step: 945, loss is: [0.6787053], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_945.pdparams
epo: 9, step: 950, loss is: [0.6894692], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_950.pdparams
epo: 9, step: 955, loss is: [0.70166737], acc is: [0.51]
save model to: /home/aistudio/checkpoints/save_dir_955.pdparams
epo: 9, step: 960, loss is: [0.69754714], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_960.pdparams
epo: 9, step: 965, loss is: [0.6867398], acc is: [0.54]
save model to: /home/aistudio/checkpoints/save_dir_965.pdparams
epo: 9, step: 970, loss is: [0.6726653], acc is: [0.59]
save model to: /home/aistudio/checkpoints/save_dir_970.pdparams
epo: 9, step: 975, loss is: [0.6738178], acc is: [0.56]
save model to: /home/aistudio/checkpoints/save_dir_975.pdparams
epo: 9, step: 980, loss is: [0.6699579], acc is: [0.55]
save model to: /home/aistudio/checkpoints/save_dir_980.pdparams
epo: 9, step: 985, loss is: [0.6805726], acc is: [0.57]
save model to: /home/aistudio/checkpoints/save_dir_985.pdparams
epo: 9, step: 990, loss is: [0.68973434], acc is: [0.53]
save model to: /home/aistudio/checkpoints/save_dir_990.pdparams
epo: 9, step: 995, loss is: [0.66840816], acc is: [0.6]
save model to: /home/aistudio/checkpoints/save_dir_995.pdparams
epo: 9, step: 1000, loss is: [0.7072126], acc is: [0.49]
save model to: /home/aistudio/checkpoints/save_dir_1000.pdparams

Model validation

After training, the model needs to be validated: load the test dataset, run the trained model on it, and compute the accuracy.

# Model evaluation
model_state_dict = paddle.load(train_parameters["checkpoints"] + "/" + "save_dir_final.pdparams")
model_eval = LeNet(num_classes=train_parameters['class_dim'])
model_eval.set_state_dict(model_state_dict)
model_eval.eval()
accs = []

for _, data in enumerate(eval_loader()):
    x_data = data[0]
    y_data = paddle.to_tensor(data[1])
    y_data = paddle.unsqueeze(y_data, 1)
    predicts = model_eval(x_data)
    # Compute the accuracy
    acc = paddle.metric.accuracy(predicts, y_data)
    accs.append(acc.numpy()[0])
print('The accuracy of the model on the validation set is:', np.mean(accs))
The accuracy of the model on the validation set is: 0.5365

Step5. Model prediction

# Image preprocessing
def load_image(file):
    # Preprocess the image to be predicted
    # Open the image
    im = Image.open(file)
    # Resize the image to the same size as the training data, 28*28, using ANTIALIAS
    im = im.resize((28, 28), Image.ANTIALIAS)
    # Convert to a single-channel image
    im = im.convert('1')
    # Create an image matrix of type float32
    im = np.array(im).astype(np.float32)
    # Scale the pixel values from [0-255] to [0-1]
    im = im / 255.0
    # Keep the image dimensions consistent with the network input
    im = np.expand_dims(im, axis=0)
    print('im_shape dimensions: ', im.shape)
    return im
# Model prediction
# Load the trained model
model_state_dict = paddle.load(train_parameters["checkpoints"] + "/" + "save_dir_final.pdparams")
model_eval = LeNet(num_classes=train_parameters['class_dim'])
model_eval.set_state_dict(model_state_dict)
# Switch to evaluation mode
model_eval.eval()

# Show the image to be predicted
infer_path = '/home/aistudio/data/data7940/dog.png'
img = Image.open(infer_path)
plt.imshow(img)          # Draw the image
plt.show()               # Display the image

# Preprocess the image to be predicted
infer_img = load_image(infer_path)
infer_img = infer_img.reshape(1, 28, 28)

# Define the label list
label_list = ["cat", "dog"]

data = infer_img
dy_x_data = np.array(data).astype('float32')
dy_x_data = dy_x_data[np.newaxis, :, :, :]
img = paddle.to_tensor(dy_x_data)
out = model_eval(img)
lab = np.argmax(out.numpy())  # argmax(): returns the index of the largest value
print(label_list[lab])

im_shape dimensions:  (1, 28, 28)
dog
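As a small extension, the raw logits can be passed through a softmax to obtain a rough confidence score alongside the predicted label (a sketch; out and lab are the variables from the prediction code above):

# Sketch: convert the logits into probabilities for a rough confidence score
import paddle.nn.functional as F

probs = F.softmax(out, axis=1)                        # shape [1, 2]
print(label_list[lab], float(probs.numpy()[0][lab]))  # predicted class and its probability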