Cats vs. Dogs

Preface

This was a big assignment; it recently took me two or three days to train the models and set up a working service.

The assignment is "cats vs. dogs" (classification of the cats-and-dogs dataset), implemented in both TensorFlow and PyTorch. This was a Kaggle competition a few years ago. The original dataset is more than 800 MB, but in order to save training time, I found a stripped-down version online with 3000 pictures in total. How to download it is described in the code below.

As for the environment, the PyTorch version is fairly old because of my GPU's limited compute capability:

Python 3.8

TF == 2.2

PyTorch == 1.2

1. TensorFlow version

import tensorflow as tf
import os
import random
from tensorflow.keras import models,layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Conv2D,Flatten,Dropout,MaxPool2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

plt.rcParams['font.sans-serif'] = ['simhei']
plt.rcParams['axes.unicode_minus'] = False

1.1 Obtaining data sets

The download code needs to be uncommented on the first run.

# Data set download link
#dataset_url = "https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"

# Start the download and extract it to the specified folder
#dataset_path = tf.keras.utils.get_file("cats_and_dogs_filtered.zip", origin=dataset_url, cache_subdir="/home/a/desktop/python/cat/dogfight", extract=True)

dataset_dir = os.path.join(os.path.dirname('/home/a/desktop/python/ai final assignment - cats and dogs/'), "cats_and_dogs_filtered")

On the first run, uncomment the commented-out lines above to download the dataset.

1.2 Loading the split dataset and building data generators

# Build per-label paths for the downloaded data
train_cats = os.path.join(dataset_dir, "train", "cats")
train_dogs = os.path.join(dataset_dir, "train", "dogs")

test_cats = os.path.join(dataset_dir, "validation", "cats")
test_dogs = os.path.join(dataset_dir, "validation", "dogs")

train_dir = os.path.join(dataset_dir,"train")
test_dir = os.path.join(dataset_dir,"validation")


# Check data size
train_dogs_num = len(os.listdir(train_dogs))
train_cats_num = len(os.listdir(train_cats))

test_dogs_num = len(os.listdir(test_dogs))
test_cats_num = len(os.listdir(test_cats))

train_all = train_cats_num+train_dogs_num
test_all = test_cats_num+test_dogs_num


print(train_all,test_all)

There are 2000 training images and 1000 test images, a 2:1 train/test ratio. That is actually not a great split (7:3 is more common), but it is close enough.

Before constructing a data generator, or doing any other data manipulation, we should look at the data and understand it. Browsing a few images shows that they are not all the same size, so we need to pick one size to which every image is resized (as large as training memory allows). The generator handles the following steps:

  • Set the batch_size
  • Read the images under each folder
  • Normalize the RGB values to 0~1 to reduce computation and data size
  • Resize the images and shuffle the data
  • Set a seed so the results are reproducible
  • Set the classification mode (binary)
batch_size=64
height=224
width=224

train_generator=ImageDataGenerator(
    rescale=1./255.
).flow_from_directory(
    batch_size=batch_size,
    directory=train_dir,
    shuffle=True,
    seed=0,
    target_size=(height,width),
    class_mode="binary"
)


test_generator=ImageDataGenerator(
    rescale=1./255.
).flow_from_directory(
    batch_size=batch_size,
    directory=test_dir,
    shuffle=False,
    seed=0,
    target_size=(height,width),
    class_mode="binary"
)

Then use the constructed generators to visualize some random images.

sample_training_images, labels = next(train_generator)
sample_testing_images,test_labels=next(test_generator)

d=train_generator.class_indices
names=dict(zip(d.values(),d.keys()))

def plotImages(images_arr, labels):
    fig, axes = plt.subplots(3, 5, figsize=(10, 8))
    axes = axes.flatten()
    for (img, label), ax in zip(zip(images_arr, labels), axes):
        ax.imshow(img)
        ax.set_title("Category: " + str(int(label)) + " " + names[int(label)])
        ax.axes.xaxis.set_visible(False)
        ax.axes.yaxis.set_visible(False)
    plt.tight_layout()
    plt.show()
plotImages(sample_training_images[:15], labels[:15])
plotImages(sample_testing_images[:15], test_labels[:15])

1.3 Model construction and training

At first, full of confidence, I decided to build a more complex model and go straight to a DenseNet.

class ConvBlock(tf.keras.layers.Layer):
    def __init__(self, num_channels):
        super(ConvBlock, self).__init__()
        self.bn = tf.keras.layers.BatchNormalization()
        self.relu = tf.keras.layers.ReLU()
        self.conv = tf.keras.layers.Conv2D(
            filters=num_channels, kernel_size=(3, 3), padding='same')

        self.listLayers = [self.bn, self.relu, self.conv]

    def call(self, x):
        y = x
        for layer in self.listLayers.layers:
            y = layer(y)
        y = tf.keras.layers.concatenate([x, y], axis=-1)
        return y

    
    
# Output channels: num_convs * num_channels + input channels
class DenseBlock(tf.keras.layers.Layer):
    def __init__(self, num_convs, num_channels):
        super(DenseBlock, self).__init__()
        self.listLayers = []
        for _ in range(num_convs):
            self.listLayers.append(ConvBlock(num_channels))

    def call(self, x):
        for layer in self.listLayers.layers:
            x = layer(x)
        return x
    
    
class TransitionBlock(tf.keras.layers.Layer):
    def __init__(self, num_channels, **kwargs):
        super(TransitionBlock, self).__init__(**kwargs)
        self.batch_norm = tf.keras.layers.BatchNormalization()
        self.relu = tf.keras.layers.ReLU()
        self.conv = tf.keras.layers.Conv2D(num_channels, kernel_size=1)
        self.avg_pool = tf.keras.layers.AvgPool2D(pool_size=2, strides=2)

    def call(self, x):
        x = self.batch_norm(x)
        x = self.relu(x)
        x = self.conv(x)
        return self.avg_pool(x)
        
def block_1():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(256, kernel_size=7, strides=2, padding='same'),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding='same')])

def block_2():
    net = block_1()
    # num_channels is the current number of channels
    num_channels, growth_rate = 256, 32
    num_convs_in_dense_blocks = [4, 4, 4, 4]

    for i, num_convs in enumerate(num_convs_in_dense_blocks):
        net.add(DenseBlock(num_convs, growth_rate))
        # The number of output channels of the previous dense block
        num_channels += num_convs * growth_rate
        # Add a transition layer between dense blocks to halve the number of channels
        if i != len(num_convs_in_dense_blocks) - 1:
            num_channels //= 2
            net.add(TransitionBlock(num_channels))
    return net


def DenseNet():
    net = block_2()
    net.add(tf.keras.layers.BatchNormalization())
    net.add(tf.keras.layers.GlobalAvgPool2D())
    net.add(tf.keras.layers.LeakyReLU(0.1))
    net.add(tf.keras.layers.Flatten())
    net.add(tf.keras.layers.Dense(128))
    net.add(tf.keras.layers.LeakyReLU(0.1))
    net.add(tf.keras.layers.Dense(1, activation='sigmoid'))
    return net

Give it a try

num_epochs=5
lr=1e-4

# Instantiate the network
densenet=DenseNet()

optimizer=tf.keras.optimizers.RMSprop(learning_rate=lr)
# The model already ends in a sigmoid, so the loss should not expect raw logits
loss=tf.keras.losses.BinaryCrossentropy(from_logits=False)

densenet.build(input_shape=(None,height,width,3))
densenet.summary()

densenet.compile(optimizer=optimizer,loss=loss,metrics=['accuracy'])
history=densenet.fit(
    train_generator,
    epochs=num_epochs,
    validation_data=test_generator,
)

densenet.evaluate(test_generator)

def training_plot(history, num_epochs):
    x = [i for i in range(num_epochs)]
    plt.figure()
    plt.plot(x, history.history['accuracy'], label='accuracy')
    plt.plot(x, history.history['val_accuracy'], label='val_accuracy')
    plt.plot(x, history.history['loss'], label='loss')
    plt.plot(x, history.history['val_loss'], label='val_loss')
    plt.legend()
    plt.xlabel("Epochs")
    plt.show()
training_plot(history, num_epochs)

As a result, the model was about as accurate as blind guessing, meaning it learned nothing. This comes down to the poor train/test split and the relatively small amount of training data overall. Therefore, to expand the dataset and improve the model's robustness and generalization, we apply data augmentation.

# Add translation, rotation, random zoom and so on to the training data
train_generator=ImageDataGenerator(
    rescale=1./255.,
    rotation_range=40,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
).flow_from_directory(
    batch_size=batch_size,
    directory=train_dir,
    shuffle=True,
    seed=0,
    target_size=(height,width),
    class_mode="binary"
)

Try training again

num_epochs=25

history=densenet.fit(
    train_generator,
    epochs=num_epochs,
    validation_data=test_generator
)

There has been some improvement, but the accuracy is still too low for a binary classification problem, and the training curve is volatile. Why is the accuracy low? Often it is because the model is too complex and the dataset is small, so training cannot converge. In that case we can try a relatively simple model and see whether training proceeds smoothly with it. So I built a basic CNN:

from tensorflow.keras import layers, models, regularizers


model = models.Sequential()

model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(1, activation='sigmoid'))

print(model.summary())

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4), loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(
    train_generator,
    epochs=30,
    validation_data=test_generator,
    callbacks=[EarlyStopping(monitor='val_accuracy', min_delta=0.001, patience=5, verbose=1)])

Regularization, early stopping, dropout… hoping these help the simple model learn better parameters.

Early stopping never triggered here; all 30 epochs ran, and test accuracy rose to 81.6%, which supports our conjecture (continued training might improve it further). Complex models need a lot of data to train, and with our small dataset a simple network can perform better. Alternatively, we can use transfer learning: reuse a model that has already been trained on a large dataset, and sidestep the convergence problems of training from scratch.

backbone=tf.keras.applications.DenseNet201(weights='imagenet',include_top=False,input_shape=(height,width,3))

backbone.trainable=False

transfer_model=Sequential()
transfer_model.add(backbone)
transfer_model.add(tf.keras.layers.GlobalAveragePooling2D())
transfer_model.add(Dense(512,activation='relu'))
transfer_model.add(Dense(1,activation='sigmoid'))

transfer_model.summary()

# Set a dynamic learning rate with exponential decay
init_lr=1e-4
lr=tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=init_lr,
    decay_steps=50,
    decay_rate=0.96,
    staircase=True
)

optimizer=tf.keras.optimizers.Adam(learning_rate=lr)
# Again, the model ends in a sigmoid, so from_logits=False
loss=tf.keras.losses.BinaryCrossentropy(from_logits=False)

transfer_model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
history = transfer_model.fit(
    train_generator,
    epochs=60,
    validation_data=test_generator,
    callbacks=[EarlyStopping(monitor='val_accuracy', min_delta=0.001, patience=5, verbose=1)])

The final result is excellent: 98.9% accuracy on the test set. Let's use this model to predict some random pictures.

plt.figure(figsize=(10, 8))

# Get the original class dictionary and swap its keys and values
d=test_generator.class_indices
label_names=dict(zip(d.values(), d.keys()))


# Shuffle the test set randomly to see the prediction results
pre_generator=ImageDataGenerator(
    rescale=1./255.
).flow_from_directory(
    batch_size=batch_size,
    directory=test_dir,
    shuffle=True,
    seed=0,
    target_size=(height,width),
    class_mode="binary"
)

plt.suptitle("Predicted Results")
for images,labels in pre_generator:
    for i in range(25):
        ax = plt.subplot(5, 5, i+1)
        plt.imshow(images[i])
        img_array = tf.expand_dims(images[i], 0)
        # Use the model to predict the animal in the picture
        predictions = transfer_model.predict(img_array)
        predictions = 1 if predictions >= 0.5 else 0
        plt.title(label_names[predictions])
        plt.axis("off")
    break
plt.show()

It looks all right. Now look at the confusion matrix.

def plot_confusion_matrix(cm, classes, title='Confusion matrix'):

    plt.figure(figsize=(12, 8), dpi=100)
    np.set_printoptions(precision=2)

    # The value of each cell in the confusion matrix
    ind_array = np.arange(len(classes))
    x, y = np.meshgrid(ind_array, ind_array)
    for x_val, y_val in zip(x.flatten(), y.flatten()):
        c = cm[y_val][x_val]
        if c > 0.001:
            plt.text(x_val, y_val, "%0.2f" % (c,), color='red', fontsize=15, va='center', ha='center')

    plt.imshow(cm, interpolation='nearest')
    plt.title(title)
    xlocations = np.array(range(len(classes)))
    plt.xticks(xlocations, classes, rotation=90)
    plt.yticks(xlocations, classes)
    plt.ylabel('True value')
    plt.xlabel('Predicted value')
    plt.show()


test_predict=transfer_model.predict_classes(test_generator,batch_size=batch_size)
test_names=list(test_generator.class_indices)

test_true=test_generator.classes

matrix=confusion_matrix(test_true,test_predict)

plot_confusion_matrix(matrix,test_names)

Naturally, a model this good is worth saving for later.

transfer_model.save('tf_model/transfer_model')
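As a sanity check (my own addition, assuming the save path above), the saved model can be reloaded with tf.keras.models.load_model and evaluated again; this is also exactly how the FastAPI service in section 3 will load it.

# Reload the saved model and confirm it still evaluates correctly
reloaded = tf.keras.models.load_model('tf_model/transfer_model')
reloaded.evaluate(test_generator)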

2. Pytorch version

The overall approach is the same as in TensorFlow; only the APIs of the two frameworks differ.

import os
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import data
from torchvision import transforms,datasets,models
import numpy as np
import time
import random
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix



plt.rcParams['font.sans-serif'] = ['simhei']
plt.rcParams['axes.unicode_minus'] = False

2.1 Loading Data

Since the dataset was already downloaded for the TensorFlow part, we just load it here.

file_path="./cats_and_dogs_filtered/"
train="train"
test="validation"


trans=transforms.Compose([
    transforms.Resize((224, 224)),  # Resize the images to (224, 224)
    transforms.ToTensor(),  # Normalize to 0-1
])


train_data=datasets.ImageFolder(os.path.join(file_path,train),trans)
test_data=datasets.ImageFolder(os.path.join(file_path,test),trans)


random_choice=random.sample([i for i in range(len(train_data))],25)
plt.figure(figsize=(10, 8))
plt.suptitle("Visualization of the training set")
for i,j in enumerate(random_choice):
    ax = plt.subplot(5, 5, i+1)
    plt.imshow(train_data[j][0].numpy().transpose((1, 2, 0)))
    plt.title("Tag: "+str(train_data[j][1])+" "+train_data.classes[train_data[j][1]])
    plt.axis("off")
plt.show()

batch_size=64

train_loader=data.DataLoader(train_data,batch_size=batch_size,shuffle=True)
test_loader=data.DataLoader(test_data,batch_size=batch_size,shuffle=False)

There isn't much image manipulation here, unlike the earlier series of augmentation operations; the main things are resizing and normalization. For comparison, a rough torchvision equivalent of that augmentation is sketched below.
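The sketch approximates the ImageDataGenerator settings from the TensorFlow section; the parameter mapping between the two libraries is rough, and I did not use it in the actual runs:

# Approximate torchvision counterpart of the TF augmentation (rotation, shifts, shear, flip)
train_trans = transforms.Compose([
    transforms.RandomRotation(40),                                         # rotation_range=40
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), shear=0.2),   # shifts and shear
    transforms.RandomHorizontalFlip(),                                     # horizontal_flip=True
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

Passing train_trans instead of trans to datasets.ImageFolder would then apply the augmentation on the fly at every epoch.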

2.2 Model construction and training

Given our earlier experience, let's skip the fancy networks and go straight to a basic CNN.

base_model=nn.Sequential(
    nn.Conv2d(3, 48, kernel_size=7, stride=4, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(48, 96, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(4608, 1024), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 2))

# Model parameter initialization
for name,param in base_model.named_parameters():
    if 'weight' in name:
        nn.init.kaiming_normal_(param)
    elif 'bias' in name:
        nn.init.constant_(param,val=0)

The output of each layer is as follows
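The original screenshot of those shapes isn't reproduced here, but they can be printed with a quick sketch of my own that feeds a dummy batch through base_model:

# Pass a dummy batch through the network and print each layer's output shape
X = torch.randn(1, 3, 224, 224)
for layer in base_model:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:', X.shape)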

Start training

epochs=40
lr=1e-4

criterion=nn.CrossEntropyLoss()
optimizer=torch.optim.Adam(base_model.parameters(),lr=lr)

base_model=base_model.cuda()

base_model.train()
for epoch in range(epochs):
    # Reset the running statistics at the start of each epoch
    loss_=0.
    train_acc=0.
    total=0.
    for i,data in enumerate(train_loader,0):
        inputs,train_labels=data
        optimizer.zero_grad()
        outputs=base_model(inputs.cuda())
        _,predicts=torch.max(outputs.data,1)
        train_acc+=(predicts.cuda()==train_labels.cuda().data).sum()
        loss=criterion(outputs,train_labels.cuda())
        loss.backward()
        optimizer.step()

        loss_+=loss.item()
        #print(f"epoch: {epoch},loss: {loss_}")
        total+=train_labels.size(0)

    print(f"epoch: {epoch},loss={loss_/total*batch_size},acc={100*train_acc/total}%")

PyTorch training is a bit more involved than TensorFlow's: after defining the loss function and the optimizer, you run the forward pass to compute the loss and gradients, and then let the optimizer update the model parameters yourself. A compile-and-fit API as simple as tensorflow.keras does exist in PyTorch Lightning, but my current graphics card only supports PyTorch 1.2 while Lightning requires at least 1.3… so, in order to keep using the GPU, I leave things as they are.
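Incidentally, the manual loop above could be wrapped into a Keras-style helper. Here is a minimal sketch (my own addition, reusing the criterion and optimizer defined earlier), which makes the comparison with compile/fit easier to see:

# A minimal fit()-style wrapper around the manual training loop (sketch)
def fit(model, loader, criterion, optimizer, epochs):
    model.train()
    for epoch in range(epochs):
        loss_sum, correct, total = 0., 0, 0
        for inputs, labels in loader:
            inputs, labels = inputs.cuda(), labels.cuda()
            optimizer.zero_grad()
            outputs = model(inputs)            # forward pass
            loss = criterion(outputs, labels)  # compute the loss
            loss.backward()                    # backpropagate the gradients
            optimizer.step()                   # update the parameters
            loss_sum += loss.item() * labels.size(0)
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        print(f"epoch {epoch}: loss={loss_sum/total:.4f}, acc={100*correct/total:.2f}%")

Calling fit(base_model, train_loader, criterion, optimizer, 40) would then stand in for the whole block above.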

See how it looks on the test set

def test(model,test_loader):
    model.eval()
    correct=0
    test_predict=[]
    with torch.no_grad():
        for idx,(t_data,t_target) in enumerate(test_loader):
            t_data,t_target=t_data.cuda(),t_target.cuda()
            pred=model(t_data)
            pred_class=pred.argmax(dim=1)
            test_predict.extend(pred_class.cpu())
            correct+=(pred_class==t_target).sum().item()
    acc=correct/len(test_data)
    print(f"Test set accuracy: {acc*100}%")
    return test_predict

test_predict=test(base_model,test_loader)

y_true=test_loader.dataset.targets

matrix=confusion_matrix(y_true,test_predict)


def plot_confusion_matrix(cm, classes, title='Confusion matrix'):

    plt.figure(figsize=(12, 8), dpi=100)
    np.set_printoptions(precision=2)

    # The value of each cell in the confusion matrix
    ind_array = np.arange(len(classes))
    x, y = np.meshgrid(ind_array, ind_array)
    for x_val, y_val in zip(x.flatten(), y.flatten()):
        c = cm[y_val][x_val]
        if c > 0.001:
            plt.text(x_val, y_val, "%0.2f" % (c,), color='red', fontsize=15, va='center', ha='center')

    plt.imshow(cm, interpolation='nearest')
    plt.title(title)
    xlocations = np.array(range(len(classes)))
    plt.xticks(xlocations, classes, rotation=90)
    plt.yticks(xlocations, classes)
    plt.ylabel('True value')
    plt.xlabel('Predicted value')
    plt.show()
    
    
plot_confusion_matrix(matrix,list(test_loader.dataset.class_to_idx))

The result is not much different from TensorFlow's, so let's try transfer learning again.

transfer_model=models.densenet201(pretrained=True)
for param in transfer_model.parameters():
    param.requires_grad=False

transfer_model.classifier=nn.Sequential(
    nn.Linear(1920, 512),
    nn.LeakyReLU(0.1),
    nn.Linear(512, 128),
    nn.Dropout(0.5),
    nn.Linear(128, 2)
)

transfer_model=transfer_model.cuda()


optimizer=torch.optim.Adam(transfer_model.parameters(),lr=lr)

epochs = 10

transfer_model.train()
for epoch in range(epochs):
    # Reset the running statistics at the start of each epoch
    loss_=0.
    train_acc=0.
    total=0.
    for i,data in enumerate(train_loader):
        inputs,train_labels=data
        optimizer.zero_grad()
        outputs=transfer_model(inputs.cuda())
        _,predicts=torch.max(outputs.data,1)
        train_acc+=torch.sum(predicts.cuda()==train_labels.cuda().data)
        loss=criterion(outputs,train_labels.cuda())
        loss.backward()
        optimizer.step()
        loss_+=loss.item()
        #print(f"epoch: {epoch},loss: {loss_}")
        total+=train_labels.size(0)

    print(f"epoch: {epoch},loss={loss_/total*batch_size},acc={100*train_acc/total}%")

For transfer learning we only need to replace the classifier layer with one that fits our task; the pretrained layers in front are frozen and do not participate in training. A quick check of this follows below.
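To verify that the backbone really is frozen, the trainable parameters can be counted (a small check of my own, not part of the original run):

# Only the new classifier's parameters should require gradients
trainable = sum(p.numel() for p in transfer_model.parameters() if p.requires_grad)
total_params = sum(p.numel() for p in transfer_model.parameters())
print(f"trainable parameters: {trainable} / {total_params}")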

The accuracy on the test set is again 98.6%, similar to the earlier TensorFlow result. Finally, save the model.

torch.save(transfer_model,'./torch_model/transfer_model.pkl')
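To use it later, the model can be loaded back with torch.load; note that since the whole model was pickled (rather than just a state_dict), the relevant class definitions must be importable at load time:

# Reload the pickled model for inference
loaded = torch.load('./torch_model/transfer_model.pkl')
loaded.eval()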

3. Setting up the image classification service

After continuous tuning, optimizing the network, and trying different methods, we finally have a model with excellent performance, so we should put it to good use. I wrote a FastAPI column before that ended with a demo built on sklearn. Today's demo is similar: use the locally saved model to predict the class of uploaded images.

# -*- coding: utf8 -*-
from PIL import Image
from fastapi import FastAPI, File, UploadFile, HTTPException
from fastapi.requests import Request
from fastapi.responses import RedirectResponse
from io import BytesIO
import tensorflow as tf
import uvicorn
import numpy as np
from typing import Optional, List
from starlette.templating import Jinja2Templates

tmp = Jinja2Templates(directory='templates')


class Model:
    model: Optional

    def load_model(self):
        self.model = tf.keras.models.load_model("./tf_model/transfer_model")

    def predict(self, input_image):
        output = self.model.predict_classes(input_image).item()
        mapping = {
            0: 'cat', 1: 'dog'
        }

        return mapping[output]


def read_convert_image(file):
    loaded_image = Image.open(BytesIO(file))
    image_to_convert = np.asarray(loaded_image.resize((224, 224)))[..., :3]
    image_to_convert = np.expand_dims(image_to_convert, 0)
    image_to_convert = image_to_convert / 255.0
    return np.float32(image_to_convert)


describe = """
Access the /predict/image route to try predicting cat and dog images with the trained model
"""

app = FastAPI(description=describe)
mymodel = Model()


@app.get("/predict/image")
def index(request: Request):
    return tmp.TemplateResponse('predict.html', {
        'request': request,
    })


@app.post("/predict/image")
async def image(request: Request, image_to_predict: UploadFile = File(...)):
    if image_to_predict is None or image_to_predict.file is None:
        raise HTTPException(status_code=400, detail="Please provide an image when calling this request")
    extension = image_to_predict.filename.split(".")[-1] in ("jpg", "jpeg", "png")
    if not extension:
        raise HTTPException(status_code=400, detail="Please provide a jpg or png image")
    img = image_to_predict.filename
    image_data = read_convert_image(image_to_predict.file.read())
    prediction = mymodel.predict(image_data)
    return tmp.TemplateResponse('result.html', {
        'request': request,
        "img": img,
        'prediction': prediction
    })


@app.get('/')
async def hello():
    return RedirectResponse("/docs")


@app.on_event("startup")
async def startup():
    mymodel.load_model()


if __name__ == "__main__":
    uvicorn.run("app:app", port=8000)

The model is loaded when the project starts, and a form uploads images for prediction and returns the result. But my front-end skills are really weak: I wanted to render a simple page with plain HTML, yet some features still didn't work well. So a simple page is all there is.

Try not to enable reload, because the model takes a long time to load. Next time I'll try TensorFlow Serving.
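Once the service is running, it can also be tested from Python. A sketch of my own, assuming the app is saved as app.py and started with python app.py, that some test image cat.jpg exists locally, and that the requests package is installed:

# Post an image to the running service; the response body is the rendered result.html
import requests

with open("cat.jpg", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:8000/predict/image",
        files={"image_to_predict": ("cat.jpg", f, "image/jpeg")},
    )
print(resp.status_code)  # 200 on success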

result.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Cats vs. Dogs Prediction</title>
</head>
<body>
    <h1>Uploaded image name: {{ img }}</h1>
    <h1>Predicted result: {{ prediction }}</h1>
    <a href="/predict/image"><strong>Predict again</strong></a>
</body>
</html>

predict.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Cats vs. Dogs Prediction</title>
</head>
<body>
    <h1>Upload an image to predict</h1>
    <form action="/predict/image/" enctype="multipart/form-data" onchange="changepic(this)" method="post">
        <input type="file" id="file" name="image_to_predict" accept="image/*">
        <input type="submit" value="Predict">
    </form>
    <img src="" id="show" width="200">
</body>
<script>
    function changepic() {
        var reads = new FileReader();
        f = document.getElementById('file').files[0];
        reads.readAsDataURL(f);
        reads.onload = function (e) {
            document.getElementById('show').src = this.result;
        };
    }
</script>
</html>

4. The final result

To check it out, I downloaded a few pictures from the internet to test it.