Author: Sakshi Butala | Source: Towards Data Science

In this article, I will show you how to build a letter recognition system using convolutional neural networks (CNNs) and deploy it using Anvil. By the end of this article, you will be able to create the system shown above.

Contents

  • Convolutional neural network

  • CNN implementation

  • Anvil integration

Convolutional neural network

Let’s start by understanding what a convolutional neural network is. A convolutional neural network (CNN) is a type of neural network widely used in image recognition and classification.

A CNN is a regularized version of the multilayer perceptron. Multilayer perceptrons are fully connected networks, meaning each neuron in one layer is connected to every neuron in the next layer.

CNN consists of the following layers:

Convolution layer: A “kernel” of size 3x3 or 5x5 is slid over the image, and at each position the dot product of the original pixel values with the weights defined in the kernel is computed. The resulting matrix is then passed through the “ReLU” activation function, which converts every negative value in the matrix to zero.
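
To make this concrete, here is a minimal NumPy sketch of a single convolution-plus-ReLU step. The pixel values and kernel weights below are made up purely for illustration:

import numpy as np

# A made-up 5x5 grayscale patch and a made-up 3x3 kernel
patch  = np.array([[1, 2, 0, 1, 2],
                   [0, 1, 3, 1, 0],
                   [2, 1, 0, 2, 1],
                   [1, 0, 1, 3, 0],
                   [0, 2, 1, 0, 1]])
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]])

# Slide the kernel over the patch, taking the dot product at every position
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(patch[i:i+3, j:j+3] * kernel)

# ReLU: replace every negative value with zero
out = np.maximum(out, 0)
print(out)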

Pooling layer: A “pooling window” of size 2x2 or 4x4 slides over the matrix and reduces its size, keeping only the most important features of the image.

There are two types of pooling operations (a short sketch follows this list):

  1. Max pooling: the maximum value inside each pooling window is placed into the final matrix.
  2. Average pooling: the average of all the values inside each pooling window is placed into the final matrix.
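
Here is a minimal NumPy sketch of both pooling types applied to a made-up 4x4 feature map with a 2x2 window:

import numpy as np

# Made-up 4x4 feature map, pooled with a 2x2 window
feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 3],
                        [1, 4, 0, 8]])

max_pooled = np.zeros((2, 2))
avg_pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        window = feature_map[2*i:2*i+2, 2*j:2*j+2]
        max_pooled[i, j] = window.max()    # max pooling keeps the largest value
        avg_pooled[i, j] = window.mean()   # average pooling keeps the mean

print(max_pooled)  # [[6. 4.] [7. 9.]]
print(avg_pooled)  # [[3.75 2.25] [3.5  5.  ]]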

Note: A CNN architecture can stack multiple combinations of convolution and pooling layers to improve its performance.

Fully connected layer: The final matrix is flattened into a one-dimensional vector, which is then fed into the neural network. The output layer is a list of probabilities for the different labels (for example, the letters A, B, C) attached to the image. The label with the highest probability is the output of the classifier.
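
As a rough illustration of these last two steps (all numbers below are made up), flattening a matrix and picking the most probable label look like this in NumPy:

import numpy as np

# Made-up 2x2 pooled feature map, flattened into a 1-D vector for the dense layers
final_matrix = np.array([[0.5, 0.1],
                         [0.8, 0.0]])
print(final_matrix.flatten())        # [0.5 0.1 0.8 0. ]

# Made-up output probabilities for three labels; the highest one wins
labels = ['A', 'B', 'C']
probabilities = np.array([0.1, 0.7, 0.2])
print('Predicted label:', labels[np.argmax(probabilities)])   # B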

CNN implementation

Let’s start by importing the required libraries in a Jupyter Notebook:

import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation
import os
import pickle

Then, let’s import two data sets containing images from A to Z to train and test our model. You can download the dataset from the GitHub repository linked below.

Link: github.com/sakshibutal…

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    directory = 'Training',
    target_size = (32, 32),
    batch_size = 32,
    class_mode = 'categorical'
)

test_generator = test_datagen.flow_from_directory(
    directory = 'Testing',
    target_size = (32, 32),
    batch_size = 32,
    class_mode = 'categorical'
)

ImageDataGenerator generates batches of tensor image data; the rescale argument scales the RGB coefficients from the 0-255 range down to values between 0 and 1.

shear_range randomly applies shear transformations.

zoom_range randomly zooms inside the images.

horizontal_flip randomly flips half of the images horizontally.

We then use .flow_from_directory to import the images from the directory one by one and apply the ImageDataGenerator to them.

We then resize the images from their original size to the target size and declare the batch size, i.e. the number of training examples used in one iteration.

Finally, we set class_mode to 'categorical', which means we have multiple classes (A through Z) to predict.
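
If you want to double-check the class-to-index mapping that flow_from_directory produced, you can inspect the generator's class_indices attribute. Assuming the training folders are named a-z, this should report 26 classes:

# Each sub-directory of 'Training' becomes one class, indexed in alphabetical order
print(train_generator.class_indices)
print('Number of classes:', len(train_generator.class_indices))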

Next we build our CNN architecture.

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape = (32, 32, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))


model.add(Conv2D(32, (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))

model.add(Flatten())
model.add(Dense(units = 128, activation = 'relu'))
model.add(Dense(units = 26, activation = 'softmax'))


model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])

model.summary()

We start by creating a Sequential model, which lets us define the CNN architecture layer by layer using the .add function.

We first add a convolution layer that applies 32 filters (kernels) of size 3x3 to the input image and passes the result through the “ReLU” activation function.

We then perform max pooling with a pool size of 2x2.

These layers are then repeated once more to improve the performance of the model.

Finally, we flatten the resulting matrix and pass it through a fully connected layer of 128 nodes, which is then connected to an output layer of 26 nodes, one for each letter of the alphabet. We use softmax activation to convert the scores into a normalized probability distribution, and the node with the highest probability is selected as the output.
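
For intuition, here is a tiny NumPy sketch of what softmax does to a handful of made-up raw scores:

import numpy as np

scores = np.array([1.2, 0.3, 4.5, 0.1])                 # made-up raw scores for four output nodes
probabilities = np.exp(scores) / np.exp(scores).sum()   # softmax: exponentiate, then normalize
print(probabilities)          # values between 0 and 1 ...
print(probabilities.sum())    # ... that sum to 1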

Once our CNN architecture is defined, we compile the model using the Adam optimizer.

Finally, we train the model.

model.fit_generator(train_generator,
                         steps_per_epoch = 16,
                         epochs = 3,
                         validation_data = test_generator,
                         validation_steps = 16)

The accuracy of the model after training is 93.42%
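
If you want to measure accuracy on the held-out test images yourself, a minimal sketch using evaluate_generator (the old-style Keras counterpart of the fit_generator call above) would be:

# Returns [loss, accuracy] for the batches produced by the test generator
loss, accuracy = model.evaluate_generator(test_generator, steps = 16)
print('Test accuracy: {:.2%}'.format(accuracy))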

Now let’s test our model. But before we do that, we need to define a function that maps the model’s output back to a letter.

def get_result(result):
    if result[0][0] == 1:
        return 'a'
    elif result[0][1] == 1:
        return 'b'
    elif result[0][2] == 1:
        return 'c'
    elif result[0][3] == 1:
        return 'd'
    elif result[0][4] == 1:
        return 'e'
    elif result[0][5] == 1:
        return 'f'
    elif result[0][6] == 1:
        return 'g'
    elif result[0][7] == 1:
        return 'h'
    elif result[0][8] == 1:
        return 'i'
    elif result[0][9] == 1:
        return 'j'
    elif result[0][10] == 1:
        return 'k'
    elif result[0][11] == 1:
        return 'l'
    elif result[0][12] == 1:
        return 'm'
    elif result[0][13] == 1:
        return 'n'
    elif result[0][14] == 1:
        return 'o'
    elif result[0][15] == 1:
        return 'p'
    elif result[0][16] == 1:
        return 'q'
    elif result[0][17] == 1:
        return 'r'
    elif result[0][18] == 1:
        return 's'
    elif result[0][19] == 1:
        return 't'
    elif result[0][20] == 1:
        return 'u'
    elif result[0][21] == 1:
        return 'v'
    elif result[0][22] == 1:
        return 'w'
    elif result[0][23] == 1:
        return 'x'
    elif result[0][24] == 1:
        return 'y'
    elif result[0][25] == 1:
        return 'z'
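
Note that the elif chain above only returns a letter when the winning softmax probability is exactly 1. A shorter and more robust drop-in alternative, assuming the 26 class folders are indexed in alphabetical order (0 for 'a' through 25 for 'z'), is to take the argmax of the prediction:

import numpy as np

def get_result(result):
    # Map the index of the highest-probability node to a letter: 0 -> 'a', ..., 25 -> 'z'
    return chr(ord('a') + int(np.argmax(result[0])))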

Finally, let’s test our model:

filename = r'Testing\e\25.png'
test_image = image.load_img(filename, target_size = (32, 32))
plt.imshow(test_image)

test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = model.predict(test_image)
result = get_result(result)
print ('Predicted Alphabet is: {}'.format(result))

The model correctly predicted the letter of the input image and the result was “E”.

Anvil integration

Anvil is a platform that allows us to build full-stack web applications with Python. It makes it easy to turn a machine learning model in a Jupyter Notebook into a web application.

Let’s start by creating an account on Anvil. Once that’s done, create a new blank application with the Material Design theme.

Check out this link for a step-by-step tutorial on how to use Anvil: anvil.works/learn

The toolbox on the right contains all the components you can drag onto the site.

Required components:

  • 2 Labels (title and subtitle)

  • Image (to display the input image)

  • FileLoader (to upload the input image)

  • Highlighted Button (to predict the result)

  • Label (to display the result)

Drag and drop these components and arrange them according to your requirements.

To add the title and subtitle, select the Label, go to the Properties section on the right, find the option named Text (highlighted in red below), and type in your title/subtitle.

Once the user interface is complete, go to the Code section (highlighted in green above) and create a new function, as shown below.

def primary_color_1_click(self, **event_args):
      file = self.file_loader_1.file
      self.image_1.source = file
      result = anvil.server.call('model_run',file)
      self.label_3.text = result
      pass

This function runs when we press the “PREDICT” button. It takes the input image uploaded through the FileLoader, displays it, and passes it to the “model_run” function in the Jupyter Notebook, which returns the predicted letter.

Now all we have to do is connect our Anvil website to Jupyter Notebook.

This requires the following two steps:

  1. Get the Anvil Uplink key: click Settings, then Uplink, enable the Uplink, and copy the key.

Paste the following contents in the Jupyter Notebook:

import anvil.server
import anvil.media
anvil.server.connect("paste your anvil uplink key here")
  2. Create a “model_run” function to predict the images uploaded to the website.

    @anvil.server.callable
    def model_run(path):
        with anvil.media.TempFile(path) as filename:
            test_image = image.load_img(filename, target_size = (32, 32))
            test_image = image.img_to_array(test_image)
            test_image = np.expand_dims(test_image, axis = 0)
            result = model.predict(test_image)
            result = get_result(result)
            return ('Predicted Alphabet is: {}'.format(result))
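
If your notebook cell finishes running and the connection drops, the anvil-uplink library also provides anvil.server.wait_forever(), which keeps the Uplink connection open so the website can keep calling model_run:

# Keep the Uplink connection alive so the Anvil app can call model_run at any time
anvil.server.wait_forever()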

Now you can go back to Anvil, click the Run button, and the letter recognition system is complete.

You can find the source code and dataset in my GitHub repository: github.com/sakshibutal…


The original link: towardsdatascience.com/building-an…
