Some time ago, we shared how to train machine learning models for free on Google’s GPUs.

Here’s another opportunity: free NVIDIA GPUs with Kaggle Kernels!

What is Kaggle Kernels?

Even if you aren’t familiar with Kaggle Kernels, anyone working in data science or machine learning has probably heard of Kaggle competitions. Kaggle is a platform for doing data science research and sharing data science knowledge. Not only can we practice data science on Kaggle, but we can also learn a lot from the Kaggle community.

Kaggle Kernels is, in essence, Jupyter Notebooks running in the browser, with everything already set up for you. Bottom line: Kaggle Kernels is a free platform for running Jupyter Notebooks in the browser.

That means you can have a Jupyter Notebook environment anytime, anywhere, as long as you have an Internet connection and a browser, without having to set up a local environment.

Because the processing power of Kaggle Kernels comes from cloud servers rather than local machines, we can do a lot of data science and machine learning work on a laptop with very little battery drain.

Once you sign up for an account on Kaggle, you can select a data set you want to use and launch a new Kernel, or Notebook, with a few clicks.

The data set you select is pre-loaded into the Kernel, which eliminates the time-consuming steps of downloading the data to your machine and loading it before training can start.
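For example, once a Kernel starts, the attached data set is already mounted read-only under ../input. A quick listing confirms what is available (the exact folder names depend on the data set you attach):

import os

# List the data sets attached to this Kernel
print(os.listdir("../input"))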

For details on how to use Kaggle Kernels, see this tutorial.

Recently, Kaggle released another great benefit: free NVIDIA K80 GPUs with Kaggle Kernels!

Kaggle tests show that using a GPU allows you to train deep learning models 12.5 times faster.

Take the ASL Alphabet data set as an example: training the model on Kaggle Kernels took 994 seconds in total on the GPU, compared with 13,419 seconds on the CPU, roughly a 13.5x reduction in training time for this particular model. Of course, how much the GPU shortens training in practice depends on many factors, such as the model architecture, the batch size, the complexity of the input pipeline, and so on. Either way, we can now use GPUs for free with Kaggle Kernels!
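If you want to measure the speedup for your own model, timing the training call directly is enough (a minimal sketch; my_model, train_generator, and val_generator here refer to the objects defined later in this post):

import time

start = time.time()
# Run this once with GPU ON and once without it to compare wall-clock times
my_model.fit_generator(train_generator, epochs=5, validation_data=val_generator)
print("Training took %.0f seconds" % (time.time() - start))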

How can I use a GPU on Kaggle Kernels?

The Kaggle website shares how to use the GPU with Kaggle Kernels, and shows sample code:

Add the GPU

First, open the Kernel settings panel and enable a GPU for the current Kernel.

Select Settings, then Enable GPU. Then check the status bar to confirm that the Kernel is connected to a GPU; the connection status should read “GPU ON”, as shown below:

Many data science libraries can’t use GPUs at all, but for certain tasks (especially deep learning with libraries like TensorFlow, Keras, and PyTorch), a GPU is extremely valuable.
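Before training, it is worth confirming that the Kernel really sees the GPU. A quick check using TensorFlow’s standard API (the same library this post’s code already uses):

import tensorflow as tf

# Prints something like '/device:GPU:0' when GPU is ON, or an empty string otherwise
print(tf.test.gpu_device_name())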

Data

The data set we used included images involving 29 American Sign languages, which are used to refer to the 26 English letters and the meanings of Spaces, deletions, and nothing. Our model looks at these images and learns to classify the sign language on each image.

Import the libraries and data required for deep learning
from keras.layers import Conv2D, Dense, Dropout, Flatten
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator

# Fix the random seeds so results are reproducible across runs
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)

Import libraries for viewing the data
import cv2
from glob import glob
from matplotlib import pyplot as plt
from numpy import floor
import random

def plot_three_samples(letter):
    print("Sample images for letter " + letter)
    base_path = '../input/asl_alphabet_train/asl_alphabet_train/'
    img_path = base_path + letter + '/**'
    path_contents = glob(img_path)
    plt.figure(figsize=(16, 16))
    imgs = random.sample(path_contents, 3)
    plt.subplot(131)
    plt.imshow(cv2.imread(imgs[0]))
    plt.subplot(132)
    plt.imshow(cv2.imread(imgs[1]))
    plt.subplot(133)
    plt.imshow(cv2.imread(imgs[2]))
    return

plot_three_samples('A')

Sample images of the letter “A”:
plot_three_samples('B')

Sample images of the letter “B”:

Data processing setup

data_dir = ".. /input/asl_alphabet_train/asl_alphabet_train"
target_size = (64, 64)
target_dims = (64, 64, 3) # add channel for RGBClasses = 29 val_frac = 0.1 batch_size = 64 data_augmentor = ImageDataGenerator(samplewise_center=True, samplewise_std_normalization=True, validation_split=val_frac) train_generator = data_augmentor.flow_from_directory(data_dir, target_size=target_size, batch_size=batch_size, shuffle=True, subset="training")
val_generator = data_augmentor.flow_from_directory(data_dir, target_size=target_size, batch_size=ba
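If you want to see how the 29 class folders map to integer labels, the generators expose Keras’s standard class_indices attribute (shown here as an optional check):

# Mapping from class (folder) names such as 'A', 'B', ... to integer label indices
print(train_generator.class_indices)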

Found 78,300 images belonging to 29 classes (the training subset)

Found 8,700 images belonging to 29 classes (the validation subset)

These counts match the 10% validation split: 8,700 of the 87,000 images are held out for validation.

Model specification

my_model = Sequential()
my_model.add(Conv2D(64, kernel_size=4, strides=1, activation='relu', input_shape=target_dims))
my_model.add(Conv2D(64, kernel_size=4, strides=2, activation='relu'))
my_model.add(Dropout(0.5))
my_model.add(Conv2D(128, kernel_size=4, strides=1, activation='relu'))
my_model.add(Conv2D(128, kernel_size=4, strides=2, activation='relu'))
my_model.add(Dropout(0.5))
my_model.add(Conv2D(256, kernel_size=4, strides=1, activation='relu'))
my_model.add(Conv2D(256, kernel_size=4, strides=2, activation='relu'))
my_model.add(Flatten())
my_model.add(Dropout(0.5))
my_model.add(Dense(512, activation='relu'))
my_model.add(Dense(n_classes, activation='softmax'))

my_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
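Before fitting, you can sanity-check the architecture with Keras’s standard summary call (optional):

# Prints each layer's output shape and the total parameter count
my_model.summary()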

Model fitting

my_model.fit_generator(train_generator, epochs=5, validation_data=val_generator)

Epoch 1/5
1224/1224 [==============================] - 206s 169ms/step - loss: 1.1439 - acc: 0.6431 - val_loss: 0.5824 - val_acc: 0.8126
Epoch 2/5
1224/1224 [==============================] - 179s 146ms/step - loss: 0.2429 - acc: 0.9186 - val_loss: 0.5081 - val_acc: 0.8492
Epoch 3/5
1224/1224 [==============================] - 182s 148ms/step - loss: 0.1576 - acc: 0.9495 - val_loss: 0.5181 - val_acc: 0.8685
Epoch 4/5
1224/1224 [==============================] - 180s 147ms/step - loss: 0.1417 - acc: 0.9554 - val_loss: 0.4139 - val_acc: 0.8786
Epoch 5/5
1224/1224 [==============================] - 181s 148ms/step - loss: 0.1149 - acc: 0.9647 - val_loss: 0.4319 - val_acc: 0.8948

<keras.callbacks.History at 0x7f5cbb6537b8>
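Once training finishes, you may want to persist the model so the trained weights survive the Kernel shutting down (a minimal sketch using Keras’s standard save API; the file name is just an example):

# Saves the architecture and weights to the Kernel's working directory,
# which Kaggle exposes as the Kernel's output
my_model.save('asl_model.h5')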

You may also like this guide to winning Kaggle competitions:

Winning silver in my first Kaggle competition: a summary | beginner’s guide (a long read, packed with practical content).


Don’t miss: the essential algorithms handbook for artificial intelligence.