In this installment of AI Adventures, we’ll meet and use Kaggle Kernels. There won’t be any popcorn, but I guarantee you’ll find Kaggle Kernels just as popular as popcorn.

This is the 13th in a series of videos/articles titled “AI Adventures” by Yufeng Guo, a developer advocate at Google, that explains artificial intelligence and machine learning in easy-to-understand language. In this series of videos/articles, we will explore the world of artificial intelligence and the art, science, and tools of machine learning.


Article 1:
What is machine learning?


Article 2:
The 7 steps of machine learning


Article 3:
Classifying flowers with an estimator


Article 4:
Flexible and scalable cloud hosting services


Article 5:
Visualizing models with TensorBoard


Article 6:
Classifying with a deep neural network estimator


Article 7:
Training models on big data in the cloud


Article 8:
Experiencing natural language generation with Google Research


Article 9:
Machine learning engines in the cloud


Article 10:
Training a model with the MNIST dataset


Article 11:
Best practices for machine learning engineers’ Python development environments


Article 12:
Getting a taste of your data with pandas before machine learning


All content and videos will be posted in the “Smart As You” column, which aims to publish the latest Google-related machine learning and TensorFlow content. If you have any questions, please let us know in the comments section.

Kaggle is a platform for doing and sharing data science. You’ve probably heard of the competitions on Kaggle that offer cash prizes. It’s also a great place to practice data science and learn from the community.

▍ What are Kaggle Kernels?

Kaggle Kernels is essentially a Jupyter Notebook in the browser, free of charge. I’ll say it again so you don’t miss it:

Kaggle Kernels is an open platform that lets you run Jupyter in your browser for free!

That means you don’t have to go through the hassle of setting up your local environment at all, and you can have an awesome Jupyter Notebook environment wherever you are, as long as you’re connected to the Internet.

And that’s not all! The online Notebook’s computing power comes from servers in the cloud, not your computer. So you can do some data-related research and machine learning without using much of your computer’s precious battery power.


Blog.kaggle.com/2017/09/21/…

Kaggle recently updated Kernels to provide more powerful computing performance and to extend the maximum execution time to 60 minutes.

Well, I’ve been talking up Kaggle Kernels for quite a while now, so it’s time to see what it actually looks like.

▍ Kernels in action

Once registered at Kaggle.com, we can select a data set and enter a new Kernel or notebook with a few clicks.


Click on the link for the above GIF (approx. 11Mb)

The data set we selected will be pre-loaded into the Kernel we use, so there is no need to worry about loading the data set or waiting for a long data copy process.


However, you can still upload data (up to 1GB) to the Kernel if you wish.

In our example, we will continue to use the Fashion-MNIST dataset. This is a dataset of images from 10 clothing categories, including pants, bags, high heels, shirts, and so on. The dataset includes 60,000 training samples and 10,000 evaluation samples. Check it out on my Kaggle Kernel notebook.


Let’s look at the dataset. Kaggle provides it as CSV files. Each raw sample is a 28×28-pixel grayscale image that has been flattened into 784 pixel columns in the CSV file. The file also contains a label column (values 0 to 9) that indicates the clothing category.

Data loading

Since the dataset is already loaded into the current environment, we can use pandas to read the CSV files into DataFrames, one for training and one for evaluation.


Note that the data is stored in the ../input directory, one level up
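As a sketch of that loading step: in a Kaggle Kernel the real calls would simply be `pd.read_csv` on the files under `../input` (the exact file names depend on the dataset you attached). Since those files only exist inside a Kernel, this example reads an in-memory CSV with the same label-plus-pixels layout so it is self-contained.

```python
import io

import pandas as pd

# Inside a Kaggle Kernel, the attached dataset lives one level up, in ../input:
#   train_df = pd.read_csv("../input/fashion-mnist_train.csv")
#   test_df = pd.read_csv("../input/fashion-mnist_test.csv")
# Here we use a tiny in-memory CSV with the same layout. The real files have
# 784 pixel columns after the label; we use 4 to keep the sketch short.
csv_data = io.StringIO(
    "label,pixel0,pixel1,pixel2,pixel3\n"
    "2,0,128,255,0\n"
    "9,34,0,17,200\n"
)
train_df = pd.read_csv(csv_data)

print(train_df.shape)  # (2, 5): 2 samples, 1 label column + 4 pixel columns
```

The pattern is identical for the real files; only the path and the number of pixel columns change.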

If you want to see how I do it, check out my Kaggle Kernel:

Fashion-MNIST

Data Exploration

Now that we have all the data loaded into DataFrames, we can start exploring it (see the previous installment for more details). We use the head() method to get the first five rows of the data, and then use describe() to learn more about the structure of the dataset.
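For instance, head() and describe() work like this. The DataFrame below is a small made-up stand-in whose column names mirror the Fashion-MNIST CSV; the values are invented for illustration.

```python
import pandas as pd

# Hypothetical stand-in for the Fashion-MNIST training DataFrame.
train_df = pd.DataFrame({
    "label": [2, 9, 6, 0, 3, 4, 4],
    "pixel0": [0, 0, 0, 0, 0, 0, 0],
    "pixel1": [12, 0, 80, 3, 0, 255, 9],
})

# head() returns the first five rows by default.
print(train_df.head())

# describe() summarizes each numeric column: count, mean, std,
# min, quartiles, and max.
stats = train_df.describe()
print(stats.loc["count", "label"])  # 7.0: every row was counted
```

On the real dataset the same two calls give a quick sanity check that all 60,000 rows loaded and that pixel values fall in the expected 0–255 range.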


It looks like the data has already been shuffled

Data Visualization

In addition, numeric data makes much more sense when visualized than as dry rows of numbers. Let’s use Matplotlib to see what these images look like.

The matplotlib.pyplot library, commonly imported as plt, is used to display the pixel data as a picture.
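A minimal sketch of that idea, assuming each row holds 784 flattened pixel values: in the Kernel you would take a row from the DataFrame (skipping the label column), but here we generate random pixels so the example runs on its own.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; Kernels render plots inline
import matplotlib.pyplot as plt

# Each CSV row stores a flattened 28x28 grayscale image in 784 columns.
# In the Kernel this would be train_df.iloc[i, 1:].values (dropping the
# label); here we fake one row with random pixel values.
row = np.random.randint(0, 256, size=784)
image = row.reshape(28, 28)  # restore the 2-D pixel grid

plt.imshow(image, cmap="gray")  # render the array as a grayscale picture
plt.axis("off")
plt.savefig("sample.png")
```

The key step is reshape(28, 28): the CSV squeezed the image into one row, and this undoes that so imshow can draw it.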


We can see the images; they’re a little blurry, but we can still tell whether each one is clothing or an accessory.




Kaggle Kernels gives us a fully interactive notebook environment with practically zero setup. I must stress that there is no need to configure a Python development environment or install any libraries at all, which is awesome!

Complete the Kernel link: www.kaggle.com/yufengg/fas…

Are you already using Kaggle Kernels? What’s your favorite feature? Any tips to share?


Thanks for reading this episode of Cloud AI Adventures. If you like this series, please give it a thumbs up. If you want more of this, follow me and the Smart As You column (you can also follow Yufeng G’s Medium and YouTube channels). See you next time!


Medium Introduction to Kaggle Kernels

Cover source: thumbnail of the YouTube video

Video source: YouTube – Introduction to Kaggle Kernels

Gu Chuang subtitle group

Editor: @Yang Dong