In Japanese IT circles there is a book whose influence has surpassed even that of the famous "flower book" (the community nickname for Goodfellow et al.'s Deep Learning). It has long topped the "artificial intelligence" bestseller lists in Japan and across Asia, with a pile of five-star reviews. As you may have guessed, the book is Introduction to Deep Learning: Python-based Theory and Implementation.


When I first saw the book on Amazon Japan, I was completely captivated by the reviews, which went something like this:


Less than two years after its release, the book had already been reprinted to a total of 100,000 copies, an astonishing number for a technical book.

On the one hand, this shows that deep learning is genuinely hot; on the other hand, it shows that the content of this book is genuinely good.

Reading the reviews, the overall impression is unanimous: easy to understand, remarkably easy to understand, astonishingly easy to understand! Even self-described liberal arts readers said they could follow it.

So what is so great about this deep learning primer, which Japanese netizens have dubbed a "divine book"?

Chapter 1 Introduction to Python

As an opening chapter, this one follows the usual pattern, providing a brief introduction to Python and how to use it.

If you already know Python, NumPy, and Matplotlib, you can skip this chapter and go straight to the next one.

If you have no prior knowledge, it is recommended that you start at the beginning and learn the Python basics first.
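To give a flavor of what this chapter covers, here is a minimal NumPy sketch of the kind of array operations it introduces. The specific arrays are my own illustrative examples, not taken from the book:

```python
import numpy as np

# Element-wise arithmetic on NumPy arrays
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

print(x + y)    # element-wise sum: [3. 6. 9.]
print(x * y)    # element-wise product: [ 2.  8. 18.]
print(x / 2.0)  # broadcasting a scalar: [0.5 1.  1.5]

# 2-D arrays (matrices), shape inspection, and a matrix-vector product
A = np.array([[1, 2], [3, 4]])
print(A.shape)                   # (2, 2)
print(A.dot(np.array([5, 6])))   # [17 39]
```

Operations like these (plus Matplotlib for plotting) are essentially all the tooling the rest of the book relies on.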

Chapter 2 Perceptron

This chapter introduces the perceptron algorithm. The perceptron was proposed by the American scholar Frank Rosenblatt in 1957.

Why learn such an old algorithm now? Because the perceptron is the algorithm from which neural networks (and hence deep learning) originate.

Learning the structure of the perceptron is therefore an important stepping stone to understanding neural networks and deep learning.
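As a taste of what the chapter builds, here is a minimal perceptron sketch: hand-picked weights and a bias implement the AND gate, and stacking gates yields XOR, which no single-layer perceptron can represent. The particular weight and bias values are illustrative choices; many others work:

```python
import numpy as np

def AND(x1, x2):
    # A single perceptron: fire (output 1) if the weighted sum exceeds 0
    x, w, b = np.array([x1, x2]), np.array([0.5, 0.5]), -0.7
    return 1 if np.sum(w * x) + b > 0 else 0

def NAND(x1, x2):
    x, w, b = np.array([x1, x2]), np.array([-0.5, -0.5]), 0.7
    return 1 if np.sum(w * x) + b > 0 else 0

def OR(x1, x2):
    x, w, b = np.array([x1, x2]), np.array([0.5, 0.5]), -0.2
    return 1 if np.sum(w * x) + b > 0 else 0

def XOR(x1, x2):
    # XOR is not linearly separable, so one perceptron cannot represent it;
    # a two-layer combination of gates can
    return AND(NAND(x1, x2), OR(x1, x2))

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The jump from a single gate to a stacked combination is exactly the "multi-layer" idea that neural networks generalize.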

Chapter 3 Neural network

This chapter mainly introduces neural networks. An important property of a neural network is that it can automatically learn appropriate weight parameters from data.

This chapter first gives an outline of neural networks, and then focuses on how a network performs recognition (forward propagation).
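To illustrate what recognition (the forward pass) looks like in code, here is a sketch of a tiny fully connected network. The layer sizes and all weight values are made up for the example:

```python
import numpy as np

def sigmoid(x):
    # Activation function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(a):
    # Converts raw scores into probabilities; subtracting the max
    # avoids overflow without changing the result
    exp_a = np.exp(a - np.max(a))
    return exp_a / np.sum(exp_a)

# Forward pass of a tiny 2-3-2 network with arbitrary weights
x  = np.array([1.0, 0.5])                                  # input
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])          # layer 1 weights
b1 = np.array([0.1, 0.2, 0.3])
z1 = sigmoid(x.dot(W1) + b1)                               # hidden activations
W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])        # layer 2 weights
b2 = np.array([0.1, 0.2])
y  = softmax(z1.dot(W2) + b2)                              # output probabilities
print(y)  # two probabilities that sum to 1
```

Recognition then just means picking the output index with the highest probability.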

Chapter 4 Neural network learning

The topic of this chapter is neural network learning. "Learning" here refers to the process of automatically obtaining the optimal weight parameters from training data.

To enable the neural network to learn, a loss function is introduced. The purpose of learning is to find the weight parameters that minimize the value of this loss function.
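As a concrete illustration of a loss function, here is a sketch of the cross-entropy error for a one-hot label. The probability vectors are invented for the example; the small constant guards against taking log of zero:

```python
import numpy as np

def cross_entropy_error(y, t):
    # t is a one-hot label vector; delta avoids log(0)
    delta = 1e-7
    return -np.sum(t * np.log(y + delta))

t      = np.array([0, 0, 1, 0])              # true class is index 2
y_good = np.array([0.1, 0.1, 0.7, 0.1])      # confident, correct prediction
y_bad  = np.array([0.6, 0.1, 0.2, 0.1])      # confident, wrong prediction

print(cross_entropy_error(y_good, t))  # small loss
print(cross_entropy_error(y_bad, t))   # larger loss
```

The better the prediction matches the label, the smaller the loss, which is exactly what makes it a usable target for learning.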

To make the value of the loss function as small as possible, the author uses the gradient method, which exploits the slope (gradient) of the function.
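The gradient method can be sketched in a few lines: approximate the gradient by central differences, then repeatedly step in the opposite direction. The toy function and the learning rate below are illustrative choices, not the book's exact code:

```python
import numpy as np

def numerical_gradient(f, x):
    # Central-difference approximation of the gradient of f at x
    h = 1e-4
    grad = np.zeros_like(x)
    for i in range(x.size):
        tmp = x[i]
        x[i] = tmp + h
        fxh1 = f(x)
        x[i] = tmp - h
        fxh2 = f(x)
        grad[i] = (fxh1 - fxh2) / (2 * h)
        x[i] = tmp  # restore the original value
    return grad

def gradient_descent(f, init_x, lr=0.1, steps=100):
    # Repeatedly step against the gradient
    x = init_x.copy()
    for _ in range(steps):
        x -= lr * numerical_gradient(f, x)
    return x

f = lambda x: x[0] ** 2 + x[1] ** 2   # minimum at (0, 0)
print(gradient_descent(f, np.array([3.0, -4.0])))  # approaches [0, 0]
```

During training, the same idea is applied with the loss function in place of this toy f and the network's weights in place of x.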

Chapter 5 Error backpropagation

Numerical differentiation is simple and easy to implement, but its drawback is that it is slow to compute. This chapter introduces an efficient way to calculate the gradients of the weight parameters: the error backpropagation method.

To understand error backpropagation correctly, I think there are two approaches: one based on mathematical formulas, the other based on computational graphs.

The former is the more common approach, and most machine learning books focus on the formulas. That approach is rigorous and concise, and it makes perfect sense; but if you start from the formulas alone, you can miss the fundamental ideas and get no further than a list of equations.

This chapter therefore aims to give you an intuitive understanding of error backpropagation through computational graphs, and then deepens that understanding with actual code. I believe you will come away with an "Ah, so that's how it works!" feeling.
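The computational-graph view can be illustrated with a tiny multiply node: forward computes the product, and backward passes the upstream derivative multiplied by the "swapped" input. The shopping scenario below (2 apples at 100 each, 10% tax) is just an illustrative example in the spirit of the chapter:

```python
class MulLayer:
    # One multiplication node of a computational graph
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y

    def backward(self, dout):
        # d(x*y)/dx = y and d(x*y)/dy = x, scaled by the upstream gradient
        return dout * self.y, dout * self.x

apple_layer = MulLayer()
tax_layer = MulLayer()

# Forward: total price = (price per apple * count) * tax rate
price = apple_layer.forward(100, 2)    # 200
total = tax_layer.forward(price, 1.1)  # 220.0

# Backward: start from d(total)/d(total) = 1 and flow gradients back
dprice, dtax = tax_layer.backward(1.0)       # 1.1, 200
dapple, dnum = apple_layer.backward(dprice)  # 2.2, 110
print(dapple, dnum, dtax)
```

Each number in the backward pass answers "if this input changed a little, how much would the final total change?", which is exactly the gradient the formulas compute.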

Chapter 6 Learning-related skills

This chapter introduces some important ideas in neural network training, including optimization methods for finding the optimal weight parameters, how to choose the initial values of the weights, and how to set hyperparameters.

In addition, regularization methods such as weight decay and Dropout are introduced and implemented in order to cope with overfitting.
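As a sketch of how such a regularizer might be implemented, here is a minimal Dropout layer: during training it randomly zeroes activations, and at test time it scales the output by the keep probability. The 0.5 ratio and the scale-at-test-time convention are illustrative choices:

```python
import numpy as np

class Dropout:
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            # Randomly keep each activation with probability 1 - ratio
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        # At test time, use all units but scale by the keep probability
        return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        # Gradients flow only through the units that were kept
        return dout * self.mask

np.random.seed(0)  # fixed seed so the example is reproducible
drop = Dropout(0.5)
x = np.ones(10)
out = drop.forward(x, train_flg=True)
print(out)  # roughly half the entries are zeroed
```

Because each forward pass drops a different random subset of units, the network cannot rely on any single activation, which is what suppresses overfitting.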

Finally, it gives a brief introduction to Batch Normalization, a method used in many recent studies.

With the methods introduced in this chapter, a neural network (deep learning model) can be trained efficiently, improving its recognition accuracy.

Chapter 7 Convolutional neural network

The topic of this chapter is the convolutional neural network (CNN).

CNNs are used in many settings, such as image recognition and speech recognition. In image recognition contests, the deep-learning-based methods are almost all built on CNNs.

This chapter explains the structure of a CNN in detail and shows how to implement its processing in Python.
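To show what the core convolution operation does, here is a sketch of a stride-1, no-padding 2-D convolution (technically a cross-correlation, as is conventional in CNNs). The input and filter values are illustrative:

```python
import numpy as np

def conv2d(x, w):
    # Slide the filter w over the input x (no padding, stride 1)
    # and take the element-wise product sum at each position
    H, W = x.shape
    FH, FW = w.shape
    out = np.zeros((H - FH + 1, W - FW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + FH, j:j + FW] * w)
    return out

x = np.array([[1, 2, 3, 0],
              [0, 1, 2, 3],
              [3, 0, 1, 2],
              [2, 3, 0, 1]], dtype=float)
w = np.array([[2, 0, 1],
              [0, 1, 2],
              [1, 0, 2]], dtype=float)
print(conv2d(x, w))  # a 2x2 output map
```

Real CNN layers add padding, strides, multiple channels, and a bias, and are usually implemented with an im2col-style reshaping for speed, but the sliding-window idea is the same.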

Chapter 8 Deep learning

By now we have learned a lot about neural networks: the various layers that make up a network, effective training techniques, CNNs (which are particularly effective for images), parameter optimization methods, and so on. All of these are important techniques in deep learning.

Deep networks can be created simply by stacking the networks described earlier. This chapter covers the characteristics, open problems, and possibilities of deep learning, and then gives an overview of where deep learning stands today.

This book is a true introduction to deep learning, explaining its principles and related technologies in simple terms. It uses Python 3 and avoids relying on external libraries or tools as far as possible. Starting from basic mathematics, it leads readers to build classic deep learning networks from scratch, so that they come to understand deep learning step by step in the process.

The book not only introduces the basics of deep learning and neural networks, such as their concepts and characteristics, but also explains error backpropagation and convolutional neural networks in depth. It further covers practical deep learning techniques, applications such as autonomous driving, image generation, and reinforcement learning, and "why" questions such as why deepening the layers can improve recognition accuracy.

All in all, this book is well worth a look for anyone learning deep learning!



Source: AI Headlines