Seriously, with so many good courses to take, how are you supposed to learn them all? The anxiety is real…

Author | Zhou Xiang

Editor | Pigeon

Last Tuesday, NetEase Cloud Classroom announced a partnership with Andrew Ng’s deeplearning.ai: the latter’s new Deep Learning course is now available for free with Chinese subtitles. That is great news.

After AI Tech Base Camp broke the news first across the web, the comment section filled with praise: “NetEase Cloud Classroom really is the conscience of the industry.”

The good news does not stop there. AI Tech Base Camp has another big announcement for you.

Dr. Mu Li, formerly of Baidu’s “Young Marshal” program, now chief scientist at Amazon AI and an author of MXNet, has also launched his own deep learning course.

Today, this Amazon AI chief scientist brings a brand-new deep learning course, Hands-on Deep Learning, to the Chinese-speaking developer community.



Hands-on Deep Learning


This course uses Apache MXNet (incubating)’s latest Gluon interface to demonstrate how to implement a variety of deep learning algorithms from scratch, taking advantage of Jupyter Notebook’s ability to combine documentation, code, formulas, and figures into an interactive learning experience for developers. At present, no other project offers both a comprehensive treatment of deep learning and interactive, executable code; this course fills that gap.
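
To give a flavor of that “from scratch” style, here is a minimal sketch of linear regression implemented directly with MXNet’s NDArray and autograd; the synthetic data, hyperparameters, and variable names are illustrative assumptions, not material taken from the course:

```python
# A minimal "from scratch" example in the spirit the course describes:
# linear regression with MXNet NDArray and autograd, no Gluon layers.
from mxnet import nd, autograd

# Synthetic data: y = 2*x1 - 3.4*x2 + 4.2 + noise (values are illustrative)
num_examples, num_inputs = 1000, 2
true_w, true_b = nd.array([2, -3.4]), 4.2
X = nd.random.normal(shape=(num_examples, num_inputs))
y = nd.dot(X, true_w) + true_b + 0.01 * nd.random.normal(shape=(num_examples,))

# Parameters to learn, with gradient buffers attached
w = nd.random.normal(shape=(num_inputs,))
b = nd.zeros(1)
for param in (w, b):
    param.attach_grad()

lr, batch_size = 0.02, 100
for epoch in range(5):
    for i in range(0, num_examples, batch_size):
        data, label = X[i:i + batch_size], y[i:i + batch_size]
        with autograd.record():                 # record the forward pass
            yhat = nd.dot(data, w) + b
            loss = ((yhat - label) ** 2).mean()
        loss.backward()                         # compute gradients
        for param in (w, b):
            param[:] = param - lr * param.grad  # plain SGD update
    print('epoch %d, loss %f' % (epoch + 1, loss.asscalar()))
```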

But both are deep learning courses, and both are excellent, so how on earth do you choose? Your camp commander (this editor) has a serious case of choice paralysis…

According to Mu Li, there are some significant differences between his course and Ng’s. In his own words:

  1. We not only introduce deep learning models but also provide easy-to-understand code implementations. Instead of slides, we learn by reading code, tuning parameters, and running experiments.

  2. We use Chinese throughout, whether in the teaching materials, the live broadcasts, or the forum. (Although I have been in the United States for five or six years, I honestly still have a hard time following English spoken in all kinds of accents at once.)

  3. The free version of Andrew’s course can only be watched as videos. We not only teach via live broadcast but also provide exercises and a forum for discussion, and we encourage everyone to help improve the course on GitHub. I hope to interact with all of you more closely.

In other words, this course will not only be live-streamed but will also offer exercises, something the NetEase Cloud Classroom course does not provide. According to AI Tech Base Camp, Mu Li will live-stream the course on Douyu at 10 a.m. every Saturday, with the first broadcast at 10 a.m. on September 9. In addition, the entire course is in Chinese, which further lowers the barrier to entry.

However, the course is currently only at version 0.1, so it contains just three chapters: “Preliminary Knowledge”, “Supervised Learning”, and “Neural Networks”. In addition, some of the content has not yet been translated from English into Chinese.

Course content

Currently, each Hands-on Deep Learning tutorial is organized as follows (with the exception of a few background tutorials):

  1. Introduce one new concept (or a few)

  2. Provide a complete example using real data

The main feature of the course is that each tutorial is an editable, runnable Jupyter Notebook. Running the tutorials requires Python, Jupyter with the notedown plugin, and the latest version of MXNet, so learners first need to know how to install and use these tools.

Fortunately, the course opens with a detailed installation and usage tutorial that learners can simply follow.
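
As a quick sanity check that the environment described above is ready, a short script like the following can be run (this snippet is an assumption based on the requirements listed here, not the course’s own install tutorial):

```python
# Verify that MXNet is installed and working; Jupyter and the notedown plugin
# are only needed to open the course's notebooks, not to run this snippet.
import mxnet as mx
from mxnet import nd

print('MXNet version:', mx.__version__)

# A trivial NDArray computation to confirm the runtime works end to end.
a = nd.arange(6).reshape((2, 3))
b = nd.ones((2, 3))
print((a + b).asnumpy())
```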

Here are all the chapters in version 0.1 of the Hands-on Deep Learning course:

  • Preliminary Knowledge

  • Supervised Learning

  • Neural Networks

Naturally, we cannot help but ask: why did Mu Li create this course? Who is it best suited for? Why is it structured the way it is? We found the answers in Mu Li’s open letter. Perhaps the thinking behind the course is even more worth learning from than the course itself.

Appendix: Mu Li’s open letter

When we started the MXNet project two years ago, one thing kept bothering us: whenever MXNet released a new feature, someone would leave a comment along the lines of “stop building new things and update the documentation first.” For a while we could not understand it. Our documentation was already far better than anything we had written before, and the “wheels” being built next door had hardly any documentation at all, yet plenty of people still used them.

Then one day Zack asked a question: if you could go back to when you were first learning machine learning, what kind of documentation would you have wanted?

I started learning machine learning in my sophomore year. Good materials were scarce back then, and after half a year spent on an obscure translation of The Elements of Statistical Learning I was still muddled. Then, in 2008, I spent several months on Pattern Recognition and Machine Learning, only to be thoroughly confused by all the Bayesian material. When I arrived at HKUST in 2010, James asked me which model I was most familiar with. I thought hard and, to my own surprise, could not answer.

I do know quite a few people who can read a paper or listen to a talk, ask a few sharp questions, and basically get it. I am much slower at this. A paper I have merely read is like water I have drunk: forgotten by the next day. I have to sit down, implement the model end to end on a few datasets, and tune some parameters before I feel I truly understand it. For example, I read a lot of papers during my two years at HKUST, but the only models I can still remember today are the couple I faithfully implemented and wrote papers about. Even after another five years in machine learning, I still learn anything new by getting my hands dirty.

I started learning deep learning a few years ago, and through the MXNet project I have helped, and watched, many others do the same. I have found that there are many people like me who only become experts (or qualified “alchemists”) by implementing models, tuning them, and running experiments. Admittedly, in the era before deep learning took off, one could do good theoretical work without writing code or running experiments; in deep learning, however, hands-on ability is the core competence. Even if I knew the three ways to write a convolution and the ten variants of ReLU, understood why BatchNorm speeds up convergence, could recite the error rates of every past ImageNet champion, and could hold forth for hours on the rise and fall of neural networks, without the ability to tune parameters it would all be castles in the air: when asked why my results were so far from the state of the art, all I would have to show is a model barely more accurate than a linear one.

A big part of my work at AWS over the past year has been helping Amazon’s internal teams and cloud customers understand deep learning and apply it to their products. At this year’s CVPR in Hawaii, I met many old friends, such as Kai from Horizon Robotics, Li Lei from Toutiao, and Wenyuan and Yuqiang from Fourth Paradigm, and made many new ones, such as Xudong from Momenta and Junjie from SenseTime. I told them that MXNet had a new Gluon front end that could serve both product and research needs at once, and everyone said, “Great, come over and give us a talk about it.” Several of them made a particular point that they had many newcomers on their teams, so some introductory material would be especially welcome.

So it was natural to wonder whether we could help more people. We therefore decided to build a series of courses that goes from an introduction to deep learning all the way to the latest, most cutting-edge algorithms, explaining every algorithm and concept from scratch through interactive code. The hope is that you will not only understand the details of each algorithm but also become good at tuning parameters, able both to win competitions and to ship products.

We’ve done (and are doing) these four things:

  1. Eric and Sheng developed Gluon, a new front end for MXNet (for details, see Eric’s introduction). Gluon offers a programming experience much closer to native Python, convenient for both debugging and interactive use, and better suited to deep learning than frameworks such as TensorFlow that program through static computational graphs. A minimal sketch of this style follows the list below.

  2. Zack, Alex, Aston, and many others have written a series of notebooks explaining each model. Zack explains and implements every algorithm from scratch, drawing on his perspective both as an outsider to the field (he was a professional musician) and as a teacher (he is a computer science professor at CMU).

  3. We also translated the notebooks into Chinese and made many improvements along the way (I personally think the Chinese version is of higher quality), and we created a Chinese-language community at discuss.gluon.ai where everyone can discuss and learn together.

  4. We will live-stream a series of lessons on Douyu, walking through each topic in depth.
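
To make the Gluon front end in item 1 concrete, here is a minimal sketch of its imperative style; the layer sizes, toy data, and hyperparameters are illustrative assumptions rather than code taken from the course:

```python
# A minimal sketch of Gluon's imperative front end: the network is ordinary
# Python, so intermediate results can be printed and debugged directly instead
# of being compiled into a static computational graph first.
from mxnet import nd, autograd
from mxnet.gluon import nn, Trainer, loss as gloss

net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'))   # layer sizes here are illustrative
net.add(nn.Dense(10))
net.initialize()

X = nd.random.normal(shape=(4, 20))        # a toy batch of 4 examples
y = nd.array([0, 1, 2, 3])                 # toy class labels

softmax_ce = gloss.SoftmaxCrossEntropyLoss()
trainer = Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

with autograd.record():
    out = net(X)                           # call the network like a function
    loss = softmax_ce(out, y)
loss.backward()
trainer.step(batch_size=4)
print('loss:', loss.mean().asscalar())
```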

While we were preparing all this, Andrew Ng also launched a deep learning course. Judging from the syllabus, it looks very good and very detailed, and Andrew has always been a clear lecturer, so the course is surely excellent. But there are a few major differences between what we are doing and Andrew’s course:

  • We not only introduce deep learning models but also provide easy-to-understand code implementations. Instead of slides, we learn by reading code, tuning parameters, and running experiments.
  • We use Chinese throughout, whether in the teaching materials, the live broadcasts, or the forum. (Although I have been in the United States for five or six years, I honestly still have a hard time following English spoken in all kinds of accents.)
  • The free version of Andrew’s course can only be watched as videos. We not only teach via live broadcast but also provide exercises and a forum for discussion, and we encourage everyone to help improve the course on GitHub. I hope to interact with all of you more closely.

On the whole, though, we are aligned with Andrew: we both hope to help people master deep learning quickly. This wave of technological innovation will likely keep reshaping the tech world for years to come, and I hope everyone can join it sooner and better prepared.

Course address: http://zh.gluon.ai/index.html


Author: AI Tech Base Camp (RGZNAI100)


Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.