If you have a background in mathematics and computer science, or coding experience plus the related fundamentals, it takes only about two months to become proficient in deep learning. Unbelievable? These four steps make it possible.

To learn more, read on

Step 1: Learn the basics of machine learning

(Optional, but highly recommended)

Machine Learning by Andrew Ng (Stanford University). His course introduces the most widely used machine learning algorithms and, more importantly, the general procedures and methods of machine learning, including data preprocessing, hyperparameter tuning, and so on.

The NIPS 2015 Deep Learning tutorial by Geoff Hinton, Yoshua Bengio, and Yann LeCun is also recommended, though it is slightly less beginner-friendly.

Step 2: Learn deep learning in depth

My personal preference is to learn from lecture videos, and there are several excellent courses online. Here are a few favorites to recommend:

  • Deep Learning at Oxford 2015, taught by Professor Nando de Freitas, covers the fundamentals without being oversimplified. If you are already familiar with neural networks and want to go deeper, start with Lecture 9. He uses the Torch framework in his examples. (Videos are on YouTube.)

  • Neural Networks for Machine Learning by Geoffrey Hinton. Hinton is an excellent researcher whose demonstration of the generalized backpropagation algorithm was crucial to the development of deep learning.

  • Neural Networks Class by Hugo Larochelle: Another excellent course

If books are more your thing, here are some excellent resources. Go check them out. I’m not judging.

  • Neural Networks and Deep Learning by Michael Nielsen: an online book with several interactive JavaScript elements to play with.

  • Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville: a bit dense, but a great resource nonetheless.

Step 3: Select an area and go further

Identify the area you are passionate about and dig further. The field is huge, so this list is by no means comprehensive.

  • Computer vision: Deep learning has transformed this field. Stanford’s CS231n: Convolutional Neural Networks for Visual Recognition, taught by Andrej Karpathy, is the best course I’ve come across. It introduces you to the basics as well as ConvNets, and helps you set up GPU instances on AWS. See also Mostafa S. Ibrahim’s “Introduction to Computer Vision”.

  • Natural language processing (NLP): used for machine translation, question answering, and sentiment analysis. Mastering this field requires a deep understanding of both the algorithms and the basic computational properties of natural language. CS 224N/Ling 284, taught by Christopher Manning, is an excellent course. CS224d: Deep Learning for Natural Language Processing, taught by another Stanford scholar, Richard Socher (founder of MetaMind), is also excellent and covers the latest deep learning research related to NLP. For more information, see “How do I learn Natural Language Processing?”

  • Memory networks (RNN-LSTM): Combining the attention mechanisms of recurrent neural networks (LSTMs) with external writable memory has led to some interesting work on systems that can understand, store, and retrieve information in a question-and-answer setting. This line of research began at Facebook’s AI lab in New York, led by Dr. Yann LeCun. The original paper is on arXiv: Memory Networks. Many research variants, datasets, and benchmarks have grown out of this work, for example MetaMind’s Dynamic Memory Networks for natural language processing.

  • Deep reinforcement learning: Made famous by AlphaGo, the Go-playing system that beat some of the best Go players in history. David Silver’s (Google DeepMind) RL video lectures and Professor Rich Sutton’s book are good places to start. For a gentle introduction to LSTMs, see Christopher Olah’s “Understanding LSTM Networks” and Andrej Karpathy’s “The Unreasonable Effectiveness of Recurrent Neural Networks”.

  • Generative models: Discriminative models try to detect, identify, and separate things, but they ultimately just look for features that differentiate classes and do not understand the data at a fundamental level. Beyond short-term applications, generative models offer the potential to automatically learn the natural features of a dataset, whether categories, dimensions, or something else entirely. Of the three commonly used families of generative models, generative adversarial networks (GANs), variational autoencoders (VAEs), and autoregressive models (e.g. PixelRNN), GANs are the most popular. Read more (a minimal GAN training sketch follows this reading list):

    • The original GAN paper by Ian Goodfellow et al.

    • A follow-up that addresses the stability problems of training GANs.

    • The Deep Convolutional Generative Adversarial Networks (DCGAN) paper and the DCGAN code, which can be used to learn a hierarchy of features without any supervision. Also check out DCGAN used for image super-resolution.
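To see the adversarial idea in code before reading the papers above, here is a minimal, hypothetical sketch (assuming PyTorch is installed) that trains a tiny GAN to imitate a 1-D Gaussian distribution. The network sizes, learning rates, and step count are illustrative choices, not tuned settings from any of the papers.

```python
# Toy GAN sketch: a generator learns to imitate samples from N(3, 1).
# Assumes PyTorch; all sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D noise to a single "fake" sample.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
batch = 64

for step in range(5000):
    # Train the discriminator: real samples -> 1, generated samples -> 0.
    real = torch.randn(batch, 1) + 3.0        # samples from the target N(3, 1)
    fake = G(torch.randn(batch, 8)).detach()  # no gradient flows back to G here
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: try to make D label its samples as real (1).
    fake = G(torch.randn(batch, 8))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean ~ {samples.mean().item():.2f}, std ~ {samples.std().item():.2f} (target: 3.00, 1.00)")
```

The alternating discriminator/generator updates are the core of the GAN idea; everything in DCGAN and its successors is about making this game stable on real images rather than a toy 1-D distribution.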

Step 4: Create projects

Doing is the key to becoming an expert. Try to build something that interests you and matches your skill level. Here are some tips to get you thinking:

  • Start with the traditional first exercise: classifying the MNIST dataset (a minimal sketch follows this list)

  • Try face detection and classification on ImageNet. If you’re up for it, take part in the ImageNet Challenge 2017.

  • Use an RNN or CNN for Twitter sentiment analysis

  • Teach a neural network to reproduce the artistic style of famous painters (see “A Neural Algorithm of Artistic Style”)

  • Make music with a recurrent neural network

  • Use deep reinforcement learning to play Pong (table tennis)

  • Use a neural network to rate selfies

  • Colorize black-and-white images using deep learning
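To make the first project idea concrete, here is a minimal, hypothetical sketch of an MNIST classifier (assuming PyTorch and torchvision are installed). The single-hidden-layer architecture and hyperparameters are a starting point to modify, not a tuned solution.

```python
# Minimal MNIST classifier sketch (assumes PyTorch + torchvision).
# A single hidden layer typically reaches roughly 97% test accuracy;
# treat the architecture and hyperparameters as illustrative, not tuned.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # scales pixels to [0, 1]
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
test_set = datasets.MNIST("data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=256)

model = nn.Sequential(
    nn.Flatten(),                      # 28x28 image -> 784-dim vector
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),                # one logit per digit class
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in train_loader:
        loss = loss_fn(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Evaluate on the held-out test set after each epoch.
    correct = 0
    with torch.no_grad():
        for images, labels in test_loader:
            correct += (model(images).argmax(dim=1) == labels).sum().item()
    print(f"epoch {epoch + 1}: test accuracy = {correct / len(test_set):.4f}")
```

A natural next exercise is to swap the fully connected layers for a small convolutional network; the same training loop also carries over to the Twitter sentiment idea once the data loading is replaced.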

For more inspiration, check out the CS231n Winter 2017, Winter 2016, and Winter 2015 course projects. Also keep an eye on Kaggle and HackerRank competitions, which are fun and offer opportunities to compete and learn.

Other resources

Here are some suggestions to help you keep learning:

  • Read some great blogs. Christopher Olah’s blog and Andrej Karpathy’s blog both do a good job of explaining basic concepts and recent breakthroughs.

  • Follow influential people on Twitter. Here are a few to start with: @drfeifei, @ylecun, @Karpathy, @Andrewyng, @Kdnuggets, @Openai, @Googleresearch. (See also: Whom should I follow on Twitter for machine learning news?)

  • The Google+ Deep Learning community, where Yann LeCun is active, is a great way to keep up with innovations in deep learning and to connect with other deep learning professionals and enthusiasts.

See ChristosChristofidis/awesome-deep-learning, a curated collection of deep learning tutorials, projects, and communities that makes learning easier.

Translator’s note: translation is hard work, and some parts have been left out; they are collected in the column “Introduction to Deep Learning resources”.

This is just one way to approach learning. You don’t need to cover everything at every step; choosing one or two resources is enough.