It’s hard to believe how much has happened in artificial intelligence and machine learning this year, and harder still to summarize it all systematically. Nevertheless, I have tried to put together an overview that should help you reflect on how far the technology has come.

1. AlphaGo Zero: The Rise of the Creator

If I had to pick a single highlight of the year, it would be AlphaGo Zero. The new approach not only improves on some of the most promising research directions (such as deep reinforcement learning), but also marks a paradigm shift: the model learns without any human data, starting from random play and training entirely against itself. We’ve also recently seen this approach extended to other games such as chess, in the form of AlphaZero.
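To make the self-play idea concrete, here is a toy sketch (my own illustration, not AlphaGo Zero itself, which couples a deep network with Monte Carlo tree search): simple tabular Monte Carlo updates that learn the game of Nim with no human data, purely by playing against themselves. The game, hyperparameters, and reward scheme are all assumptions for the example.

```python
# Toy self-play learner for Nim: take 1-3 stones, taking the last stone wins.
import random
from collections import defaultdict

PILE, MOVES = 10, (1, 2, 3)
Q = defaultdict(float)               # Q[(stones_left, move)] -> estimated value
alpha, eps = 0.1, 0.2                # learning rate, exploration rate

def best_move(stones):
    return max((m for m in MOVES if m <= stones), key=lambda m: Q[(stones, m)])

for episode in range(20000):
    stones, history = PILE, []
    while stones > 0:                # both "players" share the same Q-table
        legal = [m for m in MOVES if m <= stones]
        move = random.choice(legal) if random.random() < eps else best_move(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                     # the player who made the last move won;
    for state, move in reversed(history):        # alternate +1/-1 back in time
        Q[(state, move)] += alpha * (reward - Q[(state, move)])
        reward = -reward

print(best_move(10))  # optimal play takes 2, leaving the opponent a multiple of 4
```

Everything the learner knows comes from the outcomes of its own games, which is the essence of the “no human data” paradigm.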

2. GANs: Don’t be afraid, embrace them

A recent meta-study found systematic flaws in how GAN-related research papers report their evaluation metrics. Nevertheless, it is undeniable that GANs continue to excel, especially in applications to images (for example, Progressive GANs, conditional GANs, or CycleGAN and pix2pix).
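For readers who have never looked inside one, here is a minimal GAN training loop (my own toy example, not from any of the cited papers): the generator learns to mimic a one-dimensional Gaussian. The network sizes, learning rates, and latent dimension are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "data": N(2.0, 0.5)
    fake = G(torch.randn(64, 8))

    # discriminator step: push real toward 1, fake toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator step: try to fool the discriminator (non-saturating loss)
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift toward 2.0
```

The adversarial game is the same whether the samples are single numbers, as here, or the megapixel images of Progressive GANs.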

3. Deep learning for NLP: the beginning of commercialization

This year deep learning made serious inroads into NLP, especially translation, and gave us a sense of how commoditized translation is becoming. Salesforce presented an interesting non-autoregressive approach that emits the whole translated sentence at once. Perhaps even more groundbreaking is Facebook’s work on unsupervised machine translation. Deep learning has also helped businesses improve their recommender systems. However, a recent paper casts doubt on some of those advances by showing how competitive a simple kNN baseline can be against deep learning methods (a minimal sketch of such a baseline follows below). As with GAN research, it should come as no surprise that the breakneck pace of AI research sometimes comes at the cost of scientific rigor. And while much or most of this year’s progress came from deep learning, there are many other areas of ongoing innovation in AI and ML that are also worth watching.
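As a sense of how simple such a baseline can be, here is a minimal item-based kNN recommender (a generic sketch of the kind of method those comparisons use; the toy rating matrix is made up):

```python
import numpy as np

# rows = users, cols = items, values = ratings (0 = unrated)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

# cosine similarity between item columns
norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)
np.fill_diagonal(sim, 0.0)

def predict(user, item, k=2):
    """Score an unrated item as a similarity-weighted mean of the user's ratings."""
    rated = np.nonzero(R[user])[0]                       # items this user has rated
    neighbors = rated[np.argsort(sim[item, rated])[-k:]] # k most similar among them
    weights = sim[item, neighbors]
    return float(weights @ R[user, neighbors] / (weights.sum() + 1e-9))

print(predict(user=0, item=2))
```

Twenty lines with no training loop at all, which is exactly why it makes an uncomfortable baseline.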

4. Theoretical issues: interpretability and rigor

Somewhat related to the issues above, many have criticized deep learning approaches for their lack of rigor and interpretability. Not long ago, Ali Rahimi went as far as to liken modern AI to “alchemy” in his NIPS 2017 talk; Yann LeCun was quick to respond, in a debate that is unlikely to be resolved soon. It is also worth noting that this year saw many efforts to put deep learning on firmer theoretical footing. For example, researchers are trying to understand how deep neural networks generalize. Tishby’s information bottleneck theory was debated at length this year as a plausible explanation for some properties of deep learning. Geoff Hinton himself has been questioning fundamentals such as the reliance on backpropagation, and well-known researchers like Pedro Domingos were quick to follow, exploring deep learning methods built on different optimization techniques. Hinton’s most radical recent proposal is the use of capsules (see the paper) as a substitute for convolutional networks; the sketch below shows the nonlinearity at the heart of that idea.
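This is the “squash” nonlinearity from the capsules paper (Sabour, Frosst, and Hinton, 2017): it preserves a capsule’s orientation but shrinks its length into [0, 1), so the length can be read as the probability that the entity the capsule represents is present. The shapes and test vector are my own illustration.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

s = np.array([3.0, 4.0])            # a capsule output of length 5
v = squash(s)
print(v, np.linalg.norm(v))         # same direction, length 25/26 ≈ 0.96
```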

5. Service provider battle: a better and better developer experience

Turning to the engineering side of AI, PyTorch has been stirring up excitement over the past year as a real challenger to TensorFlow, especially in research. TensorFlow reacted quickly, releasing TensorFlow Fold to support dynamic computation graphs (a toy example of why those matter follows below). There are many other fronts in the “AI wars” between the big players, the most intense of which is the cloud. All the major vendors have stepped up their AI support in the cloud. Amazon has shipped notable innovations on AWS, such as the recently announced SageMaker for building and deploying ML models. Smaller players are piling in too: NVIDIA recently launched its GPU Cloud, another interesting option for training deep learning models. All these battles will undoubtedly drive the industry forward. Finally, the new ONNX format for exchanging neural networks shows that standardization is an important and necessary step toward interoperability.
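Here is a minimal illustration (my own toy example) of what a dynamic graph buys you: in PyTorch the graph is rebuilt on every forward pass, so ordinary Python control flow can depend on the data and autograd still works.

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def forward(self, x):
        # reuse the same layer a data-dependent number of times (1 to 3)
        steps = int(x.abs().sum().item()) % 3 + 1
        for _ in range(steps):
            x = torch.relu(self.layer(x))
        return x.sum()

net = DynamicNet()
loss = net(torch.randn(4))
loss.backward()                            # gradients flow through however many steps ran
print(next(net.parameters()).grad.shape)   # torch.Size([4, 4])
```

Expressing that loop in a static graph requires special constructs, which is exactly the gap TensorFlow Fold set out to close.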

6. Social issues that remain to be solved

In 2017 the social issues around AI also escalated. Elon Musk kept pushing the idea that we are drawing ever closer to killer AI, to the dismay of many. There was also plenty of discussion about how AI will affect jobs in the coming years. Finally, we saw more focus on the interpretability and bias of AI algorithms.

7. New battleground: machine learning + traditional industries

For the last few months I’ve been working on AI in medicine and healthcare, and I’m glad to see the pace of innovation in “traditional” areas like healthcare increasing rapidly. AI and ML have been applied to medicine for a long time, starting with expert systems and Bayesian systems in the 1960s and 1970s, yet these days I often find myself citing papers that are only a few months old. Recent work proposed this year includes the use of deep RL, GANs, or autoencoders to aid patient diagnosis. Much of the recent progress has also focused on precision medicine (highly personalized diagnosis and treatment) and genomics. David Blei’s latest paper, for example, addresses causality in neural network models, using Bayesian inference to predict whether an individual has a genetic predisposition to disease. All the big companies are investing in AI for healthcare. Google has several teams, including DeepMind Health, that have produced very interesting advances in medical AI, particularly around automating the analysis of medical images. Apple, for its part, is pursuing healthcare applications for the Apple Watch, while Amazon is also “secretly” investing in healthcare. Clearly, the field is ripe for innovation.

The Uber AI team put forward the very interesting idea of using genetic algorithms (GAs) in the context of deep reinforcement learning. Across five papers, the team showed how a GA can be a competitive alternative to SGD (a mutation-only sketch of the idea follows below). It’s going to be fun to see GAs make a comeback, and I’m excited to see where they take us in the coming months.
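Here is a minimal, mutation-only genetic algorithm in the spirit of those papers (this toy maximizes a simple quadratic rather than a deep RL policy’s episode return; the population size and mutation scale are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # stand-in for an RL episode return; maximized at w = [3, 3, ..., 3]
    return -np.sum((w - 3.0) ** 2)

pop = [rng.normal(size=10) for _ in range(50)]   # population of parameter vectors
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                             # truncation selection
    # elitism: keep the champion unchanged; fill the rest with mutated elites
    pop = elite[:1] + [elite[rng.integers(10)] + 0.1 * rng.normal(size=10)
                       for _ in range(49)]

print(fitness(pop[0]))   # climbs toward 0 with no gradients computed anywhere
```

No backpropagation, no gradients: selection plus Gaussian mutation is the entire optimizer, which is what makes the comparison with SGD so striking.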

Finally, I recently read the scientific papers on how Libratus beat the experts at heads-up no-limit poker (a version of an earlier IJCAI paper). AlphaGo Zero is indeed a very exciting development, but most real-world problems map more naturally onto an imperfect-information game like poker than onto a perfect-information game like Go or chess, which is why this is such an exciting and important area to push forward. Beyond the papers already mentioned, I recommend two more: “Deep Reinforcement Learning from Self-Play in Imperfect-Information Games” and “DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker”. The sketch below hints at the core idea these systems build on.
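Libratus and DeepStack both build on counterfactual regret minimization (CFR). The sketch below shows only CFR’s core ingredient, regret matching, in self-play on rock-paper-scissors, where the average strategies converge to the uniform Nash equilibrium; it is a toy illustration, not the systems from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
A = 3                                      # actions: rock, paper, scissors
payoff = np.array([[ 0, -1,  1],           # payoff[i, j] = player 1's payoff
                   [ 1,  0, -1],           # for playing i against j
                   [-1,  1,  0]], dtype=float)

regret = [np.zeros(A), np.zeros(A)]
strategy_sum = [np.zeros(A), np.zeros(A)]

def current_strategy(r):
    pos = np.maximum(r, 0.0)               # play in proportion to positive regret
    return pos / pos.sum() if pos.sum() > 0 else np.full(A, 1.0 / A)

for t in range(50000):
    strats = [current_strategy(regret[p]) for p in (0, 1)]
    acts = [rng.choice(A, p=strats[p]) for p in (0, 1)]
    for p in (0, 1):
        me, opp = acts[p], acts[1 - p]
        util = payoff[:, opp] if p == 0 else -payoff[opp, :]
        regret[p] += util - util[me]       # regret for not having played each action
        strategy_sum[p] += strats[p]

print(strategy_sum[0] / strategy_sum[0].sum())   # ≈ [1/3, 1/3, 1/3]
```

Full CFR applies this same regret-matching update at every information set of the game tree, which is how it scales to poker.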

This article was recommended by Beijing Post’s @Love coco – Love life and translated by the Alibaba Cloud Yunqi Community.

The original title of the article was “What are the Most Significant Machine Learning Advances in 2017?”

Xavier Amatriain, PhD in Computer Science, machine learning researcher.


This article is an abridged translation. For full details, please refer to the original text.