Xinzhiyuan reports

Xinzhiyuan Editorial Department

【Xinzhiyuan Guide】The ACM has just announced the 2018 Turing Award winners: the deep learning trio of Yoshua Bengio, Geoffrey Hinton, and Yann LeCun have received computing's highest honor. Behind the three awards lies a hard road through the AI winter.

The glory belongs to deep learning.

Yoshua Bengio, Geoffrey Hinton and Yann LeCun have won the 2018 Turing Award, which was announced today.

Yann LeCun, Geoffrey Hinton, Yoshua Bengio

  • Yoshua Bengio (55) is a professor at the University of Montreal and scientific director of the Quebec Institute for Artificial Intelligence (Mila);

  • Geoffrey Hinton (71) is Vice President and Engineering Fellow at Google, Chief Scientific Advisor at the Vector Institute for Artificial Intelligence, and Professor Emeritus at the University of Toronto;

  • Yann LeCun (58) is a professor at New York University and Vice President and Chief AI Scientist at Facebook.

Known in the industry as the “godfathers of contemporary artificial intelligence,” the three winners pioneered deep neural networks, a technology that has become a key part of computing, laying the foundation for the development and application of deep learning.

Last year, talk of a “deep learning winter” became popular, and the three laureates responded to the “winter” claims on several occasions. In fact, the trio are long inured to the cold: Geoffrey Hinton and his colleagues were sidelined for years starting in the 1980s, until AI’s resurgence in this century.

Hinton, now 71, survived the coldest AI winter and is convinced that another “winter” will never come.

Hinton’s early years as a scientist were rocky.

He initially studied physics and chemistry at Cambridge, but switched to architecture after only a month, only to be overwhelmed within a day and re-enroll in physics and physiology. Finding the mathematics in physics too difficult, he then switched to philosophy, compressing a two-year course into a single year.

That year was short, but it proved very useful, because he developed strong antibodies to philosophy: “I want to know how the brain works.”

To understand how the brain works, Hinton turned to psychology, only to find that “psychologists don’t have a clue.”

In 1973, Hinton entered graduate school at the University of Edinburgh, where he studied artificial intelligence under Christopher Longuet-Higgins. But that was during the AI winter of the 1970s, when neural networks and AI were held in low regard in academia.

Christopher Longuet-Higgins

Longuet-Higgins, moreover, was a famous theoretical chemist and cognitive scientist whose students included Nobel laureates, but Hinton’s philosophy differed from his mentor’s: the advisor insisted on the traditional, logic-based conception of artificial intelligence, while Hinton firmly believed in simulating neurons and was convinced that neural networks were the future.

“My graduate years were rocky; every week we had a fight,” Hinton later recalled. He kept making “deals” with his mentor: “Let me do neural networks for another six months, and I’ll show you they work.” At the end of the six months it was, “Give me six more,” and then, “Give me five more.”

In the 1980s, Hinton really took off.

In 1986, Hinton and colleagues completed a well-known paper, “Experiments on Learning by Back Propagation”, which showed that backpropagation could give neural networks “interesting” distributed representations, proposing a method that would influence artificial intelligence for generations to come.

But there was neither enough data nor enough computing power to train neural networks at scale, and industry remained uninterested in Hinton’s neural networks. At academic conferences, Hinton often sat in a far corner of the room, ignored by the academic heavyweights of the day.

Fortunately, there were other die-hards just like Hinton.

Yann LeCun, a former postdoc of Hinton’s, was another firm believer in neural networks. In 1989, LeCun provided the first practical demonstration of backpropagation, at Bell Labs.


Yann LeCun demonstrates handwriting recognition on a computer in 1993

He combined convolutional neural networks with backpropagation to read handwritten digits. The system later became widely used; by the late 1990s it was reading 10% to 20% of the checks in the United States.

Yann LeCun led a team at Bell Labs, and that team included Bengio, another of today’s Turing Award winners.

Although Bengio was never a direct student of Hinton’s, he is regarded, together with Hinton and LeCun, as one of the three flag-bearers of the deep learning wave. He pioneered neural network language models.

From left to right: LeCun, Hinton, Bengio, Ng

In 2012, Geoffrey Hinton and two of his students published the paper proposing AlexNet, a deep convolutional neural network that won that year’s ImageNet large-scale image recognition competition. Hinton later joined Google Brain; AlexNet became one of the classic models for image recognition, was widely adopted in industry, and deep learning exploded.

More memorably, in 2015 the three giants of deep learning, Geoffrey Hinton, Yann LeCun and Yoshua Bengio, jointly published a review article titled “Deep Learning” in Nature, describing the changes deep learning brings to traditional machine learning.

Back in 2015, a question was posed on Zhihu: could Yann LeCun, Geoffrey Hinton or Yoshua Bengio win the Turing Award?

Question link:

https://www.zhihu.com/question/33911668

At the time, many commenters thought that, despite their great contributions, the three could not win the Turing Award. Today, many of those commenters have returned to the question to eat their words.

Pei Jian, Vice President of JD.com, professor in the School of Computing Science and the Department of Statistics and Actuarial Science at Simon Fraser University in Canada, Tier 1 Canada Research Chair, ACM Fellow, IEEE Fellow, and Chair of ACM SIGKDD, told Xinzhiyuan that this year’s Turing Award going to Hinton, LeCun and Bengio, the pioneers, persistent adherents and evangelists of deep learning, was widely expected.

“Three or four years ago a lot of people predicted (or expected) that Hinton and deep learning would win. The benefits deep learning has brought to our daily lives are unprecedented; one could even say it was becoming hard to justify deep learning not having won the Turing Award. I believe deep learning has become a fundamental element of computing.”

Pei Jian said the stories and anecdotes about the three laureates are already widely known and need no retelling. He offered three reflections worth learning from, to share with colleagues:

First, all three are true scholars who have persevered for a long time, unmoved by honor or disgrace. Hinton in particular has made outstanding contributions to a field that has gone up and down since the early 1980s. For more than thirty years he concentrated on one subject, never distracted, never resting on or pontificating about his past achievements. This spirit of rigorous scholarship is worth learning from for our younger generation. I personally feel that the ACM puts academic quality first and is rigorous and strict in evaluating the Turing Award and its Fellows.

Second, although all three come from academia, they have worked in industry to varying degrees to promote the application of deep learning. Their important contribution to industry is not only solving a number of practical problems with deep learning methods, but also promoting scientific principles and methods in industrial research and development. This combination of industry, academia and research in scientific methodology has cultivated a large number of young talents with solid academic foundations and research methods as well as broad industrial vision and practical thinking. It is safe to say that the impact of deep learning is not only technical, but also an innovation in how academia and industry collaborate and develop future talent. (At the IEEE ICDE 2019 conference in Macau on April 10, I am organizing a panel discussion on this issue with prominent university presidents, industrial R&D leaders, and successful new-technology venture investors. Stay tuned.)

Finally, their awards show that an environment encouraging sustained independent research is very important. In the long years when deep learning was out of favor, Canada’s national research funding generously supported the group of deep neural network researchers these three represent, allowing them to continue their work in depth, and that spark eventually became a prairie fire. I have studied and worked in Canada for many years, and I really like its egalitarian, flat research environment, which lets scholars think in peace. Even amid deep learning’s popularity, Canadian academia has kept a relatively level-headed attitude: Bengio was elected a Fellow of the Royal Society of Canada only in 2017.

ACM President Cherri M. Pancake said that the growth of artificial intelligence, and the interest in it, owes much to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the groundwork. “These technologies are used by billions of people. Anyone with a smartphone in their pocket can actually experience advances in natural language processing and computer vision that were not possible 10 years ago. In addition to the products we use every day, new advances in deep learning provide powerful new tools for scientists in medicine, astronomy, materials science and more.”

“Deep neural networks are responsible for some of the major advances in modern computer science, helping make real progress on long-standing problems in areas such as computer vision, speech recognition and natural language understanding,” said Jeff Dean, Google Senior Fellow and Senior Vice President of Google AI. “At the heart of this progress are fundamental techniques developed more than 30 years ago by this year’s Turing Award winners, Yoshua Bengio, Geoffrey Hinton and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor.”

Since last year, arguments about deep learning’s “winter” and “peak” have surfaced frequently. Many people believe deep learning seems to have hit a bottleneck: it requires extremely large, deep networks and enormous amounts of training data.

In response, the deep learning pioneers who have now won the Turing Award have pushed back against the “cold winter theory” on various occasions.

LeCun:

LeCun once said that the proponents of the deep learning “cold winter theory” lack common sense and that their opinions are ill-informed.

Hinton:

Hinton doesn’t think there will be another AI winter, because AI is already powering phones. AI wasn’t part of everyday life during the earlier AI winters, but it is now.

More admirably, Hinton has kept fighting on the front line of deep learning, on the one hand overturning and rebuilding his own ideas, and on the other deepening the field’s understanding of deep learning.

In 2017, Hinton and two colleagues at Google Brain published “Dynamic Routing Between Capsules”, introducing a new neural network model, the capsule network, which outperformed traditional convolutional neural networks on certain tasks.
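
For the technically curious, the heart of the paper is a small iterative procedure, routing by agreement: lower-level capsules cast vector “votes” for higher-level capsules, and votes that agree with the emerging consensus are amplified. Below is a stripped-down sketch of that loop with random placeholder prediction vectors, not a full capsule layer.

```python
# Schematic of routing-by-agreement from "Dynamic Routing Between Capsules",
# stripped to its core. The prediction vectors here are random placeholders
# rather than outputs of a real capsule layer.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, dim = 6, 3, 4
u_hat = rng.normal(size=(n_in, n_out, dim))   # prediction vectors u_hat[i, j]

def squash(s):
    # Shrink short vectors toward 0 and long vectors toward unit length.
    norm2 = (s ** 2).sum(axis=-1, keepdims=True)
    return (norm2 / (1 + norm2)) * s / np.sqrt(norm2 + 1e-9)

b = np.zeros((n_in, n_out))                   # routing logits
for _ in range(3):                            # 3 routing iterations, as in the paper
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
    s = (c[:, :, None] * u_hat).sum(axis=0)   # weighted sum of votes per output capsule
    v = squash(s)                             # squashed output capsule vectors
    b += (u_hat * v[None, :, :]).sum(axis=-1) # amplify votes that agree with the consensus

print(np.linalg.norm(v, axis=-1))             # capsule lengths ~ entity probabilities
```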

Hinton believes capsules will eventually reach beyond vision to broader applications. While many people remain skeptical of them, he notes that people were just as skeptical of neural networks five years ago.

“History will repeat itself.”

The following is the ACM’s official introduction to the three laureates; readers already familiar with them may skip this section.

Geoffrey Hinton

Geoffrey Hinton is Vice President and Engineering Fellow at Google, Chief Scientific Advisor at the Vector Institute, and Professor Emeritus at the University of Toronto. Hinton holds a BA in experimental psychology from the University of Cambridge and a PhD in artificial intelligence from the University of Edinburgh. He was the founding director of CIFAR’s Neural Computation and Adaptive Perception (later “Learning in Machines and Brains”) program.

Hinton has received Canada’s highest civilian honor, Companion of the Order of Canada. He is a Fellow of the Royal Society and a Foreign Member of the US National Academy of Engineering, and has received the International Joint Conference on Artificial Intelligence (IJCAI) Award for Research Excellence, the NSERC Herzberg Gold Medal, and the IEEE James Clerk Maxwell Gold Medal. He was also named one of Wired magazine’s “100 Most Influential People” of 2016 and one of Bloomberg’s “50 People Changing the Global Business Landscape” in 2017.

Yann LeCun

Yann LeCun is the Silver Professor at NYU’s Courant Institute of Mathematical Sciences and Vice President and Chief AI Scientist at Facebook. He holds an engineering degree from the École Supérieure d’Ingénieurs en Électrotechnique et Électronique (ESIEE) and a PhD in computer science from Université Pierre et Marie Curie.

LeCun is a member of the US National Academy of Engineering, holds honorary doctorates from IPN Mexico and the École Polytechnique Fédérale de Lausanne (EPFL), and has received the University of Pennsylvania’s Pender Award, the Holst Medal from the Technical University of Eindhoven and Philips Labs, the Nokia Bell Labs Shannon Luminary Award, the IEEE PAMI Distinguished Researcher Award, and the IEEE Neural Networks Pioneer Award.

He was named one of Wired magazine’s “100 Most Influential People” of 2016 and one of its “25 Geniuses Creating the Future of Business”. LeCun is the founding director of New York University’s Center for Data Science and co-director (with Yoshua Bengio) of CIFAR’s Learning in Machines and Brains program. In addition, LeCun is a co-founder and former board member of the Partnership on AI, a consortium of companies and nonprofits studying the social consequences of AI.

Yoshua Bengio

Yoshua Bengio is a professor at the University of Montreal and scientific director of both the Quebec Institute for Artificial Intelligence (Mila) and IVADO (the Institute for Data Valorization). He is co-director (with Yann LeCun) of CIFAR’s Learning in Machines and Brains program. Bengio holds a bachelor’s degree in electrical engineering and a master’s degree and PhD in computer science, all from McGill University.

Bengio has been awarded the Order of Canada, is a Fellow of the Royal Society of Canada, and has received the Marie-Victorin Award. His founding of Mila and his service as its scientific director are also considered major contributions to the field of AI. Mila, an independent nonprofit with 300 researchers and 35 faculty members, is now the largest academic center for deep learning research in the world, and it has made Montreal a vibrant AI ecosystem that attracts research labs of major companies and AI startups from around the world.

Finally, here are the main technical contributions of the three Turing Award winners, which have had a great impact on subsequent deep learning research and are worth remembering.

Geoffrey Hinton

Backpropagation:

In 1986, Hinton co-authored “Learning Internal Representations by Error Propagation” with David Rumelhart and Ronald Williams. In it, they demonstrated that the backpropagation algorithm lets neural networks discover their own internal representations of data, making it possible for neural networks to solve problems previously thought unsolvable. Backpropagation is now standard in most neural networks.
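
To make the idea concrete, here is a minimal numerical sketch of backpropagation: a two-layer network learning XOR, with the error signal propagated backward layer by layer. The network size, data and learning rate are illustrative choices, not the paper’s setup.

```python
# Minimal backpropagation sketch: a 2-layer sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: the hidden layer forms its own internal representation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error signal layer by layer.
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```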

Boltzmann machine:

In 1983, Hinton, together with Terrence Sejnowski, invented Boltzmann machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
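
For reference, the Boltzmann machine assigns every joint configuration of visible units v and hidden units h an energy, and states occur with probability proportional to e^(-E). The form below shows only visible-hidden couplings, the restricted variant often used for simplicity; the general machine Hinton and Sejnowski studied also allows connections within each layer.

```latex
E(\mathbf{v}, \mathbf{h}) = -\mathbf{a}^{\top}\mathbf{v}
  - \mathbf{b}^{\top}\mathbf{h} - \mathbf{v}^{\top} W \mathbf{h},
\qquad
P(\mathbf{v}, \mathbf{h}) =
  \frac{e^{-E(\mathbf{v}, \mathbf{h})}}
       {\sum_{\mathbf{v}', \mathbf{h}'} e^{-E(\mathbf{v}', \mathbf{h}')}}
```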

Improvements to convolutional neural networks:

In 2012, Hinton, with his students Alex Krizhevsky and Ilya Sutskever, improved convolutional neural networks using rectified linear units (ReLU) and dropout regularization. In the prestigious ImageNet image recognition competition, Hinton and his students almost halved the error rate for object recognition, arguably reshaping the field of computer vision.
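
Schematically, the two ingredients look like this in a modern framework. This is a hypothetical PyTorch sketch with illustrative layer sizes, not the original AlexNet configuration or training code.

```python
# Schematic block in the spirit of the 2012 network, showing ReLU
# activations and dropout regularization. Sizes are illustrative.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),  # 224x224 -> 55x55
    nn.ReLU(inplace=True),                  # rectified linear unit: max(0, x)
    nn.MaxPool2d(kernel_size=3, stride=2),  # 55x55 -> 27x27
    nn.Flatten(),
    nn.Dropout(p=0.5),                      # randomly zero features during training
    nn.Linear(64 * 27 * 27, 1000),          # 1000-way ImageNet-style classifier
)

x = torch.randn(1, 3, 224, 224)             # one fake RGB image
print(block(x).shape)                       # torch.Size([1, 1000])
```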

Yoshua Bengio

Probabilistic models of sequences:

In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into the AT&T/NCR system for reading handwritten checks, a culmination of 1990s neural network research, and today’s deep learning speech recognition systems extend these concepts.
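
The hybrid idea can be sketched simply: a neural network scores each frame of a sequence, and a dynamic-programming decoder combines those scores with transition probabilities. The toy Viterbi decoder below illustrates the combination; the scores and transition matrix are invented for illustration, and this is not the AT&T/NCR system.

```python
# Toy sketch of combining per-frame neural-network scores with a
# probabilistic sequence model (Viterbi decoding over an HMM).
import numpy as np

# Pretend a trained network produced P(state | frame) for 3 states x 5 frames.
log_emit = np.log(np.array([
    [0.7, 0.6, 0.2, 0.1, 0.1],
    [0.2, 0.3, 0.6, 0.3, 0.2],
    [0.1, 0.1, 0.2, 0.6, 0.7],
]))
log_trans = np.log(np.array([   # P(next state | current state)
    [0.80, 0.15, 0.05],
    [0.05, 0.80, 0.15],
    [0.05, 0.15, 0.80],
]))

n_states, n_frames = log_emit.shape
score = log_emit[:, 0].copy()            # start scores (uniform initial distribution)
back = np.zeros((n_states, n_frames), dtype=int)

for t in range(1, n_frames):             # dynamic programming over frames
    cand = score[:, None] + log_trans    # cand[i, j]: best path ending with i -> j
    back[:, t] = cand.argmax(axis=0)
    score = cand.max(axis=0) + log_emit[:, t]

path = [int(score.argmax())]             # trace back the best state sequence
for t in range(n_frames - 1, 0, -1):
    path.append(int(back[path[-1], t]))
print(path[::-1])                        # e.g. [0, 0, 1, 2, 2]
```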

High-dimensional word embeddings and attention mechanisms:

In 2000, Bengio authored the landmark paper “A Neural Probabilistic Language Model”, which introduced high-dimensional word embeddings as a representation of word meaning. Bengio’s insights had a profound impact on later natural language processing tasks, including language translation, question answering and visual question answering. His team also introduced a form of attention mechanism that led to breakthroughs in machine translation and became a key component of sequential processing with deep learning.
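
In modern terms, the core of that model is: map each word to a learned vector, concatenate the context vectors, and predict the next word with a softmax over the vocabulary. A minimal sketch with placeholder sizes and random toy data follows; it mirrors the architecture’s shape, not its original implementation.

```python
# Minimal sketch of a neural probabilistic language model: learned word
# embeddings, a hidden layer over the concatenated context, and a softmax
# over the vocabulary. Sizes and the toy data are placeholders.
import torch
import torch.nn as nn

vocab, dim, context = 100, 16, 3             # toy sizes

model = nn.Sequential(
    nn.Embedding(vocab, dim),                # word id -> dense vector
    nn.Flatten(),                            # concatenate the context vectors
    nn.Linear(context * dim, 64),
    nn.Tanh(),
    nn.Linear(64, vocab),                    # scores for the next word
)

ctx = torch.randint(0, vocab, (8, context))  # batch of 8 three-word contexts
target = torch.randint(0, vocab, (8,))       # the next word for each context
loss = nn.functional.cross_entropy(model(ctx), target)
loss.backward()                              # embeddings are trained end to end
print(loss.item())
```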

Generative adversarial networks (GANs):

Since 2010, Bengio’s work on generative deep learning, in particular the generative adversarial networks (GANs) developed with Ian Goodfellow, has sparked a revolution in computer vision and computer graphics. One fascinating application of GANs is that computers can actually generate original images, a kind of creativity often taken as a sign that machines possess human-like intelligence.
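
The adversarial training loop itself is compact: a generator maps noise to samples, a discriminator tries to tell real samples from generated ones, and each is trained against the other. Here is a minimal sketch on a one-dimensional toy problem, with illustrative architectures and hyperparameters.

```python
# Minimal GAN sketch on a 1-D toy problem: the generator learns to mimic
# samples from N(3, 1). Architectures and hyperparameters are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0           # the "data" distribution N(3, 1)
    fake = G(torch.randn(64, 8))

    # Discriminator: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into calling fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 3.0
```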

Yann LeCun

Convolutional neural network:

In the 1980s, LeCun developed convolutional neural networks, now a foundational model in the field. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to successfully train convolutional neural network systems on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, speech recognition, speech synthesis, image synthesis and natural language processing, used in areas such as autonomous driving, medical image analysis, voice assistants and information filtering.
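
Written in a modern framework, a digit recognizer in the spirit of LeCun’s networks takes only a few lines. The layer sizes below follow the well-known LeNet-5 outline; the input is a random stand-in for a batch of 28x28 grayscale digits.

```python
# A LeNet-style convolutional network: stacked convolution + pooling
# layers feeding a small classifier, following the LeNet-5 outline.
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 28x28 -> 24x24 -> 12x12
    nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 12x12 -> 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                            # ten digit classes
)

digits = torch.randn(32, 1, 28, 28)  # a fake batch of 28x28 grayscale digits
print(lenet(digits).shape)           # torch.Size([32, 10])
```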

Improvements to the backpropagation algorithm:

LeCun proposed an early version of the backpropagation algorithm and gave a concise derivation of it based on the variational principle. He described two simple methods for shortening learning time, thereby speeding up the algorithm.

Broadening the research field of neural networks:

LeCun also broadened the research field of neural networks, applying them as a computational model to a wider range of tasks. Many of the ideas and concepts he introduced in his early research are now foundational in AI. In image recognition, for example, he studied how neural networks can learn hierarchical features, an approach now used in many everyday recognition tasks. He also proposed deep learning architectures that can manipulate structured data, such as graphs.
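
LeCun’s structured-data work predates today’s graph neural networks, but the underlying idea, letting node features flow along the edges of a graph through learned transforms, can be illustrated with a minimal message-passing layer. This is a sketch in the modern formulation, not LeCun’s original architecture.

```python
# Minimal sketch of a neural network layer operating on graph data:
# each node averages its neighbors' features (plus its own) and passes
# the result through a learned transform.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],          # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))          # 8 features per node
W = rng.normal(size=(8, 8))          # learned weights (random here)

A_hat = A + np.eye(4)                          # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)      # normalize by node degree
H_next = np.maximum(0, A_hat @ H @ W)          # aggregate, transform, ReLU
print(H_next.shape)                            # (4, 8)
```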


Highlights of the 2019 Xinzhiyuan AI Technology Summit

On March 27, 2019, Xinzhiyuan gathered its AI forces once again, holding the 2019 Xinzhiyuan AI Technology Summit at the Beijing Taifu Hotel. With the theme “Intelligent Cloud • Core World”, the summit focused on the development of intelligent cloud and AI chips, reshaping the future AI landscape.

At the summit, Xinzhiyuan also released several AI white papers, focusing on the innovation and vitality of the industry chain, assessing the influence of AI unicorns, and helping China compete at the front of the world-class AI race.

Highlights:

iQiyi (full day):

https://live.iqiyi.com/s/19rsj6q75j.html

Toutiao Tech (morning):

m.365yg.com/i6672243313506044680/

Toutiao Tech (afternoon):

m.365yg.com/i6672570058826550030/