Compiled by Xinzhiyuan

Liu Xiaoqin

Science advances one funeral at a time: to make progress, new methods must be found

“Learning representations by back-propagating errors”, a 1986 paper co-authored by Geoffrey Hinton, is at the heart of today’s artificial intelligence explosion. But Hinton now says his breakthrough approach should be abandoned and new paths to AI should be found.

In an interview with Axios at an AI conference in Toronto last Wednesday, Hinton said he is now “deeply skeptical” of backpropagation algorithms. Backpropagation algorithms are behind the advances we see in AI today, from the ability to sort photos to the ability to talk to Siri. “My view is to throw it all away and start again,” Hinton said.

Other scientists at the conference said backpropagation still has a central role to play in the future of artificial intelligence. But to drive progress, Hinton says, entirely new approaches will have to be invented. “Max Planck said, ‘Science advances one funeral at a time.’ The future depends on graduate students who are very skeptical of everything I say.”

How it works: In backpropagation, labeled data, say a photo or a sound, is fed through the layers of a neural network loosely modeled on the brain. The network’s output is compared with the correct label, and the resulting error is propagated backward through the layers, adjusting the connection weights layer by layer until the network can perform the task with as few errors as possible.
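
To make that loop concrete, here is a minimal sketch of backpropagation in Python with NumPy. The task (XOR), the network size, the learning rate, and all the names in it are illustrative choices for this article, not anything taken from Hinton’s paper:

import numpy as np

# A tiny labeled dataset: inputs X and desired outputs y (the "labels").
# XOR is a classic example because a single-layer network cannot solve it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# Randomly initialized connection weights: one hidden layer, one output layer.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(10000):
    # Forward pass: compute the network's actual output.
    h = sigmoid(X @ W1 + b1)    # internal "hidden" units
    out = sigmoid(h @ W2 + b2)  # output units

    # Error: difference between actual and desired outputs.
    err = out - y

    # Backward pass: propagate the error from the output layer back
    # through the hidden layer, computing each weight's share of the blame.
    d_out = err * out * (1 - out)        # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Adjust every weight a little in the direction that reduces the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [0, 1, 1, 0] as the error shrinks

Each iteration does exactly what the paragraph above describes: compare the actual output with the label, push the error backward through the layers, and nudge every weight to reduce it.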

But for neural networks to become intelligent on their own, through what is known as “unsupervised learning,” a different mechanism is needed. “I think that means giving up on backpropagation,” Hinton said.

“I don’t think that [backpropagation] is how the brain works,” he said. “Our brains obviously don’t need to annotate all the data.”

Hinton et al. introduced the backpropagation algorithm into multi-layer neural network training

In 1986, Geoffrey Hinton co-authored the paper “Learning representations by back-propagating errors” with David E. Rumelhart and Ronald J. Williams. It was the first work to bring the backpropagation algorithm into the training of multi-layer neural networks, and it laid the foundation for the method’s popularization.

Abstract

We describe a new learning procedure, back-propagation, for networks of neuron-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize the difference between the actual output vector and the desired output vector. As a result of the weight adjustments, internal “hidden” units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
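
In symbols, using a standard textbook formulation rather than a quotation from the paper (c indexes training cases, j indexes output units, y is the actual output, d the desired output, w_{ji} a connection weight, and \varepsilon a small learning rate), the procedure minimizes the total error

E = \frac{1}{2} \sum_{c} \sum_{j} \left( y_{j,c} - d_{j,c} \right)^{2}

by repeatedly moving every weight a small step against its gradient:

\Delta w_{ji} = -\varepsilon \, \frac{\partial E}{\partial w_{ji}}

The partial derivatives are what the backward pass computes, layer by layer, which is why the hidden units end up encoding whatever features make the error easiest to reduce.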

The paper is available at: https://www.iro.umontreal.ca/~vincentp/ift3395/lectures/backprop_old.pdf

If you don’t get an internship at Google, blame Geoffrey Hinton for setting the bar too high

Geoffrey Hinton is one of the best-known researchers in the field of artificial intelligence. His work helped open up the world of deep learning we see today. He received his PhD in artificial intelligence in 1978, and in the four decades since he has played an important role in developments from backpropagation to Boltzmann machines. So the Google Brain team’s Reddit AMA a few days ago, which revealed that Hinton worked as an intern at Google in 2012, sparked a lot of interest.

Prompted by a question about “age limits for Google Brain interns,” Google Senior Fellow and head of Google Brain Jeff Dean explained that his team does not have any age limit for interns. Technically, he said, Hinton was his intern for a while in 2012.

“In the summer of 2012, our group hired Geoffrey Hinton as a visiting researcher, but for a variety of reasons he was classified as an intern,” Dean joked during the exchange. “There is no age limit for interns. What we want is for interns to be talented and eager to learn, like Geoffrey :).”

A year later, Google acquired Hinton’s then-startup DNNresearch to expand its deep learning capabilities. How much Google paid for DNNresearch is still a mystery, but coming a year after that internship, it looks like a good deal 🙂

The Google Brain team is at the heart of Google’s deep learning efforts and the home of TensorFlow, the most popular deep learning framework. If you apply to work there and don’t get an internship, don’t be sad: Geoffrey Hinton set the bar too high!

Hinton, the deep learning legend: inventor of backpropagation and contrastive divergence

Geoffrey Everest Hinton (born 6 December 1947) is a British-born computer scientist and psychologist, best known for his work on neural networks. Hinton is one of the inventors of the backpropagation and contrastive divergence algorithms and an active promoter of deep learning.

Hinton received a bachelor’s degree in experimental psychology from the University of Cambridge in 1970 and a PhD in artificial intelligence from the University of Edinburgh in 1978. He has since worked at the University of Sussex, the University of California, San Diego, the University of Cambridge, Carnegie Mellon University and University College London. He is the founder of the Gatsby Computational Neuroscience Unit and currently serves as a professor in the Department of Computer Science at the University of Toronto. Hinton is Canada’s leading academic in machine learning and the leader of the “Neural Computation and Adaptive Perception” program sponsored by the Canadian Institute for Advanced Research. Hinton joined Google in March 2013, when Google acquired DNNresearch, the company he founded.

Research interests

An accessible explanation of Hinton’s work can be found in two Scientific American articles published in September 1992 and October 1993. He has studied the use of neural networks for machine learning, memory, perception and symbol processing, and has published over 200 papers in these areas. He was one of the researchers who introduced the backpropagation algorithm into multi-layer neural network training, and he invented the Boltzmann machine with Terry Sejnowski. His other contributions to neural network research include distributed representations, time-delay neural networks, mixtures of experts, and Helmholtz machines. Hinton’s current work concerns unsupervised learning in neural networks with rich sensory input.

Awards

Hinton, the first recipient of the Rumelhart Prize, was elected a Fellow of the Royal Society in 1998.

Hinton received the 2005 IJCAI Award for Research Excellence for lifetime achievement, and was also the recipient of the 2011 Gerhard Herzberg Canada Gold Medal for Science and Engineering.

Anecdotes

Hinton is the great-great-grandson of the logician George Boole, whose work eventually became one of the foundations of modern electronic computing. He is also a descendant of the surgeon and author James Hinton.

(via: Wikipedia)

Read more about Geoffrey Hinton:

I’m a little embarrassed to be called the godfather of deep learning

The legend of Hinton, the godfather of neural networks: architecture, physics, philosophy, and finally artificial intelligence

Hinton’s latest Google Brain research: a super-large neural network with 137 billion parameters

