Source | AI Tech Base Camp (rgznai100)


Contributor | Peng Shuo



What is artificial intelligence? Why is artificial intelligence important? Should we fear artificial intelligence? Why is everyone suddenly talking about artificial intelligence?

You may have read online about how AI powers virtual assistants at Amazon and Google, or how it is poised to replace everyone’s jobs (a contested claim), but those articles rarely do a good job of explaining what AI actually is (or whether the robots really will take over). This article explains AI, and this concise guide will be updated and improved as the field evolves and important concepts emerge.


What is artificial intelligence?

Artificial intelligence is software, or a computer program, with a mechanism for learning. It then uses that knowledge to make decisions in new situations, as humans do. The researchers building this software try to write code that can read images, text, video or audio and learn something from it. Once a machine has learned, that knowledge can be put to use elsewhere. If an algorithm learns to recognize someone’s face, it can then find that person in Facebook photos. In modern AI, learning is often referred to as “training” (more on that later).

Humans naturally learn complex ideas: we can see an object such as an apple and later recognize a quite different apple as the same kind of thing. Machines are very literal; a computer has no flexible concept of “similar”. The goal of artificial intelligence is to make machines less literal. It’s easy for a machine to decide whether two images of apples, or two sentences, are exactly identical, but AI aims to recognize a picture of the same apple from a different angle or in different light; it captures the visual qualities that identify an apple. This is called “generalization”: forming an idea based on the similarities in data rather than on the particular images or text the AI has seen. That more general idea can then be applied to things the AI has never seen before.

“The goal of AI is to reduce complex human behavior to a form that can be computed,” said Alexander Rudnicky, a computer science professor at Carnegie Mellon University. “This in turn allows us to build systems that do complex activities that are useful to people.”


How far along is artificial intelligence today?

AI researchers are still grappling with the basics of the problem. How do we teach computers to recognize what they see in images and video? Then comes understanding: not just producing the word “apple,” but knowing that an apple is a food related to oranges and pears, that humans can eat apples, cook with them and bake them into apple pie, and relating all of that to the folklore of Johnny Appleseed, and so on. There is also the problem of understanding language: words carry multiple meanings depending on context, definitions keep evolving, and everyone speaks a little differently. How does a computer make sense of such a fluid, ever-changing construct?

Artificial intelligence progresses at different rates for different kinds of media. Right now we’re seeing remarkable growth in the ability to understand images and video, a field the industry calls computer vision. But that progress does little for other areas of artificial intelligence, such as the understanding of language, known as natural language processing. What is being built in these fields is narrow intelligence, meaning an AI that is powerful at processing images, audio or text but cannot learn in the same way from all three. A form of learning that is agnostic to the medium would be general intelligence, which is what we see in humans. Many researchers hope that advances in the individual fields will reveal more shared truths about how machines learn, eventually fusing into a unified approach to artificial intelligence.


Why is artificial intelligence important?

Once an AI learns how to recognize an apple in an image, or to transcribe a snippet of speech from an audio clip, that knowledge can be used in other software to make decisions that would otherwise fall to a human. It can be used to identify and tag your friends in Facebook photos, something you previously had to do by hand. It can recognize another car or a street sign from a self-driving car, or from your own car as you back up. It can be used to spot poor-quality produce that should be pulled from a farm’s output. These tasks, based purely on image recognition, used to be done either by the user or by someone working for the company that provides the software.

If a task saves time for the user, it’s a feature; if it saves time for people working at the company, or eliminates a job altogether, it’s a huge cost saving. Some applications, such as analyzing millions of data points within minutes of a sale, would be impossible without machines, which means the potential for insights that simply never existed before. These tasks can now be done quickly and cheaply by machines, anytime and anywhere. It is the replication of tasks once performed by humans, and the economic benefit of infinitely scalable, low-cost labor is undeniable.

Jason Hong, a professor at Carnegie Mellon University’s Human-Computer Interaction Institute, says that while AI can replicate human tasks, it also has the capacity to unlock new ones. “The car was a direct substitute for the horse, but over the medium and long term it brought many other uses: semi-trailer trucks for large hauls, furniture-moving vans, minivans, convertibles,” Hong said. “Similarly, AI systems will directly replace routine tasks in the short term, but in the medium to long term we will see uses for them as dramatic as the car’s.”

Just as Gottlieb Daimler and Carl Benz could not have anticipated how cars would redefine the way cities are built, or their effects on pollution and obesity, we have yet to see the long-term effects of this new workforce.


Why is AI so popular now, and not 30 (or 60) years ago?

Many of the ideas about how AI should learn are actually more than 60 years old. In the 1950s, researchers such as Frank Rosenblatt, Bernard Widrow and Marcian Hoff looked at how biologists thought the brain’s neurons worked and tried to express that mathematically. The idea: one big equation might not be able to solve everything, but what if we used many connected equations, the way the human brain does? The initial examples were simple: analyze the 1s and 0s coming over a digital phone line and predict what would come next. (That research, done by Widrow and Hoff at Stanford, is still used to reduce echo on phone connections.)

In 2006, 50 years after the Dartmouth conference, the participants reunited. From left: More, McCarthy, Minsky, Selfridge, Solomonoff

For decades, many in the computer science community thought the idea would never solve more complex problems; today it is the foundation of AI at the major tech companies, from Google and Amazon to Facebook and Microsoft. Looking back, researchers now realize that earlier computers were not powerful enough to simulate the billions of neurons in our brains, and that huge amounts of data are needed to train neural networks as we understand them today.

Those two factors, computing power and data, have only come together in the last 10 years. In the mid-2000s, the graphics processing unit (GPU) maker NVIDIA declared that its chips were well suited to running neural networks and began making it easier to run AI on its hardware. Researchers found that with faster, more complex neural networks they could achieve better accuracy.

Then in 2009, the artificial intelligence researcher Fei-Fei Li released a database called ImageNet, which contained more than 3 million images, organized and labeled. She believed that if these algorithms had more examples from which to find the relationships between patterns, it would help them grasp more complex ideas. She started a competition around ImageNet in 2010, and by 2012 a team led by researcher Geoff Hinton had used those millions of images to train a neural network that beat the competing approaches by a margin of more than 10 percentage points of accuracy. As Li had predicted, data was key. Hinton’s approach also stacked neural networks on top of one another: one would just find shapes, another would look at textures, and so on. These are called deep neural networks, or deep learning, and they are what you hear about in the news today when people talk about artificial intelligence. Once the tech industry saw the results, the AI boom began. Researchers who had worked on deep learning for decades became the tech industry’s new rock stars. By 2015, Google had more than 1,000 projects using some kind of machine learning technology.


Should we fear artificial intelligence?

After watching movies like The Terminator, it’s easy to fear an all-powerful, evil AI like Skynet. In the field of artificial intelligence, something like Skynet would be called an artificial superintelligence, or artificial general intelligence: software more capable than the human brain in every way.

Since computers scale, meaning we can build stronger, faster machines and connect them together, the fear is that the computing power of these machine brains could grow to an unfathomable level, and that if they were truly that smart they would be uncontrollable and would outmaneuver anyone who tried to shut them down. This is the doomsday that very smart people such as Elon Musk and Stephen Hawking fear. While these systems do possess intelligence in certain areas, most mainstream AI researchers dismiss the idea that we are, as Musk put it, “summoning the demon.” Yann LeCun, who leads Facebook’s AI research lab, says that even if researchers cracked the basic principles of learning, such as how a machine comes to understand the meaning behind patterns and turns that understanding into a functional view of the world, there is no evidence that a computer would have needs, desires, or a will to survive.

“We become more violent when we are threatened, when we are jealous, when we crave resources, when we prefer our close relatives to strangers, and so on. All of these behaviors were built into us by evolution for our survival. Intelligent machines will not have these basic behaviors unless we explicitly build them in,” he wrote on Quora.

There is no evidence that computers would perceive humans as a threat, because no such threat has been defined for them. Humans could perhaps define one, telling machines to operate within parameters that functionally resemble a will to survive, but otherwise that will does not exist.

“I said I’m not worried about AI becoming evil for the same reason I’m not worried about overpopulation on Mars,” said Andrew Ng, who helped found the Google Brain project and formerly led AI at Baidu. But there is one reason to fear AI: humans.

There is evidence that AI readily picks up human biases from the data it learns from. These biases can be harmless, such as identifying cats in pictures more often than dogs because the system was trained on more cat pictures. But they can also perpetuate stereotypes, such as an AI associating doctors with white men more often than with other genders or races. If an AI with that bias were in charge of hiring doctors, it could treat candidates who are not white men unfairly. An investigation by ProPublica found that an algorithm used in sentencing people convicted of crimes was racially biased, recommending harsher sentences for people of color. Health care data often leaves out women, especially pregnant women, leading to systems that work poorly when giving medical advice to those groups. Decisions like these used to be made by humans; now that we have far faster, more powerful machines making them, we need to make sure they are made fairly and in line with our ethics.

It’s not easy to tell whether an algorithm is biased, because deep learning involves millions of connected calculations, and it is very difficult to trace how each of those small decisions contributes to the larger one. So even when we know an AI has made a bad decision, we don’t know why or how, which makes it hard to build mechanisms that catch bias before it takes effect.

The issue is especially fraught in areas like self-driving cars, where every decision can be a matter of life and death. Early research has shown some promise in unwinding the complexity of the machines we create, but for now it is nearly impossible to know why the AI at Facebook, Google or Microsoft makes any given decision.



Functional AI Glossary:

Algorithm: A set of instructions a computer follows. An algorithm can be a simple single-step program or a complex neural network; the term is often used interchangeably with “model.”

Artificial intelligence: This is a catch-all term. Broadly speaking, it refers to software that imitates or replaces aspects of human intelligence. AI software can learn from data such as images or text, from experience, through evolution, or from the inventions of other researchers.

Computer vision: The field of AI research that explores recognizing and understanding images and video. It ranges from understanding what an apple looks like to understanding what an apple does and the ideas associated with it. It is a core technology in self-driving cars, Google image search, and automatic tagging on Facebook.

Deep learning: A field in which neural networks are layered in order to understand complex patterns and relationships in data. When the output of one neural network becomes the input of another, effectively stacking them, the resulting network is called “deep.”
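To make that “stacking” concrete, here is a minimal sketch of a small deep network in which each layer’s output feeds the next layer’s input. PyTorch is used purely for illustration (the article does not prescribe any framework), and the layer sizes are arbitrary.

```python
# A minimal sketch of "stacking" layers into a deep network.
# PyTorch is used only for illustration; all sizes are arbitrary.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 256),  # first layer: raw pixels -> 256 intermediate features
    nn.ReLU(),
    nn.Linear(256, 64),   # second layer consumes the first layer's output
    nn.ReLU(),
    nn.Linear(64, 10),    # final layer: 10 class scores (e.g. digit labels)
)

x = torch.randn(1, 784)   # a fake, flattened 28x28 image
scores = deep_net(x)      # the output of each layer becomes the next layer's input
print(scores.shape)       # torch.Size([1, 10])
```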

General intelligence: Sometimes referred to as “strong artificial intelligence,” a general intelligence would be able to learn ideas in one domain and apply them across different kinds of tasks.

Generative adversarial network: A system of two neural networks, one that generates output and another that checks whether that output is what is desired. For example, when trying to generate a picture of an apple, the generator produces an image, and the other network (called the discriminator) forces the generator to try again if it cannot recognize an apple in the image.
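Below is a heavily compressed sketch of that generator/discriminator loop, again using PyTorch only as an illustration; the network shapes, the random stand-in “real images” and the training settings are all invented for the example.

```python
# Sketch of the two-network adversarial loop (illustrative only).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 784)   # stand-in for a batch of real apple photos
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

for step in range(100):
    # 1. The discriminator learns to tell real images from the generator's fakes.
    fakes = generator(torch.randn(32, 16)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. The generator "tries again": it improves when the discriminator
    #    mistakes its output for a real image.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, 16))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```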

Machine learning: Machine learning (ML), a term often used interchangeably with artificial intelligence, is the practice of using algorithms to learn from data.

Model: A model is a machine learning algorithm that builds its own understanding of a topic, or its own model of the world.

Natural language processing: software for understanding the intentions and relationships of ideas in language.

Neural network: Algorithms built to simulate the way the brain processes information, through a network of connected mathematical equations. The data supplied to a neural network is broken down into smaller pieces and analyzed for underlying patterns thousands of times, depending on the complexity of the network. When the output of one neural network is fed to the input of another, the two are linked into layers, forming a deep neural network. Typically, the layers of a deep neural network analyze data at higher and higher levels of abstraction, meaning they discard data that is not needed and keep what is useful until they arrive at the simplest and most accurate representation of the data.
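For a plainer view of the “network of connected equations,” here is a tiny two-layer forward pass written with NumPy; the weights are random placeholders where a trained network would have learned values.

```python
# Two layers of weighted sums and nonlinearities, written out by hand.
# Random placeholder weights; a real network would learn these during training.
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

x = np.random.rand(784)                  # input data, e.g. a flattened image

W1 = np.random.randn(128, 784) * 0.01    # first layer of equations
b1 = np.zeros(128)
h = relu(W1 @ x + b1)                    # 128 intermediate features (a first abstraction)

W2 = np.random.randn(10, 128) * 0.01     # second layer consumes the first layer's output
b2 = np.zeros(10)
scores = W2 @ h + b2                     # 10 numbers summarizing the whole input

print(scores.round(3))
```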

Convolutional neural network: A neural network used primarily to recognize and understand images, video, and audio, because of its ability to process dense data such as images with millions of pixels or thousands of samples from an audio file.
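As a quick illustration (again a sketch in PyTorch, with made-up sizes), a single convolutional layer scans an image and produces several maps of local patterns such as edges or textures:

```python
# One convolutional layer applied to a fake RGB image (illustrative sizes only).
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
image = torch.rand(1, 3, 224, 224)   # one fake 224x224 RGB image
feature_maps = conv(image)           # 8 maps of local patterns found across the image
print(feature_maps.shape)            # torch.Size([1, 8, 224, 224])
```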

Recurrent neural network: A neural network used in natural language processing that analyzes data cyclically and in sequence, meaning it can process data such as words or sentences while preserving their order and context within a sentence.

Long short-term memory network (LSTM): A variant of the recurrent neural network that is meant to retain structured information over longer stretches of data. For example, an RNN could recognize all the nouns and adjectives in a sentence and check that they are used correctly, but an LSTM could remember the plot of a book.
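A minimal sketch of the two entries above: an LSTM (PyTorch, for illustration only) reads a sequence one step at a time while carrying a memory of what came before; the “sentence” here is just random vectors standing in for word embeddings.

```python
# An LSTM reading a 7-word "sentence" in order (illustrative sizes only).
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=32, batch_first=True)
sentence = torch.randn(1, 7, 50)          # 1 sentence, 7 words, 50-dim word vectors
outputs, (hidden, cell) = lstm(sentence)
print(outputs.shape)   # torch.Size([1, 7, 32]) -- one output per word, in order
print(hidden.shape)    # torch.Size([1, 1, 32]) -- a running summary of the whole sequence
```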

Reinforcement learning: An approach to machine learning in which an algorithm learns from experience. A reinforcement learning algorithm controls some aspect of an environment, such as a character in a video game, and learns by trial and error. Because video games are highly repeatable, serve as models of a three-dimensional world, and run on computers, many reinforcement learning breakthroughs have come from algorithms playing them. RL was one of the main kinds of machine learning behind DeepMind’s AlphaGo, which beat world champion Lee Sedol at Go. In the real world, it has been demonstrated in areas such as cybersecurity, where software has learned to trick antivirus programs into treating malicious files as safe.
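Here is a toy sketch of that trial-and-error loop: tabular Q-learning on a tiny five-cell corridor in which the agent is rewarded for reaching the right-hand end. The environment and every parameter are invented purely for illustration.

```python
# Toy Q-learning: learn by trial and error to walk right along a 5-cell corridor.
import random

n_states = 5
actions = [-1, +1]                            # move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1         # learning rate, discount, exploration

for episode in range(500):
    s = 0                                     # always start at the left end
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)        # occasionally explore at random
        else:
            a = max(actions, key=lambda act: q[(s, act)])  # act on experience so far
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Nudge the value estimate for (state, action) toward what was just observed.
        best_next = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the learned policy is to move right (+1) from every cell.
print({s: max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)})
```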

Superintelligence: Artificial intelligence more powerful than the human brain. It’s hard to define because we still can’t objectively measure what the human brain can do.

Supervised learning: Machine learning in which the data provided for training is organized and labeled. If you were building a supervised learning algorithm to recognize cats, you would train it on, for example, 1,000 pictures labeled as containing cats.
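A minimal sketch of the idea, using scikit-learn for illustration: the inputs are made-up two-number summaries of images, and the labels (1 for cat, 0 for not cat) are supplied up front, which is what makes the learning supervised.

```python
# Supervised learning on toy, pre-labeled data (illustrative only).
from sklearn.linear_model import LogisticRegression

features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # toy image summaries
labels = [1, 1, 0, 0]                                        # human-provided labels

model = LogisticRegression()
model.fit(features, labels)              # training on labeled examples
print(model.predict([[0.85, 0.15]]))     # -> [1], i.e. "cat"
```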

Training: The process of making an algorithm learn by providing data.

Unsupervised learning: A type of machine learning in which the algorithm is given no information about how it should categorize the data and must find the relationships on its own. AI researchers such as Facebook’s LeCun see unsupervised learning as the holy grail of AI research, because it is much closer to how humans learn naturally. “In unsupervised learning, the brain is much better than our models,” LeCun told IEEE Spectrum. “That means our artificial learning systems are missing some very basic principles of biological learning.”
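For contrast with the supervised example above, here is a minimal unsupervised sketch: k-means clustering (scikit-learn, used only for illustration) groups the same kind of toy points into two clusters without ever seeing a label.

```python
# Unsupervised learning on toy, unlabeled data (illustrative only).
from sklearn.cluster import KMeans

points = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]   # no labels provided
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)   # e.g. [1 1 0 0] -- two groups discovered from structure alone
```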