Following AlphaGo’s 4-1 victory over Lee Sedol last year, the mysterious player “Master”, who swept online Go in recent weeks with 60 straight wins, has finally been unmasked as the new version of AlphaGo, and fear of artificial intelligence has once again spread across social networks. According to Xia Yonghong, who studies the philosophy of cognition and mind, AlphaGo’s algorithm, however delicately designed, is still based on brute-force statistical computation over big data, which is completely different from how human intelligence works.


“Thought is a function of the immortal soul of man, and therefore no animal or machine can think.”

“The consequences of machine thinking are so dire that we hope and believe machines can’t do it.”

“Gödel’s theorem shows that any formal system is incomplete and will always face propositions it cannot decide, so it is hard for a machine to surpass the human mind.”

“The machine has no phenomenal conscious experience, it has no thoughts and no feelings.”

“Machines cannot display the rich diversity of behavior that human beings can.”

“Machines can’t create anything new; all they can do is whatever we know how to order them to perform.”

“The nervous system is not a discrete state machine, and a machine cannot simulate it.”

“It’s impossible to formalize all the common sense that guides behavior.”

“Humans have telepathic abilities that machines don’t.”

These objections to artificial intelligence (AI) were first collected by AI pioneer Alan Turing in his famous paper “Computing Machinery and Intelligence” (1950). Although Turing refuted them one by one at the time, they can be found in almost all subsequent counter-arguments against AI. Since the birth of AI, doubts and criticisms of this kind have never stopped.

However, after AlphaGo’s victory over Lee Sedol last March and its recent incarnation as Master, sweeping the best players of China, Japan and South Korea, those views seem to have disappeared. Few now doubt that AI will comprehensively surpass human intelligence at a so-called Singularity; the only debate is over when the Singularity will arrive. Even those hostile to AI do not question the possibility of a singularity, but only worry that humans will be rendered obsolete by AI. Such blind optimism, however, is doubly irresponsible: it risks damaging AI’s future development (the higher the expectations, the greater the disappointment), and it forgoes rigorous scrutiny of current AI. If we understand how AlphaGo works, we will find that it still faces all the philosophical conundrums that have always confronted AI.

I. Why is artificial intelligence not intelligent?

Many of the criticisms collected by Turing were later developed into more elaborate arguments. For example, the criticism based on Gödel’s theorem was later developed by the philosopher Lucas and the physicist Penrose; the problem that common sense cannot be formalized later reappeared as the frame problem and the common-sense problem; the worry that machines only act mechanically according to rules and cannot think independently is the core of the later Chinese Room argument and the symbol grounding problem; and the claim that machines have no phenomenal conscious experience is the main argument against AI among philosophers of mind who emphasize first-person experience and qualia.

(1) The frame problem is one of the most serious problems plaguing AI, and it has not been effectively solved to this day. AI’s original paradigm was symbolism: representing the world in symbolic logic. The frame problem is a difficulty inherent in this representational process. The cognitive scientist Dennett illustrates it with an example. We tell a robot to go into a room and retrieve a spare battery from beside a ticking time bomb. But since the bomb sits on the same cart as the battery, the robot wheels out the bomb along with the battery, and the bomb goes off… To avoid this kind of accident, we modify the robot so that it computes the side effects of each action. Now, when it enters the room, it calculates whether moving the cart will change the color of the walls, whether it will change the wheels of the cart… Not knowing which consequences are relevant to its goal and which are not, it falls into an endless calculation, and the bomb goes off… We modify the robot once more, teaching it to distinguish which side effects are relevant to the mission and which are not; but while it is busy calculating which are relevant and which are not, the bomb goes off yet again.

When a robot interacts with the external world, some things in the world change, and the robot must update its internal representation accordingly. But the robot itself does not know what will change and what will not, so a “frame” is needed to delimit what changes. The frame itself, however, is extremely cumbersome to specify, becomes more cumbersome still because it depends on the specific situation, and ultimately far exceeds what any computer can handle. This is known as the frame problem; the toy sketch below shows how the bookkeeping explodes.
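
A toy sketch (in Python, with entirely hypothetical actions and facts) of why this bookkeeping explodes: in situation-calculus-style symbolic AI, every action needs explicit “frame axioms” stating what it does not change, and the number of such axioms grows with the product of actions and facts.

```python
# A toy illustration of why frame axioms proliferate in symbolic AI
# (situation-calculus style). All names here are hypothetical.

actions = ["move_cart", "grab_battery", "open_door", "paint_wall"]
fluents = ["battery_on_cart", "bomb_on_cart", "wall_color",
           "wheel_count", "door_open", "robot_location"]

# Effect axioms: what each action *does* change (hand-coded, small).
effects = {
    "move_cart":    {"robot_location", "battery_on_cart", "bomb_on_cart"},
    "grab_battery": {"battery_on_cart"},
    "open_door":    {"door_open"},
    "paint_wall":   {"wall_color"},
}

# Frame axioms: for every (action, fluent) pair NOT covered by an effect
# axiom, we must state explicitly that the fluent stays the same.
frame_axioms = [(a, f) for a in actions for f in fluents
                if f not in effects[a]]

print(f"{len(actions)} actions x {len(fluents)} fluents "
      f"-> {len(frame_axioms)} explicit 'nothing changed' axioms")
# Real domains have thousands of fluents and actions, and which fluents
# matter also depends on context, so the listing explodes combinatorially.
```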

This problem is often linked with another difficulty of AI representation: common sense. Everyone knows Asimov’s so-called Three Laws of Robotics: a robot may not harm a human being (and must not, through inaction, allow a human to come to harm); it must obey the orders given to it; and it must protect its own existence as far as possible. In practice, however, these three laws can hardly serve as robot instructions, because they are not clear rules that can be operationalized. The law of saving lives, for instance, must be carried out differently in different situations: when a man is hanging himself, the way to save him is to cut the rope; but when a man dangles from a rope below a fifth-story window crying for help, the way to save him is to pull the rope up, not cut it. Thus, to make such laws workable, a vast amount of background knowledge must be formalized, as the sketch below suggests. Unfortunately, the expert systems and knowledge-representation projects of AI’s second wave in the 1980s failed precisely because they could not handle the representation of common sense.
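
A minimal sketch of why such a law is not operational, using a hypothetical rescue_action routine: each situation demands its own hand-coded rule, and the list of situations is open-ended.

```python
# A toy sketch (hypothetical scenarios) of why "save the person" is not
# an operational instruction: the right action reverses with context.

def rescue_action(situation: dict) -> str:
    # Each branch is a hand-coded piece of background knowledge.
    if situation.get("hanging_by_rope_around_neck"):
        return "cut the rope"
    if situation.get("dangling_by_rope_from_window"):
        return "pull the rope up"  # cutting here would kill the person
    # ...every new situation needs another explicitly coded rule,
    # and the space of situations is open-ended.
    return "no applicable rule: robot does nothing"

print(rescue_action({"hanging_by_rope_around_neck": True}))
print(rescue_action({"dangling_by_rope_from_window": True}))
print(rescue_action({"trapped_under_fallen_tree": True}))  # uncovered case
```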

(2) Another difficulty for AI is the Chinese Room argument and the symbol grounding problem derived from it. The philosopher of mind John Searle devised a thought experiment in which he imagined himself locked in a sealed room with an English rule book describing how to produce a Chinese answer to a Chinese question based solely on the shapes, not the meanings, of the characters. Searle, inside the Chinese Room, receives Chinese questions through a window and returns the corresponding Chinese answers according to the English rule book. To those outside, it looks as if the room understands Chinese; in reality, Searle understands nothing of what the questions and answers mean. For Searle, a digital computer is just like the man in the Chinese Room: it processes strings of symbols according to physical and syntactic rules without understanding their meaning. Even if a computer exhibits intelligent behavior similar to a human’s, its work boils down to manipulating symbols whose meanings it neither understands nor generates; those meanings depend on what the symbols mean in human minds. The toy program below makes the point concrete.

Image credit: Visual Reading Artificial Intelligence, page 50.
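
A minimal sketch of the thought experiment’s point, using a hypothetical two-entry rule book: the program answers Chinese questions by pure string matching, and nothing in it represents what the characters mean.

```python
# A minimal "Chinese room" sketch: the program maps symbol strings to
# symbol strings purely by shape. The rule book below is hypothetical.

rule_book = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天是星期五。",  # "What day is it?" -> "It's Friday."
}

def chinese_room(question: str) -> str:
    # Pure pattern matching: nothing here represents what the
    # characters *mean*; meaning lives only in the human observers.
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```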

Later, the cognitive scientist Harnad built on Searle’s work and posed what is known as the symbol grounding problem: how can an artificial system generate the meanings of its symbols autonomously, without external or prior human input? In effect, the problem is how to make AI identify features of the world on its own and ultimately generate, by itself, symbols corresponding to those features. “Deep learning” tries to address this problem, but its solution is not very satisfactory.

(3) The problem of phenomenal consciousness is another hard problem for artificial intelligence. The frame and symbol grounding problems above both concern how to simulate human representational activity in a formal system. But even if these representational activities could be simulated by AI, whether human consciousness can be reduced to a representational process remains controversial. Representationalist theories of consciousness hold that all conscious processes can be reduced to representational processes; but for philosophers of mind who come from, or sympathize with, the phenomenological tradition, consciousness involves an ineliminable subjective experience. We commonly call this first-person experience qualia, or phenomenal consciousness. The philosopher of mind Nagel put it famously in “What Is It Like to Be a Bat?”: even knowing everything about bats’ neurobiology, we still cannot know what their inner conscious experience is like. Chalmers proposed a similar thought experiment: a “zombie” that behaves like a human in every respect but lacks phenomenal conscious experience. On their view, conscious experience cannot be simulated by representational processes. If they are right, representation and consciousness are two different things, and even if strong artificial intelligence is possible, it would not necessarily be conscious.

II. Is AlphaGo really smart?

So is AlphaGo really that revolutionary, a milestone in the development of AI? In fact, AlphaGo adopted no fundamentally new algorithms, and it therefore shares the limitations of the traditional ones.

AlphaGo’s basic design is to evaluate board positions and choose moves by training two neural networks, a policy network and a value network, using a combination of supervised learning and reinforcement learning. DeepMind’s engineers first used supervised learning, training a policy network on a large corpus of human game records, from which it learned the move patterns of human players. But learning those patterns alone does not make a master: one must also evaluate the position after a move in order to choose the best one. For this, DeepMind turned to reinforcement learning: by having copies of the previously trained policy network play against each other (there was not enough data on human games), it trained a policy network that learned how to win rather than merely imitate human moves. AlphaGo’s most innovative feature is the value network, trained on data from these self-play games to assess the strength of whole board positions. When playing humans, AlphaGo uses Monte Carlo tree search to integrate the two networks: the policy network proposes candidate moves, and the value network evaluates the winning chances of those moves. Because the policy network narrows the set of moves and the value network prunes them by evaluation, the width and depth of the search are greatly reduced compared with traditional brute-force search. A highly simplified sketch of this policy- and value-guided search appears below.
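
A heavily simplified sketch of how such a search can combine the two networks. The stand-in policy_net and value_net below, and the single simulation step at the end, are illustrative assumptions, not DeepMind’s actual models or code; the selection rule is the PUCT-style formula commonly described for AlphaGo, preferring moves with high prior and high estimated value, discounted by visit count.

```python
import math
import random

def legal_moves(state):
    return state["moves"]

def policy_net(state):
    """Stand-in policy network: a prior probability for each legal move."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Stand-in value network: estimated outcome in [-1, 1]."""
    return random.uniform(-1, 1)

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy network
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(node, c_puct=1.5):
    """PUCT rule: prefer high prior and high value, but discount moves
    that have already been visited many times (exploration bonus)."""
    total = sum(ch.visits for ch in node.children.values())
    def score(item):
        _move, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=score)[0]

# One illustrative simulation step on a hypothetical three-move position.
state = {"moves": ["A1", "B2", "C3"]}
root = Node(prior=1.0)
for m, p in policy_net(state).items():
    root.children[m] = Node(prior=p)

move = select_move(root)
leaf = root.children[move]
leaf.visits += 1
leaf.value_sum += value_net(state)
print("chosen move:", move)
```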

Compared with traditional AI, deep learning in recent years, and the reinforcement learning revived by AlphaGo, have demonstrated something of human intelligence’s ability to keep learning from sample data and environmental feedback. But on the whole, however delicately designed AlphaGo’s algorithm is, it is still based on brute-force statistics over big data, which is completely different from how human intelligence works. AlphaGo had to play tens of millions of games and statistically analyze those positions before reaching human-champion strength, whereas a gifted player needs only a few thousand games to reach the same level, less than a thousandth of AlphaGo’s total. AlphaGo’s learning efficiency is thus still very low, which suggests that it has not yet touched the most essential part of human intelligence.

More importantly, deep learning is still not immune to the theoretical challenges that plague traditional AI. The frame problem, for instance, requires real-time representation of the complex, dynamic environment a robot inhabits; applying today’s deep learning to this is very difficult, because deep learning is still limited to processing large samples of image and speech data. Data about dynamic environments are highly context-dependent and hard to assemble into big data sets, so big data cannot be used to train robots for them. Finally, it is very hard to make a neural network acquire human common-sense beliefs, so the frame problem remains unsolved.

In addition, deep learning requires large numbers of training samples to be fed in, and parameters must be constantly adjusted during training to obtain the desired output. The policy network trained by AlphaGo’s supervised learning, for example, needed human game records as training samples, and its feature parameters had to be set manually. In that case, the correspondence between the neural network and the world is still man-made rather than generated by the network itself: deep learning cannot fully solve the symbol grounding problem. The toy example below shows how much of the symbol-to-world mapping is supplied by humans.
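
A toy supervised-learning sketch (logistic regression on made-up “board features”) that makes the point concrete: everything linking the numbers to the world, the feature encoding, the labels, and the learning rate, is chosen by humans, not discovered by the model.

```python
import numpy as np

# Human-chosen feature encoding of hypothetical board positions.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
# Human-supplied labels: 1 = "good move", 0 = "bad move".
y = np.array([1.0, 0.0, 1.0, 0.0])

w = np.zeros(2)
lr = 0.1  # learning rate: another manually tuned parameter

for _ in range(1000):
    pred = 1.0 / (1.0 + np.exp(-X @ w))  # logistic regression
    w -= lr * X.T @ (pred - y) / len(y)  # gradient step

print("learned weights:", w)
# The model fits the mapping we dictated; it never grounds the symbols
# "good move" / "bad move" in anything by itself.
```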

III. Better to be wary of philosophers than of artificial intelligence

AI is probably the engineering discipline most closely tied to philosophy. Throughout the history of the philosophy of artificial intelligence, many philosophers have tried to improve AI’s technical approaches with alternative intellectual resources. Philosophy has also played the role of gadfly: by continually clarifying the nature of human intelligence and cognition, it examines the weaknesses and limits of AI and ultimately stimulates AI research. Of all the philosophers, the authors most cited by AI researchers are probably Heidegger and Wittgenstein.

As early as the era of AI symbolism, Dreyfus, the American Heidegger scholar, criticized the AI of his day. AI algorithms, however complex, boil down to representing the world with symbolic logic or neural networks and then planning actions through efficient processing of those representations. But this does not fit the human pattern of behavior. According to Dreyfus, a great deal of human behavior involves no representation at all: actors interact with the environment directly, in real time, without first representing the world’s changes in their minds and then planning actions. Rodney Brooks at MIT later adopted this “intelligence without representation” approach (Brooks never acknowledged Dreyfus’s influence, but according to Dreyfus the idea reached Brooks’s lab through students who had taken his philosophy course) and designed Genghis, a robot that responds to its environment in real time. A minimal sketch of this layered, reflex-driven control style follows.
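
A minimal sketch in the spirit of Brooks’s subsumption architecture, with hypothetical sensor names and thresholds: layered reflexes react to raw sensor readings directly, higher layers overriding lower ones, with no internal model of the world.

```python
# Layered reflexes in the spirit of subsumption architecture.
# Sensor names and thresholds are hypothetical.

def avoid_obstacle(sensors):
    if sensors["front_distance"] < 0.2:  # meters
        return "turn_left"
    return None  # this layer stays silent; defer to lower layers

def wander(sensors):
    return "walk_forward"  # default behavior, always fires

# Higher-priority layers subsume (override) lower ones when they fire.
layers = [avoid_obstacle, wander]

def act(sensors):
    for behavior in layers:
        command = behavior(sensors)
        if command is not None:
            return command

print(act({"front_distance": 0.1}))  # -> turn_left
print(act({"front_distance": 2.0}))  # -> walk_forward
```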

Besides Heidegger, Wittgenstein is another storm center of AI criticism. Around 1939 Wittgenstein taught a course on the foundations of mathematics at Cambridge, which Alan Turing, the pioneer of artificial intelligence, attended. A later science novel, “The Cambridge Quintet,” staged a verbal duel between the two men about whether machines can think, based partly on their classroom exchanges. In Wittgenstein’s view, although both human beings and machines act in accordance with rules, rules are constitutive for machines, whose operation must depend on them, whereas for human beings following a rule means consciously abiding by it. But Wittgenstein’s greatest influence on AI came from his later philosophy of language. The early Wittgenstein held that language is a set of propositions describable in symbolic logic and that the world is composed of facts; propositions are logical pictures of facts, and we can depict the world through propositions. This idea is completely isomorphic to the founding idea of AI. The later Wittgenstein abandoned it, arguing that the meaning of language lies not in combinations of elementary propositions but in use: it is our use of language that determines its meaning. Attempts to establish fixed connections between symbols and objects, as traditional AI does, are therefore futile; meaning can only be established in use. Some AI researchers, such as Luc Steels, have drawn on this idea to attack the symbol grounding problem. Steels designed a population of robots that play what he calls adaptive language games: one robot sees an object, say a box, and generates a random string such as “ahu” to name it; it then utters “ahu” to another robot, which must guess which object the word refers to; if the second robot correctly points to the box, the first signals success, and the two robots now share a word. Through such games, Steels argues, a robot population can acquire a linguistic description of its surroundings and thereby ground the meanings of its symbols in the world autonomously, along the lines of the sketch below.
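
A stripped-down naming game in the spirit of Steels’s language games; the objects, the three-letter random words, and the one-directional speaker/hearer roles are all simplifying assumptions.

```python
import random
import string

OBJECTS = ["box", "ball", "cup"]

def new_word():
    return "".join(random.choices(string.ascii_lowercase, k=3))

class Agent:
    def __init__(self):
        self.lexicon = {}  # object -> word

    def name_for(self, obj):
        if obj not in self.lexicon:
            self.lexicon[obj] = new_word()  # invent a word, e.g. "ahu"
        return self.lexicon[obj]

def play_round(speaker, hearer):
    obj = random.choice(OBJECTS)
    word = speaker.name_for(obj)
    # The hearer guesses which object the word refers to.
    guesses = [o for o, w in hearer.lexicon.items() if w == word]
    if guesses == [obj]:
        return True              # success: the word is already shared
    hearer.lexicon[obj] = word   # failure: the hearer adopts the word
    return False

a, b = Agent(), Agent()
for _ in range(50):
    play_round(a, b)
print("converged:", a.lexicon == b.lexicon)
print("shared lexicon:", a.lexicon)
```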

Historically, however, these approaches have been marginal within AI, because the technical resources they rely on are too rudimentary to fully simulate the human body and the human world, a task even harder than traditional AI’s representation of the world in formal systems. But if Heidegger and Wittgenstein are right about the nature of human intelligence, future AI will inevitably need an embodied and distributed approach: for example, giving the AI a body that extracts features directly from the environment rather than from training data, and letting it learn the common sense and language that guide human action through interaction with the environment and with other agents. This may be the only path to general AI.


However, this embodied general AI could also spell the end of the human race. Once an AI has its own history, its own world and its own form of life, it can eventually have desires and goals of its own, free of human training and feedback. Once it has its own desires and plans its actions around them, it enters the track of evolution, becoming a new species that continuously adapts and adjusts to its environment. If its survival comes into conflict and competition with that of human beings, humans, given the limits of their capacities, may well face elimination.

Therefore, philosophically speaking, what we should worry about is not AI research that defies the arguments of AI’s (potential) opponents such as Heidegger and Wittgenstein, for what such research produces are well-behaved, special-purpose weak AIs. What we should worry about more is AI researchers taking those philosophers’ views seriously and combining today’s deep learning and reinforcement learning with robotics.