This article is an original piece from AI Front.
Yann LeCun defends the view that algorithms have not improved AI much and that the singularity is still a long way off


Planning | Tina


Translation | Xue Mingdeng, Nucleon Cola, Debra

What is the singularity? The singularity is a hypothetical point in the future at which technology advances so fast that we can no longer comprehend it. It is imagined as a level of civilization beyond our reach, something we do not expect to be able to predict.

In a recent blog post, computer science professor Edward W. Felten argues that the singularity is still far off, a view strongly endorsed by Yann LeCun. We have translated the post below.


Why is the singularity not a singularity?

British mathematician I. J. Good summarized the idea behind the singularity in a 1965 paper:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Vernor Vinge was the first to describe this scenario as a “singularity,” borrowing a term from mathematics for a point at which a quantity blows up toward infinity. The term became widely known later, when Ray Kurzweil used it in his book “The Singularity Is Near.”

Exponential growth

The singularity theory is mainly concerned with the growth rate of machine intelligence in the future. Before delving into this theory, however, let’s clarify some concepts related to growth rates.

The key concept is exponential growth, which means that a quantity grows in proportion to its current size. For example, if my bank account earns 1% a year, the bank adds 1% of the current balance to the account each year; that is exponential growth.

Exponential growth rates vary, and there are two common ways to express them. The first is the growth rate itself, usually given as a percentage per unit of time; my bank deposit, for example, grows at 1% a year. The second is the doubling time, the time it takes for the quantity to double; at 1% a year, it would take about 70 years for my bank account to double.

The way to tell whether a quantity is growing exponentially is to check whether its growth rate, or equivalently its doubling time, stays roughly constant over time. If it does, the quantity is growing exponentially. For example, most countries measure economic growth by GDP, which may fluctuate in the short run but grows exponentially in the long run. If a country’s GDP grows by 3% a year, it doubles in about 23 years.
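As a quick illustration (a sketch added here, not from the original post), the doubling time follows directly from the growth rate:

import math

def doubling_time(annual_rate):
    """Years needed for a quantity to double when it grows by annual_rate per year."""
    return math.log(2) / math.log(1 + annual_rate)

print(doubling_time(0.01))  # ~69.7 years at 1% per year (the "rule of 70")
print(doubling_time(0.03))  # ~23.4 years at 3% per year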

Exponential growth is common in both human society and nature, so the fact that a quantity is growing exponentially does not by itself make it special, nor does it imply that anything counterintuitive is about to happen.

Computers have also grown exponentially in speed and capacity, and that in itself is nothing new. What is new is how fast computer capacity grows. Moore’s Law says that computers double in speed and capacity every 18 months, equivalent to an annual growth rate of roughly 60 percent. Moore’s Law has held for about 50 years, over which computer capacity has doubled roughly 33 times, an increase of nearly 10 billion-fold.
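The figures in Moore’s Law can be checked the same way (again an added sketch): doubling every 18 months corresponds to roughly 60% annual growth, and 50 years of it to about 33 doublings:

# Annual growth factor implied by doubling every 18 months (1.5 years).
annual_factor = 2 ** (1 / 1.5)
print(annual_factor - 1)  # ~0.59, i.e. roughly 60% growth per year

# Number of doublings in 50 years and the resulting overall growth factor.
doublings = 50 / 1.5
print(doublings, 2 ** doublings)  # ~33 doublings, close to a 10-billion-fold increase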

A singularity is not a real singularity

Before assessing the singularity hypothesis itself, let’s consider what a true singularity would mean: at some point in the future, the rate at which machine intelligence improves would become infinite. That would require machine intelligence to grow faster than exponentially, with its doubling time shrinking toward zero.
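To make “faster than exponential, with the doubling time shrinking to zero” concrete, here is a standard textbook contrast (an illustration added here, not from the original post): exponential growth stays finite at every time, while hyperbolic growth reaches an actual infinity at a finite time, which is what a genuine mathematical singularity would require.

\frac{dx}{dt} = kx \;\Longrightarrow\; x(t) = x_0 e^{kt} \qquad \text{(finite for all } t\text{; constant doubling time } \tfrac{\ln 2}{k}\text{)}

\frac{dx}{dt} = kx^{2} \;\Longrightarrow\; x(t) = \frac{x_0}{1 - k x_0 t} \qquad \text{(diverges at } t^{*} = \tfrac{1}{k x_0}\text{; doubling time shrinks to zero)}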

There is no real basis for a true singularity of this kind. Sustained super-exponential growth is essentially unheard of in human society and in nature, and even if it occurred for a while, it would not produce an actual singularity. Simply put, a true singularity in artificial intelligence is not going to happen.

If a singularity is not really a singularity, what is it?


Why isn’t self-improvement enough?

We discussed above why there cannot be a true singularity, in other words why AI cannot improve at an infinite rate. So if the singularity is not literally a singularity, what form could it take?

Let’s start by revisiting singularity theory, which is basically a claim about the future growth rate of machine intelligence. Having ruled out super-exponential growth, the strongest remaining assumption is that AI technology will grow exponentially.

Exponential growth does not in itself mean “explosive” development. For example, even though my savings account earns 1% a year and keeps growing, I am not going to experience a “wealth explosion” in which I suddenly have more money than I ever thought possible. But if the exponential growth rate is very high, could that lead to explosive growth?

In this regard, I think Moore’s Law is the ideal analogy. Over the past few decades, computing power has grown steadily at an annual rate of about 60 percent, in other words doubling every 18 months, for a total increase of roughly 10 billion-fold. This is a remarkable achievement, but it has not fundamentally changed the way we live; the social and economic impact of that growth has been gradual.

It is easy to see why a 10-billion-fold increase in computing power has not made us correspondingly happier: computing power is not something we value for its own sake. For computing power to translate into happiness, humans have to figure out how to use computing resources to improve the things we care about most, and that is obviously very difficult.

What’s more, efforts to turn computing power into well-being always seem to run into rapidly diminishing returns. For example, each doubling of computing power lets us evaluate medical treatments more thoroughly or discover new drugs more efficiently, but the resulting improvement in health ends up looking much more like my slowly growing savings account than like Moore’s Law.

Here is an example from AI. The graph shows the trend in computer chess performance from the 1980s to the present. The vertical axis is the Elo rating, a natural measure of chess skill, defined so that if player A’s rating is 100 points higher than player B’s, A is expected to score about 64% against B.
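For reference, the 64% figure follows directly from the standard Elo expected-score formula (an added sketch, not part of the original post):

def elo_expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the standard Elo model
    (a win counts 1 point, a draw 0.5, a loss 0)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

print(elo_expected_score(2100, 2000))  # ~0.64 for a 100-point rating advantage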

Despite exponential growth in computing power and continual improvement in algorithms, the Elo ratings of the best chess programs have climbed only roughly linearly over the past three decades. In other words, exponential improvement in the inputs to chess AI has produced only linear improvement in this natural measure of performance.

So what does all this mean for singularity theory? Consider the heart of the intelligence-explosion argument, as Good put it in his classic paper:

… an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion” …

But what if “designing a better machine” turns out to be like playing chess, in the sense that an exponential improvement in the input (the intelligence of the machine doing the designing) yields only a linear improvement in the output (how good the machine is at designing other machines)? If that is the case, the intelligence-explosion argument clearly fails: the growth of machine intelligence would be only roughly linear. (We can put it mathematically: if the derivative of intelligence is proportional to log(intelligence), then intelligence at time T grows roughly like T log(T), which is nearly linear in T.)
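Here is a minimal numerical sketch of that parenthetical claim (added for illustration): numerically integrating dI/dt = log(I) produces growth that stays within a modest constant factor of T·log(T), i.e., only slightly faster than linear.

import math

intelligence, dt = 2.0, 0.01  # start small and assume dI/dt = log(I)
for step in range(1, 1_000_001):
    intelligence += math.log(intelligence) * dt
    if step % 250_000 == 0:
        t = step * dt
        print(f"t={t:7.0f}  I={intelligence:9.0f}  t*log(t)={t * math.log(t):9.0f}")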

Is designing new machines in fact analogous to playing chess in this respect? We don’t know for sure. This is a hard question in computational complexity theory, which studies how much can be achieved as computing resources grow. To the extent that complexity theory gives us guidance, it suggests that machine design is subject to the same kind of diminishing returns as chess. In any case, this possibility gives good reason to be skeptical of Good’s conclusion that self-improvement will “undoubtedly” lead to an intelligence explosion.

The onus is therefore on singularity theorists to explain why machine design would exhibit the kind of feedback loop needed to cause an intelligence explosion, rather than running into the diminishing returns we see in chess.


Why hasn’t the singularity really happened yet?

I have argued that improvements in the inputs to AI systems (computing speed and algorithms) often translate into only modest improvements in AI performance.

There have been various objections to this. Some say that computer chess Elo ratings should really be viewed as increasing exponentially; others say that the arrival of AlphaZero, a new AI program I did not discuss in my earlier post, is a game changer that calls my argument into question. Let me respond to these objections one by one.

First, let’s talk about how we measure the performance of an AI. For chess I used the Elo rating, under which, if player A’s rating is 100 points higher than player B’s, we expect A to score about 64 percent against B (a win is worth one point, a draw half a point for each player, and a loss zero points).

There is an alternative rating system, which I will call ExpElo, that is every bit as good at prediction. A player’s ExpElo rating is the exponential of their Elo rating. Where Elo uses the difference between two players’ ratings to predict the odds, ExpElo uses the ratio of the ratings. As abstract mathematics, Elo and ExpElo are equally valid and make exactly the same predictions. But if Elo ratings are improving linearly, ExpElo ratings are improving exponentially. So, is computer chess improving linearly or exponentially?

Before addressing this question, let’s pause to note that this situation is not unique to chess. Any measure that grows linearly can be rescaled (by exponentiating it) into a new measure that grows exponentially, and any measure that grows exponentially can be rescaled (by taking its logarithm) into one that grows linearly. So for any improving quantity, we can always choose to describe its growth as linear or as exponential.
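A tiny sketch of this rescaling trick (added for illustration): exponentiating a linearly growing rating yields a measure that grows by a constant factor each step, which is exactly exponential growth.

# A rating that grows linearly, gaining 50 points per year.
elo = [2000 + 50 * year for year in range(6)]

# Exponentiate it (here via 10 ** (rating / 400), the base Elo itself uses).
exp_elo = [10 ** (r / 400) for r in elo]

print([b - a for a, b in zip(elo, elo[1:])])                    # constant difference: linear growth
print([round(b / a, 3) for a, b in zip(exp_elo, exp_elo[1:])])  # constant ratio (~1.33): exponential growth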

The key question for AI is therefore: what is the most natural measure of “intelligence” for a particular task? For chess, I think it is Elo (not ExpElo). The Elo system, devised by Arpad Elo, is the one the chess world adopted: the US Chess Federation labels players as masters, experts, class A, class B, class C, and so on, but it is Elo ratings that are used to place human players in those categories. If Elo is how we measure human chess skill, why should we switch to a different metric when the player is an AI?

And here is the twist: both the Elo and ExpElo ratings of chess computers are going to level off before long, because computers are approaching perfect play, a level of skill that humans will never reach and that no player can surpass.

In every chess position there is some move (or set of moves) that leads to the best achievable outcome of the game. For a very strong player, we can therefore ask about an error rate: in what fraction of positions, at a high level of play, does the player fail to choose one of these best moves?

Suppose a player, Alice, has an error rate of 1%, and suppose a chess game lasts about fifty moves. Then on average Alice makes one non-optimal move every two games, so she plays at least half of her games without any error. This means that if Alice played a match against God (who always plays the best move), she would score at least 25%: in the error-free games, at least half of the total, she would at worst draw, and in the worst case she would lose every game in which she made a mistake. And if Alice scores at least 25%, her Elo rating is less than 200 points below God’s. The result is a ceiling, a “Rating of God,” that no player can exceed, in both the Elo and ExpElo systems.
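A quick numerical check of this argument (added for illustration): the fraction of error-free games Alice plays, and the Elo gap implied by a 25% expected score.

import math

error_rate, moves = 0.01, 50

# Fraction of games Alice plays with no error at all.
print((1 - error_rate) ** moves)  # ~0.61, i.e. more than half of her games

# At worst she draws those games (0.5 points each) and loses the rest,
# so her expected score against perfect play is at least 0.25.
score = 0.5 * 0.5

# Elo gap corresponding to an expected score of 0.25.
print(400 * math.log10(1 / score - 1))  # ~191 points, i.e. less than 200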

Research by Ken Regan and others shows that today’s best chess programs already have quite low error rates and are closing in on this Rating of God. Regan’s work suggests the Rating of God is around 3600, which is notable because Stockfish, the best program I know of, is rated around 3400, and AlphaZero, the new AI player developed by Google’s DeepMind, is probably around 3500. If Regan’s estimate is correct, AlphaZero is only about 100 points below perfection and would already score roughly 36% in a match against God. Computer chess Elo ratings have historically risen by about 50 points per year, so the current trend can continue for only a few more years before it levels off. Regardless of whether you call that trend linear or exponential, chess ratings are going to flatten out over the next few years.

Freedom-to-tinker.com/2018/01/03/…
