By Kevin Kelly, translated by Dream 1819


It is wise to think about the implications of new technology. I understand the good intentions of Jaron Lanier and others sounding the alarm about AI.

But I think their way of thinking about the AI challenge relies too much on fear rather than on the evidence we have so far. I propose a counter-argument that has four parts:

  1. Artificial intelligence is not improving exponentially.

  2. If an AI's performance is unsatisfactory, we will reprogram it.

  3. Self-reprogramming is the least likely of many scenarios.

  4. Rather than hyping fear, we should see this as a great opportunity.


I expand on each point below.


1. Artificial intelligence is not improving exponentially.

When I was researching my recent article on the benefits of artificial intelligence in business, I was surprised to learn that AI doesn’t follow Moore’s Law.

In particular, I asked AI researchers for evidence that the performance of AI is growing exponentially. They could point to exponential growth in the inputs to AI: the number of processors, compute cycles, learning data sets, and so on is, in many cases, growing exponentially.

But the output, the intelligence itself, is not growing exponentially, in part because intelligence is not something we can measure that way. We have benchmarks for particular kinds of learning and smartness, such as speech recognition, and those benchmarks are approaching an asymptote of zero error. But we have no ruler for the continuum of intelligence as a whole. We don't even have an operational definition of intelligence. There is simply no evidence that some metric of intelligence is doubling every X months.
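To make the asymptote point concrete, here is a purely illustrative toy model; the doubling rate and the decay constant are my own assumptions, not measurements. The inputs double every period while a benchmark's error only decays toward zero, so the measured output saturates rather than doubling.

```python
# Toy illustration (assumed numbers, not real data): exponentially growing
# inputs versus a benchmark error that decays toward an asymptote of zero.
# The point is only that exponential growth in inputs does not, by itself,
# imply exponential growth in the measured output.
import math

for year in range(11):
    compute = 2 ** year                      # hypothetical input: doubles every year
    error = 100 * math.exp(-0.5 * year)      # hypothetical benchmark error (%), decaying toward 0
    # "Accuracy" saturates near 100% even as the inputs keep doubling.
    print(f"year {year:2d}: compute x{compute:5d}, benchmark error {error:5.1f}%")
```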

By all the evidence we have, AI is improving steadily, but not exponentially. That matters, because it gives us time, decades, to do the following.


2. If an AI's performance is unsatisfactory, we will reprogram it.

While AI doesn't follow Moore's Law, its usefulness is growing at a much faster rate, so the utility of AI may well be increasing exponentially, if we could measure that. By analogy, over the past century the utility of electricity exploded as more people plugged in more devices, yet the quality of electricity itself did not improve exponentially.

As the usefulness of artificial intelligence rapidly increases, it raises the specter of harm, and recently that fear has been fanned by people familiar with the technology. What they seem to fear most is that AI is taking over decisions humans once made: diagnosing X-rays, driving cars, aiming bombs and missiles. These can be life-and-death decisions. As far as I can tell from the little that the fearful have written down, their greatest fear, the existential threat, is that AI will take over more and more decisions and then decide it does not need humans, or in some other way destroy civilization.

But this is an engineering problem. As far as I know, AIs have not yet made a decision that their human creators regret. If they do (or when they do), we change their algorithms. If an AI makes decisions that our society, our legal and ethical consensus, or the consumer market does not approve of, then we should, and will, modify the principles that govern the AI, or create better ones that produce decisions we do approve of.

Of course machines make "mistakes," even big ones, but so do humans, and we keep correcting them. There will be enormous scrutiny of the actions of AI, so the world will be watching. However, we do not have universal agreement on what we find appropriate, and that is where most of the friction over AI will come from. As we decide, our AIs will decide.

3. Self-reprogramming is the least likely of many scenarios.

The great fear stirred up by some, though, is that as AIs gain our confidence in making decisions, they will somehow prevent us from altering those decisions. The fear is that they lock us out, that they go rogue. It is hard to imagine how this would happen. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way.

That is possible, but impractical, and such a hobble would not even serve a bad actor. The usual scary scenario is that an AI reprograms itself, on its own, so that it cannot be altered by the outside world. This is conjectured to be a selfish move on the AI's part, but it is unclear why an unalterable program would be an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still, it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open-source mode. Or it could decide that it wanted to merge with human willpower.

Why? In the only example we have of an introspective, self-aware intelligence (hominids), we find that evolution seems to have designed our minds so that they are not easily self-reprogrammable. Except for a few yogis, you cannot easily go in and change your core mental code. There seems to be an evolutionary disadvantage to being able to tinker easily with your own basic operating system, and AIs may need the same self-protection. We don't know. But the possibility that they will decide on their own to lock out their partners (and their doctors) is just one of many possibilities, and not necessarily the most likely one.

4. Rather than hyping fear, this is a great opportunity.

Because AIs (sometimes embodied in robots) will take on many of the tasks that humans now do, we have much to teach them. Without this teaching and guidance, they would be scary even with minimal levels of intelligence. But motivation based on fear is unproductive. When people act out of fear, they do stupid things. A far better approach is to see this as an opportunity to teach AIs ethics, morality, fairness, common sense, judgment, and wisdom.

AI gives us the opportunity to elevate and sharpen our own ethics, morality, and ambition. We smugly assume that human behavior, all humans' behavior, is superior to that of machines, but human ethics are sloppy, slippery, inconsistent, and often suspect.

When we drive down the road, we have no better solution than a robot car does to the dilemma of whom to hit if forced to choose (a child or an adult), even though we think we do. If our aim is to shoot someone in a war, our criteria are inconsistent and vague. The clear ethical programming that AIs will need to follow will force us to bear down and get much clearer about why we believe what we think we believe.

Under what conditions do we want to be relativistic? In what specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We will quickly find that trying to train AIs to be more humane will challenge us to be more humane. Just as children can better their parents, the challenge of rearing AIs is an opportunity, not a horror. We should welcome it. I wish those with a loud following would welcome it too.

The myth of artificial intelligence?

Finally, I am not worried about Jaron's main complaint, the semantic distortion surrounding AI, because culturally (rather than technically) we have defined "real" AI as the intelligence we cannot produce with machines today. Anything we can produce with machines today therefore cannot be AI, and so AI in its strictest sense will always be arriving tomorrow.

Since tomorrow is always about to arrive, whatever machines do today we will not bestow on it the blessing of being called AI; society will call it machine learning, machine intelligence, or some other name. In this cultural sense, AI will remain a myth even when everyone uses it every day.


