Google’s self-driving cars and Tesla’s Autopilot are making headlines these days. People are talking about autonomous driving technology, full of dreams and even fears about this “science fiction” technology. So today I want to share my views on these “automatic cars”.

As you can see from my other post, Tesla’s Autopilot isn’t really “self-driving technology” at all, and it’s nothing like Google’s autonomous cars. Tesla pushed this rudimentary software into users’ cars in order to steal the show from Google and burnish its own image: look, we got the automatic car first! But Autopilot is not “autonomous driving” at all; it is an “adaptive cruise control” at best, and a rather unsafe and unreliable one. Unfortunately, Tesla has risked the lives of its users for the sake of its reputation, and some applaud it for that.

But even Google’s autonomous cars are far from practical. And by “far” I don’t mean 10 or 20 years, but at least 100 or 1,000 years… It may never happen at all. Why? Doesn’t Google claim its autonomous cars “learn” from millions of miles every day? Couldn’t learning from such “big data” make the car as smart as a person? If you think that, you probably don’t understand the inner workings of artificial intelligence (AI). In fact, so-called “machine learning” is completely different from human “learning”. Even if today’s computers could be said to have an IQ, it would be at the level of a worm, or worse. Because the ability to learn is fundamentally different, no matter how hard the machine works, no matter how much data it “learns”, it is all in vain.

Many people hear cool terms like “artificial intelligence” (AI), “machine learning”, or “deep learning”, picture the intelligent robots of science fiction, and think that science fiction is about to become reality. But when you actually get into machine learning, it turns out to be a pile of confusing techniques that don’t work very well. These big slogans, including “deep learning”, have little to do with how people think. “Machine learning” is just ordinary statistical methods that fit the parameters of a function to data. The hype sounds fantastic, but it has nothing to do with the way people actually think, and it only makes statisticians laugh.
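To make the “fitting the parameters of a function” point concrete, here is a minimal sketch in plain Python: it “learns” the two parameters of a straight line by gradient descent on squared error. This is an illustration of the general idea, not any vendor’s system.

```python
# "Machine learning" stripped to its core: pick a function family
# (here y = w*x + b) and adjust the parameters to minimize error on data.
# The data are exact samples of the "true" function y = 2x + 1.

data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0          # parameters to be "learned"
lr = 0.01                # learning rate (step size)

for _ in range(5000):    # repeatedly nudge parameters downhill
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward w ≈ 2, b ≈ 1
```

That is all “learning” means here: numerical optimization of a fixed functional form. Nothing in the loop resembles understanding, reasoning, or the way a person learns.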

There was a boom in artificial intelligence in the 1980s. At that time, people optimistically believed that computers would soon possess human intelligence. Japan even claimed to be mobilizing a national effort to build what it called the “fifth-generation computer”. The result? The promise failed to materialize, and the AI field entered a winter. This “AI fever” has recently resurged with the emergence of hot topics such as “big data”, “autonomous cars”, and the “Internet of Things”. Today’s AI, however, is not much better than it was in the 1980s. People still know very little about how the brain and the senses work, yet they swear that concepts borrowed from statistics and renamed “machine learning” are as good as the brain.

A very simple observation illustrates the problem. How many miles does a person have to drive to learn how to drive? For an ordinary person, it takes only 12 one-hour lessons to go from completely clueless to safe on the road. If you drove on the freeway for an hour, you would cover about 80 miles; that’s 960 miles in 12 hours. In other words, the average person needs less than 1,000 miles of practice to become a reliable driver.

Compare that to Google’s self-driving cars, which “analyze” and “learn” a million miles a day in “simulated driving”, and which are frequently out collecting data, accumulating tens of thousands of real miles a day. Yet these autonomous cars can still only be driven during the day, in good weather, in Mountain View, a city with very simple road conditions. I’ve never seen a Google automatic car go faster than 50 mph. And Google’s autonomous cars were recently reported to have had 272 failures requiring “human intervention” over the past year. Had a human not taken over in time, many of those failures would have ended in accidents. This shows that Google’s self-driving cars are still far from real, safe use.

How big is the gap between automatic cars and people? It’s a world of difference. A person needs only 1,000 miles to learn to drive, while machines “learn” from millions, tens of millions, even hundreds of millions of miles and still fail. This shows that there is a fundamental difference between this kind of automatic-car technology and the way the human brain controls movement. Going in completely the wrong direction and expecting to overtake human drivers within a decade is a fantasy, isn’t it?
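The size of that gap can be put in numbers. Using the rough figures quoted earlier (12 one-hour lessons at freeway speed for a human; a claimed million simulated miles a day for the machine; the yearly extrapolation is my own assumption):

```python
# Back-of-envelope comparison using the article's own rough figures.
human_miles = 12 * 80                # 12 one-hour lessons at ~80 mph
machine_miles_per_day = 1_000_000    # claimed simulated miles per day
machine_miles_per_year = machine_miles_per_day * 365

ratio = machine_miles_per_year / human_miles
print(f"human: {human_miles} miles; machine: {machine_miles_per_year:,} miles/year")
print(f"the machine 'practices' roughly {ratio:,.0f} times as much per year")
```

Even with hundreds of thousands of times more “practice” per year, the machine still cannot match a novice human driver, which is precisely the point.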

My prediction is that until humans fully understand how their own brains work, it will not be possible to build autonomous cars that surpass human drivers. People’s enthusiasm for automatic-car technology is, I think, comparable only to that of the Cultural Revolution or the Great Leap Forward.