On October 17, the Spark Forum of the Tsinghua University Association for Science and Technology, together with the Tsinghua University Brain-like Computing Research Center and HyperAI, successfully held the forum “From AlphaGo to brain-like computing chips, where is artificial intelligence heading?” in the Meng Minwei Building at Tsinghua University. At the roundtable, Deng Lei, Tsinghua’s first PhD in brain-like computing, described from several angles how he came to work on brain-like computing, answered students’ questions, and gave us a new understanding of brain-like computing and the development of artificial intelligence.

Deng Lei is Tsinghua’s first PhD in brain-like computing and a postdoctoral fellow at the University of California, Santa Barbara. He is a first author of the paper “Towards artificial general intelligence with hybrid Tianjic chip architecture,” which appeared on the cover of Nature on August 1, and he was responsible for the chip design and the algorithm details.

The paper marked a breakthrough for China in chips and AI. Deng Lei, the paper’s first author, is on the left

Last Thursday, the Spark Forum of the Tsinghua University Association for Science and Technology, together with the Tsinghua University Brain-like Computing Research Center and HyperAI, held the themed forum “From AlphaGo to brain-like computing chips, where is artificial intelligence heading?”. Deng Lei was invited as a special guest and shared his views in a roundtable format. Following the questions raised at the forum, this post reviews some of his insights on AI and brain-like computing.

Learning and exploring: the first PhD in brain-like computing

Q: How did you get into brain-like computing? What exactly does this subject involve?

When I started my doctorate in brain-like computing, the field was not yet popular. At the time I searched around but did not find much useful information, so I went to ask my advisor specifically…

As the first PhD student at the Brain-like Computing Research Center, I watched the center grow from nothing, including later spinning off a company and carrying out research. I graduated in 2017 and went to the United States, where I shifted toward the computer architecture direction. Now my work is about 50 percent theory and 50 percent chips.

As an undergraduate I majored in mechanical engineering, but I found I did not have much talent for it, so I gradually shifted to instrument science. Later I also worked on robotics and studied some materials and microelectronics. After that I began working on AI algorithms and theory. It has basically been a process of walking and learning along the way.

Deng Lei after completing his doctoral defense in 2017, in a group photo at the Tsinghua Brain-like Computing Research Center

Note: The Research Center for Brain-like Computing at Tsinghua University was founded in September 2014 and covers basic theory, brain-like chips, software, systems, and applications. The center is a joint effort of seven schools and departments of Tsinghua University, integrating brain science, electronics, microelectronics, computer science, automation, materials, and precision instruments.

Research on brain-like computing is interdisciplinary. Its origin is certainly medicine (brain science), and today’s AI was itself originally born out of psychology and medicine, which provide part of the basis for the models.

The next piece is machine learning. The two will certainly converge in the future, but they remain separate for now, because machine learning has more experience in building products and tends to think in terms of applications.

In addition, there are problems in computer science that GPUs cannot solve, which is why Alibaba and Huawei have started building their own dedicated chips. Students majoring in computer architecture can also consider developing in this direction.

Next comes the hardware, such as chips, which involves microelectronics and even materials, because new devices will need to be supplied. At present we still rely on conventional storage cells, but new devices will certainly appear in the future; for example, whether carbon nanotubes, graphene, and other materials can be applied.

There is also the automation direction. Many people who do machine learning sit in computer science or automation departments, because automation deals with control and optimization, which is close to machine learning. Brain-like computing integrates all of these disciplines.

As the first doctoral student of the Tsinghua Brain-like Computing Research Center, Deng Lei published 9 academic papers and applied for 22 patents

Q: What was the driving force, or the opportunity, that led you to choose this direction at the time?

In a word, the greatest appeal of this direction is that it is never finished.

I often think of a philosophical paradox: the study of brain-like computing is inseparable from the human brain, yet we are using the human brain to think about the human brain, so we do not know how far it can go, and the study of it will never end. Humans will always reflect on themselves; there will be peaks, then plateaus, then sudden breakthroughs, and it will never stop. From this angle alone, it is worth studying.

Q: How does your research differ now, during your postdoctoral period?

When I built chips at Tsinghua University, I approached the work from a practical point of view, as making a device or an instrument. After I went to the United States, I began looking at the same problem more from the perspective of the discipline, the way computer architecture is treated as a field and the way many ACM Turing Award winners look at problems. The work is the same, but the way of thinking is different.

From the perspective of computer architecture, any chip is nothing more than compute units, storage units, and communication. However we build it, it falls within these three categories.

Tianjic and brain-like computing: the bicycle is not the focus

Q: This Nature paper is a landmark event. What do you consider the milestones of the past few decades? What events have driven the growth of brain-like computing?

The field of brain-like computing is relatively complex; it becomes clearer if I lay it out against the backdrop of artificial intelligence. Artificial intelligence is not a single discipline; it can basically be divided into four directions.

The first is algorithms, the second is data, the third is computing power, and the last is programming tools. Milestones can be identified along each of these four directions.

In terms of algorithms, the milestone is of course deep neural networks; that is indisputable. In terms of data, ImageNet is a milestone; before big data, deep neural networks were almost buried. In terms of computing power, the arrival of the GPU was a major event. For programming tools, popular frameworks such as Google’s TensorFlow have been a big driver.

These are the things that move AI forward, and they form an iterative process; without any one of them, AI would not be where it is today. But AI also has its limitations. AlphaGo, for example, can perform only one task and cannot do much besides play Go. It is not the same as the brain.

The second is interpretability. We use deep neural networks for fitting, including in reinforcement learning, but it is not clear what is going on inside them, and some people are trying to visualize the process or figure out how it works.

The third is robustness. AI is not as stable as humans. Take autonomous driving: today AI is only used to assist driving, because it cannot guarantee absolute safety. Because of these shortcomings, we must pay attention to the development of brain science and bring in more of its mechanisms. The most urgent task, in my view, is to make intelligence more general.

Deng Lei speaks at the Spark Forum

As for milestone events, AlphaGo is one, because it put AI in the public eye, made everyone pay attention to AI, and reinforcement learning took off afterwards. From the perspective of chips, there are two types: chips built around machine-learning algorithms and chips inspired by the biological brain. Each type has its own milestones in its development.

The first category serves machine learning. At present deep neural networks are all computed on GPUs, but GPUs are not the most efficient option. A group of companies such as Cambricon are looking for solutions to replace GPUs, which is an important development. The other category is not limited to machine learning; IBM and Intel have done well there, building specialized chips based on models of the brain.

The reason the Tianjic chip has attracted so much attention is that it integrates the advantages of both types into a single architecture.

Q: Your team released a demonstration of the Tianjic chip on a bicycle. Can you tell us more about it?

Online, everyone was drawn to the bicycle, but we all knew the bicycle was not our focus. It was just a demo platform; we were simply looking for a good platform to show everyone.

The bicycle equipped with the Tianjic chip can ride on its own and avoid obstacles

The bicycle demonstration involves vision, hearing, and motion control, all completed by a single chip, which makes it an ideal platform. From that point of view, controlling the bicycle was not especially difficult; we just wanted to show a new model.

The future of brain-like computing: breaking the von Neumann architecture

Q: How will future artificial intelligence, or brain-like computing, relate to the existing von Neumann architecture? Will they evolve toward the human brain?

This is a very important question. There is a fundamental trend in the semiconductor industry; the 2018 Turing Award, for instance, went to researchers working on computer architecture. There are two ways to try to make chips perform better. The first is to make transistors smaller, physically smaller, following Moore’s Law. But in the past couple of years we have realized that Moore’s Law is beginning to fail; progress on that front is getting slower and slower, and one day transistors will not get any smaller.

Moore’s Law is slowing down

The other direction is computer architecture: designing the framework so that the compute units, the storage units, and the communication all operate at very high efficiency. The human brain is amazing. Through the accumulation of learning, knowledge grows with each generation, and we need to learn from the way this knowledge evolves.

General-purpose processors have largely ridden on Moore’s Law over the past half century, as progress in computer architecture was somewhat overshadowed by the ability to make transistors ever smaller. Now that Moore’s Law has stalled and applications like AI demand high processing efficiency, computer architecture research is back in the spotlight, and the next decade will be a golden age for specialized processors.

One of the most frequently asked questions about brain-like research is, what can brain-like computing do?

It is a fatal question, and many people who work on artificial intelligence or brain science do not really know the answer. In the case of brain science, there are three levels of disconnection.

The first is how nerve cells actually work. This is a question that many medical and biological scientists are still struggling to explore.

The second is how neurons are connected. There are on the order of 10^11 neurons in the brain, and figuring out how they are connected is harder still, requiring tools from optics and physics.

And finally, how they learn, which is the hardest but most important question.

There is a gap at each level, but difficulty is no reason not to explore. If we do nothing, we stand no chance. Doing something at every level will always produce something new, iteration after iteration.

At the roundtable forum, Dr. Deng Lei is second from the left

If you wait for brain science to figure it out, you’re too late. Someone else will be ahead of you.

For example, the CPU is not as simple as we think. It is not that Chinese engineers are not smart; engines are the same: everyone understands the principle, but doing it well is not easy, and the engineering difficulty and accumulated know-how are not built in a day.

One reason is that many of these things sit on top of a large industrial chain. If you are not doing the work from the start, you lose many opportunities for trial and error. There will be no quick breakthroughs in this area, only steady, down-to-earth work. As for the future, today’s AI, strong AI, AI 2.0, and brain-like computing will, I think, all end up in the same place, because they all originate from the brain; they simply approach it from different directions.

Q: There was another paper in Nature a while ago that mapped out all the neurons in a worm, along with all of the roughly 7,000 connections between them. Is there any connection between that work and brain-like research? Can we use existing technology, or a von Neumann CPU, to simulate how the nematode works, and what can we expect to happen in the next three to five years?

Scientists map neuronal connections in a worm’s brain

I have seen the research on that nematode’s structure, and it has had a big impact on brain-like research. In fact, current models, whether in brain-like computing or in mainstream AI, mostly take their connection structure from today’s hierarchical deep neural networks, which is actually quite superficial.

Our brain is more like a graph than a simple layered network, and the connections between brain regions are very complex. The significance of this study is to make us ask whether we can use that kind of connectivity.

There was previously a view that, in the structure of a neural network, the connection structure actually matters more than the specific weight of each connection; that is, the connectivity carries more meaning than the individual parameters.

Convolutional neural networks outperform earlier neural networks because their connection structure is different, which gives them a stronger ability to extract features. This also shows that the connection structure changes the results.
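As a rough illustration of how much the connection pattern alone matters, the minimal sketch below (assuming PyTorch; the layer sizes are arbitrary and not taken from the talk) compares a fully connected layer with a convolutional layer mapping the same input to the same output shape. The convolution's local, weight-shared connectivity uses orders of magnitude fewer parameters while still extracting local features.

```python
# Minimal sketch (assuming PyTorch; sizes are illustrative only): how a
# convolutional connection pattern differs from a fully connected one.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # one RGB image, 32x32 pixels

# Fully connected: every input value connects to every output value.
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)

# Convolutional: each output connects only to a local 3x3 patch,
# and the same 3x3 weights are shared across all spatial positions.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

count = lambda m: sum(p.numel() for p in m.parameters())
print("dense parameters:", count(dense))    # ~50 million
print("conv parameters: ", count(conv))     # 448
print("conv output shape:", conv(x).shape)  # [1, 16, 32, 32]
```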

It is a little difficult to do this on a traditional processor. The defining trait of the von Neumann architecture is that you have a clearly designated storage unit and a clearly designated computing unit.

A schematic of a traditional von Neumann architecture

But the brain has no such clear boundaries. Even though we have the hippocampus, which is involved in long-term memory, at the neural-network level it is not clear which parts of the brain are strictly storage and which are just computation.

The brain is more like a chaotic network, where computing and storage are indistinguishable, so in that sense it’s hard to do it with traditional chips or processors.

So we have to develop new non-von Neumann methods, supported by new architectures, to do brain-like research.

The 2018 Turing Award lecture, for example, argued that chips for specialized domains will become more and more common. What NVIDIA is promoting right now is a heterogeneous architecture, with all kinds of small IP cores on a single platform, which is perhaps a bit like the brain.

So rather than having a single CPU do everything, and given that no single chip can do everything efficiently, the future will move toward a variety of highly efficient, specialized chips. That is the current trend.

Deng believes that brain-like computing and AI research will ultimately converge on the same path

One of the main reasons brain science and brain-like computing are not as well understood as artificial intelligence is that investors and industry have not been deeply involved, so the data, the computing power, and the tools are all hard to build. Brain-like computing is in its infancy, and the picture will become much clearer as more universities and companies get involved.

Q: How does the architecture of a brain-like chip differ from the traditional von Neumann architecture?

Chips in this area can be divided into brain-inspired chips and machine-learning computing chips. On the brain-inspired side, the chip supports not only the deep neural networks of AI but also computations drawn from brain-science models.

In terms of architecture, the von Neumann system has a bottleneck that the entire semiconductor industry faces: as storage capacity grows, access becomes slower and slower, so you cannot scale up and stay fast at the same time. Much of the work in computer architecture is therefore about optimizing the memory hierarchy to make it faster.
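A back-of-the-envelope sketch of this memory wall is below; the hardware numbers are hypothetical, chosen only to show the shape of the argument rather than to describe any particular chip. A processor only reaches its peak compute when a workload performs enough arithmetic per byte fetched from memory, and memory-bound kernels such as a matrix-vector product fall far short of that balance point.

```python
# Rough memory-wall arithmetic with hypothetical hardware numbers
# (illustrative only, not measurements of any specific processor).
peak_flops = 10e12       # assumed 10 TFLOP/s of peak compute
mem_bandwidth = 500e9    # assumed 500 GB/s of DRAM bandwidth

# Arithmetic intensity needed to keep the compute units busy (FLOPs per byte).
balance = peak_flops / mem_bandwidth
print(f"need ~{balance:.0f} FLOPs per byte to be compute-bound")

# A float32 matrix-vector product does ~2 FLOPs per 4-byte element of the
# matrix, i.e. ~0.5 FLOPs per byte, far below the balance point, so the
# chip spends its time waiting on memory rather than computing.
matvec_intensity = 2 / 4
attainable = min(peak_flops, matvec_intensity * mem_bandwidth)
print(f"attainable: {attainable / 1e9:.0f} GFLOP/s out of {peak_flops / 1e12:.0f} TFLOP/s peak")
```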

Unlike other architectures, the Tianjic chip does not rely on a central memory that has to keep expanding. It is more like a brain: many small circuits, analogous to cells, connect to one another, grow into many networks, and finally form functional areas and systems. It is a structure that scales out easily, unlike a GPU.

A single Tianjic chip and a 5×5 array expansion board

The Tianjic chip’s decentralized many-core architecture means it can easily be scaled up into a large system without being constrained by the memory wall; it is in effect a non-von Neumann architecture in which computation and memory are fused. Those, at the architecture level and at the model level, are the biggest differences from existing processors, and the differences basically fall into those two categories.