This is the 21st day of my participation in the November Gwen Challenge. Check out the event details: The Last Gwen Challenge 2021.

What are AI and AI systems

Artificial intelligence (AI) is a technical science that researches and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. The term was first proposed by John McCarthy in 1956, who defined it as "the science and engineering of making intelligent machines". The purpose of artificial intelligence is to make machines think like people, that is, to make machines intelligent. Artificial intelligence began as a branch of computer science and has since expanded into an interdisciplinary field.

An artificial intelligence system integrates AI techniques into a complete system to achieve intelligent information processing and to improve the sales and management capabilities of enterprises.

Accordingly, practical work in artificial intelligence can be roughly divided into two main directions: theoretical research (algorithms and models) and engineering practice (programming implementation and MLOps).

Development of artificial intelligence

The industrial ecology of artificial intelligence

• The four elements of artificial intelligence are data, algorithms, computing power, and scenarios. To bring these four elements together, AI must be combined with cloud computing, big data, and the Internet of Things to make the whole society intelligent.

Artificial intelligence related technologies and application scenarios

Artificial intelligence related technology

AI technology is multi-layered, spanning the levels of applications, algorithms, tool chains, devices, chips, processes, and materials.

The main application technology directions of artificial intelligence at present

  • Natural language processing (NLP) uses computer technology to understand and work with natural language. Its main research topics include machine translation, text mining, and sentiment analysis. NLP is technically difficult and relatively immature: because semantics are highly complex, deep learning based on big data and parallel computing alone cannot yet reach human-level understanding.
  • Computer vision is the science of making computers "see". It is the most mature of the three application directions. Its main research topics include image classification, object detection, image segmentation, object tracking, text recognition, and so on.
  • Speech processing is the collective term for technologies covering the speech production process, the statistical characteristics of speech signals, speech recognition, machine synthesis, and speech perception. Its main research topics include speech recognition, speech synthesis, voice wake-up, voiceprint recognition, audio event detection, and so on. The most mature of these is speech recognition, which can reach about 96% accuracy in a quiet room with near-field recording.
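As a toy illustration of the idea behind sentiment analysis from the list above, the sketch below scores a sentence against hand-built word lists. The word lists and the function name are illustrative, not from any real library; production systems learn these associations from data rather than hard-coding them.

```python
# Minimal rule-based sentiment scorer: count positive and negative
# words in a sentence and compare. Real NLP systems replace these
# hand-written lists with features learned from large corpora.

POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("This is a terrible movie"))   # negative
```

The gap between this sketch and human-level understanding (negation, sarcasm, context) is exactly why the text above calls NLP technically difficult.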

Application scenarios

Artificial intelligence is changing the world step by step. It has a wide range of application scenarios, such as personal assistants, surveillance and detection, machine translation, medical diagnosis, games, art, image recognition, speech recognition, natural language processing, generative models, reinforcement learning, autonomous driving, and so on. Artificial intelligence will change every industry.

Artificial intelligence, machine learning, deep learning

  • Artificial intelligence: a technical science that researches and develops theories, methods, and application systems for simulating, extending, and expanding human intelligence.
  • Machine learning: the study of how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. It is one of the core research fields of artificial intelligence.
  • Deep learning: derived from research on artificial neural networks; the multi-layer perceptron is one kind of deep learning structure. Deep learning is a newer field of machine learning that mimics mechanisms of the human brain to interpret data such as images, sound, and text.
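The defining property of machine learning in the list above, improving performance from examples rather than from hand-written rules, can be seen in a minimal sketch: a single perceptron (the simplest ancestor of the multi-layer perceptron) adjusts its weights from labeled samples until its predictions are correct. The function names and hyperparameters here are illustrative.

```python
# A single perceptron learning the logical AND function.
# "Learning" means repeatedly nudging the weights toward the
# labeled targets -- no AND rule is ever written by hand.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred           # 0 when correct
            w[0] += lr * err * x1         # move weights toward target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Stacking many such units into layers, and training them jointly, is what turns this into the multi-layer perceptrons and deep networks described above.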

Why artificial intelligence and deep learning have succeeded only in the last decade

The success of the last decade is largely due to efficient programming languages, algorithm optimization, improvements in computer architecture, parallel computing, and the development of distributed systems.

Massive amounts of data

Internet services and big data platforms have brought vast data sets to deep learning.

Data sources:

  • Search engines — image search: ImageNet, COCO, etc.; text search: Wikipedia (natural language datasets)
  • Commercial websites: Amazon, Taobao (recommendation-system and advertising datasets)
  • Other Internet services: Siri, Cortana

For image classification, the scale of available data has grown from the original MNIST dataset, to ImageNet, to web-scale image collections.

Advances in deep learning algorithms

Take handwritten digit recognition on the MNIST dataset as an example:

  • A simple convolutional neural network already matched the best SVM approach (1998)
  • Deep convolutional neural networks reduced the error rate to 0.23% (2012), compared with about 0.2% for humans
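The building block behind those convolutional networks is the 2-D convolution: sliding a small filter over an image and computing dot products at each position. The sketch below is a minimal pure-Python illustration of that single operation (not an MNIST model); the example image and kernel are chosen to show edge detection.

```python
# A single "valid" 2-D convolution: the kernel slides over the
# image, and each output value is the dot product of the kernel
# with the image patch underneath it. Deep CNNs stack many such
# layers with learned kernels.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image whose left half
# is dark (0) and right half is bright (1).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
print(conv2d(image, kernel))  # [[0, -2, 0], [0, -2, 0]]
```

The nonzero responses appear exactly where the dark-to-bright edge sits; a trained CNN learns many such kernels automatically instead of having them designed by hand.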

Development of programming languages and computing frameworks

At the hardware level, processing capacity has improved dramatically: from early linear algebra libraries (CPU/GPU), to dense matrix engines (GPU), to specialized AI accelerators (TPU).

At the same time, computing frameworks have made real progress: early frameworks required custom implementations of machine learning algorithms (Theano/DistBelief/Caffe), while later deep learning frameworks (MXNet/TensorFlow/CNTK/PyTorch) provide a rich set of easy-to-use libraries.
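To see why such frameworks were progress, compare hand-coding every forward pass with composing layers declaratively. The sketch below imitates the Sequential-style API popularized by these frameworks in a few lines of pure Python; the class names are illustrative and are not taken from any of the libraries listed above.

```python
# A miniature framework-style API: layers are objects, and a model
# is a declarative composition of layers. The user describes *what*
# the network is, not *how* each forward pass is computed.

class Dense:
    """Fully connected layer with fixed example weights."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def __call__(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(self.weights, self.bias)]

class ReLU:
    """Elementwise max(0, x) nonlinearity."""
    def __call__(self, x):
        return [max(0.0, v) for v in x]

class Sequential:
    """Applies layers in order, like framework Sequential containers."""
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = Sequential(
    Dense([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
    ReLU(),
)
print(model([2.0, 1.0]))  # [1.0, 1.5]
```

Real frameworks add what this sketch omits, automatic differentiation, GPU/TPU kernels, and optimizers, which is precisely the "easy-to-use libraries" advantage described above.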

Increased computing power

From the birth of the first general-purpose computer (ENIAC), to the Intel Xeon X5, to GPUs and TPUs, computing power has continued to climb.

The problems faced by artificial intelligence

  • Privacy: existing AI algorithms are data-driven and need large amounts of data to train models. While we enjoy the convenience brought by artificial intelligence every day, technology companies such as Facebook, Google, Amazon, and Alibaba are collecting large amounts of user data.
  • Security: for example, attackers can use artificial intelligence to steal private information, or to simulate user behavior with constantly changing methods.
  • Credibility: with the development of computer vision, images and videos are becoming less and less trustworthy. Tools such as Photoshop and GANs (generative adversarial networks) can now create fake images that are hard to distinguish from real ones.

The future of artificial intelligence

  • Frameworks: more user-friendly development frameworks.
  • Algorithms: better-performing, smaller algorithm models.
  • Computing power: coordinated development of device-edge-cloud computing power.
  • Data: a more complete basic data service industry and more secure data sharing.
  • Scenarios: continuous breakthroughs in industry applications.

Conclusion

In short, the progress of artificial intelligence and deep learning in recent years comes from breakthroughs in algorithms, data, systems, and other areas. At the same time, new applications bring new problems and challenges for these systems.
