Here are some interesting and important AI applications that have been released recently:

  • Nature blockbuster: DeepMind uses reinforcement learning to control a nuclear fusion device
  • Breaking through existing application forms: this is how Meta plays AI in the metaverse
  • Say goodbye to low-quality anime video: Bilibili open-sources an anime super-resolution model
  • Generating 3D models from 2D images: NeROIC makes figurine modeling more refined

1. Nature blockbuster: DeepMind uses reinforcement learning to control a nuclear fusion device

If you’ve seen Spider-Man 2, released in 2004, you may remember the scene in which the villain Doctor Octopus builds an AI-powered exoskeleton to control an experimental fusion reactor. Something like that is now a reality.

EPFL and DeepMind have used deep reinforcement learning to control the plasma in a fusion device (a tokamak), according to research published recently in Nature.

At the center of a tokamak is a ring-shaped vacuum chamber wrapped in magnetic coils. When current flows through the coils, a powerful helical magnetic field forms inside the chamber, confining the plasma that is heated to the extreme temperatures required for nuclear fusion. Finding ways to control and confine the plasma is key to unlocking the potential of fusion, which is expected to be a source of clean energy for decades to come.

Changing the plasma configuration and experimenting with different shapes to produce more energy or a purer plasma requires a great deal of engineering and design work. Traditional control systems are computer-based and built on models and simulations, but Ambrogio Fasoli, director of the Swiss Plasma Center (SPC) and a participant in the research, says these methods are complex and do not always find an optimal solution. Artificial intelligence, and reinforcement learning in particular, is uniquely suited to the complex problem of controlling the plasma in a tokamak.

The research showed that DeepMind’s AI could autonomously work out how to create the desired plasma shapes by manipulating the magnetic coils in the right way, both in simulation and in experiments on the tokamak itself. Fasoli called the research an “important step” that could influence the design of future tokamaks and even speed up the path to controlled fusion reactors.
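The core idea, stripped of all physics, is a closed loop: a learned policy reads sensor measurements and emits coil commands that keep an inherently unstable plasma on target. The toy sketch below is purely illustrative; the dynamics, gains, and function names are invented, and the real controller is a trained deep network rather than a proportional rule.

```python
# Toy stand-in for the control problem: a policy maps a sensor reading
# (here, the plasma's position error) to a coil command, and the "plasma"
# drifts and grows unstable unless corrected. All numbers are invented.

def policy(obs, gain=0.8):
    """Proportional stand-in for the learned policy network."""
    return -gain * obs  # push the coil command against the error

def step(position, action, drift=0.05):
    """One tick of a toy unstable plant: the error grows and drifts."""
    return position * 1.02 + drift + action

position = 1.0  # start well off-target
for _ in range(200):
    position = step(position, policy(position))

print(round(position, 3))  # settles near a small residual error: 0.064
```

Without the feedback term the toy position diverges (the 1.02 factor makes the plant unstable), which mirrors why plasma control must run continuously rather than as a one-off optimization.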

Dmitri Orlov, an associate research scientist at the Center for Energy Research at UC San Diego, says that as tokamaks become more complex and powerful, they need to be controlled with ever greater reliability and accuracy. An AI-controlled tokamak could be optimized to manage heat transfer from the reaction to the vessel walls and to prevent damaging “plasma instabilities.” The reactor itself could even be redesigned to take advantage of the tighter control that reinforcement learning offers.

This isn’t the first time scientists have used artificial intelligence in nuclear fusion. Since 2014, Google has been working with TAE Technologies, a fusion company based in California, to apply machine learning to a different type of fusion reactor and speed up the analysis of experimental data. Research at the Joint European Torus (JET) fusion project has used AI to try to predict the behavior of the plasma. Last week, a fusion experiment at JET produced 59 megajoules of energy over five seconds, a new record.

Related links:

1. www.nature.com/articles/s4…

2. www.wired.com/story/deepm…

2. Breaking through existing application forms: this is how Meta plays AI in the metaverse

At Meta’s recent “Building the Metaverse with AI” event, Zuckerberg unveiled a host of new technologies, including the Project CAIRaoke conversational AI system, Builder Bot, a universal speech translation project, and TorchRec, an open-source library for AI recommendation systems.

Among them, unlike projects such as DALL-E and CLIP, which generate images from text, Builder Bot lets users generate or import objects into a virtual world using only voice commands: for example, appearing as a 3D avatar in the metaverse and speaking commands to create a beach, add characters, and change scenes. However, Builder Bot is not available yet.

Project CAIRaoke is a conversational AI system that can support more personalized, context-aware conversations than the dialog systems people are familiar with today. Unlike traditional approaches that chain separate NLU, DST, dialog policy, and NLG models, Meta trains a single end-to-end neural model, has built new datasets, and has dramatically increased the speed of training and development. Zuckerberg said the CAIRaoke project will be core to Meta.
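The contrast the text draws, a cascade of hand-engineered modules versus one jointly trained model, can be shown schematically. Every function below is a toy placeholder (the module names follow the text; the intents, slots, and data flow are invented for illustration):

```python
# Toy contrast between a modular dialog pipeline and an end-to-end model.
# The point is only the data flow, not any real NLP.

def nlu(utterance):        # natural-language understanding -> intent frame
    return {"intent": "set_reminder", "text": utterance}

def dst(intent, history):  # dialog state tracking -> updated state
    return history + [intent]

def policy(state):         # dialog policy -> next system act
    return {"act": "confirm", "slots": state[-1]}

def nlg(act):              # natural-language generation -> reply text
    return f"Confirming: {act['slots']['intent']}"

def pipeline_reply(utterance, history):
    # Four separately built stages; errors in one stage cascade downstream.
    return nlg(policy(dst(nlu(utterance), history)))

def end_to_end_reply(utterance, history, model):
    # One learned mapping replaces all four stages and is trained jointly.
    return model(utterance, history)

print(pipeline_reply("remind me at 5", []))  # prints "Confirming: set_reminder"
```

Collapsing the stages removes the hand-maintained interfaces between modules, which is the training-and-development speedup the text refers to.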

In addition, Meta is developing a universal speech translator that aims to support translation for “all the languages in the world.” The company has already set goals for AI translation across all written languages, but one big challenge is that many languages lack corpora or have no standardized writing system.

3. Say goodbye to low-quality anime video: Bilibili open-sources an anime super-resolution model

Limited by equipment and post-production skills, the content users upload to video platforms rarely reaches ultra-HD quality. AI super-resolution technology can now upscale such footage to high definition at a specified resolution.

To improve the image quality of user-generated anime video, Bilibili has open-sourced its self-developed anime super-resolution model Real-CUGAN. It uses the same model structure as Waifu2x but new training data and training methods, resulting in a model with different parameters and inference modes.

Real-CUGAN first slices anime frames into candidate blocks and uses an image-quality scoring model to score and filter them, yielding a training set of millions of high-quality anime image blocks. A multi-stage degradation algorithm then downsamples each HD block into a low-quality counterpart, and the AI model learns to reconstruct the high-quality image from the low-quality one. Once trained, the model can upscale real low-quality anime images to HD.
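The data-preparation step described above, degrading clean patches to manufacture (low, high) training pairs, can be sketched as follows. The specific degradations Real-CUGAN uses are not detailed in the text, so the two stages here (average-pool downsampling, Gaussian noise) are illustrative stand-ins:

```python
import numpy as np

# Sketch of building super-resolution training pairs: start from a filtered
# high-quality patch, apply a multi-stage degradation, and keep the
# (low-quality, high-quality) pair so a model can learn the inverse mapping.

rng = np.random.default_rng(0)

def downsample_2x(img):
    """Average-pool 2x2 blocks: a simple stand-in for bicubic downsampling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def add_noise(img, sigma=0.02):
    """Corrupt with Gaussian noise, clipped back to the valid [0, 1] range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def degrade(hq_patch):
    """Multi-stage quality reduction: downsample, then add noise."""
    return add_noise(downsample_2x(hq_patch))

hq = rng.random((64, 64))  # pretend this is a filtered high-quality patch
lq = degrade(hq)           # the paired low-quality input
print(hq.shape, lq.shape)  # prints (64, 64) (32, 32)
```

Real pipelines chain more stages (blur kernels, compression artifacts, resampling filters) in randomized order so the model generalizes to the degradations found in real uploads.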

Compared with the popular open-source models Waifu2x (using its latest CUNet model in noise-level-3 mode) and Real-ESRGAN (using its latest anime-optimized RRDB_Anime6B model), Real-CUGAN offers improvements in speed and compatibility.

Bilibili has already applied the model to the second season of one of its domestically produced (OGV) anime series (www.bilibili.com/bangumi/pla…

Project link:

github.com/bilibili/ai…

4. Generating 3D models from 2D images: NeROIC makes figurine modeling more refined

Neural rendering is a technique that uses deep neural networks to generate realistic models in 3D space: given several 2D images taken from different angles, a neural rendering model can produce a 3D model without manual intervention. The technique is commonly used in scenarios such as figurine design and game animation.

However, recovering a 3D model from 2D photos has always been a difficult problem in graphics: differences in lighting, sharpness, and camera models across the photos affect the final result and limit the practical application scenarios of such models.

The traditional NeRF model places strict requirements on the input images and cannot change the lighting conditions at render time. The new NeROIC model, proposed by researchers at the University of Southern California, needs relatively little image data: through its extraction and rendering networks it can relight the scene, and it can even render 3D models with greater detail and resolution from ordinary web photos.
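NeRF-family models, NeROIC included, share a volume-rendering step: sample density and color along each camera ray and alpha-composite them into a pixel. The sketch below shows just that compositing math with hand-made sample values; in a real model the densities and colors come from trained networks, and NeROIC adds further stages (such as material and lighting estimation) not shown here.

```python
import numpy as np

# Minimal sketch of NeRF-style volume rendering along a single camera ray.

def composite(densities, colors, deltas):
    """Alpha-composite samples along one ray (standard NeRF quadrature)."""
    alphas = 1.0 - np.exp(-densities * deltas)          # per-sample opacity
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)      # final RGB

# Eight samples along a ray: a dense "surface" around samples 4-5, gray color.
densities = np.array([0., 0., 0., 0., 50., 50., 0., 0.])
colors = np.tile(np.array([[0.5, 0.5, 0.5]]), (8, 1))
deltas = np.full(8, 0.1)  # spacing between consecutive samples

pixel = composite(densities, colors, deltas)
print(np.round(pixel, 3))  # prints [0.5 0.5 0.5]: the ray hits the gray surface
```

Because the pixel is a differentiable function of the densities and colors, the networks producing them can be trained directly from photos, which is what lets these models be fit to unstructured image collections.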

Related links:

1. www.louisbouchard.ai/neroic/

2. arxiv.org/pdf/2201.02…

Header image from geralt, Pixabay

OneFlow’s new generation of open source deep learning framework: github.com/Oneflow-Inc…