Thanks to advances in image recognition and falling hardware prices, motion capture technology is now both mature and readily available. Examples include OMOTE, a real-time face-tracking and projection-mapping work by Japanese artist Nobumichi Asai’s team, and Lady Gaga’s live “digital makeup” at the opening of her performance at the 58th Grammy Awards. Both use high-speed cameras and related equipment for face recognition and dynamic tracking, projecting visual effects onto the performer’s face in real time so that it appears to transform. Works like these combine motion capture with projection mapping, opening up new possibilities for technology-driven performance.

Optical motion capture: bringing the real world into the virtual world

The motion capture mentioned above is essentially “optical motion capture.” In an optical system, the person or object to be tracked wears special reflective spheres (called markers), and a number of high-resolution cameras record the movement of these markers. The cameras are ringed with infrared emitters that project high-frequency flashes of infrared light invisible to the naked eye. The markers reflect this infrared light back as bright spots, which are picked up by multiple cameras; by comparing and computing the different views, the system recovers each marker’s X, Y and Z displacement in three-dimensional space, and from that the motion of the tracked subject. Because the markers only passively reflect light, this is also known as a “passive” optical motion capture system. Motion capture has been in practical use for a long time and can be seen in almost all films, games and performing arts. Fundamentally, it is a technology that crosses into the virtual world by translating the dynamics of the physical world into changes of form in a virtual one. Andy Serkis, the British actor and director who played Gollum in The Lord of the Rings trilogy, has spoken extensively about the use of motion capture in film.
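As a rough illustration of that “comparison and calculation” step, here is a minimal sketch of how one marker’s 3D position can be recovered from its 2D image coordinates in several calibrated cameras, using standard linear (DLT) triangulation. It assumes idealized noise-free cameras; the function name and the two toy camera matrices are hypothetical and not any vendor’s actual pipeline.

```python
import numpy as np

def triangulate_marker(projection_matrices, image_points):
    """Recover a marker's 3D position from its 2D detections in several cameras.

    projection_matrices: list of 3x4 camera projection matrices P = K [R | t]
    image_points:        list of (u, v) pixel coordinates of the marker blob
    Returns the (X, Y, Z) position via linear (DLT) triangulation.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, image_points):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point into a camera to simulate a marker detection."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical two-camera setup: identity intrinsics, second camera shifted along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
marker = np.array([0.5, 0.2, 4.0])

print(triangulate_marker([P1, P2], [project(P1, marker), project(P2, marker)]))
# -> approximately [0.5, 0.2, 4.0]
```

A commercial system performs this kind of computation for dozens of markers per frame, typically well over a hundred times per second, and must additionally work out which bright spot in each image belongs to which marker.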

Augmented reality: Pokémon appears in the real world

On the subject of technology that crosses between the real and the virtual, let’s talk about “Augmented Reality” (AR). Augmented reality is a technique that overlays digital or computer-generated information onto real space in real time. In principle, AR could amplify all five senses, but most augmented reality today is used only to enhance visual perception. The “real-time overlay of information” mentioned here can be experienced directly with the body’s own senses or indirectly through a device. Pokémon GO, which has taken the world by storm over the past two years (even if many players ended up switching the AR mode off), and the ARKit technology in the latest iOS 11 are among the best-known examples of augmented reality.

If we look at the reality-virtuality continuum proposed by Milgram et al. in 1994 (see the figure below), we can see more clearly that augmented reality takes the “real environment” as its basis for presentation. This distinguishes it from the currently popular “Virtual Reality” (VR): in AR, “real space” is an essential element, because AR mainly attempts to embed virtual objects into reality. In general, augmented reality can be regarded as a “mixture” or “middle ground” between fully real and fully virtual environments. It is a curious phenomenon of modern society: through digital technology we connect imagination and reality, and commute between the two to work, play and live. (Some of the information above is drawn from mitsp.com.)
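To make “overlaying computer-generated information onto real space in real time” concrete, here is a minimal sketch of the geometric core shared by most visual AR overlays: given the camera pose estimated by a tracker, a virtual anchor point fixed in the world is projected to the pixel where the overlay should be drawn. This assumes a simple pinhole camera model and hypothetical values; it is not ARKit’s API or any real SDK.

```python
import numpy as np

def project_virtual_point(point_world, world_to_camera, intrinsics):
    """Project a virtual 3D point (world coordinates) into pixel coordinates.

    world_to_camera: 4x4 rigid transform, as a tracking system would estimate each frame
    intrinsics:      3x3 pinhole camera matrix K
    Returns the (u, v) pixel where the virtual object should be drawn this frame.
    """
    p_cam = world_to_camera @ np.append(point_world, 1.0)  # world -> camera frame
    u, v, w = intrinsics @ p_cam[:3]                       # camera frame -> image plane
    return u / w, v / w

# Hypothetical 640x480 camera at the origin, looking along +Z.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pose = np.eye(4)                      # re-estimated every frame by the tracker in a real system
anchor = np.array([0.1, -0.05, 2.0])  # a virtual object "placed" 2 m in front of the camera

print(project_virtual_point(anchor, pose, K))  # -> (345.0, 227.5)
```

In a real AR application the camera pose is re-estimated every frame from visual (and often inertial) tracking, so the projected overlay stays attached to its real-world anchor as the device moves.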

Getting close to the core of dance through amplification

I still remember a very special exhibition held many years ago at Songshan Cultural and Creative Park: “In Sync with Forsythe — a William Forsythe New Media Art Series Exhibition.” The exhibition used information visualization to let us see through a dance work and appreciate it with a choreographer’s eyes. Most of the interactive technology we have seen in the performing arts in recent years works in real time: various electronic sensors detect the performers’ positions and movements, and computers or mechanical devices (such as drones or robotic arms) respond with corresponding dynamic visuals or objects that share the stage with the performers. “In Sync with Forsythe,” however, was not real-time; it was a kind of performance art that “carefully records the trajectories of the moment, stores and analyzes them, and then reproduces them in different ways.”

The whole exhibition was built around Synchronous Objects, a collaboration between dancer and choreographer William Forsythe and Ohio State University, based on Forsythe’s 2000 dance “One Flat Thing, reproduced.” The project uses technology to record and analyze the dance and then reproduce it in a variety of visual forms. Visit the Synchronous Objects website and you will find its core idea in the subtitle: turning the dance into data and the data into objects, in order to visualize choreographic structure (“from dance to data to objects”). The site gathers many production details, reflections from participating students and other material as text, still images and video; it is a treasure house that fully echoes the purpose of the exhibition and lets the audience understand the underlying information of dance and choreography from different angles. Although it is not experienced in real time, Synchronous Objects, broadly speaking, also uses technology to amplify the reality of the original dance. Through this “amplification,” the work offers a new and more objective way of looking at “choreography,” so that the audience can grasp the choreographer’s thinking directly, rather than only through the opinions of experts or dance critics.
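As a toy illustration of the “dance to data to objects” idea (not the actual Synchronous Objects pipeline), the sketch below takes hypothetical tracked dancer positions over time and renders them as cumulative movement traces, one simple kind of “choreographic object.” All the data here is made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical tracking data: stage positions (x, y) in metres sampled over 20 s
# for three dancers; in the real project such data came from annotated video.
t = np.linspace(0.0, 20.0, 400)
dancers = {
    "dancer A": np.column_stack([2 + 2 * np.cos(0.4 * t), 1 + 1.5 * np.sin(0.4 * t)]),
    "dancer B": np.column_stack([0.3 * t, 2 + np.sin(0.8 * t)]),
    "dancer C": np.column_stack([5 - 0.2 * t, (0.5 * t) % 4]),
}

fig, ax = plt.subplots(figsize=(6, 4))
for name, xy in dancers.items():
    # The accumulated path over time becomes a visible "object" derived from the dance.
    ax.plot(xy[:, 0], xy[:, 1], label=name, linewidth=1.5)
    ax.scatter(*xy[-1], s=30)  # mark each dancer's final position

ax.set_xlabel("stage x (m)")
ax.set_ylabel("stage y (m)")
ax.set_title("Movement traces as a choreographic object")
ax.legend()
plt.tight_layout()
plt.show()
```

The same recorded data can be re-rendered in many other forms (alignment charts, cue diagrams, density maps), which is precisely what makes the “record, store, analyze, reproduce” approach so rich.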

A “mixed reality” technology performance: capturing the real scene and mixing in the virtual

Speaking of technological performance, a recent and very interesting work is CyberCube, a new media performance led by Kurokawa Interactive Media Art (new media artist Hu Jinxiang) and created together with several collaborating art teams. In CyberCube, performers interact with a cube on stage, and a “mixed reality” crossover presentation is built up through real-time tracking (the optical motion capture described earlier in this article), LED light bars, computer-controlled stage lights and a live webcast carrying the real-time effects. Inspired by science fiction writer William Gibson’s Neuromancer, CyberCube depicts people struggling to move between the real world and the virtual world as technology develops at high speed. The LED cube symbolizes the network world; the performer manipulates the cube until he can no longer extricate himself, suggesting how we exist in, escape from, merge with and shuttle between virtual and real space. Several high-speed cameras are set up around the performance space; they detect the performer’s movements, and the results are computed in real time to drive changes in the cube’s LEDs and in the projected effects of the stage lights. In addition, the live camera feed is composited with 3D objects, so the online audience watching the webcast sees a visual blend of the virtual and the real. The way the whole performance space is flexibly transformed into a redefined organic body is a highlight of the piece.
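Purely as a sketch of the kind of real-time mapping described above, and not the production’s actual software, the example below converts a tracked performer position into a brightness value for the cube’s LEDs, brightening them as the performer approaches. The function name, thresholds and positions are all hypothetical.

```python
import numpy as np

def led_brightness(performer_pos, cube_center, near=0.5, far=4.0):
    """Map the performer's distance from the cube to an LED brightness in [0, 1].

    performer_pos, cube_center: 3D positions in metres (e.g. from motion capture).
    Closer than `near` -> full brightness; farther than `far` -> dark.
    """
    d = np.linalg.norm(np.asarray(performer_pos) - np.asarray(cube_center))
    return float(np.clip((far - d) / (far - near), 0.0, 1.0))

# Hypothetical frame of tracking data: performer about 1.2 m from the cube.
print(led_brightness([1.0, 0.3, 2.0], [0.0, 0.0, 1.5]))  # ~0.81 -> mostly lit
```

In a performance like this, a mapping of this kind would run once per captured frame, alongside the compositing of the camera feed with 3D objects for the webcast.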

Reinforcing the scenario at low cost: just build it yourself!

To close this article, here is another application that feels quite different but which I personally consider to be the same in spirit: Nintendo Labo, a new way of playing with the Nintendo Switch unveiled on January 18, 2018. According to the videos Nintendo has released, the kits include cardboard, string, rubber bands and other DIY materials that players assemble into objects such as fishing rods, remote-control cars, a piano, a wearable backpack rig, pedals and more. To draw players into a given situation, Nintendo’s strategy is to reinforce the experience with relatively cheap materials (though buying rubber bands from Nintendo probably isn’t cheap either): players learn as they build and as they play, which expands and deepens the sensory experience and pulls them further into the virtual context the game constructs. Returning to the reality-virtuality continuum mentioned earlier, Nintendo Labo can be considered Augmented Virtuality (AV): information from the real world is brought into a virtual environment. With objects such as rubber bands and cardboard, Nintendo Labo carries the player’s movements and postures into the game, and real gravity and body motion are used to adjust and control what happens on screen.

VR virtual reality vs. AR augmented reality = yin-yang eyes

That’s right, the analogy is apt: digital devices have become modern people’s new sense organs, and what happens in the virtual world is visible only through the screen, a window between the virtual and the real. The idea of people shuttling back and forth between virtual and real worlds has been around for a long time, but only now has the computing power of digital technology made so many once-unimaginable applications possible. Digital tools have strengthened our ability to produce and distribute images while lowering the cost and threshold of making them; through the spread of mobile devices, screens of every size render those images into the details of daily life, and content developers keep arranging illusions that guide the audience into immersion, letting everyone come and go freely between the virtual and the real.