

This article is an extension of Barrage UI, a web-based barrage (danmaku) component I recently implemented. Before reading the examples and code below, take a look at the project documentation to understand how the component is used and what interfaces it exposes.

If you visit bilibili.com frequently, you are probably familiar with the concept of the masked barrage.

The masked barrage is a rendering effect introduced in mid-2018 by bilibili, a well-known barrage video site. It effectively keeps the scrolling comment text from obscuring the main subject of the video.

In fact, the implementation of bilibili's masked barrage has already been discussed and researched in detail elsewhere. To summarize, the main points are:

  1. Based on user data and machine-learning techniques, the key subjects of the video are extracted
  2. The server preprocesses the video and generates the corresponding mask data
  3. When the client plays the video, it loads the corresponding mask resources in real time
  4. On the front end, the mask is applied to the barrage layer

On the client side, since bilibili's barrage is implemented with DIV + CSS, the site transmits vector masks in SVG format (at least for now) and renders them via the CSS mask property.

There is an online discussion of this topic; those who are interested can learn more here.

Barrage UI

Barrage UI is a barrage component I implemented recently, mainly used to mount barrage animations on a web page.

The component provides a series of interfaces for customizing the behavior of the barrage, and it also lets you manipulate each frame of the animation at the render level. For example, you can:

  1. Read video information in real time
  2. Process each video frame in real time and compute a matting mask
  3. Pass the computed mask to the barrage component to achieve a real-time masked barrage

The following is a mask Barrage effect implemented based on the Barrage UI component:

Since it is not convenient to embed video in this article, the actual effect of the demo can be viewed here.

Here is how to achieve the animation effect shown above.

Chroma keying

The demo uses a video of Hatsune Miku dancing. Its main characteristic is that, apart from the character, the background is a fairly uniform solid color. For this type of image, we can matte out the subject (generate a "mask") using chroma keying.

Chroma keying, also known as color keying, is a post-production compositing technique. "Chroma" refers to a solid color, and "key" refers to the color to be removed. The person or object is filmed in front of a green screen, the green background is then keyed out, and another background is substituted. The technique is widely used in film, television, and game production, and chroma keying is also an essential part of virtual studios and visual effects.

Below is an example of chroma keying: a girl in blue stands in front of a green screen. On the left is the original footage; on the right, the green background has been keyed out and replaced with a new one.
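The keying decision can be sketched as a small helper that walks an RGBA pixel buffer (such as `getImageData().data`) and clears the alpha of pixels that look like green screen. This is an illustrative assumption, not the demo's actual code; production keyers work in a color space such as YCbCr and soften the matte edges:

```javascript
// Hypothetical helper (not part of Barrage UI): keys out "green screen"
// pixels in an RGBA buffer of the kind returned by getImageData().data.
// The threshold rule below is an assumption chosen for illustration.
function keyOutGreen(data, threshold = 100) {
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i];
    const g = data[i + 1];
    const b = data[i + 2];
    // A pixel counts as "green screen" when green clearly dominates
    if (g > threshold && g > r * 1.5 && g > b * 1.5) {
      data[i + 3] = 0; // make it transparent; RGB can stay unchanged
    }
  }
  return data;
}
```

Calling `keyOutGreen(frame.data)` on a captured frame leaves only the non-green subject opaque, which is exactly the shape a compositor needs.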

How to matte the video image

In the browser, we can draw each frame of the video onto a canvas in real time, read the RGBA values of every pixel from that canvas, and check whether each pixel's R (red), G (green), and B (blue) values meet the keying condition. Setting the A (alpha) value of the pixels to be removed to 0 yields the mask image used to composite the masked barrage.

Note:

The mask feature of the Barrage UI component is implemented with the `globalCompositeOperation` property of the 2D canvas API (`CanvasRenderingContext2D`), using the `source-in` composite mode. Therefore, you only need to set the unwanted pixels to transparent (alpha = 0); there is no need to change the RGB values of the image.
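To see why only the alpha channel of the mask matters, here is a rough per-pixel model of what `source-in` compositing computes (an illustration only, using straight rather than premultiplied alpha; `src` and `dst` are hypothetical pixel objects, not a Canvas API):

```javascript
// Illustration: per-pixel result of the 'source-in' composite mode,
// with alpha values in the 0..1 range.
// src = the barrage pixel being drawn, dst = the mask pixel on the canvas.
function sourceIn(src, dst) {
  return {
    r: src.r,
    g: src.g,
    b: src.b,
    // The output alpha is the product of both alphas, so the mask's RGB
    // values never influence the result -- only its alpha channel does.
    a: src.a * dst.a,
  };
}
```

Wherever the mask is transparent (`dst.a === 0`), the barrage pixel disappears; wherever the mask is opaque, the barrage shows through unchanged.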

The code implementation for this case is described below.

The specific implementation

Install the Barrage UI component

Install the component directly using yarn or npm:

yarn add barrage-ui
# or: npm install --save barrage-ui

HTML + CSS

Prepare a video element for playing the video. The parent element of the video is used to mount the barrage:

<div id="container">
  <video id="video" src="videos/demo.mp4" controls></video>
</div>

Set the style of #container and #video according to the actual size of the video (880×540) :

html, body {
  font: 14px/18px Helvetica, Arial, 'Microsoft Yahei', Verdana, sans-serif;
  width: 100%;
  margin: 0;
  padding: 0;
  background: #eee;
  overflow: hidden;
}

#container, #video {
  width: 880px;
  height: 540px;
}

#container {
  margin-top: 50vh;
  margin-left: 50vw;
  transform: translate(-50%, -50%);
  background-color: #ddd;
}

Create a barrage

import Barrage from 'barrage-ui';
import data from 'utils/mockData';

// Get the parent container
const container = document.getElementById('container');

// Create a barrage instance
const barrage = new Barrage({
  container: container,
});

// Reset the canvas height so the barrage does not cover the video playback controls
barrage.canvas.height = container.clientHeight - 80;

// Load the barrage data
barrage.setData(data);

`mockData` is a utility module that generates random barrage data.

For details on the content and format of Barrage data, see the Barrage UI project documentation
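As a rough sketch of what such a module might look like, here is a hypothetical stand-in (this is not the project's actual `utils/mockData`, and the field names `time`, `text`, and `color` are assumptions for illustration; consult the Barrage UI documentation for the real data format):

```javascript
// Hypothetical mock-data generator: produces `count` random barrage items
// spread over the first `duration` milliseconds of playback.
// Field names are illustrative assumptions, not the component's actual schema.
function createMockData(count = 50, duration = 60000) {
  const samples = ['2333', 'Nice dance!', 'Miku!!', 'front row', 'awsl'];
  const data = [];
  for (let i = 0; i < count; i++) {
    data.push({
      time: Math.floor(Math.random() * duration), // entry time in ms
      text: samples[Math.floor(Math.random() * samples.length)],
      color: '#fff',
    });
  }
  return data;
}
```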

Capture video images in real time

// Get the video element
const video = document.getElementById('video');

// Create a new canvas to draw the video in real time (pure drawing, no need to add to the page)
const vCanvas = document.createElement('canvas');
vCanvas.width = video.clientWidth;
vCanvas.height = video.clientHeight;
const vContext = vCanvas.getContext('2d');

// Draw video to canvas in real time
barrage.afterRender = () => {
  vContext.drawImage(video, 0, 0, vCanvas.width, vCanvas.height);
};

afterRender() is one of the component's render-cycle hooks; it draws the current video frame onto the intermediate canvas vCanvas after each frame of the barrage animation is rendered. Note that vCanvas is used only to capture video frames in real time and does not need to be added to the page.

Calculate mask information in real time

// Read the data of the canvas vCanvas before rendering, and process it into the mask image
barrage.beforeRender = () => {
  // Read the current frame
  const frame = vContext.getImageData(0, 0, vCanvas.width, vCanvas.height);

  // Total number of pixels
  const pxCount = frame.data.length / 4;

  // Turn the frame into the mask image we need
  for (let i = 0; i < pxCount; i++) {
    // ES6 destructuring assignment is avoided here for performance:
    // it would create a large number of temporary objects on every frame
    const r = frame.data[i * 4 + 0];
    const g = frame.data[i * 4 + 1];
    const b = frame.data[i * 4 + 2];

    // Make everything outside the (near-)black area transparent
    if (r > 15 || g > 15 || b > 15) {
      frame.data[i * 4 + 3] = 0;
    }
  }

  // Set the mask
  barrage.setMask(frame);
};

beforeRender(), another render-cycle hook of the component, computes the mask image before each frame of the barrage animation is rendered. The interface used to update the mask is setMask().

Operation binding of video and bullet screen

Finally, to keep the barrage in sync with video playback, a few event bindings are needed:

// Bind the playback event
video.addEventListener(
  'play',
  () => {
    barrage.play();
  },
  false
);

// Bind the pause event
video.addEventListener(
  'pause',
  () => {
    barrage.pause();
  },
  false
);

// Toggle the playback progress
video.addEventListener(
  'seeked',
  () => {
    barrage.goto(video.currentTime * 1000);
  },
  false
);

The Barrage UI component's three interfaces play(), pause(), and goto() are used to play, pause, and seek the barrage animation, respectively. Note that the playback progress obtained from the video.currentTime property is a floating-point number in seconds; it must be converted to milliseconds before being passed to the barrage component.

Source code

The demo for this article has been uploaded to GitHub; those interested can view the source details here.

If you have any suggestions or questions about the Barrage UI component, please open an issue on the project to help me keep improving and iterating. Stars and PRs are also welcome.


Text: Parksben

Audio: fluorspar


This article is published with the authorization of its author and produced by the Chuangyu front-end team; copyright belongs to the author. Please indicate the source when reprinting. Article link: juejin.cn/post/684490…


Thank you for reading.