You may have used the web version of a video interview tool or an online meeting app: you can share your screen and turn on your camera, all in the browser.

As a front-end developer, have you ever wondered how these features work?

The browser’s audio and video communication capabilities are called WebRTC (Web Real-Time Communication), a standard browser API that emerged as network speeds increased and demand for audio and video grew.

There are five steps in audio and video communication: capture, encoding, communication, decoding, and rendering.

These five steps are easy to understand, but there’s a lot to each step.

Today we will implement the capture step as a quick way in, to get an intuitive feel for what WebRTC can do.

We will implement screen recording, camera recording, playback of the recorded content, and downloading.

So let’s get started.

Approach

The browser provides the navigator.mediaDevices.getDisplayMedia and navigator.mediaDevices.getUserMedia APIs, which can be used to get the screen stream and the microphone and camera streams, respectively.

As you can see from the names, getDisplayMedia gets the stream of the display (the screen), while getUserMedia gets the streams of the user’s devices, namely the microphone and camera.

Get a stream and set it as the video element’s srcObject property to play it.
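For example, here is a minimal sketch of this idea (it assumes a <video autoplay id="player"></video> element on the page, matching the markup we will write below):

// Minimal sketch: capture the camera and microphone and play the stream live
// (assumes a <video autoplay id="player"></video> element exists)
const player = document.querySelector('#player');

navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => {
        player.srcObject = stream;
    })
    .catch((err) => {
        // the user may deny permission, or no device may be available
        console.error('Could not get the media stream:', err);
    });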

If you want to record, you need the MediaRecorder API: it listens for the data in the stream, and we can store that data in an array. For playback, we build a Blob from that data and set it on the other video element via an object URL.

The download is also based on the data recorded by MediaRecorder: it is turned into a Blob, and the download is triggered through an <a> tag.

So with that in mind, let’s write the code.

Code implementation

We put two video tags on the page, one for watching the recording in real time and the other for playing it back.

Then we add some buttons.

<section>
    <video autoplay id="player"></video>
    <video id="recordPlayer"></video>
</section>
<section>
    <button id="startScreen">Start screen recording</button>
    <button id="startCamera">Start camera recording</button>
    <button id="stop">Stop</button>
    <button id="reply">Replay</button>
    <button id="download">Download</button>
</section>

Both the “Start screen recording” and “Start camera recording” buttons start a recording when clicked, just in different ways.

startScreenBtn.addEventListener('click', () => {
    record('screen');
});
startCameraBtn.addEventListener('click', () => {
    record('camera');
});

One uses the getUserMedia API to get microphone and camera data; the other uses the getDisplayMedia API to get screen data.

async function record(recordType) {
    const getMediaMethod = recordType === 'screen' ? 'getDisplayMedia' : 'getUserMedia';
    const stream = await navigator.mediaDevices[getMediaMethod]({
        video: {
            width: 500,
            height: 300,
            frameRate: 20
        }
    });
    player.srcObject = stream;
}

Specify a modest width, height, and frame rate, set the returned stream as the video’s srcObject property, and you can see the captured picture in real time.
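If you also want sound in the recording, you can additionally pass an audio constraint. A small sketch of that variation (note that microphone audio via getUserMedia is widely supported, while capturing system audio via getDisplayMedia is not supported in every browser):

const stream = await navigator.mediaDevices[getMediaMethod]({
    video: {
        width: 500,
        height: 300,
        frameRate: 20
    },
    // also request audio; system-audio support via getDisplayMedia varies by browser
    audio: true
});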

Then, to record, you need the MediaRecorder API: pass in the stream, then call the start method to begin recording.

let blobs = [], mediaRecorder;

mediaRecorder = new MediaRecorder(stream, {
    mimeType: 'video/webm'
});
mediaRecorder.ondataavailable = (e) => {
    blobs.push(e.data);
};
mediaRecorder.start(100);

The argument to start is the timeslice: passing 100 means recorded data is delivered in chunks roughly every 100ms.

Listen for the dataavailable event and save each chunk of data into the blobs array.
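For contrast, here is a sketch of what happens if you call start without a timeslice: the recorder buffers everything and delivers it in one piece.

// Without a timeslice, dataavailable fires only once, with the whole
// recording, when stop() is called
mediaRecorder.start();

// ... later, e.g. when the user clicks "Stop"
mediaRecorder.stop(); // triggers a single dataavailable, then the stop event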

Both playback and download then work by creating a Blob from the blobs array.

Replay:

replyBtn.addEventListener('click', () => {
    const blob = new Blob(blobs, {type: 'video/webm'});
    recordPlayer.src = URL.createObjectURL(blob);
    recordPlayer.play();
});

The Blob must be passed through URL.createObjectURL to get an object URL before it can be played.
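Object URLs keep a reference to the underlying Blob alive, so it is good practice to revoke them once they are no longer needed. A sketch of how the playback handler above could do that:

replyBtn.addEventListener('click', () => {
    const blob = new Blob(blobs, {type: 'video/webm'});
    const url = URL.createObjectURL(blob);
    recordPlayer.src = url;
    recordPlayer.play();
    // release the object URL once playback finishes
    recordPlayer.onended = () => URL.revokeObjectURL(url);
});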

Download:

downloadBtn.addEventListener('click', () => {
    const blob = new Blob(blobs, {type: 'video/webm'});
    const url = URL.createObjectURL(blob);

    const a = document.createElement('a');
    a.href = url;
    a.style.display = 'none';
    a.download = 'record.webm';
    a.click();
});

Create a hidden <a> tag, set its download attribute to enable downloading, then trigger its click event.

So far, we have implemented microphone, camera, and screen recording, with playback and download support.
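One caveat worth knowing: mediaRecorder.stop() only ends the recording; it does not stop the capture itself, so the camera light or screen-share indicator stays on. A possible enhancement (a sketch, not part of the code above) is to also stop the stream’s tracks in the stop handler:

stopBtn.addEventListener('click', () => {
    mediaRecorder && mediaRecorder.stop();
    // also stop every track so the camera / screen share is released
    if (player.srcObject) {
        player.srcObject.getTracks().forEach((track) => track.stop());
        player.srcObject = null;
    }
});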


The full code has been uploaded to GitHub: github.com/QuarkGluonP…

Here’s a copy:

<html>
<head>
    <title>Record the screen and download it</title>
</head>
<body>
    <section>
        <video autoplay id="player"></video>
        <video id="recordPlayer"></video>
    </section>
    <section>
        <button id="startScreen">Start screen recording</button>
        <button id="startCamera">Start camera recording</button>
        <button id="stop">Stop</button>
        <button id="reply">Replay</button>
        <button id="download">Download</button>
    </section>

    <script>
        const player = document.querySelector('#player');
        const recordPlayer = document.querySelector('#recordPlayer');
        let blobs = [], mediaRecorder;

        async function record(recordType) {
            const getMediaMethod = recordType === 'screen' ? 'getDisplayMedia' : 'getUserMedia';
            const stream = await navigator.mediaDevices[getMediaMethod]({
                video: {
                    width: 500,
                    height: 300,
                    frameRate: 20
                }
            });
            player.srcObject = stream;

            mediaRecorder = new MediaRecorder(stream, {
                mimeType: 'video/webm'
            });
            mediaRecorder.ondataavailable = (e) => {
                blobs.push(e.data);
            };
            mediaRecorder.start(100);
        }

        const downloadBtn = document.querySelector('#download');
        const startScreenBtn = document.querySelector('#startScreen');
        const startCameraBtn = document.querySelector('#startCamera');
        const stopBtn = document.querySelector('#stop');
        const replyBtn = document.querySelector('#reply');

        startScreenBtn.addEventListener('click', () => {
            record('screen');
        });
        startCameraBtn.addEventListener('click', () => {
            record('camera');
        });

        stopBtn.addEventListener('click', () => {
            mediaRecorder && mediaRecorder.stop();
        });

        replyBtn.addEventListener('click', () => {
            const blob = new Blob(blobs, {type: 'video/webm'});
            recordPlayer.src = URL.createObjectURL(blob);
            recordPlayer.play();
        });

        downloadBtn.addEventListener('click', () => {
            const blob = new Blob(blobs, {type: 'video/webm'});
            const url = URL.createObjectURL(blob);

            const a = document.createElement('a');
            a.href = url;
            a.style.display = 'none';
            a.download = 'record.webm';
            a.click();
        });
    </script>
</body>
</html>

Conclusion

Audio and video communication is divided into five steps: capture, encoding, communication, decoding, and rendering. The browser’s audio and video communication APIs are collectively called WebRTC.

We implemented the capture step to get started with WebRTC, with playback and download support as well.

There are three APIs involved:

  • navigator.mediaDevices.getUserMedia: gets the microphone and camera streams
  • navigator.mediaDevices.getDisplayMedia: gets the screen stream
  • MediaRecorder: listens to the stream’s data to implement recording
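None of these APIs is universally available (getUserMedia and getDisplayMedia require a secure context, and MediaRecorder mime-type support differs between browsers), so a defensive check before recording is a reasonable idea. A minimal sketch:

// Sketch: feature-detect before trying to record
if (!navigator.mediaDevices || !navigator.mediaDevices.getDisplayMedia) {
    console.warn('Screen capture is not supported in this browser');
}
if (typeof MediaRecorder === 'undefined' || !MediaRecorder.isTypeSupported('video/webm')) {
    console.warn('Recording video/webm is not supported in this browser');
}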

We used the first two APIs to capture the screen, microphone, and camera streams, then recorded them with MediaRecorder, saved the data into an array, and generated a Blob from it.

A video element can play a stream directly by setting its srcObject property; to play the recorded Blob, you need URL.createObjectURL.

The download is implemented by pointing an <a> tag at the Blob’s object URL, specifying the download behavior with the download attribute, and then triggering its click event programmatically.

We have learned how to capture data with WebRTC, which is the data source of audio and video communication; encoding, communication, decoding, and rendering are still ahead, and we will dig into them later.