Preface

I'm sure you've all used web-based video conferencing, video interviews, online video classes and the like. These features all depend on one technology: WebRTC, a real-time communication technology.

MDN definition of WebRTC:

WebRTC (Web Real-Time Communications) is a real-time communication technology that allows web applications or sites to establish peer-to-peer connections between browsers without an intermediary, and to transmit video streams, audio streams, or any other data. The WebRTC standards make it possible to build peer-to-peer data sharing and teleconferencing without installing any plug-ins or third-party software.

In fact, this technology has been around for many years, but back then the application scenarios were limited, it saw relatively little use, and it was not mature enough. As times and technology have moved on, it has grown steadily more stable and mature, and its use keeps widening, especially in the pandemic era.

Today we will take a quick first look and build a simple demo to get a feel for what WebRTC can do: we will record the computer screen, record the camera, play back the recorded content, and support downloading it.

Implementation approach

The browser provides two APIs: navigator.mediaDevices.getDisplayMedia and navigator.mediaDevices.getUserMedia.

  • navigator.mediaDevices.getDisplayMedia gets a stream of the screen.
  • navigator.mediaDevices.getUserMedia gets streams from the user's devices, that is, the microphone, the camera and so on.
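Both methods take the same shape of constraints object. As a small aside, here is a hypothetical helper (the `buildConstraints` name and the `withAudio` flag are my additions, not part of the demo) showing how you might build those constraints, adding `audio: true` when you also want the microphone:

```javascript
// Hypothetical helper: builds the constraints object passed to
// getDisplayMedia / getUserMedia. Video-only matches the demo below;
// setting withAudio to true also requests the microphone.
function buildConstraints(withAudio) {
    const constraints = {
        video: { width: 500, height: 300, frameRate: 20 }
    };
    if (withAudio) {
        constraints.audio = true;
    }
    return constraints;
}

// In the browser you would use it like:
// const stream = await navigator.mediaDevices.getUserMedia(buildConstraints(true));
```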

Once you have the stream, assign it to the srcObject property of a video element to play it.

If you want to record a video, you need the MediaRecorder API, which listens for data on the stream; we can store that data in an array. For playback, you then build a Blob from that data and point the other video element at it.
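Browser support for recording containers varies, so before constructing the MediaRecorder you can ask which mimeType is available. A sketch (the `pickMimeType` helper is made up, and the support check is injected as a parameter so the logic runs anywhere; in a real page you would pass `MediaRecorder.isTypeSupported`):

```javascript
// Hypothetical helper: returns the first recordable mimeType from a
// candidate list, or null if none is supported. The isSupported
// predicate is injected; in the browser, pass MediaRecorder.isTypeSupported.
function pickMimeType(candidates, isSupported) {
    for (const type of candidates) {
        if (isSupported(type)) {
            return type;
        }
    }
    return null;
}

// Browser usage:
// const mimeType = pickMimeType(
//     ['video/webm;codecs=vp9', 'video/webm'],
//     (t) => MediaRecorder.isTypeSupported(t)
// );
```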

The download is also based on the data recorded by MediaRecorder: turn it into a Blob and trigger the download through an a tag.

Code implementation

Real-time playback

First, put two video tags on the page: one for watching the recording in real time, the other for playback. Then a few action buttons.

<div>
    <video autoplay id="player"></video>
    <video id="recordPlayer"></video>
</div>
<div>
    <button id="startScreenBtn">Start screen recording</button>
    <button id="startCameraBtn">Start camera</button>
    <button id="replyBtn">Playback</button>
    <button id="downloadBtn">Download</button>
</div>

Next, add a click handler to each of the two start buttons. Both start a recording, just from different sources; the parameter we pass in decides which.

let blobs = [], mediaRecorder;

startScreenBtn.addEventListener('click', () => {
    record('screen');
});

startCameraBtn.addEventListener('click', () => {
    record('camera');
});

async function record(recordType) {
    // getDisplayMedia captures the screen, getUserMedia the microphone and camera
    const getMediaMethod = recordType === 'screen' ? 'getDisplayMedia' : 'getUserMedia';

    // Getting the stream is an asynchronous task
    // You can specify the video width, height and frame rate
    const stream = await navigator.mediaDevices[getMediaMethod]({
        video: {
            width: 500,
            height: 300,
            frameRate: 20
        }
    });

    // Assign the returned stream to the srcObject property of the video
    // element to watch the audio/video in real time
    player.srcObject = stream;

    // Record with MediaRecorder: pass in the stream obtained above
    // and configure the mime type
    mediaRecorder = new MediaRecorder(stream, {
        mimeType: 'video/webm'
    });

    // Listen for the dataavailable event and save the received data into the blobs array
    mediaRecorder.ondataavailable = (e) => {
        blobs.push(e.data);
    };

    // Call start to begin recording; the argument is the timeslice:
    // passing 100 means data is delivered every 100ms
    mediaRecorder.start(100);
}

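The demo never stops the capture; in a real page you would also want a stop step that ends the recorder and releases the screen or camera. A sketch (`stopRecord` is a made-up name, operating on the same kind of recorder and stream as above):

```javascript
// Hypothetical stop handler: ends the MediaRecorder (which flushes a
// final dataavailable event) and releases the screen/camera by
// stopping every track on the captured stream.
function stopRecord(recorder, stream) {
    if (recorder && recorder.state !== 'inactive') {
        recorder.stop();
    }
    stream.getTracks().forEach((track) => track.stop());
}
```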

Playback

Wrap the blobs array in a Blob, run it through URL.createObjectURL, and use the result as the video's URL to play it.

replyBtn.addEventListener('click', () => {
    const blob = new Blob(blobs, {
        type: 'video/webm'
    });
    recordPlayer.src = URL.createObjectURL(blob);
    recordPlayer.play();
});
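Note that every playback click creates a fresh object URL, and old ones are only released when the page unloads. One way to avoid accumulating them (a sketch; `makeUrlSwapper` is a made-up helper, and the URL API is injected so it can be exercised outside the browser):

```javascript
// Hypothetical helper: keeps at most one object URL alive, revoking
// the previous one whenever a new blob is shown. Pass the global URL
// object in the browser.
function makeUrlSwapper(urlApi) {
    let current = null;
    return function swap(blob) {
        if (current !== null) {
            urlApi.revokeObjectURL(current);
        }
        current = urlApi.createObjectURL(blob);
        return current;
    };
}

// Browser usage:
// const swap = makeUrlSwapper(URL);
// recordPlayer.src = swap(new Blob(blobs, { type: 'video/webm' }));
```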

Download

Downloading works much like playback: generate a hidden a tag, set its download attribute to the file name, and then fire its click event.

downloadBtn.addEventListener('click', () => {
    const blob = new Blob(blobs, {
        type: 'video/webm'
    });
    const url = URL.createObjectURL(blob);
    const a = document.createElement('a');
    a.href = url;
    a.download = 'record.webm';
    a.style.display = 'none';
    a.click();
});

Conclusion

We only demonstrated screen recording here; recording the camera works exactly the same way.

There are three APIs involved:

  • navigator.mediaDevices.getUserMedia: gets the streams of the microphone and camera
  • navigator.mediaDevices.getDisplayMedia: gets the stream of the screen
  • MediaRecorder: listens for data on the stream for recording

  1. We used the first two APIs to capture the screen, microphone and camera streams, recorded them with MediaRecorder, saved the data into an array, and generated a Blob from it.
  2. A video element can play a stream directly via its srcObject property; to play blobs, you need URL.createObjectURL and the src property.
  3. The download points an a tag at the Blob's object URL, specifies the download behavior with the download attribute, and then triggers a click event manually.

Finally, here is the complete code. Thanks for reading.

<html>

<head>
    <title>A first taste of WebRTC</title>
</head>

<body>
    <div>
        <video autoplay id="player"></video>
        <video id="recordPlayer"></video>
    </div>
    <div>
        <button id="startScreenBtn">Start screen recording</button>
        <button id="startCameraBtn">Start camera</button>
        <button id="replyBtn">Playback</button>
        <button id="downloadBtn">Download</button>
    </div>

    <script>
        let blobs = [],
            mediaRecorder;

        async function record(recordType) {
            const getMediaMethod = recordType === 'screen' ? 'getDisplayMedia' : 'getUserMedia';
            const stream = await navigator.mediaDevices[getMediaMethod]({
                video: {
                    width: 500,
                    height: 300,
                    frameRate: 20
                }
            });
            player.srcObject = stream;
            mediaRecorder = new MediaRecorder(stream, {
                mimeType: 'video/webm'
            });
            mediaRecorder.ondataavailable = (e) => {
                blobs.push(e.data);
            };
            mediaRecorder.start(100);
        }

        startScreenBtn.addEventListener('click', () => {
            record('screen');
        });

        startCameraBtn.addEventListener('click', () => {
            record('camera');
        });

        replyBtn.addEventListener('click', () => {
            const blob = new Blob(blobs, {
                type: 'video/webm'
            });
            recordPlayer.src = URL.createObjectURL(blob);
            recordPlayer.play();
        });

        downloadBtn.addEventListener('click', () => {
            const blob = new Blob(blobs, {
                type: 'video/webm'
            });
            const url = URL.createObjectURL(blob);
            const a = document.createElement('a');
            a.href = url;
            a.download = 'record.webm';
            a.style.display = 'none';
            a.click();
        });
    </script>
</body>

</html>