Preface

WebRTC is short for Web Real-Time Communication. With it, web developers can quickly build rich real-time multimedia applications without requiring users to download or install any plug-ins.

WebRTC allows Web applications to establish point-to-point connections between browsers to stream video, audio, or any other data.

Common WebRTC APIs in detail

WebRTC provides several core APIs:

  • getUserMedia: obtains a local media stream containing one or more tracks, such as video and audio tracks.
  • getDisplayMedia: captures a video stream of the computer screen; it cannot capture an audio stream for now. If you need audio, obtain it separately and add it to the stream's tracks so they play together.
  • RTCPeerConnection: establishes the P2P connection and transmits multimedia data.
  • RTCDataChannel: establishes a two-way data channel that can carry multiple data types.

Through these APIs, we can obtain local audio and video streams, establish point-to-point connections with other browsers to exchange those streams, and open a two-way data channel to send real-time data such as text and files.
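To illustrate the data-channel idea in isolation, here is a toy sketch. Note that this is not the real browser API: the channel pair below is a hypothetical in-memory stand-in, since a real RTCDataChannel has to be created with pc.createDataChannel on a negotiated RTCPeerConnection. What it shows is just the usage pattern: each endpoint holds a channel object, sends with send(), and receives through an onmessage callback.

```javascript
// Hypothetical in-memory stand-in for two connected data channels.
// A real RTCDataChannel is created via pc.createDataChannel(label)
// and surfaces on the remote peer through the 'datachannel' event.
function createChannelPair() {
  const a = { onmessage: null }
  const b = { onmessage: null }
  a.send = msg => b.onmessage && b.onmessage({ data: msg })
  b.send = msg => a.onmessage && a.onmessage({ data: msg })
  return [a, b]
}

const [local, remote] = createChannelPair()
const received = []
remote.onmessage = e => received.push(e.data)        // remote logs what it gets
local.onmessage = e => received.push('echo:' + e.data) // local logs the echo

local.send('hello')      // local -> remote
remote.send(received[0]) // remote echoes it back
// received is now ['hello', 'echo:hello']
```

The real API follows the same event-driven shape, which is why signaling-free demos like the one later in this article are easy to reason about.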

Constraints

const promise = navigator.mediaDevices.getUserMedia(constraints);

Constraints is an object that sets the configuration of the video and audio media tracks.

Both video and audio accept a Boolean or an object. The simplest usage is { audio: true, video: true }. The video entry can also specify aspect ratio, resolution, and frame rate; the audio entry can enable features such as echo cancellation. See MediaTrackConstraints for details.
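For instance, a constraints object requesting a preferred resolution and echo cancellation might look like the following (the property names come from the MediaTrackConstraints dictionary; the specific values here are arbitrary examples):

```javascript
// A sketch of a richer constraints object for getUserMedia.
// `ideal` values are preferences the browser tries to honor;
// `exact` values would be hard requirements that can make the call fail.
const constraints = {
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 30, max: 60 },
    aspectRatio: 16 / 9
  },
  audio: {
    echoCancellation: true,
    noiseSuppression: true
  }
}

// In the browser this object is passed straight to getUserMedia:
// navigator.mediaDevices.getUserMedia(constraints).then(stream => { /* ... */ })
```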

RTCPeerConnection

const pc = new RTCPeerConnection(): creates the connection (channel)

pc.addStream: adds a local media stream to the channel

pc.onaddstream: fires when a remote media stream is added

pc.setLocalDescription: sets the local SDP description

pc.setRemoteDescription: sets the remote SDP description

pc.onicecandidate: receives ICE candidate information used for NAT traversal, which is essentially address information

new RTCIceCandidate: creates an ICE candidate object

pc.addIceCandidate: adds the remote peer's ICE candidate to the channel

getDisplayMedia

This API captures a video stream of the computer screen and is often used for screen sharing or screen recording. It works like getUserMedia, but it can only capture a video stream, not an audio stream. If you want audio as well, manually call getUserMedia to get an audio stream and add its track to the on-screen video stream.

navigator.mediaDevices.getDisplayMedia({ video: true })
  .then((s) => {
    // Add an audio stream
    navigator.mediaDevices.getUserMedia({ audio: true })
      .then((audioStream) => {
        const [audioTrack] = audioStream.getAudioTracks()
        const [videoTrack] = s.getVideoTracks()
        const stream = new MediaStream([videoTrack, audioTrack]) // combined audio and video stream
      })
      .catch((err) => { console.log(err) })
  })
  .catch((err) => { console.log(err) })

Connection flow (similar to an HTTP handshake; essentially an exchange of information)

Here is an example using a call between Xiao Xu and Xiao Luo:

  • Xiao Xu first creates a PeerConnection object, then opens the local audio and video devices, wraps the audio and video data into a MediaStream, and adds it to the PeerConnection.

  • Xiao Xu then calls PeerConnection's CreateOffer method to create an offer SDP object, saves it with PeerConnection's SetLocalDescription method, and sends it to Xiao Luo through the signaling server.

  • Xiao Luo receives the offer SDP object sent by Xiao Xu, saves it with PeerConnection's SetRemoteDescription method, and calls PeerConnection's CreateAnswer method to create an answer SDP object. He saves the answer with SetLocalDescription and sends it to Xiao Xu through the signaling server.

  • Xiao Xu receives the answer SDP object sent by Xiao Luo and saves it with PeerConnection's SetRemoteDescription method.

  • During this SDP offer/answer exchange, both sides create the corresponding audio and video channels according to the SDP information and start gathering Candidate data. Candidate data can be understood simply as the client's address information (local IP address, public IP address, and the address assigned by a relay server).

  • As Xiao Xu gathers Candidate information, PeerConnection notifies him through the OnIceCandidate callback, and he sends each Candidate to Xiao Luo through the signaling server. Xiao Luo saves it with PeerConnection's AddIceCandidate method. The same exchange happens in the other direction, from Xiao Luo to Xiao Xu.

  • At this point Xiao Xu and Xiao Luo have established a P2P channel for audio and video transmission. When Xiao Luo receives Xiao Xu's audio and video stream, PeerConnection's OnAddStream callback returns a MediaStream object identifying that stream, which can be rendered on Xiao Luo's end. The same applies to the stream sent from Xiao Luo to Xiao Xu.
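The steps above can be sketched as a pure-JavaScript simulation, with plain objects standing in for the two PeerConnections and an array acting as the signaling server. All names and SDP strings here are hypothetical placeholders; a real exchange uses createOffer/createAnswer and a network transport for signaling.

```javascript
// Toy model of the offer/answer + candidate flow between Xiao Xu and Xiao Luo.
// Each "peer" simply records descriptions and candidates.
function makePeer(name) {
  return { name, localDesc: null, remoteDesc: null, candidates: [] }
}

const signaling = [] // stand-in for the signaling server's message queue
const xu = makePeer('xu')
const luo = makePeer('luo')

// 1. Xu creates an offer, saves it locally, and sends it via signaling
xu.localDesc = { type: 'offer', sdp: 'xu-sdp' }
signaling.push({ to: luo, desc: xu.localDesc })

// 2. Luo receives the offer, saves it as the remote description, and answers
const offerMsg = signaling.shift()
offerMsg.to.remoteDesc = offerMsg.desc
luo.localDesc = { type: 'answer', sdp: 'luo-sdp' }
signaling.push({ to: xu, desc: luo.localDesc })

// 3. Xu receives the answer and saves it as the remote description
const answerMsg = signaling.shift()
answerMsg.to.remoteDesc = answerMsg.desc

// 4. Candidates gathered on each side are relayed the same way;
//    AddIceCandidate stores the *remote* peer's candidates locally.
xu.candidates.push('luo-host-candidate') // came from Luo, via signaling
luo.candidates.push('xu-host-candidate') // came from Xu, via signaling
```

Once both sides hold each other's SDP and candidates, ICE connectivity checks can pick a working candidate pair and media starts to flow.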

Code practice (Vue)

For ease of testing, the signaling server is omitted here and the end-to-end channel is simulated locally. The communication principle and flow are the same; this is simply a simplified version.

<template>
  <div class="container">
    <video id="localWebcam" class="local-video" autoplay="autoplay" />
    <video id="webcam" class="video" autoplay="autoplay" />
  </div>
</template>

<script>
export default {
  mounted() {
    this.init()
  },
  methods: {
    init() {
      navigator.mediaDevices
        .getUserMedia({ video: { width: 720, height: 1200 }, audio: true })
        .then(stream => {
          document.querySelector('#localWebcam').srcObject = stream
          this.startPeerConnection(stream)
        })
        .catch(err => {
          console.log('The following error occurred: ' + err.name)
        })
    },
    /**
     * @author xuchen
     * @date 2020-11-07 11:28:26
     * @desc Establish communication
     */
    startPeerConnection(stream) {
      // STUN servers
      const config = {
        iceServers: [
          { url: 'stun:stun.services.mozilla.com' },
          { url: 'stun:stunserver.org' },
          { url: 'stun:stun.l.google.com:19302' }
        ]
      }
      // Create the channels
      const selfConnection = new RTCPeerConnection(config)
      const otherConnection = new RTCPeerConnection(config)
      // Add the local stream
      selfConnection.addStream(stream)
      // Exchange candidate information
      selfConnection.onicecandidate = e => {
        if (e.candidate) {
          otherConnection.addIceCandidate(new RTCIceCandidate(e.candidate))
        }
      }
      otherConnection.onicecandidate = e => {
        if (e.candidate) {
          selfConnection.addIceCandidate(new RTCIceCandidate(e.candidate))
        }
      }
      // Receive the remote media stream
      otherConnection.onaddstream = e => {
        document.querySelector('#webcam').srcObject = e.stream
      }
      // Create the offer
      selfConnection.createOffer().then(offer => {
        selfConnection.setLocalDescription(offer) // save the local SDP
        otherConnection.setRemoteDescription(offer) // set the remote SDP
        // Create the answer
        otherConnection.createAnswer().then(answer => {
          otherConnection.setLocalDescription(answer) // save the local SDP
          selfConnection.setRemoteDescription(answer) // set the remote SDP
        })
      })
    }
  }
}
</script>

<style lang="scss" scoped>
.container {
  position: fixed;
  left: 0;
  right: 0;
  top: 0;
  bottom: 0;
  margin: auto;
  width: 300px;
  height: 500px;
  background-color: #1c4054;
  .local-video,
  .video {
    width: 100%;
    height: 250px;
  }
}
</style>

Browser support

We can see from Caniuse that almost all modern browsers support WebRTC:

Different browsers use different API prefixes; webrtc-adapter can be used to smooth over these differences.