preface

From Head to Toe is planned as a series of articles, including a WebRTC multi-party video sub-series, which is expected to cover:

  • WebRTC in Practice (1): the basics, plus one-to-one local and networked peer connections.
  • WebRTC in Practice (2): mainly data transmission and multi-party local and networked peer connections.
  • WebRTC in Practice (3): a one-to-one video chat project, including screenshots, recording, and more.
  • WebRTC + Canvas: a shared whiteboard project.
  • 💘🍦🙈Vchat – a social chat system (vue + Node + mongodb) series of articles

Since writing these articles takes real effort, they won't necessarily be published in this order; I'll interleave articles on other topics, such as Vue, JavaScript, or whatever else I'm learning. In each article I'll highlight the pitfalls I ran into, so that you can hopefully take fewer detours while learning. Of course, what I write is not the definitive answer, only my own way of thinking; if you have a better approach, I hope we can learn from each other.

I also hope you'll follow and support me; your attention is what motivates me to write better articles.

  • Sample source code for this article: webrtc-stream
  • Article repository: 🍹🍰 fe-code
  • Demo for this article (Chrome recommended)

The goal of this installment is a 1 v 1 video call (my laptop camera doesn't work, so I'm using a virtual camera). The article is quite long; feel free to bookmark it and read it slowly later.

At the end of the article there are a discussion group and an official account; I hope you'll support them, thanks 🍻.

What is WebRTC?

WebRTC was originally developed by a Swedish company called Global IP Solutions (GIPS). Google acquired GIPS in 2011 and open-sourced its code, then worked with the relevant IETF and W3C standards bodies to ensure industry consensus. Among them:

  • W3C's Web Real-Time Communications (WEBRTC) group: defines the browser APIs.
  • IETF's Real-Time Communication in Web-Browsers (RTCWEB) group: defines the required protocols, data formats, and security functions.

Simply put, WebRTC is an open-source project that enables real-time communication of audio, video, and data within web applications. In real-time communication, capturing and processing audio and video is a very complicated process: codecs for the streams, noise reduction, echo cancellation, and so on. In WebRTC, however, all of this is handled by the browser's underlying implementation. We can take the optimized media stream directly and output it to the local screen and speakers, or forward it to a peer.

WebRTC audio and Video processing engine:

So we can establish a browser-to-browser peer-to-peer (P2P) connection for real-time audio and video communication, without any third-party plug-ins. WebRTC exposes a number of APIs for us; in the course of real-time audio and video communication we mainly use these three:

  • getUserMedia: obtain audio and video streams (MediaStream)
  • RTCPeerConnection: peer-to-peer communication
  • RTCDataChannel: arbitrary data communication

However, while the browser solves most of the audio and video processing for us, we still need to pay special attention to the size and quality of the streams we request. Even if the hardware can capture HD-quality streams, the CPU and bandwidth may not be able to keep up, which matters especially when establishing multiple peer connections.

implementation

Next, we will gradually understand the process of WebRTC real-time communication by analyzing the API mentioned above.

getUserMedia

MediaStream

The getUserMedia API is probably familiar, since it powers common HTML5 features such as audio recording; it retrieves a media stream (MediaStream) from your device. It takes a constraints object as a parameter, which specifies what kind of media stream to fetch.

    navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    // The parameter says we want both audio and video
        .then(stream => {
          // Got the optimized media stream
          let video = document.querySelector('#rtc');
          video.srcObject = stream;
        })
        .catch(err => {
          // Handle the error
        });

Let’s take a quick look at the MediaStream we’ve captured.

You can see it has a lot of properties; we only need a quick look here, and you can read more about it on MDN.

  • id [String]: uniquely identifies the current MediaStream, so the id changes every time you refresh the browser or re-acquire the stream.
  • active [Boolean]: whether the MediaStream is currently active (that is, whether it can play).
  • onactive: handler fired when active becomes true.

In conjunction with the figure above, let's review the prototype and prototype chain discussed previously. MediaStream's __proto__ points to its constructor's prototype object, which in turn has a constructor property pointing back to the constructor; that is, the constructor of this object is the function named MediaStream. If prototypes are unfamiliar, take a look at the earlier article on JavaScript prototypes, prototype chains, and Canvas captchas.

Here you can also inspect the captured tracks via getAudioTracks() and getVideoTracks(); see MDN for more information.

  • kind: the type of the current media track (audio/video).
  • label: the name of the media device; I'm using a virtual camera here.
  • muted: whether the media track is muted.

compatibility

Continuing with getUserMedia: navigator.mediaDevices.getUserMedia is the newer API; the legacy version is navigator.getUserMedia. To avoid compatibility problems, we can add a small shim (in practice, since WebRTC support is still uneven, you may want an adapter such as adapter.js).

    // Check that navigator.mediaDevices exists; if not, assign an empty object
    if (navigator.mediaDevices === undefined) {
        navigator.mediaDevices = {};
    }
    
    // Then check for navigator.mediaDevices.getUserMedia;
    // if it's missing, fall back to the prefixed legacy APIs
    if (navigator.mediaDevices.getUserMedia === undefined) {
        navigator.mediaDevices.getUserMedia = function(prams) {
            let getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
            // No legacy implementation either
            if (!getUserMedia) {
                return Promise.reject(new Error('getUserMedia is not implemented in this browser'));
            }
            return new Promise(function(resolve, reject) {
                getUserMedia.call(navigator, prams, resolve, reject);
            });
        };
    }
    navigator.mediaDevices.getUserMedia(constraints)
        .then(stream => {
            let video = document.querySelector('#rtc');
            if ('srcObject' in video) { // Check whether the srcObject property is supported
                video.srcObject = stream;
            } else {
                video.src = window.URL.createObjectURL(stream);
            }
            video.onloadedmetadata = function(e) {
                video.play();
            };
        })
        .catch((err) => { // Handle the error
            console.error(err.name + ': ' + err.message);
        });

constraints

We can use the constraints object to specify properties of the requested media stream. For example, to request only one kind of stream:

    navigator.mediaDevices.getUserMedia({ audio: false, video: true });
    // Only the video stream is required, no audio

Specify the width and height of the video stream, or give ideal, minimum, and maximum values:

    // When changing the width and height of the video stream,
    // if the aspect ratio differs from the captured one, part of the image is simply cropped
    { audio: false, video: { width: 1280, height: 720 } }

    // Set ideal, maximum, and minimum values
    {
      audio: true,
      video: {
        width: { min: 1024, ideal: 1280, max: 1920 },
        height: { min: 776, ideal: 720, max: 1080 }
      }
    }

For mobile devices, you can also specify the front camera or rear camera:

    { audio: true, video: { facingMode: "user" } }                   // front camera
    { audio: true, video: { facingMode: { exact: "environment" } } } // rear camera
    // You can also specify a device id;
    // navigator.mediaDevices.enumerateDevices() lists the available devices
    { video: { deviceId: myCameraDeviceId } }

Another interesting option is to set the video source as the screen, but only Firefox currently supports this feature.

    { audio: true, video: { mediaSource: 'screen' } }

I won't keep copying things over here; there's much more on MDN, ^_^!

RTCPeerConnection

The RTCPeerConnection interface represents a WebRTC connection between the local computer and a remote peer. It provides methods to create, hold, monitor, and close the connection. – MDN

Overview

RTCPeerConnection, the API for creating peer-to-peer connections, is the key to real-time audio and video communication. During peer-to-peer communication, a series of information has to be exchanged; this process is usually called signaling. The tasks to complete in the signaling phase:

  • Create an RTCPeerConnection on each end of the connection and add the local media stream.
  • Obtain and exchange the local and remote descriptions: local media metadata in SDP format.
  • Obtain and exchange network information: the potential connection endpoints, known as ICE candidates.

Even though WebRTC is called peer-to-peer, that doesn't mean no server is involved. On the contrary, before the peer-to-peer channel is established, the two sides have no way to reach each other at all. This means that during the signaling phase we need a communication service to help establish the connection. WebRTC itself does not specify a signaling mechanism, so we can use (but are not limited to) XMPP, XHR, WebSocket, and so on for the signaling exchange. At work I use Strophe.js, based on XMPP, for two-way communication, but for this demo I use Socket.IO plus Koa.
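Whichever transport you pick, the signaling service itself is conceptually tiny: a relay keyed by account that forwards opaque messages. A minimal in-memory sketch (the class and method names here are mine, not from the demo source):

```javascript
// A minimal in-memory sketch of what a signaling service does: it only
// relays messages between named clients and never inspects the SDP itself.
class SignalingHub {
  constructor() {
    this.clients = new Map(); // account name -> message handler
  }
  join(account, onMessage) {
    this.clients.set(account, onMessage);
  }
  // Forward one signaling message (offer / answer / ICE candidate) to a peer
  send(toAccount, message) {
    const handler = this.clients.get(toAccount);
    if (!handler) return false; // recipient not online
    handler(message);
    return true;
  }
}

// Usage: A's offer reaches B through the hub, before any P2P channel exists
const hub = new SignalingHub();
const inboxB = [];
hub.join('A', () => {});
hub.join('B', msg => inboxB.push(msg));
hub.send('B', { type: 'offer', from: 'A', sdp: 'v=0 ...' });
console.log(inboxB[0].type); // "offer"
```

A real setup does exactly this over Socket.IO or XMPP; the relay stays dumb, and all the WebRTC logic lives in the clients.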

NAT traversal technology

Let's start with the first task: create an RTCPeerConnection on each end of the connection and add the local media stream. Note that in a typical live-streaming setup, only the broadcaster adds a local stream for output; the other participants just receive the stream to watch.

RTCPeerConnection also needs to be prefixed due to browser differences.

let PeerConnection = window.RTCPeerConnection ||
                     window.mozRTCPeerConnection ||
                     window.webkitRTCPeerConnection;
let peer = new PeerConnection(iceServers);

RTCPeerConnection accepts a configuration object; its iceServers field looks like this:

    {
      iceServers: [
        { url: "stun:stun.l.google.com:19302" }, // Google's public STUN service
        {
          url: "turn:***",
          username: ***,   // username
          credential: ***  // password
        }
      ]
    }

The configuration lists two kinds of URL, STUN and TURN. They are the key to WebRTC peer-to-peer communication, and they address the problem every P2P connection has to solve: NAT traversal.

Network Address Translation (NAT) is a technique for coping with the shortage of IPv4 addresses: one public IP address typically maps to N internal addresses. As a result, when browsers on different LANs try to connect via WebRTC, neither can directly learn the other's public address, so they cannot communicate. This is why NAT traversal (also called hole punching) is required. The basic flow of NAT traversal is as follows:

NAT traversal is generally carried out with the ICE protocol framework, short for Interactive Connectivity Establishment, which uses the STUN and TURN protocols to punch through. More information about NAT traversal can be found in articles on STUN & TURN and the ICE framework of the P2P communication standard.
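To get a feel for what a gathered candidate actually carries, here is a small parser for the `candidate:` line format. The field order follows the standard a=candidate grammar; the sample line is illustrative, not taken from the demo code:

```javascript
// Parse an a=candidate line into its fields. Per the ICE grammar the order is:
// foundation, component, transport, priority, address, port, "typ", type.
function parseCandidate(line) {
  const parts = line.replace(/^candidate:/, '').split(' ');
  return {
    foundation: parts[0],
    component: Number(parts[1]),  // 1 = RTP, 2 = RTCP
    protocol: parts[2].toLowerCase(),
    priority: Number(parts[3]),
    address: parts[4],
    port: Number(parts[5]),
    // host = local address, srflx = learned via STUN, relay = via TURN
    type: parts[7],
  };
}

const cand = parseCandidate(
  'candidate:842163049 1 udp 1677729535 203.0.113.7 46154 typ srflx'
);
console.log(cand.type, cand.address, cand.port); // srflx 203.0.113.7 46154
```

A `srflx` (server-reflexive) candidate like this one is exactly the public address a STUN server discovered for us; `relay` candidates are the TURN fallback when direct traversal fails.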

Here, we can find that WebRTC communication needs at least two services:

  • The signaling phase requires two-way communication services to assist information exchange.
  • STUN and TURN assist NAT traversal.

Establish point-to-point connections

So what does WebRTC's peer-to-peer connection process look like? Let's analyze it with the diagram.

Obviously, in the process of the above connection:

The caller (in this case, a browser) needs to send a message called an offer to the receiver.

After receiving it, the receiver returns an answer message to the caller.

This is one of the tasks listed above: exchanging local media metadata in SDP format. An SDP message generally looks like this:

    v=0
    o=- 1837933589686018726 2 IN IP4 127.0.0.1
    s=-
    t=0 0
    a=group:BUNDLE audio video
    a=msid-semantic: WMS yvKeJMUSZzvJlAJHn4unfj6q9DMqmb6CrCOT
    m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104 9 0 8 106 105 13 110 112 113 126
    ...

But the task is not just to exchange the descriptions; each side also has to store its own and the other side's. So let's flesh out the steps:

  • After creating the offer, the caller calls setLocalDescription to store the local offer description, then sends the offer to the receiver.
  • After receiving the offer, the receiver calls setRemoteDescription to store the remote offer description; it then creates an answer, calls setLocalDescription to store the local answer description, and returns the answer to the caller.
  • After the caller gets the answer, it calls setRemoteDescription to store the remote answer description.

One step remains for the peer-to-peer connection: exchanging the network information, the ICE candidates. This step has no fixed ordering relative to the offer/answer exchange, and the process is symmetric: once ICE candidates are gathered, the caller and the receiver exchange them and save each other's candidates, completing the connection.
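The choreography above can be made concrete with fake peers that simply record which description-setting call happens where. These are stand-ins for illustration, not real RTCPeerConnection objects:

```javascript
// Fake peers that log each setLocal/setRemote call, to show the
// offer/answer ordering without a browser.
function makeFakePeer(name, log) {
  return {
    createOffer: async () => ({ type: 'offer' }),
    createAnswer: async () => ({ type: 'answer' }),
    setLocalDescription: async d => { log.push(`${name}.setLocal(${d.type})`); },
    setRemoteDescription: async d => { log.push(`${name}.setRemote(${d.type})`); },
  };
}

async function negotiate(caller, callee) {
  const offer = await caller.createOffer();
  await caller.setLocalDescription(offer);   // caller stores its own offer
  await callee.setRemoteDescription(offer);  // callee stores the remote offer
  const answer = await callee.createAnswer();
  await callee.setLocalDescription(answer);  // callee stores its own answer
  await caller.setRemoteDescription(answer); // caller stores the remote answer
}

const steps = [];
negotiate(makeFakePeer('A', steps), makeFakePeer('B', steps))
  .then(() => console.log(steps.join(' -> ')));
// A.setLocal(offer) -> B.setRemote(offer) -> B.setLocal(answer) -> A.setRemote(answer)
```

The `negotiate` body is exactly the four-call sequence we just listed; only the transport between "caller" and "callee" (the signaling service) is missing here.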

This diagram is, in my opinion, the most complete, describing the whole connection process in detail. A good moment to summarize:

  • Infrastructure: the necessary signaling service and NAT traversal services.
  • clientA and clientB each create an RTCPeerConnection, and the output end adds its local media stream. For a video call, both ends are outputs, so both need to add their stream.
  • Once local ICE candidates are gathered, they are exchanged through the signaling service.
  • The caller (say A video-calls B, so A is the caller) sends the offer message; the receiver receives it and returns an answer message; the caller saves it, completing the connection.

Local 1 V 1 peer connection

The basic process is covered; now it's time to put it through its paces. Let's start by implementing a local peer connection, to get familiar with the flow and some of the APIs. A local connection means connecting two videos on the same local page, with no server involved. A picture makes this clearer:

To clarify the goal: A, as the output end, obtains the local stream and adds it to its own RTCPeerConnection; B, as the calling end, has no output of its own, so it only needs to receive the stream.

Creating a Media stream

The page layout is simple: two video tags representing A and B. Let's go straight to the code. Although the source is built with Vue, no Vue-specific APIs are used; it's essentially ES6 class syntax with detailed comments, so readers without a Vue background can read it as plain ES6. Sample source repository: webrtc-stream.

    async createMedia() {
        // Save the local stream globally
        this.localstream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
        let video = document.querySelector('#rtcA');
        video.srcObject = this.localstream;
        this.initPeer(); // Once the stream is acquired, initialize the RTCPeerConnection
    }

Initialize the RTCPeerConnection

    initPeer() {
        ...
        this.peerA.addStream(this.localstream); // Add the local stream
        this.peerA.onicecandidate = (event) => {
            // Listen for A's ICE candidates; once gathered, add them to B's connection
            if (event.candidate) {
                this.peerB.addIceCandidate(event.candidate);
            }
        };
        ...
        // Listen for incoming media; if a stream arrives, assign it to rtcB's srcObject
        this.peerB.onaddstream = (event) => {
            let video = document.querySelector('#rtcB');
            video.srcObject = event.stream;
        };
        this.peerB.onicecandidate = (event) => {
            // Listen for B's ICE candidates; once gathered, add them to A's connection
            if (event.candidate) {
                this.peerA.addIceCandidate(event.candidate);
            }
        };
    }

This part mainly creates the peer instances and exchanges ICE information between them. One property deserves a mention here: iceConnectionState.

    peer.oniceconnectionstatechange = (evt) => {
        console.log('ICE connection state change: ' + evt.target.iceConnectionState);
    };

We can detect the ICE connection state through oniceconnectionstatechange; there are seven states in total:

  • new: the ICE agent is gathering candidates or waiting for remote candidates to be supplied.
  • checking: the agent has received remote candidates on at least one component and is checking candidate pairs, but has not yet found a connection; it may also still be gathering.
  • connected: the agent has found a usable connection for all components, but is still checking other candidate pairs in case a better one exists; it may still be gathering.
  • completed: the agent has finished gathering and checking, and has found connections for all components.
  • failed: the agent has checked all candidate pairs and failed to find a connection for at least one component; connections may have been found for some components.
  • disconnected: at least one component is no longer connected.
  • closed: the ICE agent has shut down and is no longer responding to STUN requests.

The states to watch for are completed and disconnected: the moments when the connection finishes or drops.
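In practice you usually collapse the seven states into a few app-level reactions; a sketch of one such mapping (the action names are mine):

```javascript
// Map iceConnectionState values onto the three reactions an app cares about.
function iceStateAction(state) {
  switch (state) {
    case 'connected':
    case 'completed':
      return 'media-flowing'; // hide the loading UI
    case 'disconnected':
    case 'failed':
    case 'closed':
      return 'teardown';      // hang up, or try to reconnect
    case 'new':
    case 'checking':
    default:
      return 'wait';          // still negotiating
  }
}

console.log(iceStateAction('completed'), iceStateAction('disconnected'));
// media-flowing teardown
```

Wired up, this would look like `peer.oniceconnectionstatechange = evt => handle(iceStateAction(evt.target.iceConnectionState));` with `handle` being your own UI logic.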

Create a connection

    async call() {
        if (!this.peerA || !this.peerB) { // If an instance is missing, create it again
            this.initPeer();
        }
        try {
            // B is the calling end here, so B creates the offer
            let offer = await this.peerB.createOffer(this.offerOption);
            await this.onCreateOffer(offer);
        } catch (e) {
            console.log('createOffer: ', e);
        }
    }

The instance check here handles placing another call after hanging up, since hanging up destroys the instances.

    async onCreateOffer(desc) {
        try {
            await this.peerB.setLocalDescription(desc); // The caller sets the local offer description
        } catch (e) {
            console.log('Offer-setLocalDescription: ', e);
        }
        try {
            await this.peerA.setRemoteDescription(desc); // The receiver sets the remote offer description
        } catch (e) {
            console.log('Offer-setRemoteDescription: ', e);
        }
        try {
            let answer = await this.peerA.createAnswer(); // The receiver creates the answer
            await this.onCreateAnswer(answer);
        } catch (e) {
            console.log('createAnswer: ', e);
        }
    },
    async onCreateAnswer(desc) {
        try {
            await this.peerA.setLocalDescription(desc); // The receiver sets the local answer description
        } catch (e) {
            console.log('answer-setLocalDescription: ', e);
        }
        try {
            await this.peerB.setRemoteDescription(desc); // The caller sets the remote answer description
        } catch (e) {
            console.log('answer-setRemoteDescription: ', e);
        }
    }

This is basically the process we have walked through several times, now written in code, so it should read a little more clearly at this point. One point worth explaining: even though B is the calling end, it still receives A's media stream. Once established, the connection is bidirectional; but since B did not add a local stream when initializing its peer, A gets no media stream from B.

Network 1 V 1 Peer connection

By now the basic process should be familiar, after going over diagrams and examples several times. So, striking while the iron is hot, this time we add a server and make a true networked peer-to-peer connection. Before reading on, it helps to have a little grounding in Koa and Socket.IO and their basic APIs; if not, see Koa, Socket.IO, or the earlier Vchat article, a social chat system (vue + node + mongodb).

demand

As usual, first be clear about what we need. The images load slowly; you can go straight to the demo address.

The connection process involves many steps; instead of screenshotting each one, you can try the demo address directly. A brief breakdown of what we are doing:

  • After joining the room, fetch all online members of the room.
  • Select any member to call, i.e. the call action. Several details need handling here: you cannot call yourself, you can only call one person at a time, you need to check whether the other party is already on a call, and the callee needs to respond accordingly (accept, reject, or busy).
  • Reject and busy need no follow-up; you can simply call someone else. Once accepted, start building the peer-to-peer connection.

To join the room

Here’s a quick look at the process for joining a room:

    // Front end
    join() {
        if (!this.account) return;
        this.isJoin = true; // Toggle the join-dialog logic
        window.sessionStorage.account = this.account; // Used after a refresh to tell whether you're logged in
        socket.emit('join', { roomid: this.roomid, account: this.account }); // Request to join the room
    }
    
    // Back end
    const sockS = {}; // Socket instances of the different clients
    const users = {}; // Member lists
    sock.on('join', data => {
        sock.join(data.roomid, () => {
            if (!users[data.roomid]) {
                users[data.roomid] = [];
            }
            let obj = {
                account: data.account,
                id: sock.id
            };
            let arr = users[data.roomid].filter(v => v.account === data.account);
            if (!arr.length) {
                users[data.roomid].push(obj);
            }
            sockS[data.account] = sock; // Save each client's socket instance
            // Send the room's member list to everyone in the room
            app._io.in(data.roomid).emit('joined', users[data.roomid], data.account, sock.id);
        });
    });

The back-end member list is keyed by room because multi-room logic is implemented, and each room returns its own member list; without multiple rooms you wouldn't need this. sockS is kept so we can send private (one-to-one) messages.
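The per-room bookkeeping in the join handler boils down to this pure function; the shape of `users` matches the server code above, and extracting it is only for illustration:

```javascript
// One member list per room; a repeated join with the same account is a no-op.
function joinRoom(users, roomid, account, sockId) {
  if (!users[roomid]) users[roomid] = [];
  const exists = users[roomid].some(v => v.account === account);
  if (!exists) users[roomid].push({ account, id: sockId });
  return users[roomid];
}

const users = {};
joinRoom(users, 'room1', 'alice', 's1');
joinRoom(users, 'room1', 'alice', 's2'); // duplicate join: ignored
joinRoom(users, 'room1', 'bob', 's3');
console.log(users.room1.length); // 2
```

Keeping this logic pure makes the dedupe behavior (a refresh re-emitting `join` must not duplicate a member) trivial to test outside the socket handler.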

call

I've already listed the call considerations, so let's handle them together. Note that each message must carry both your own account and the other party's, because the account is the key under which each member's socket was stored in sockS for private messaging. Then there are the three reply states mentioned above, distinguished by the type values 1, 2, and 3, each getting a different response.

    // Front end
    apply(account) { // Send the call request
        // account is the other party's account; self is your own
        this.loading = true;
        this.loadingText = 'Calling'; // Loading text while the call rings
        socket.emit('apply', { account: account, self: this.account });
    },
    reply(account, type) { // Send the reply
        socket.emit('reply', { account: account, self: this.account, type: type });
    }
    // A call request is received
    socket.on('apply', data => {
        if (this.isCall) { // Already on a call: reply busy
            this.reply(data.self, '3');
            return;
        }
        this.$confirm(data.self + ' is requesting a video call with you. Accept?', 'Tip', {
            confirmButtonText: 'Accept',
            cancelButtonText: 'Reject',
            type: 'warning'
        }).then(async () => {
            this.isCall = data.self;
            this.reply(data.self, '1');
        }).catch(() => {
            this.reply(data.self, '2');
        });
    });
    
    // Back end
    sock.on('apply', data => { // Forward the request
        sockS[data.account].emit('apply', data);
    });

The back end is simpler: it just forwards the request to the corresponding client. In fact, that's basically all the back end does in this example, so I won't paste more of its code; you can read it in the source directly.
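The three reply types can also be isolated as one small decision function, which makes the busy check easy to unit-test; this is a refactoring sketch of the handler above, not code from the demo:

```javascript
// '1' = accept, '2' = reject, '3' = busy -- the same type values used here.
function decideReply(currentCall, userAccepted) {
  if (currentCall) return '3';     // already on a call: auto-reply busy
  return userAccepted ? '1' : '2'; // otherwise the user's choice decides
}

console.log(decideReply('bob', true));  // 3  (busy wins even if the user would accept)
console.log(decideReply(false, true));  // 1
console.log(decideReply(false, false)); // 2
```

The handler then only has to show the confirm dialog when `decideReply` would not short-circuit to busy, and emit whatever it returns.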

reply

reply follows the same logic as apply; it just handles each kind of reply separately.

    // Front end
    socket.on('reply', async data => { // A reply is received
        this.loading = false;
        switch (data.type) {
            case '1': // Accepted
                this.isCall = data.self; // Store the call partner
                break;
            case '2': // Rejected
                this.$message({
                    message: 'Your request has been rejected!',
                    type: 'warning'
                });
                break;
            case '3': // Busy
                this.$message({
                    message: 'The other party is on a call!',
                    type: 'warning'
                });
                break;
        }
    });

Create a connection

Now that the call-and-reply logic is clear, when should the P2P connection be created? As we said, rejection and busy need no follow-up; only acceptance does, so creation should start where the request is accepted. Note that acceptance surfaces in two places: where you click Accept, and where the other party learns that you accepted.

In this example the caller sends the offer. Be careful that only one side creates the offer, because the connection, once established, is bidirectional.

    socket.on('apply', data => { // Where you click Accept
        ...
        this.$confirm(data.self + ' is requesting a video call with you. Accept?', 'Tip', {
            confirmButtonText: 'Accept',
            cancelButtonText: 'Reject',
            type: 'warning'
        }).then(async () => {
            await this.createP2P(data); // Create a peer and wait for the offer
            ... // No offer is created here
        })
        ...
    });
    socket.on('reply', async data => { // The other party learns you clicked Accept
        switch (data.type) {
            case '1': // The only place an offer is sent
                await this.createP2P(data); // Create a peer once the other side accepts
                this.createOffer(data); // And send the offer
                break;
            ...
        }
    });

As in video calls like WeChat's, both parties need to stream media, since each needs to see the other. So the difference from the earlier local peer connection is that each side adds its own media stream to its own RTCPeerConnection instance, and after connecting, each receives the other's video stream. When initializing the RTCPeerConnection, remember to add the onicecandidate handler to send ICE candidates to the other side.

    async createP2P(data) {
        this.loading = true; // Loading animation
        this.loadingText = 'Establishing the call connection';
        await this.createMedia(data);
    },
    async createMedia(data) {
        ... // Get the local stream and assign it to the video element, as before
        this.initPeer(data); // After obtaining the media stream, initialize the RTCPeerConnection
    },
    initPeer(data) {
        // Create the PeerConnection
        ...
        this.peer.addStream(this.localstream); // The local stream must be added
        this.peer.onicecandidate = (event) => {
            // Listen for ICE candidates; once gathered, send them to the other side
            if (event.candidate) {
                socket.emit('1v1ICE',
                    { account: data.self, self: this.account, sdp: event.candidate });
            }
        };
        this.peer.onaddstream = (event) => {
            // A media stream arrived: assign it to the remote video element
            // and update the loading state (assignment omitted)
            this.isToPeer = true;
            this.loading = false;
            ...
        };
    }

Information exchanges such as createOffer work the same as before, except the messages now have to be forwarded to the corresponding client through the socket; on receiving each message, each side takes the corresponding action.

    socket.on('1v1answer', (data) => { // An answer is received
        this.onAnswer(data);
    });
    socket.on('1v1ICE', (data) => { // An ICE candidate is received
        this.onIce(data);
    });
    socket.on('1v1offer', (data) => { // An offer is received
        this.onOffer(data);
    });
    
    // onAnswer, onIce, and onOffer follow the same pattern as createOffer below;
    // it's best to write them yourself and compare with the source if anything is unclear.
    async createOffer(data) { // Create and send the offer
        try {
            // Create the offer
            let offer = await this.peer.createOffer(this.offerOption);
            // The caller sets the local offer description
            await this.peer.setLocalDescription(offer);
            // Send the offer
            socket.emit('1v1offer', { account: data.self, self: this.account, sdp: offer });
        } catch (e) {
            console.log('createOffer: ', e);
        }
    }

Hang up

Hanging up still means closing your own peer, but the party who hangs up must also tell the other side through the socket; otherwise the other side keeps waiting.
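The hang-up bookkeeping can be written as a helper that closes the peer and returns both the notification to emit and the fields to reset. The field names mirror the component state used in this article; the helper itself is my own sketch:

```javascript
// Close the connection, report the '1v1hangup' message to send, and the
// component fields to reset.
function hangupState(state) {
  if (state.peer) state.peer.close();
  return {
    message: { event: '1v1hangup', account: state.isCall, self: state.account },
    next: { peer: null, isToPeer: false, isCall: false },
  };
}

const closed = [];
const result = hangupState({
  peer: { close: () => closed.push(true) }, // stand-in for an RTCPeerConnection
  isCall: 'bob',
  account: 'alice',
});
console.log(result.message.event, result.next.peer); // 1v1hangup null
```

Returning the message instead of emitting it inside the helper keeps the socket dependency at the edge, so both ends can reuse the same cleanup when either side hangs up.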

    hangup() { // Hang up and clean up; on receiving the message, the other side must close its connection too
        socket.emit('1v1hangup', { account: this.isCall, self: this.account });
        this.peer.close();
        this.peer = null;
        this.isToPeer = false;
        this.isCall = false;
    }

References

  • WebRTC
  • WebRTC live broadcasting era

Communication group

QQ front-end group: 960807765. All kinds of technical discussion are welcome; looking forward to having you join.

Afterword

If you've read this far and the article has helped you a little, I hope you'll take a second to support the author, thanks 🍻. If anything in the article is wrong, please point it out, and let's encourage each other.

  • This article’s sample source library webrtC-Stream
  • The article warehouse 🍹 🍰 fe – code

Previous articles:

  • JavaScript prototype and prototype chain and Canvas captcha practices
  • Stop, you Promise!
  • 💘🍦🙈Vchat – A social chat system from head to toe (vue + node + mongodb)

Welcome to follow my official account, Front-end Engine, to get my articles pushed first, along with all kinds of quality front-end articles. I hope to grow with you on the front-end road.