We all know that WebRTC is used for real-time audio and video communication, so the first step is to capture the local audio and video stream. In WebRTC this is done with the getUserMedia API.

This tutorial will show you how to use getUserMedia to get local audio and video streams.

Create a project

Create a project with the following structure:

one index.html plus one js/main.js.

Add video to HTML

To display the video, add a video element to the HTML as follows:

<!DOCTYPE html>
<html>
<body>
  <div id="container">
    <video id="gum-local" autoplay playsinline></video>
  </div>

  <script src="js/main.js"></script>
</body>
</html>


Then add a showVideo button; after pressing this button, the camera is opened and the video is displayed. The code is as follows:

<!DOCTYPE html>
<html>
<body>
  <div id="container">
    <video id="gum-local" autoplay playsinline></video>
    <button id="showVideo">Open camera</button>
  </div>

  <script src="js/main.js"></script>
</body>
</html>


The result looks like this:

Add events to showVideo

Get the showVideo button and add an event listener for it:

document.querySelector('#showVideo').addEventListener('click', e => init(e));

async function init(e) {

}

Using getUserMedia

getUserMedia is called like this:

navigator.mediaDevices.getUserMedia(constraints);
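getUserMedia returns a Promise that resolves to a MediaStream, or rejects if the user denies permission or no device is available. A minimal promise-style sketch, equivalent to the async/await code used later in this tutorial:

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(stream => {
    // attach the captured stream to a <video> element
    document.querySelector('video').srcObject = stream;
  })
  .catch(error => {
    // permission denied, no camera/microphone, etc.
    console.error('getUserMedia failed:', error);
  });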

You need to pass in a constraints parameter, which controls whether audio and video are captured, as follows:

const constraints = window.constraints = {
  audio: true,
  video: true
};
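The constraints are not limited to booleans. If you want to request a particular resolution or camera, the video field can be a MediaTrackConstraints object. A small sketch; the values below are only illustrative:

const constraints = {
  audio: true,
  video: {
    width: { ideal: 1280 },   // preferred capture width
    height: { ideal: 720 },   // preferred capture height
    facingMode: 'user'        // prefer the front-facing camera on mobile devices
  }
};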

So in the init() method you could say:

async function init(e) {
  const constraints = window.constraints = {
    audio: true,
    video: true
  };
  
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    handleSuccess(stream);
    e.target.disabled = true;
  } catch (e) {
    handleError(e);
  }
}

handleSuccess() and handleError() look like this:

function handleSuccess(stream) {
  const video = document.querySelector('video');
  const videoTracks = stream.getVideoTracks();
  console.log('Got stream with constraints:', constraints); // reads window.constraints set in init()
  console.log(`Using video device: ${videoTracks[0].label}`);
  window.stream = stream; // make variable available to browser console
  video.srcObject = stream;
}

function handleError(error) {
  console.error(error);
}
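handleError() above only logs the error. In practice you may want to distinguish the common failure cases: the error is a DOMException whose name tells you what went wrong. A sketch of such a handler (not part of the original sample):

function handleError(error) {
  if (error.name === 'NotAllowedError') {
    console.error('The user denied camera/microphone permission.');
  } else if (error.name === 'NotFoundError') {
    console.error('No camera or microphone was found on this device.');
  } else if (error.name === 'OverconstrainedError') {
    console.error(`The requested constraint cannot be satisfied: ${error.constraint}`);
  } else {
    console.error('getUserMedia error:', error);
  }
}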

After clicking Open camera, the browser first shows the permission prompt:

Click Allow to see the video:
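Before calling getUserMedia you can also check whether permission has already been granted, so the user is not prompted unnecessarily. A sketch using the Permissions API; note that the 'camera' permission name is supported in Chromium-based browsers but not everywhere, so treat this as an optional enhancement:

if (navigator.permissions && navigator.permissions.query) {
  navigator.permissions.query({ name: 'camera' })
    .then(status => {
      // status.state is 'granted', 'prompt' or 'denied'
      console.log('Camera permission state:', status.state);
    })
    .catch(() => {
      // the 'camera' permission name is not supported in this browser
    });
}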
