WebRTC series

  1. WebRTC source code research (1): WebRTC architecture

  2. WebRTC source code research (2): WebRTC source directory structure

  3. WebRTC source code research (3): WebRTC operation mechanism

  4. WebRTC source code research (4): how a web server works and common protocol basics

  5. WebRTC source code research (5): building the Node.js environment

  6. WebRTC source code research (6): creating a simple HTTP service

  7. WebRTC source code research (7)

  8. WebRTC source code research (8)

  9. WebRTC source code research (9)

  10. WebRTC source code research (10)

  11. WebRTC source code research (11): accessing audio and video devices

  12. WebRTC source code research (12): video constraints

  13. WebRTC source code research (13): audio constraints

  14. WebRTC source code research (14): video effects

  15. WebRTC source code research (15): capturing a picture from the video

  16. WebRTC source code research (16): audio-only data capture

  17. WebRTC source code research (17): getting video constraints with the MediaStream API

  18. WebRTC source code research (18): how WebRTC video recording works

  19. WebRTC source code research (19): recording captured desktop data

  20. WebRTC source code research (20): how the WebRTC signaling server works

  21. WebRTC source code research (21): implementing the WebRTC signaling server

WebRTC source code research (11): accessing audio and video devices

Download the source code for this blog: Click here to download it

1. The API for obtaining access permission to audio and video devices

2. How WebRTC requests access permission in a web page

In the last blog post we learned how to use the enumerateDevices API to list audio and video devices. But now we run into a problem: the names of the audio and video devices are visible in Chrome, yet they come back blank in Safari.

In Chrome you can obtain the device information; in Safari no information is available, as shown below:

The information above cannot be obtained in Safari because the page has not been granted the corresponding permission. Since each browser implements this differently, permission control also differs: Safari and Firefox are stricter about it, and in Chrome the behavior also varies between versions.
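
You can see this for yourself by listing the devices before any permission has been granted. The following is a minimal sketch (not part of this post's sample code): in browsers with strict permission control, the label field of each device comes back as an empty string until access is allowed, and the exact behavior varies by browser and version.

// List the devices before asking for any permission.
// In strict browsers each label is an empty string at this point.
navigator.mediaDevices.enumerateDevices()
  .then(function(deviceInfos){
    deviceInfos.forEach(function(deviceinfo){
      console.log(deviceinfo.kind, '->', deviceinfo.label); // label may be ''
    });
  })
  .catch(function(err){
    console.log('enumerateDevices error:', err);
  });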

So how do we solve this problem?

  1. Anyone who has done iOS development knows that Apple requires many permissions to be explicitly authorized by the user; only after the user grants authorization can the feature be used.
  2. Likewise, when we start capturing audio and video data, the browser pops up a dialog asking whether the page may access the audio and video devices. Once we allow it, we gain access to those devices.
  3. Once we have this permission, we can call enumerateDevices and get all the device names, as sketched below.
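
Putting these three points together, the order of calls is simply: request the media first, then enumerate the devices. Here is a minimal sketch of that order (not this post's sample code; the full example later in this post does the same thing, wired to the page's own elements):

// 1. getUserMedia makes the browser show its permission dialog.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function(stream){
    // 2. The user has allowed access; enumerateDevices now returns full labels.
    return navigator.mediaDevices.enumerateDevices();
  })
  .then(function(deviceInfos){
    deviceInfos.forEach(function(deviceinfo){
      console.log(deviceinfo.kind, deviceinfo.label);
    });
  })
  .catch(function(err){
    console.log('error:', err);
  });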

Let's put this into practice and see how to call the API in a web page to obtain device permissions. Without further ado, straight to the code:

Take a look at the contents of the index.html file:

<html>
  <head>
    <title>WebRTC Capture Video and Audio data</title>
  </head>
  <body>
		<div>
			<label>audio Source:</label>
			<select id="audioSource"></select>
		</div>
 
		<div>
			<label>audio Output:</label>
			<select id="audioOutput"></select>
		</div>
 
		<div>
			<label>video Source:</label>
			<select id="videoSource"></select>
		</div>
    <!-- We create a video tag that displays the audio and video data we capture. autoplay means the video plays as soon as we get the video source; playsinline means it plays inside the browser page instead of calling a third-party tool. -->
    <video autoplay playsinline id="player"></video>
    <!-- Introduce the adapter.js library to smooth over compatibility differences between browsers -->
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script src="./js/client.js"></script>
  </body>
</html>


Take a look at the script file client.js, which looks like this:

'use strict'
 
var audioSource = document.querySelector('select#audioSource');
var audioOutput = document.querySelector('select#audioOutput');
var videoSource = document.querySelector('select#videoSource');
// Get the video label
var videoplay = document.querySelector('video#player');
 
// deviceInfos is an array of device information
function gotDevices(deviceInfos){
  // Iterate over each deviceInfo in the list
	deviceInfos.forEach(function(deviceinfo){
    // Create each item
		var option = document.createElement('option');
		option.text = deviceinfo.label;
		option.value = deviceinfo.deviceId;
	
		if(deviceinfo.kind === 'audioinput') {// Audio input
			audioSource.appendChild(option);
		}else if(deviceinfo.kind === 'audiooutput') {// Audio output
			audioOutput.appendChild(option);
		}else if(deviceinfo.kind === 'videoinput') { // Video input
			videoSource.appendChild(option);
		}
	});
}

// What do we do with the stream? gotMediaStream receives one parameter, the stream.
// This stream contains both audio and video tracks, because we ask for both video and audio in the constraints.
// We assign this stream directly to the video tag in the HTML.
// At this point the user has agreed to let the page access the audio/video devices.
function gotMediaStream(stream){  
  videoplay.srcObject = stream; // Specify that the data source comes from stream, so that the video tag can play the video and audio after collecting this data
  // Permission has now been granted, so enumerateDevices returns full device info; return its Promise for the next .then
  return navigator.mediaDevices.enumerateDevices();
}
 
function handleError(err){
	console.log('getUserMedia error:', err);
}
 
// Check whether the browser supports it
if(!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia){
  console.log('getUserMedia is not supported!');
}else{
  var constraints = { // Capture video and audio at the same time
    video : true,
    audio : true
  }
  navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMediaStream)  // Using the Promise concatenation method, the fetch stream succeeded
    .then(gotDevices)
    .catch(handleError);
}

The running effect is as follows:
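
Note that the sample above only fills the select boxes; it never reads the user's choice back. As a possible extension (this is not part of the original code, it merely assumes the element ids and helper functions defined in the files above), the selected deviceId could be passed back to getUserMedia so that a specific camera is opened:

// Hypothetical extension: re-open the stream with the camera chosen in the select box.
function openSelectedCamera(){
  var deviceId = videoSource.value; // value was filled with deviceinfo.deviceId in gotDevices
  var constraints = {
    video : deviceId ? { deviceId: { exact: deviceId } } : true,
    audio : true
  };
  return navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMediaStream) // reuse the existing handler to show the new stream
    .catch(handleError);
}

// Re-open the camera whenever the user picks a different video source.
videoSource.onchange = openSelectedCamera;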