Some opening chit-chat

A project I worked on a while ago involved building audio courseware. After importing documents, pictures and other resources, the page turned into a PPT-like layout; selecting an image then let you insert audio, with both single-page and global editing modes. Audio could be imported in two ways: picking it from a resource library, or recording it directly. To be honest, I had not touched the HTML5 audio APIs before this, and we had to optimize code we inherited rather than start fresh. Naturally there were plenty of pitfalls, and this post walks through my impressions of them. (I will skip some basic object initialization and acquisition, since that is not the focus here; interested readers can find it documented on MDN.) The topics are:

  • A compatibility wrapper for calling the audio APIs
  • Getting the volume of the recording (strictly speaking, the frequency data)
  • A cross-browser way to pause recording
  • Getting the current recording duration

Preparation before recording

Before recording, check whether the current device supports the Audio API. The early navigator.getUserMedia method has been replaced by navigator.mediaDevices.getUserMedia, and most modern browsers now support the latter. MDN also provides a compatibility shim:

const promisifiedOldGUM = function(constraints) {
	// First get ahold of getUserMedia, if present
	const getUserMedia =
		navigator.getUserMedia ||
		navigator.webkitGetUserMedia ||
		navigator.mozGetUserMedia;

	// Some browsers just don't implement it - return a rejected promise with an error
	// to keep a consistent interface
	if (!getUserMedia) {
		return Promise.reject(
			new Error('getUserMedia is not implemented in this browser')
		);
	}

	// Otherwise, wrap the call to the old navigator.getUserMedia with a Promise
	return new Promise(function(resolve, reject) {
		getUserMedia.call(navigator, constraints, resolve, reject);
	});
};

// Older browsers might not implement mediaDevices at all, so we set an empty object first
if (navigator.mediaDevices === undefined) {
	navigator.mediaDevices = {};
}

// Some browsers partially implement mediaDevices. We can't just assign an object
// with getUserMedia as it would overwrite existing properties.
// Here, we will just add the getUserMedia property if it's missing.
if (navigator.mediaDevices.getUserMedia === undefined) {
	navigator.mediaDevices.getUserMedia = promisifiedOldGUM;
}

Because this method is asynchronous, we can show a friendly message on devices that do not support it:

navigator.mediaDevices
	.getUserMedia(constraints)
	.then(function(mediaStream) {
		// Success
	}, function(error) {
		// Failure
		const { name } = error;
		let errorMessage;
		switch (name) {
			// The user denied permission
			case 'NotAllowedError':
			case 'PermissionDeniedError':
				errorMessage = 'The user has forbidden the page to use the recording device';
				break;
			// No recording device is connected
			case 'NotFoundError':
			case 'DevicesNotFoundError':
				errorMessage = 'No recording device was found';
				break;
			// Other errors
			case 'NotSupportedError':
				errorMessage = 'Recording is not supported';
				break;
			default:
				errorMessage = 'Error while calling the recording device';
				window.console.log(error);
		}
		return errorMessage;
	});

If all goes well, we can move on to the next step. (I omit how the audio context is obtained, since it is not the focus here.)
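For completeness, the omitted context-acquisition step usually looks something like the sketch below. The globalObj parameter is my own addition so the logic can run outside a browser; in a page you would simply pass window:

```javascript
// Sketch: obtain an AudioContext constructor with a vendor-prefix fallback.
// globalObj is a stand-in for window, added here only for testability.
function createAudioContext(globalObj) {
	const Ctor = globalObj.AudioContext || globalObj.webkitAudioContext;
	if (!Ctor) {
		throw new Error('Web Audio API is not supported in this browser');
	}
	return new Ctor();
}
```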

Starting and pausing the recording

One special point here: you need an intermediate variable to indicate whether recording is currently in progress. We hit a problem on Firefox: recording itself worked, but clicking pause did nothing. At the time we were using the disconnect method, which is the wrong tool here, as it tears down all the node connections. The right approach is to add an intermediate flag, this.isRecording, set to true when you hit start and false when you pause. While recording, the onaudioprocess event fires for each chunk of audio; we write the stream data when the flag is true and skip it when it is false. So in the handler, if this.isRecording is false, we simply return.

// Some initialization
const audioContext = new AudioContext();
const sourceNode = audioContext.createMediaStreamSource(mediaStream);
const scriptNode = audioContext.createScriptProcessor(
	BUFFER_SIZE,
	INPUT_CHANNELS_NUM,
	OUTPUT_CHANNELS_NUM
);
sourceNode.connect(scriptNode);
scriptNode.connect(audioContext.destination);

// Listen to the recording process
scriptNode.onaudioprocess = event => {
	if (!this.isRecording) return; // Skip the chunk if we are not recording
	// Copy the current channel data into the array
	// (slice a copy, because the underlying buffer is reused between events)
	this.buffers.push(event.inputBuffer.getChannelData(0).slice(0));
};

There is a catch, of course: there is no longer a built-in way to get the current recording duration, because we are not actually pausing, just not writing to the stream. So the current duration has to be computed with a formula:

const getDuration = () => {
	// 4096 is the length of one stream chunk (the buffer size); sampleRate is the sampling rate
	return (4096 * this.buffers.length) / this.audioContext.sampleRate;
};

This will get the correct recording duration.
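To sanity-check the formula with concrete numbers (the 4096 buffer size and 44100 Hz sample rate below are just illustrative values, extracted into parameters so the function is pure):

```javascript
// Recording duration in seconds as a pure function of the stored chunk count.
const getDuration = (chunkCount, bufferSize = 4096, sampleRate = 44100) =>
	(bufferSize * chunkCount) / sampleRate;

// 1078 chunks of 4096 samples at 44.1 kHz is roughly 100 seconds
console.log(getDuration(1078).toFixed(2)); // "100.12"
```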

Ending the recording

My approach to ending a recording is to pause first, let the user preview the audio or perform other operations, and only then clear the array that stores the stream by setting its length to 0.
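The start/pause/stop bookkeeping described above can be sketched as a small state holder. SimpleRecorder and its method names are my own illustration, not the project's actual class: pause only flips the flag (the stream keeps flowing, we just stop writing it), while stop clears the stored chunks.

```javascript
// Minimal sketch of the recording state machine described in this section.
class SimpleRecorder {
	constructor() {
		this.isRecording = false;
		this.buffers = [];
	}
	start() { this.isRecording = true; }
	pause() { this.isRecording = false; }
	// Would be called from something like onaudioprocess with a chunk of samples
	write(chunk) {
		if (!this.isRecording) return;
		this.buffers.push(chunk);
	}
	stop() {
		this.pause();
		this.buffers.length = 0; // Drop the stored stream
	}
}
```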

Getting the volume (frequency data)

getVoiceSize = analyser => {
	// frequencyBinCount is half the analyser's fftSize
	const dataArray = new Uint8Array(analyser.frequencyBinCount);
	analyser.getByteFrequencyData(dataArray); // Fill the array with the current frequency data
	const data = dataArray.slice(100, 1000); // Keep a mid-range slice of the bins
	const sum = data.reduce((a, b) => a + b);
	return sum;
};

For details, see developer.mozilla.org/zh-CN/docs/…
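A rough usage sketch follows. Outside a browser there is no real AnalyserNode, so the stub below (and the choice of 1024 bins at half volume) is my own illustration; the slice bounds come from the code above and are essentially tuning values:

```javascript
const getVoiceSize = analyser => {
	const dataArray = new Uint8Array(analyser.frequencyBinCount);
	analyser.getByteFrequencyData(dataArray);
	const data = dataArray.slice(100, 1000);
	return data.reduce((a, b) => a + b);
};

// Stub standing in for a real AnalyserNode: 1024 bins, each at 128 of 255.
const fakeAnalyser = {
	frequencyBinCount: 1024,
	getByteFrequencyData(arr) { arr.fill(128); },
};

// 900 bins survive the slice, each worth 128
console.log(getVoiceSize(fakeAnalyser)); // 115200
```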

Other notes

  • HTTPS: in Chrome, the whole site must be served over HTTPS before recording is allowed
  • WeChat: the browser built into WeChat has to go through the JSSDK to use recording
  • Audio format conversion: there are many audio formats, and most of the conversion material you can find is copied from the same sources; there is also an audio-quality trade-off, which I will not repeat here
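For reference, one common conversion approach (a widely used pattern, not the original project's code) is to merge the recorded Float32 chunks and prepend a 16-bit PCM WAV header; a minimal mono sketch, assuming sampleRate comes from audioContext.sampleRate:

```javascript
// Minimal sketch: merge Float32 chunks into a 16-bit PCM mono WAV byte buffer.
function encodeWAV(buffers, sampleRate) {
	const samples = buffers.reduce((acc, b) => acc + b.length, 0);
	const view = new DataView(new ArrayBuffer(44 + samples * 2));
	const writeString = (offset, str) => {
		for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
	};
	writeString(0, 'RIFF');
	view.setUint32(4, 36 + samples * 2, true); // Remaining file size
	writeString(8, 'WAVE');
	writeString(12, 'fmt ');
	view.setUint32(16, 16, true);             // fmt chunk size
	view.setUint16(20, 1, true);              // PCM format
	view.setUint16(22, 1, true);              // Mono
	view.setUint32(24, sampleRate, true);
	view.setUint32(28, sampleRate * 2, true); // Byte rate (mono, 16-bit)
	view.setUint16(32, 2, true);              // Block align
	view.setUint16(34, 16, true);             // Bits per sample
	writeString(36, 'data');
	view.setUint32(40, samples * 2, true);
	let offset = 44;
	for (const buffer of buffers) {
		for (const s of buffer) {
			// Clamp to [-1, 1] and scale to 16-bit signed integers
			const clamped = Math.max(-1, Math.min(1, s));
			view.setInt16(offset, clamped < 0 ? clamped * 0x8000 : clamped * 0x7fff, true);
			offset += 2;
		}
	}
	return view; // In the browser: new Blob([view], { type: 'audio/wav' })
}
```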

Conclusion

Most of the problems this time were compatibility problems, hence all the pitfalls above, especially on mobile; early on, a mistake in how the recording duration was computed even froze the page outright. This experience filled in some of my gaps around the HTML5 media APIs, but the main takeaway is that for native APIs the simplest thing is still to read the documentation directly on MDN.