
  • Source address: github.com/goblin-labo…
  • Demo address: goblin-laboratory.github.io/volume-mete…

In a WebRTC call, only the other participants can hear your voice, so a volume meter is needed to confirm that your own microphone is working properly. The code for getting the microphone volume with AudioWorklet in a React project is on GitHub; feel free to use it if you need it.

The volume of a MediaStream can be measured with an AudioContext; the first version was written by referring to the WebRTC samples and cwilso/volume-meter.
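
For reference, the mediaStream used in the snippets below is a microphone stream; a minimal sketch of obtaining it with the standard getUserMedia API:

// Request microphone access; the resulting MediaStream feeds the volume meter.
const mediaStream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: false,
});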

Use AudioContext’s ScriptProcessor to detect the volume

The volume-detection code based on AudioContext's ScriptProcessor is fairly simple. I didn't fully understand the principle at the time (it computes the root mean square, RMS, of each audio buffer), but it works.

const audioContext = new AudioContext();
const mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);
// 2048-sample buffer, 1 input channel, 1 output channel
const scriptProcessor = audioContext.createScriptProcessor(2048, 1, 1);
scriptProcessor.onaudioprocess = (event) => {
  const input = event.inputBuffer.getChannelData(0);
  let sum = 0.0;
  for (let i = 0; i < input.length; i += 1) {
    sum += input[i] * input[i];
  }
  // RMS of the buffer, scaled to roughly 0-100
  const volume = Math.round(Math.sqrt(sum / input.length) * 100);
  setVolume(volume); // React state setter
  console.log(`volume: ${volume}`);
};
scriptProcessor.connect(audioContext.destination);
mediaStreamSource.connect(scriptProcessor);
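
In a React component, remember to tear the audio nodes down when the meter unmounts. A minimal cleanup sketch, assuming the setup above runs inside a useEffect:

// e.g. returned from the useEffect that created the nodes
return () => {
  scriptProcessor.disconnect();
  mediaStreamSource.disconnect();
  audioContext.close();
};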

The complete code

Why switch to AudioWorklet for volume detection

It worked fine with ScriptProcessor until, around the Spring Festival of 2021, a Chrome update made the following warning appear in the browser console:

[Deprecation] The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead. (https://bit.ly/audio-worklet)

Looking at the browser-compatibility table on MDN, mainstream browsers already support AudioWorklet, so migrating to it was clearly the way to go.

AudioWorklet is a way to move all audio-processing computation off the main thread, which in theory improves performance. Our interactive pages needed better performance anyway, so I started reading the material and learning how to detect volume with AudioWorklet.

Audio processing in the Web Audio API runs on a thread separate from the main UI thread, so it runs smoothly. To enable custom audio processing in JavaScript, the Web Audio API proposed ScriptProcessorNode, which uses event handlers to invoke a user script on the main UI thread.

There are two problems with this design: the event handling is asynchronous by design, and the code executes on the main thread. The former introduces latency, and the latter puts pressure on the main thread, which is usually crowded with UI- and DOM-related tasks, causing the UI to jank or the audio to glitch. Because of this fundamental design flaw, ScriptProcessorNode was deprecated from the specification and replaced by AudioWorklet.

AudioWorklet keeps the user-supplied JavaScript code inside the audio-processing thread, so it never has to jump over to the main thread to process audio. This means the user script runs on the audio rendering thread (in an AudioWorkletGlobalScope) together with the other built-in AudioNodes, ensuring zero additional latency and synchronous rendering.

Implementing volume detection with AudioWorklet

How to get microphone volume using AudioWorklet

An important thing to understand when using AudioWorklet is that the worklet code runs inside an AudioWorkletGlobalScope, not in the window context. currentFrame, currentTime, sampleRate and other global variables are available in an AudioWorkletGlobalScope, just as window, document, etc. are available on the main thread. The first time I saw such code on StackOverflow I was quite confused: it looked like it couldn't possibly run. Only after further study did the doubts clear up.
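
A tiny illustration of that point (a hypothetical scope-demo processor, not the real meter): sampleRate, currentTime and currentFrame can be referenced directly inside the worklet because they are globals of the AudioWorkletGlobalScope, while window and document do not exist there.

// scope-demo.js - runs inside AudioWorkletGlobalScope, not window
registerProcessor(
  'scope-demo',
  class extends AudioWorkletProcessor {
    constructor() {
      super();
      // Globals of AudioWorkletGlobalScope; window/document are undefined here.
      console.log(sampleRate, currentTime, currentFrame);
    }

    process() {
      return true; // keep the processor alive
    }
  },
);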

The code to detect volume with AudioWorklet isn't much more complicated either. The inconvenience is that the code has to be split into two files, and the worklet file is loaded over the network, so in a React project it needs to be placed in the public directory.
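
For example, with Create React App (an assumption; adjust the path for your setup) the worklet file can live under public/worklet/ and be loaded from there:

// The worklet must be served as a static file, e.g. from CRA's public/ directory.
await audioContext.audioWorklet.addModule(
  `${process.env.PUBLIC_URL}/worklet/vumeter.js`,
);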

// volume-meter.js
// (inside an async function, since addModule returns a promise)
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule('./worklet/vumeter.js');
const source = audioContext.createMediaStreamSource(mediaStream);
const node = new AudioWorkletNode(audioContext, 'vumeter');
node.port.onmessage = (event) => {
  if (event.data.volume) {
    console.log(Math.round(event.data.volume * 200));
  }
};
source.connect(node).connect(audioContext.destination);

// /worklet/vumeter.js
/* eslint-disable no-underscore-dangle */
const SMOOTHING_FACTOR = 0.8;
// eslint-disable-next-line no-unused-vars
const MINIMUM_VALUE = 0.00001;

registerProcessor(
  'vumeter',
  class extends AudioWorkletProcessor {
    _volume;
    _updateIntervalInMS;
    _nextUpdateFrame;

    constructor() {
      super();
      this._volume = 0;
      this._updateIntervalInMS = 25;
      this._nextUpdateFrame = this._updateIntervalInMS;
      this.port.onmessage = (event) => {
        if (event.data.updateIntervalInMS) {
          this._updateIntervalInMS = event.data.updateIntervalInMS;
        }
      };
    }

    get intervalInFrames() {
      // sampleRate is a global of AudioWorkletGlobalScope
      // eslint-disable-next-line no-undef
      return (this._updateIntervalInMS / 1000) * sampleRate;
    }

    process(inputs, outputs, parameters) {
      const input = inputs[0];

      // Note that the input will be down-mixed to mono; however, if no inputs are
      // connected then zero channels will be passed in.
      if (0 < input.length) {
        const samples = input[0];
        let sum = 0;
        let rms = 0;

        // Calculate the squared sum.
        for (let i = 0; i < samples.length; i += 1) {
          sum += samples[i] * samples[i];
        }

        // Calculate the RMS level and smooth the volume.
        rms = Math.sqrt(sum / samples.length);
        this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);

        // Update and sync the volume property with the main thread.
        this._nextUpdateFrame -= samples.length;
        if (0 > this._nextUpdateFrame) {
          this._nextUpdateFrame += this.intervalInFrames;
          this.port.postMessage({ volume: this._volume });
        }
      }

      // Returning true keeps the processor alive.
      return true;
    }
  },
);
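
Since the processor listens for updateIntervalInMS on its message port, the reporting interval can be adjusted from the main thread; a minimal sketch:

// Ask the processor to report every 125 ms instead of the default 25 ms.
node.port.postMessage({ updateIntervalInMS: 125 });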

The complete code

Remaining issues

Using AudioWorklet to move sound-related calculations off the main thread should theoretically improve page performance, but in my actual measurements the improvement was not obvious.

Optimization ideas:

  1. As the code shows, detection results are reported every 25 milliseconds, which is more frequent than the UI actually needs.
  2. Simply raising _updateIntervalInMS to 125 makes volume detection less accurate, because the volume is effectively averaged over the longer detection interval.
  3. Instead, cache the currentTime at each postMessage, and on the next report check whether the elapsed interval is greater than 0.125 seconds (see the sketch after this list).
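
A minimal sketch of idea 3, inside the processor's process() method (_lastUpdateTime is a hypothetical field, initialized to 0 in the constructor; currentTime is the AudioWorkletGlobalScope global, in seconds):

// Replace the frame-countdown check with a wall-clock check:
if (currentTime - this._lastUpdateTime > 0.125) {
  this._lastUpdateTime = currentTime;
  this.port.postMessage({ volume: this._volume });
}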

However, since I am not yet proficient with performance-profiling tools, for now I can only argue for the improvement in theory; I'll measure it properly once I'm more skilled at performance analysis.

References

  • codepen.io/forgived/pe…
  • stackoverflow.com/questions/6…
  • github.com/webrtc/samp…