Visual music player (live demo available)

Understanding the Web Audio API

  • Basic knowledge

The AudioContext is an audio playback environment, similar to the drawing context of a canvas. You create the environment context first, then create audio nodes by calling methods on it and use them to control playback and pausing of the audio stream; all of these operations happen within this environment.

try{
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); 
}catch(e){
    alert('Web Audio API is not supported in this browser');
}

The AudioNode interface is a generic module for processing audio. A node can be an audio source, an audio output device, or an intermediate processing module. Different audio nodes are chained together via AudioNode.connect(), and the chain ends at audioCtx.destination (which you can think of as the connection to the headphones or speakers) before any music is output.

Common audio nodes:

  • AudioBufferSourceNode: plays and processes audio data
  • AnalyserNode: exposes the audio's time- and frequency-domain data (analyzing the frequency data lets you draw waveforms and similar views; this is the main route to visualization)
  • GainNode: controls the overall volume of the audio
  • MediaElementAudioSourceNode: wraps an HTMLMediaElement, playing and processing the audio from <video> and <audio> elements
  • OscillatorNode: a periodic waveform that produces a single tone
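As a concrete sketch of how these nodes chain together, here is a minimal, browser-only example wiring the OscillatorNode listed above through a GainNode to the destination (buildToneGraph is an illustrative name, not from the original demo):

```javascript
// Minimal sketch: wire an oscillator (single-tone source) through a
// gain node (volume control) to the destination (speakers/headphones).
function buildToneGraph(ctx) {
  const osc = ctx.createOscillator(); // audio source node
  const gain = ctx.createGain();      // intermediate volume node
  osc.connect(gain);                  // source -> gain
  gain.connect(ctx.destination);      // gain -> output device
  return { osc, gain };
}

// Browser-only usage; guarded so the sketch is inert elsewhere.
if (typeof window !== 'undefined' && (window.AudioContext || window.webkitAudioContext)) {
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const { osc, gain } = buildToneGraph(ctx);
  gain.gain.value = 0.2;     // keep the tone quiet
  osc.frequency.value = 440; // A4
  osc.start();
}
```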
  • The workflow
  1. Create an audio context
  2. Inside that context, create the audio source
  3. Create audio nodes to process the audio data, and connect them
  4. Connect to the output device

Creating an Audio Context

try{
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)(); 
}catch(e){
    alert('Web Audio API is not supported in this browser');
}

Creating an Audio Source

Since the audio file data is binary (not text), we set the request's responseType to 'arraybuffer' so the .mp3 audio file is received as an ArrayBuffer.

When AudioContext.decodeAudioData succeeds in decoding, the callback receives the resulting buffer, which is then fed into an AudioBufferSourceNode.

The first method loads the music file by streaming, which is easy to understand. The drawback is that the src loaded through createMediaElementSource must be same-origin; cross-origin files will not work.

The following steps are based on method 2.

  • Method 1: Stream load through HTMLMediaElement
  <audio src="1.mp3"></audio>
  <script>
    let audio = document.querySelector('audio');
    let audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    audio.addEventListener('canplay', function () {
      let source = audioCtx.createMediaElementSource(audio);
      source.connect(audioCtx.destination);
      audio.play();
    });
  </script>
  • Method 2: Obtain resources using XMLHttpRequest
    let xhr = new XMLHttpRequest();
    xhr.open('GET', '1.mp3', true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      audioCtx.decodeAudioData(xhr.response, function (buffer) {
        getBufferSuccess(buffer);
      });
    };
    xhr.send();
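As a side note, the same loading step can be sketched with the promise-based fetch API instead of XMLHttpRequest (loadAudio is an illustrative name; '1.mp3' is the same sample path used above, and audioCtx/getBufferSuccess are the objects from this article):

```javascript
// Sketch: fetch the file as an ArrayBuffer, then decode it.
// decodeAudioData also returns a promise in modern browsers.
function loadAudio(url, ctx) {
  return fetch(url)
    .then(res => res.arrayBuffer())
    .then(data => ctx.decodeAudioData(data));
}

// Browser-only usage, assuming audioCtx and getBufferSuccess exist:
if (typeof window !== 'undefined') {
  loadAudio('1.mp3', audioCtx).then(buffer => getBufferSuccess(buffer));
}
```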
  • Method 3: Obtain the file from input file
    let input = document.querySelector('input');
    input.addEventListener('change', function () {
      if (this.files.length !== 0) {
        let file = this.files[0];
        let fr = new FileReader();
        fr.onload = function (e) {
          let fileRet = e.target.result;
          audioCtx.decodeAudioData(fileRet, function (buffer) {
            getBufferSuccess(buffer);
          }, function (err) {
            console.log(err);
          });
        };
        fr.readAsArrayBuffer(file);
      }
    });

Processing audio data

function getBufferSuccess (buffer) {
  // Create the frequency analyser node
  let analyser = audioCtx.createAnalyser();
  // Size of the fast Fourier transform used in the frequency domain
  analyser.fftSize = 2048;
  // Smooths the transition between values across analysis frames
  analyser.smoothingTimeConstant = 0.6;
  // Create the player (source) node
  let source = audioCtx.createBufferSource();
  // Fill in the audio buffer data
  source.buffer = buffer;
  // Create a volume node (if you need to adjust the volume)
  let gainNode = audioCtx.createGain();
  // Connect the node objects
  source.connect(gainNode);
  gainNode.connect(analyser);
  analyser.connect(audioCtx.destination);
}

Get audio frequency

  • Method 1: obtain the data in JS with a ScriptProcessorNode (by listening to the audioprocess event; it is deprecated for performance reasons, so it is not covered in detail here, but look it up if you are interested)
      // This method additionally needs its own node connections
      let javascriptNode = audioCtx.createScriptProcessor(2048, 1, 1);
      javascriptNode.connect(audioCtx.destination);
      analyser.connect(javascriptNode);

      javascriptNode.onaudioprocess = function () {
        currData = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(currData);
      };

  • Method 2: Obtain the data by AnalyserNode

AnalyserNode.frequencyBinCount gives the number of frequency bins in the node (half of fftSize). Instantiate an 8-bit unsigned integer array of that length, then AnalyserNode.getByteFrequencyData copies the node's frequency data into the array; each value ranges from 0 to 255, and a higher value means more energy at that frequency. AnalyserNode.getByteTimeDomainData works on the same principle but yields the time-domain (waveform) data instead; choose whichever of the two suits your needs.

    function getData () {
      // analyser.frequencyBinCount is the number of values to visualize,
      // which is half of the fftSize set earlier
      let currData = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(currData);
      analyser.getByteTimeDomainData(currData);
    }
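The byte values that getByteFrequencyData fills in are what the drawing code consumes; here is a tiny, hypothetical helper (toBarHeights is my name, not part of the original demo) that scales the raw 0-255 bytes into bar heights for a canvas:

```javascript
// Scale raw byte frequency data (0-255) to pixel bar heights for a
// canvas of the given height.
function toBarHeights(freqData, canvasHeight) {
  return Array.from(freqData, v => Math.round((v / 255) * canvasHeight));
}

// e.g. toBarHeights(new Uint8Array([255, 0, 128]), 100) -> [100, 0, 50]
```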

Output devices

AudioBufferSourceNode.start(n) starts playing the audio at time n (default 0). AudioBufferSourceNode.stop(n) stops the audio after n seconds; with no argument it stops immediately.

Other APIs

  • audioCtx.resume() resumes audio playback
  • audioCtx.suspend() pauses audio playback
  • audioCtx.currentTime gets the current playback time
  • AudioBufferSourceNode.buffer.duration gets the total duration of the audio
  • GainNode.gain.value controls the volume, in the range [0, 1]
  • GainNode.gain.linearRampToValueAtTime fades the volume in or out over time
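The resume()/suspend() pair above is enough to back a play/pause button; a small sketch (togglePlayback is an illustrative name) that flips the context between the two states:

```javascript
// Toggle between playing and paused based on the AudioContext state.
// Returns the action taken, which is handy for updating button labels.
function togglePlayback(ctx) {
  if (ctx.state === 'suspended') {
    ctx.resume();
    return 'resumed';
  }
  if (ctx.state === 'running') {
    ctx.suspend();
    return 'suspended';
  }
  return ctx.state; // e.g. 'closed': nothing sensible to do
}
```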

Canvas draws visualizations

You can draw whatever you like, but frequent calls to the Canvas API hurt performance. Here are some tips I used to improve performance while testing.

  • Use multiple canvas layers. Drawings that do not change often, such as the background and fixed decorations, can be drawn in the context of a separate canvas
  • Draw off-screen: generate a canvas that never appears on the page, draw into that cached canvas, and let the canvas that is actually displayed simply copy the picture over with the drawImage API; refer to this blog post
  • Fix the lineWidth once instead of setting it every time you draw a line
  • In short, don't call the Canvas API more than you must, but don't throw away your more imaginative ideas just to squeeze out performance
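The off-screen drawing tip can be sketched like this (browser-only; makeBackgroundCache and the dark background are illustrative, not from the demo): the static layer is rendered once into a cached canvas, and every animation frame only blits it with drawImage before drawing the bars.

```javascript
// Draw the static background once into an off-screen canvas.
function makeBackgroundCache(width, height, drawBackground) {
  const cache = document.createElement('canvas');
  cache.width = width;
  cache.height = height;
  drawBackground(cache.getContext('2d'));
  return cache;
}

// Browser-only usage.
if (typeof document !== 'undefined' && document.querySelector('canvas')) {
  const main = document.querySelector('canvas');
  const ctx = main.getContext('2d');
  const bg = makeBackgroundCache(main.width, main.height, (c) => {
    c.fillStyle = '#222';
    c.fillRect(0, 0, main.width, main.height);
  });
  (function frame() {
    ctx.drawImage(bg, 0, 0); // cheap blit of the cached static layer
    // ...draw the frequency bars on top here...
    requestAnimationFrame(frame);
  })();
}
```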

Problems encountered

Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer to non-null after it has been already set to a non-null buffer. The AudioBufferSourceNode's buffer property can only be set once; it cannot be reassigned a new value afterwards. Since songs are played mainly through their decoded ArrayBuffer data, this bites when switching songs. The solution is to destroy the current AudioBufferSourceNode when switching songs and recreate the context, audio nodes, connections, and so on.
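Under that constraint, here is a hedged sketch of the song-switching fix (switchSong is an illustrative name; the original demo recreates the whole context, while this sketch only recreates the one-shot source node and reuses the existing analyser chain):

```javascript
// An AudioBufferSourceNode is one-shot: its buffer can be set only
// once, so switching songs means discarding the old source and
// building a fresh one from the newly decoded buffer.
function switchSong(ctx, analyser, newBuffer, oldSource) {
  if (oldSource) {
    try { oldSource.stop(); } catch (e) { /* already stopped */ }
    oldSource.disconnect();
  }
  const source = ctx.createBufferSource();
  source.buffer = newBuffer; // set exactly once on this fresh node
  source.connect(analyser);  // reuse the existing analyser chain
  source.start(0);
  return source;
}
```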

The source code is here; the interaction part is a little messy, because at the time I was just trying out visualization ideas and kept adding whatever came to mind, so the code looks somewhat redundant. If you want to look at the audio implementation, it lives mainly in the MusicPlay object.

This is my first blog post as a beginner, and I found that writing a blog takes longer than writing a demo. Afraid of writing something wrong and misleading you (if there are mistakes, please point them out in the comments ~), I went and checked a lot of related material, and that process was itself a learning process; I will write posts regularly from now on! Finally, I hope this article helps you learn to build this kind of visual effect yourself; combined with a visualization library you can make very cool effects. Let's learn and improve together, keep it up! (. · д ·.