This is the first day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.

Preface

About the [SSD series]: short, interesting front-end content, 500-1500 words, readable in 3-10 minutes, so you gain something without getting tired.

As the title says, today we will use pure Web technology to implement camera + microphone video recording in about 100 lines of code. The main knowledge involved:

  1. MediaDevices: provides access to connected media input devices such as cameras and microphones, as well as screen sharing.
  2. MediaRecorder: records audio or video.
  3. IndexedDB: a transactional database for storing large amounts of structured data.
  4. URL: used to generate an address from the video's Blob data and hand it to the video tag for playback.

Demo

On a real device

PC + emulated mobile + virtual camera (VCam)

Source code

Source code: recordAV

Note:

  1. Camera and microphone access requires explicit authorization from the user (a sketch of handling a refusal follows this list)
  2. If you want to preview the demo on a mobile phone, HTTPS is required; the demo ships with a certificate
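
As an aside on note 1, here is a minimal sketch of handling a refused or failed permission request. This is my own illustration, not code from the demo; the error names (NotAllowedError, NotFoundError) come from the getUserMedia specification:

async function openCamera() {
    // getUserMedia is only exposed in secure contexts (HTTPS or localhost), see note 2
    if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        alert("MediaDevices is unavailable; are you on HTTPS?");
        return null;
    }
    try {
        return await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    } catch (err) {
        if (err.name === "NotAllowedError") {
            alert("The user or the browser denied permission.");
        } else if (err.name === "NotFoundError") {
            alert("No camera or microphone was found.");
        } else {
            console.error(err);
        }
        return null;
    }
}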

Approach

  1. Use MediaDevices to turn on the camera and microphone.
  2. Feed the stream from step 1 to both the video element and MediaRecorder, because we need to preview what the camera sees while recording.
  3. After recording, store the recorded video in IndexedDB.
  4. List the recorded videos by their keys; when one is clicked, fetch its Blob, generate a URL, and hand it to the video tag to play.

Implementation

Turn on the camera and microphone and get their stream

What we need here is MediaDevices; the corresponding API is navigator.mediaDevices.getUserMedia.

The core code is as follows:

const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },  // Use the rear camera
    audio: true  // Audio is also required, i.e. the microphone
})
// Pass it to the video element so we can see what the camera sees
videoEL.srcObject = stream;
// Initialize MediaRecorder
mediaRecorder = new MediaRecorder(stream, { mimeType: "video/webm" });

Notes:

  1. The facingMode parameter of getUserMedia

    user selects the front-facing camera, environment selects the rear camera.
  2. The { mimeType: "video/webm" } option passed to new MediaRecorder

    If it is not set correctly, you may end up with video but no microphone sound. If you are unsure what the browser supports, you can probe it first, as in the sketch after this list.
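
A minimal sketch of that probing, using the standard MediaRecorder.isTypeSupported check. The candidate list is illustrative and not taken from the demo:

// Pick the first container/codec combination the browser can actually record
const candidates = ["video/webm;codecs=vp9,opus", "video/webm", "video/mp4"];
const supportedType = candidates.find(t => MediaRecorder.isTypeSupported(t));
mediaRecorder = new MediaRecorder(stream, supportedType ? { mimeType: supportedType } : undefined);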

Record and Save

Recording needs two operations: start and stop. There are many ways to trigger them; let's use the simplest form, two buttons, and register an event handler for each.

The code is as follows. A small piece of trivia: any element with an id attribute can be accessed directly through a global variable of the same name.

<button id="btnRecord" class="btn">The recording</button>
<button id="btnStop"  class="btn" >stop</button>

btnRecord.addEventListener("click", () => {
    startRecord(mediaRecorder);
    mediaRecorder.start();
});

btnStop.addEventListener("click", () => {
    mediaRecorder.stop();
})


Isn’t that simple? Note that after the stop button is clicked, we call the mediaRecorder.stop() method, which in turn fires the recorder's onstop event.

There are many wrapper libraries for IndexedDB; the "See also" section of the IndexedDB documentation lists no fewer than 10. Here we use the idb-keyval library: a simple, small (~600 B), Promise-based key-value store. It is also very easy to use: get and set are all you need.
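
To make the get/set idea concrete, a tiny sketch of idb-keyval usage (assuming its browser bundle is loaded so the global idbKeyval is available, as in the snippets below):

// Store a Blob under a key, then read it back
await idbKeyval.set("hello.mp4", new Blob(["demo"]));
const saved = await idbKeyval.get("hello.mp4");
console.log(saved.size);
// idbKeyval.keys() returns all stored keys; we use it later for the history list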

With that knowledge in hand, look at the code: isn’t it simple?

function startRecord(recorder) {
    var chunks = [];
    // Collect data
    recorder.ondataavailable = function (e) {
        chunks.push(e.data);
    }
    // Listen for stop events
    recorder.onstop = async () => {
        var clipName = prompt('Please enter the name of the video');
        var blob = new Blob(chunks, { 'type': 'audio/mp4' });
        await idbKeyval.set(clipName + ".mp4", blob);
        listHistory();
    }
}

History and viewing

Strictly speaking, you should use an IndexedDB cursor to iterate over the keys. For simplicity, this article reads all the keys directly and then filters by file suffix. (A cursor-based sketch follows the listing below.)

async function listHistory() {
    list.innerHTML = null;
    const keys = await idbKeyval.keys();
    console.log("keys:", keys);

    keys.filter(k => k.endsWith(".mp4")).forEach(key => {
        const divEl = document.createElement("div");
        divEl.textContent = key;
        divEl.onclick = () => playVideo(key);

        list.appendChild(divEl);
    });
}
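For reference, reading keys with a raw IndexedDB cursor would look roughly like this. The database and store names are made up for the example and are not the ones idb-keyval uses internally:

// Sketch: iterate keys with a cursor instead of loading them all at once
const openReq = indexedDB.open("my-db");  // hypothetical database name
openReq.onsuccess = () => {
    const db = openReq.result;
    const store = db.transaction("videos").objectStore("videos");  // hypothetical store name
    store.openKeyCursor().onsuccess = (e) => {
        const cursor = e.target.result;
        if (!cursor) return;            // no more entries
        console.log("key:", cursor.key);
        cursor.continue();
    };
};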

At this point, we only need the logic to play a historical video when it is clicked, which is also very simple:

async function playVideo(key) {
    const blob = await idbKeyval.get(key);
    // Generate the address
    fplayer.src = URL.createObjectURL(blob);
    fplayer.style.display = "block";
    fplayer.play();
}
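One optional refinement that is not in the demo: revoke the previous object URL before creating a new one, so that repeatedly playing clips does not keep every Blob URL alive. A sketch, reusing the fplayer element from above:

let currentUrl = null;  // the last Blob URL handed to the player

async function playVideo(key) {
    const blob = await idbKeyval.get(key);
    if (currentUrl) URL.revokeObjectURL(currentUrl);  // release the previous address
    currentUrl = URL.createObjectURL(blob);
    fplayer.src = currentUrl;
    fplayer.style.display = "block";
    fplayer.play();
}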

At this point, all the core code has been implemented.

Summary

Isn't it simple? Nothing here is all that difficult, so you can easily dive in yourself.

In closing

Staying true to the original intention of the [SSD series]: 3-5 minutes, 500-1000 words, something gained without getting tired. If you enjoyed this, your likes are my biggest motivation to keep going.

For the technical exchange group, please come here, or add me on WeChat (cloud-dirge) and let's study together.

Reference: some pitfalls of MediaDevices.getUserMedia() and their solutions