This article was first published on the WeChat official account: Byte Flow

FFmpeg Development Series serial:

  • FFmpeg Development (01): FFmpeg compilation and integration

  • FFmpeg Development (02): Video decoding and playback with FFmpeg + ANativeWindow

  • FFmpeg Development (03): Audio decoding and playback with FFmpeg + OpenSL ES

  • FFmpeg Development (04): Audio visualization with FFmpeg + OpenGL ES

  • FFmpeg Development (05): Video decoding, playback, and video filters with FFmpeg + OpenGL ES

  • FFmpeg Development (06): Three ways to implement audio-video synchronization in an FFmpeg player

  • FFmpeg Development (07): A 3D panoramic player with FFmpeg + OpenGL ES

  • FFmpeg Development (08): Video rendering optimization for the FFmpeg player

  • FFmpeg Development (09): Compiling and integrating FFmpeg, x264, and fdk-aac

  • FFmpeg Development (10): FFmpeg video recording – adding filters and encoding to video

  • FFmpeg Development (11): Audio recording and encoding with FFmpeg + Android AudioRecorder

  • FFmpeg Development (12): A WeChat-style short-video recording feature with filters, using FFmpeg on Android

In the previous articles in this FFmpeg series, we implemented audio and video playback, recording with filters, and other features. This article uses FFmpeg to implement playing a streaming-media source while recording it.

Streaming media

Streaming media is the technology and process of compressing a series of multimedia data and sending it over the Internet in segments, so that audio and video can be transmitted and viewed on the Internet in real time. This technique makes data packets flow like water; without it, you would have to download the entire media file before playing it.

Streaming media does not require the entire file to be downloaded before playback: only the beginning of the content is kept in memory, and the data stream is transmitted and played continuously, with some delay at the start.

The key technology of streaming media is streaming transmission, which falls into two categories: real-time streaming and progressive (sequential) streaming.

Progressive streaming: a user can watch the media while the file downloads. At any given moment, the user can only watch the part that has already been downloaded and cannot skip ahead to parts that have not. The download order is not adjusted to the user's connection speed during transmission.

Real-time streaming: the bandwidth of the media signal is matched to the network connection so that the media can be watched in real time. Real-time streaming adjusts the output audio and video quality according to network conditions to keep the media transmitting continuously in real time, and users can fast-forward or rewind to watch earlier or later content.

FFmpeg plays streaming media

The processing of audio and video data in FFmpeg can be divided into four layers: the protocol layer, the container layer, the encoding layer, and the raw data layer.

Protocol layer: sends and receives data over network protocols, and can receive or push media streams in encapsulated formats.

The protocol layer is backed by the libavformat library and third-party libraries such as librtmp.

Container layer: handles various encapsulation (container) formats such as MP4 and FLV. The container layer is backed by the libavformat library.

Encoding layer: handles audio and video encoding and decoding. The encoding layer is backed by a rich variety of codecs (the libavcodec library and third-party codecs such as libx264).

Raw data layer: Processing unencoded raw audio and video frames.

The libavformat library in FFmpeg provides rich protocol handling and container-format handling. When opening an input or output, FFmpeg probes the format based on the input/output URL and selects the appropriate protocol and container format.

For example, if the output URL is "rtmp://122.125.10.22/live", then when FFmpeg opens the output it determines that the RTMP protocol should be used, with FLV as the container format.

Users do not need to care about the internal details of how FFmpeg opens an input or output; the main difference lies in the form of the URL. If the URL carries a prefix such as "rtmp://", "rtp://", or "udp://", stream processing is involved; otherwise, a local file is processed.
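As a simplified illustration of the prefix check described above (this is a sketch, not FFmpeg's actual protocol-probing code, which is considerably more elaborate):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified sketch: classify a URL as a network stream or a local file
// by its scheme prefix. The prefix list here is illustrative; FFmpeg
// supports many more protocols.
bool IsNetworkStream(const std::string& url) {
    static const std::vector<std::string> kPrefixes = {
        "rtmp://", "rtp://", "udp://", "rtsp://", "http://", "https://"
    };
    for (const auto& p : kPrefixes) {
        if (url.compare(0, p.size(), p) == 0) return true;
    }
    return false; // treated as a local file path
}
```

A URL like "rtmp://122.125.10.22/live" would take the stream-processing path, while "/sdcard/learnffmpeg_output.mp4" would be opened as a local file.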

Because FFmpeg encapsulates the different transport protocols, the playback flow is the same whether FFmpeg plays streaming media or a local file (for FFmpeg 4.2.2 and above).

FFmpeg playback while recording

There are two ways to implement playback while recording with FFmpeg:

  • During demultiplexing, take the encoded data packets and remultiplex (remux) them into a new file.
  • After decoding, take the raw data, process it (for example, add filters), and finally encode and mux the processed data.

This article re-encodes the decoded raw data to implement playback while recording.
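Under this second approach, every decoded frame is fed both to the renderer and to the recorder. A minimal skeleton of that flow, using stub types (`RawFrame`, `Packet`, and `Recorder` here are illustrative stand-ins, not the project's real classes or FFmpeg structures):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stub types standing in for decoded frames and encoded packets.
struct RawFrame { int pts; };
struct Packet   { int pts; };

// Approach 2: decode -> process (e.g. filter) -> re-encode -> mux.
// Here "encoding" just queues a packet to show the data flow.
class Recorder {
public:
    void OnFrame2Encode(const RawFrame& f) { m_Packets.push_back({f.pts}); }
    std::size_t EncodedCount() const { return m_Packets.size(); }
private:
    std::vector<Packet> m_Packets;
};

// The playback loop feeds every decoded frame to both paths.
int PlayAndRecord(int totalFrames, Recorder& rec) {
    for (int i = 0; i < totalFrames; ++i) {
        RawFrame frame{ i };          // "decoded" frame
        // Render(frame);             // display path (omitted)
        rec.OnFrame2Encode(frame);    // recording path
    }
    return totalFrames;
}
```

The key design point is that recording piggybacks on the existing decode loop: no second demuxer or decoder is needed, at the cost of a re-encode.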

For video recording, we can directly reuse the SingleVideoRecorder class defined in the earlier FFmpeg video-recording article. Once recording starts, we just keep feeding video frames into it.

```cpp
class SingleVideoRecorder {
public:
    SingleVideoRecorder(const char* outUrl, int frameWidth, int frameHeight, long bitRate, int fps);
    ~SingleVideoRecorder();

    // Start recording
    int StartRecord();

    // Receive video data
    int OnFrame2Encode(VideoFrame *inputFrame);

    // Stop recording
    int StopRecord();

private:
    ...
};
```

The recording is then done in the decoding class.

```cpp
/**
 * Created by the WeChat official account: Byte Flow on 2021/3/16.
 * https://github.com/githubhaohao/LearnFFmpeg
 * The latest articles are first published on the WeChat official account: Byte Flow.
 * For questions or technical exchange, add WeChat bytes-flow to get video tutorials
 * and join the technical exchange group.
 */

#include "VideoDecoder.h"

void VideoDecoder::OnDecoderReady() {
    LOGCATE("VideoDecoder::OnDecoderReady");
    m_VideoWidth = GetCodecContext()->width;
    m_VideoHeight = GetCodecContext()->height;

    if(m_VideoRender != nullptr) {
        int dstSize[2] = {0};
        m_VideoRender->Init(m_VideoWidth, m_VideoHeight, dstSize);
        m_RenderWidth = dstSize[0];
        m_RenderHeight = dstSize[1];

        // Initialize the recorder
        int fps = 25;
        long videoBitRate = m_RenderWidth * m_RenderHeight * fps * 0.2;
        m_pVideoRecorder = new SingleVideoRecorder("/sdcard/learnffmpeg_output.mp4",
                                                   m_RenderWidth, m_RenderHeight, videoBitRate, fps);
        // Start recording
        m_pVideoRecorder->StartRecord();
    } else {
        LOGCATE("VideoDecoder::OnDecoderReady m_VideoRender == null");
    }
}

void VideoDecoder::OnDecoderDone() {
    LOGCATE("VideoDecoder::OnDecoderDone");

    if(m_VideoRender)
        m_VideoRender->UnInit();

    // Stop recording after playback finishes
    if(m_pVideoRecorder != nullptr) {
        m_pVideoRecorder->StopRecord();
        delete m_pVideoRecorder;
        m_pVideoRecorder = nullptr;
    }
}

void VideoDecoder::OnFrameAvailable(AVFrame *frame) {
    LOGCATE("VideoDecoder::OnFrameAvailable frame=%p", frame);
    if(m_VideoRender != nullptr && frame != nullptr) {
        NativeImage image;
        // Convert the decoded frame to RGBA
        sws_scale(m_SwsContext, frame->data, frame->linesize, 0,
                  m_VideoHeight, m_RGBAFrame->data, m_RGBAFrame->linesize);
        image.format = IMAGE_FORMAT_RGBA;
        image.width = m_RenderWidth;
        image.height = m_RenderHeight;
        image.ppPlane[0] = m_RGBAFrame->data[0];
        image.pLineSize[0] = image.width * 4;

        m_VideoRender->RenderVideoFrame(&image);

        // Keep feeding video frames to the recorder
        if(m_pVideoRecorder != nullptr) {
            m_pVideoRecorder->OnFrame2Encode(&image);
        }
    }
}
```

The complete implementation code can be found in the project:

https://github.com/githubhaohao/LearnFFmpeg


Implementation code path

LearnFFmpeg

Technical exchange

For technical discussion or to get video tutorials, add my WeChat: bytes-flow