FFmpeg quick overview

Video codecs on Android come in two flavors: the software codec FFmpeg and the hardware codec MediaCodec. Hardware codecs are efficient and fast but have poor device compatibility, so here we choose FFmpeg. FFmpeg can also integrate other codec libraries, such as x264, FAAC, LAME, and FDK-AAC; most video sites on the market also build their software on FFmpeg. The main FFmpeg modules and their functions are introduced below:

libavformat

Generates and parses various audio/video container formats, including obtaining the information needed to build the decoding context structure and reading audio/video frames. It is the container (demuxing/muxing) layer for audio and video, providing independent audio or video streams for libavcodec to analyze.

libavcodec

Encodes and decodes various audio and image formats. libavcodec is the core of audio/video codecs, implementing most of the decoders found in the market; it is included or used by major players such as ffdshow and MPlayer.

libavfilter

Audio and video filter development (e.g. FileIO, FPS, DrawText), such as watermarking and variable-speed playback.

libavutil

A library of common utility functions, including arithmetic and string operations.

libswresample

Raw audio format transcoding: resampling, sample-format and channel-layout conversion.
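As a rough illustration of what resampling means, here is a naive linear-interpolation sketch for mono float samples. This is only a conceptual stand-in, not how libswresample is implemented — the real library uses proper filtering and handles sample formats and channel layouts:

```cpp
#include <cstddef>
#include <vector>

// Naive mono resampler: converts `in` from srcRate to dstRate by linear
// interpolation between neighboring input samples.
std::vector<float> resampleLinear(const std::vector<float> &in,
                                  int srcRate, int dstRate) {
    if (in.empty() || srcRate <= 0 || dstRate <= 0) return {};
    const size_t outLen = static_cast<size_t>(
        static_cast<double>(in.size()) * dstRate / srcRate);
    std::vector<float> out(outLen);
    for (size_t i = 0; i < outLen; ++i) {
        // Position of this output sample on the input timeline
        double pos = static_cast<double>(i) * srcRate / dstRate;
        size_t i0 = static_cast<size_t>(pos);
        size_t i1 = (i0 + 1 < in.size()) ? i0 + 1 : i0;
        double frac = pos - static_cast<double>(i0);
        out[i] = static_cast<float>(in[i0] * (1.0 - frac) + in[i1] * frac);
    }
    return out;
}
```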

libswscale

(Raw video format conversion) for video scaling and color mapping, i.e. image color-space or pixel-format conversion, such as RGB565, RGB888 and YUV420.
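To make the color-space conversion concrete, here is a self-contained sketch of the per-pixel math: one YUV pixel mapped to RGB888 (using approximate full-range BT.601 coefficients), plus packing RGB888 into RGB565. libswscale performs this kind of mapping over whole frames, with SIMD, dithering, and many more formats:

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

static uint8_t clamp8(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// One full-range BT.601 YUV pixel -> RGB888
RGB yuvToRgb(uint8_t y, uint8_t u, uint8_t v) {
    int c = y;
    int d = u - 128;
    int e = v - 128;
    return {
        clamp8(static_cast<int>(c + 1.402 * e)),
        clamp8(static_cast<int>(c - 0.344136 * d - 0.714136 * e)),
        clamp8(static_cast<int>(c + 1.772 * d)),
    };
}

// RGB888 -> RGB565: keep the 5/6/5 most significant bits of each channel
uint16_t packRGB565(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```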

libpostproc

Video post-processing (deblocking and deringing filters), used together with libavcodec.

Video Playback Process

Java layer

Here is a brief description; see the source code for details, in particular the PlayActivity, Player and other classes.

DataSource: the media source

TinaPlayer: controls the start, stop and other states of a video.

SurfaceView/TextureView: used for video display, providing a Surface to the Player.

Native layer

Initialization

First get the video source from the Java layer and pass it to the TinaFFmpeg class, which drives the whole video pipeline:

extern "C"
JNIEXPORT void JNICALL
Java_tina_com_player_TinaPlayer_native_1prepare(JNIEnv *env, jobject instance, jstring dataSource_) {
    const char *dataSource = env->GetStringUTFChars(dataSource_, 0);
    callHelper = new JavaCallHelper(javaVM, env, instance);
    ffmpeg = new TinaFFmpeg(callHelper, dataSource);
    ffmpeg->setRenderFrameCallback(render);
    ffmpeg->prepare();
    env->ReleaseStringUTFChars(dataSource_, dataSource);
}

// Save to TinaFFmpeg via constructor
TinaFFmpeg::TinaFFmpeg(JavaCallHelper *callHelper, const char *dataSource) {
    // A memory copy is required, otherwise a dangling pointer is created:
    // this->dataSource = const_cast<char *>(dataSource);
    this->callHelper = callHelper;
    // strlen returns the string length, excluding the trailing \0
    this->dataSource = new char[strlen(dataSource) + 1];
    stpcpy(this->dataSource, dataSource);
}
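The deep copy in the constructor matters because the `const char *` obtained via GetStringUTFChars is released right after native_prepare returns. A minimal self-contained sketch of the same own-your-copy pattern (the Holder class and the URL are illustrative, not from the project):

```cpp
#include <cstring>

// Mirrors the TinaFFmpeg constructor's approach: keep a private copy of the
// string instead of aliasing memory the caller may free or reuse.
class Holder {
public:
    explicit Holder(const char *src) {
        // +1 for the trailing '\0' that strlen() does not count
        data = new char[std::strlen(src) + 1];
        std::strcpy(data, src);
    }
    ~Holder() { delete[] data; }
    const char *get() const { return data; }
private:
    char *data;
};
```

If the constructor had stored the pointer directly, mutating or freeing the caller's buffer would corrupt the stored data source.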

TinaFFmpeg is the class that handles demuxing, decoding, rendering, and audio/video synchronization for the entire video — an object-oriented encapsulation:

class TinaFFmpeg {
public:
    TinaFFmpeg(JavaCallHelper *javaCallHelper, const char *dataSource);
    ~TinaFFmpeg();
    void prepare();
    void _prepare();
    void start();
    void _start();
    void setRenderFrameCallback(RenderFrameCallback callback);
    void stop();
public:
    char *dataSource;
    pthread_t pid;
    pthread_t pid_play;
    AVFormatContext *formatContext = 0;
    JavaCallHelper *callHelper;
    AudioChannel *audioChannel = 0;// Pointer initialization is best assigned to null
    VideoChannel *videoChannel = 0;
    bool isPlaying;
    RenderFrameCallback callback;
    pthread_t pid_stop;
};
The decoding process

The TinaFFmpeg decoding flow calls into FFmpeg; note that in newer FFmpeg versions av_register_all() no longer needs to be called.

Prepare

prepare() extracts the video and audio information and starts a thread to handle it:

void TinaFFmpeg::prepare() {
    // Create a thread, task_prepare as the function to handle the thread, this is the function argument
    pthread_create(&pid, 0, task_prepare, this);
}

// Call the real handler _prepare
void *task_prepare(void *args) {
    TinaFFmpeg *ffmpeg = static_cast<TinaFFmpeg *>(args);
    ffmpeg->_prepare();
    return 0;
}
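The prepare()/task_prepare pair is the standard pthread trampoline: pthread_create only accepts a plain `void *(*)(void *)` function, so `this` is smuggled through the void* argument and cast back inside. A minimal self-contained sketch of the same pattern (Worker is illustrative, not from the project):

```cpp
#include <pthread.h>

class Worker {
public:
    void startAsync() {
        // Pass `this` through the void* argument, as TinaFFmpeg::prepare does
        pthread_create(&tid, nullptr, trampoline, this);
    }
    void join() { pthread_join(tid, nullptr); }
    int result = 0;
private:
    // Static function with the signature pthread_create requires
    static void *trampoline(void *arg) {
        static_cast<Worker *>(arg)->run();
        return nullptr;
    }
    void run() { result = 42; } // the real work (_prepare in the article)
    pthread_t tid;
};
```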

_prepare populates AVFormatContext *formatContext = 0 (declared in TinaFFmpeg.h) and obtains the video and audio streams from it. avformat_open_input(&formatContext, dataSource, 0, &options) is time-consuming and may fail, so error messages must be called back to the Java layer, which is what JavaCallHelper is for.

void TinaFFmpeg::_prepare() {

    // Initialize networking so FFmpeg can open network streams
    avformat_network_init();

    //1. Open the media address (file, live address)
    //AVFormatContext contains the information of the video (width, height)
    // The file path is not correct
    // The third argument: the format of the media to open (pass NULL and FFmpeg will deduce whether it is MP4, FLV, etc.)
    // The fourth argument: options, as an AVDictionary
    AVDictionary *options = 0;
    // Set the timeout, in microseconds
    av_dict_set(&options, "timeout", "5000000", 0);
    // Time consuming operation
    int ret = avformat_open_input(&formatContext, dataSource, 0, &options);

    av_dict_free(&options);

    // If ret is not 0, opening media fails
    if (ret != 0) {
        LOGE("Failed to open media :%s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_OPEN_URL);
        return;
    }

    //2. Find streams in audio and video
    ret = avformat_find_stream_info(formatContext, 0);
    if (ret < 0) {
        LOGE("Failed to find stream :%s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_FIND_STREAMS);
        return;
    }
    ...
}

JavaCallHelper handles Native-layer calls back into Java-layer methods; here it only handles the onError and onPrepare methods of the instance (TinaPlayer). The cached JNIEnv *env is only valid on the thread where it was obtained; on other threads a JNIEnv must be acquired through the JavaVM *vm. The jobject passed in must be promoted to a global reference via env->NewGlobalRef(instace).

JavaCallHelper::JavaCallHelper(JavaVM *vm, JNIEnv *env, jobject instace) {
    this->vm = vm;
    // if in the main thread callback
    this->env = env;
    // A global reference must be created as soon as the jobject is used across methods or threads
    this->instance = env->NewGlobalRef(instace);
    jclass clazz = env->GetObjectClass(instace);
    onErrorId = env->GetMethodID(clazz, "onError", "(I)V");
    onPrepareId = env->GetMethodID(clazz, "onPrepare", "()V");
}

JavaCallHelper::~JavaCallHelper() {
    env->DeleteGlobalRef(instance);
}

void JavaCallHelper::onError(int thread, int error) {
    // main thread
    if (thread == THREAD_MAIN) {
        env->CallVoidMethod(instance, onErrorId, error);
    } else {
        // child thread
        JNIEnv *env;
        // Obtain a JNIEnv for the current thread
        vm->AttachCurrentThread(&env, 0);
        env->CallVoidMethod(instance, onErrorId, error);
        vm->DetachCurrentThread();
    }
}

void JavaCallHelper::onPrepare(int thread) {
    ...
}

Decoding audio and video

From AVFormatContext *formatContext, obtain the corresponding video and audio streams, find the decoder AVCodec, and hand the decoder context AVCodecContext to the corresponding VideoChannel / AudioChannel to perform the decoding:

void TinaFFmpeg::_prepare() {
    // Time-consuming operation
    int ret = avformat_open_input(&formatContext, dataSource, 0, &options);
    av_dict_free(&options);
    ...
    //2. Find streams in audio and video
    ret = avformat_find_stream_info(formatContext, 0);
    if (ret < 0) {
        LOGE("Failed to find stream :%s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_FIND_STREAMS);
        return;
    }
    //nb_streams: the number of streams (video/audio tracks)
    for (int i = 0; i < formatContext->nb_streams; ++i) {
        // May be a video stream or an audio stream
        AVStream *stream = formatContext->streams[i];
        // Contains the parameters needed to decode this stream
        AVCodecParameters *codecpar = stream->codecpar;
        // Audio or video alike, we need to find the decoder
        AVCodec *dec = avcodec_find_decoder(codecpar->codec_id);
        if (dec == NULL) {
            LOGE("Failed to find decoder :%s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_FIND_DECODER_FAIL);
            return;
        }
        // Get the decoder context
        AVCodecContext *context = avcodec_alloc_context3(dec);
        if (context == NULL) {
            LOGE("Failed to create decoding context :%s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_ALLOC_CODEC_CONTEXT_FAIL);
            return;
        }
        //3. Set some parameters in the context
        ret = avcodec_parameters_to_context(context, codecpar);
        if (ret < 0) {
            LOGE("Failed to set decoding context parameters :%s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_OPEN_DECODER_FAIL);
            return;
        }
        //4. Open the decoder
        ret = avcodec_open2(context, dec, 0);
        if (ret != 0) {
            LOGE("Failed to open decoder :%s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_ALLOC_CODEC_CONTEXT_FAIL);
            return;
        }
        // time base
        AVRational time_base = stream->time_base;
        // audio
        if (codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audioChannel = new AudioChannel(i, context, time_base);
        } else if (codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { // video
            // Frame rate: how many images should be displayed per unit of time
            AVRational frame_rate = stream->avg_frame_rate;
            int fps = av_q2d(frame_rate);
            videoChannel = new VideoChannel(i, context, time_base, fps);
            videoChannel->setRenderFrameCallback(callback);
        }
    }
    if (!audioChannel && !videoChannel) {
        LOGE("No audio or video.");
        callHelper->onError(THREAD_CHILD, FFMPEG_NOMEDIA);
        return;
    }
    LOGE("Native preparation process is ready");
    // Notify Java that preparation is complete and playback can start
    callHelper->onPrepare(THREAD_CHILD);
}
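FFmpeg expresses the stream time base and average frame rate as AVRational fractions, and av_q2d simply divides numerator by denominator. A self-contained sketch of how fps, per-frame delay, and pts-to-seconds conversion fall out of that (Rational here is a stand-in for AVRational; the helper names are illustrative):

```cpp
#include <cstdint>

// Stand-in for FFmpeg's AVRational: a num/den fraction.
struct Rational { int num; int den; };

// Equivalent of av_q2d(): convert the fraction to a double.
double q2d(Rational r) {
    return r.den ? static_cast<double>(r.num) / r.den : 0.0;
}

// Given an average frame rate, the delay between rendered frames.
double frameDelaySeconds(Rational avgFrameRate) {
    double fps = q2d(avgFrameRate);
    return fps > 0 ? 1.0 / fps : 0.0;
}

// A pts in stream units becomes seconds by multiplying with the time base,
// mirroring pts * av_q2d(stream->time_base).
double ptsToSeconds(int64_t pts, Rational timeBase) {
    return static_cast<double>(pts) * q2d(timeBase);
}
```

This is the arithmetic the audio/video synchronization in the next chapter builds on.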

The next chapter covers video and audio decoding and audio-video synchronization, with the source address attached.