This article was first published on the WeChat official account ByteFlow

FFmpeg development series serial:

FFmpeg Development (01): FFmpeg compilation and integration

FFmpeg Development (02): FFmpeg + ANativeWindow video decoding and playback

FFmpeg Development (03): FFmpeg + OpenSLES audio decoding and playback

FFmpeg Development (04): FFmpeg + OpenGLES audio visualization playback

FFmpeg Development (05): FFmpeg + OpenGLES video decoding, playback and video filters

FFmpeg Development (06): Three ways to implement audio and video synchronization in an FFmpeg player

FFmpeg Development (07): FFmpeg + OpenGLES implementation of a 3D panorama player

FFmpeg Development (08): FFmpeg player video rendering optimization

FFmpeg Development (09): FFmpeg, x264 and FDK-AAC compilation and integration

FFmpeg Development (10): FFmpeg video recording - adding filters to video and encoding

FFmpeg Development (11): FFmpeg + Android AudioRecorder audio recording and encoding

FFmpeg Development (12): Android FFmpeg WeChat-style video recording with filters

FFmpeg Development (13): Android FFmpeg streaming playback while recording

FFmpeg Development (14): Android FFmpeg + MediaCodec video hard decoding

The previous articles in this FFmpeg series covered compiling and integrating FFmpeg, building audio and video playback and recording on top of it, and combining it with OpenGL to add rich filters and other features. Taken together, the demos cover most of the knowledge involved in everyday FFmpeg development.

Having come this far, you probably have some ideas of your own, such as using FFmpeg to build a universal player or an audio/video editing tool. The next step I recommend is studying some excellent open source projects. The first that come to mind are ExoPlayer and ijkplayer, but these well-known projects have large codebases and many features, which makes them hard for beginning developers to learn from and easy to give up on.

It is therefore better to start with an excellent open source project of moderate size. With that in mind, a good project to study after finishing this FFmpeg series is the open source cross-platform player fanplayer.

Project address: github.com/rockcarry/f…

fanplayer is a universal player based on FFmpeg that supports the Android and Windows platforms. It offers hard decoding, variable-speed playback, streaming playback and more; the common player features are essentially all covered. Incidentally, our learn-FFmpeg project borrows fanplayer's project structure.

Note that fanplayer requires compiling the FFmpeg source in a Linux environment to generate the dependency libraries. The author provides the build scripts, but you still have to compile FFmpeg yourself and integrate the result into the project.

If you would rather skip that hassle, I have already done the compilation and integration, so you can pull the project code directly:

Project code: github.com/githubhaoha…

Next, I will briefly walk through the fanplayer source code for reference. The Java layer is relatively simple: it mainly passes the Surface of a SurfaceView down to native code, where the ANativeWindow is built. Here I will focus on the C implementation.

The JNI entry points are defined in fanplayer_jni.cpp, which registers several APIs commonly used by players:

static const JNINativeMethod g_methods[] = {
    { "nativeOpen"              , "(Ljava/lang/String; Ljava/lang/Object; IILjava/lang/String;) J", (void*)nativeOpen },
    { "nativeClose"             , "(J)V"  , (void*)nativeClose    },
    { "nativePlay"              , "(J)V"  , (void*)nativePlay     },
    { "nativePause"             , "(J)V"  , (void*)nativePause    },
    { "nativeSeek"              , "(JJ)V" , (void*)nativeSeek     },
    { "nativeSetParam"          , "(JIJ)V", (void*)nativeSetParam },
    { "nativeGetParam"          , "(JI)J" , (void*)nativeGetParam },
    { "nativeSetDisplaySurface" , "(JLjava/lang/Object;) V", (void*)nativeSetDisplaySurface },
};

JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void* reserved)
{
    DO_USE_VAR(reserved);

    JNIEnv* env = NULL;
    if (vm->GetEnv((void**)&env, JNI_VERSION_1_4) != JNI_OK || !env) {
        __android_log_print(ANDROID_LOG_ERROR, "fanplayer_jni", "ERROR: GetEnv failed\n");
        return -1;
    }

    jclass cls = env->FindClass("com/rockcarry/fanplayer/MediaPlayer");
    int ret = env->RegisterNatives(cls, g_methods, sizeof(g_methods)/sizeof(g_methods[0]));
    if (ret != JNI_OK) {
        __android_log_print(ANDROID_LOG_ERROR, "fanplayer_jni", "ERROR: failed to register native methods!\n");
        return -1;
    }

    // cache the JavaVM and hand it to FFmpeg (needed by the MediaCodec hwaccel to access JNI)
    g_jvm = vm;
    av_jni_set_java_vm(vm, NULL);
    return JNI_VERSION_1_4;
}
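
To see how the Surface travels from the Java layer down to the video device, here is a simplified sketch of what a nativeSetDisplaySurface-style entry point does. It assumes a player_setparam-style setter; the parameter id and the exact plumbing in fanplayer differ, so treat this as illustrative only:

static void nativeSetDisplaySurface(JNIEnv *env, jobject obj, jlong hplayer, jobject surface)
{
    PLAYER *player = (PLAYER*)hplayer; // handle returned by nativeOpen
    // hand the Surface to the video device; it rebuilds its ANativeWindow
    // on the next frame (see vdev_android_lock further below)
    player_setparam(player, PARAM_RENDER_SET_WINDOW, surface); // parameter id is hypothetical
}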

Next comes the ffplayer.c file, which encapsulates the entire player. It contains three sub-modules, demultiplexing, video decoding and audio decoding, each running in its own child thread:

// Function implementation
void* player_open(char *file, void *win, PLAYER_INIT_PARAMS *params)
{
    PLAYER *player = NULL;

    // ... code omitted

    pthread_create(&player->avdemux_thread, NULL, av_demux_thread_proc, player);
    pthread_create(&player->adecode_thread, NULL, audio_decode_thread_proc, player);
    pthread_create(&player->vdecode_thread, NULL, video_decode_thread_proc, player);
    return player;

error_handler:
    player_close(player);
    return NULL;
}
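
Inside player_open (the omitted code), the AVFormatContext is opened and the audio/video stream indexes are located before the three threads start. The demux thread is the producer: it loops over av_read_frame and dispatches each packet by stream index. Below is a minimal sketch under those assumptions; the field names (close_requested, astream_index, aqueue, ...) and the pktqueue_* helpers (sketched after the next paragraph) are illustrative, not fanplayer's actual API:

static void* av_demux_thread_proc(void *param)
{
    PLAYER *player = (PLAYER*)param;
    while (!player->close_requested) { // illustrative exit flag
        AVPacket *pkt = av_packet_alloc();
        if (av_read_frame(player->avformat_context, pkt) < 0) { // EOF or read error
            av_packet_free(&pkt);
            break;
        }
        if      (pkt->stream_index == player->astream_index) pktqueue_push(player->aqueue, pkt);
        else if (pkt->stream_index == player->vstream_index) pktqueue_push(player->vqueue, pkt);
        else    av_packet_free(&pkt); // packet from a stream we do not play
    }
    return NULL;
}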

The demultiplexing, video decoding and audio decoding threads communicate through packet queues, following the classic producer-consumer model.
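
fanplayer ships its own packet queue implementation; the sketch below only shows the core of such a producer-consumer queue built on a pthread mutex and condition variable. The names and the fixed capacity are illustrative:

#define PKTQUEUE_CAP 256

typedef struct {
    AVPacket       *pkts[PKTQUEUE_CAP]; // ring buffer of packet pointers
    int             head, tail, size;
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} PKTQUEUE;

// producer side, called from the demux thread; a full-queue wait is omitted for brevity
static void pktqueue_push(PKTQUEUE *q, AVPacket *pkt)
{
    pthread_mutex_lock(&q->lock);
    q->pkts[q->tail] = pkt;
    q->tail = (q->tail + 1) % PKTQUEUE_CAP;
    q->size++;
    pthread_cond_signal(&q->cond); // wake a waiting decode thread
    pthread_mutex_unlock(&q->lock);
}

// consumer side, called from the decode threads: block until a packet arrives
static AVPacket* pktqueue_pop(PKTQUEUE *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->size == 0)
        pthread_cond_wait(&q->cond, &q->lock);
    AVPacket *pkt = q->pkts[q->head];
    q->head = (q->head + 1) % PKTQUEUE_CAP;
    q->size--;
    pthread_mutex_unlock(&q->lock);
    return pkt;
}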

Audio playback is implemented in adev-android.cpp: it creates an AudioTrack object through JNI and starts a child thread that continuously reads PCM data from a queue (a linked list) and writes it to the AudioTrack:

// Interface function implementation
void* adev_create(int type, int bufnum, int buflen, CMNVARS *cmnvars)
{
    // ... code omitted

    jclass jcls         = env->FindClass("android/media/AudioTrack");
    ctxt->jmid_at_init  = env->GetMethodID(jcls, "<init>" , "(IIIIII)V");
    ctxt->jmid_at_close = env->GetMethodID(jcls, "release", "()V");
    ctxt->jmid_at_play  = env->GetMethodID(jcls, "play"   , "()V");
    ctxt->jmid_at_pause = env->GetMethodID(jcls, "pause"  , "()V");
    ctxt->jmid_at_write = env->GetMethodID(jcls, "write"  , "([BII)I");

    // new AudioTrack
    #define STREAM_MUSIC        3
    #define ENCODING_PCM_16BIT  2
    #define CHANNEL_STEREO      3
    #define MODE_STREAM         1
    jobject at_obj = env->NewObject(jcls, ctxt->jmid_at_init, STREAM_MUSIC, ADEV_SAMPLE_RATE, CHANNEL_STEREO, ENCODING_PCM_16BIT, ctxt->buflen * 2, MODE_STREAM);
    ctxt->jobj_at  = env->NewGlobalRef(at_obj);
    env->DeleteLocalRef(at_obj);

    // start audiotrack
    env->CallVoidMethod(ctxt->jobj_at, ctxt->jmid_at_play);

    // create mutex & cond
    pthread_mutex_init(&ctxt->lock, NULL);
    pthread_cond_init (&ctxt->cond, NULL);

    // create audio rendering thread
    pthread_create(&ctxt->thread, NULL, audio_render_thread_proc, ctxt);
    return ctxt;
}
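
The child thread created at the end, audio_render_thread_proc, is the consumer side: it waits for PCM buffers and writes them to the AudioTrack through the cached jmid_at_write method id. Below is a simplified sketch of that loop; the buffer-list fields (bufs, lens, head, cursize, bexit) are illustrative, and the real adev-android.cpp also handles pause, flushing and pts bookkeeping:

static void* audio_render_thread_proc(void *param)
{
    ADEV_CONTEXT *ctxt = (ADEV_CONTEXT*)param;
    JNIEnv *env = get_jni_env(); // attach this native thread to the JVM

    while (!ctxt->bexit) {
        pthread_mutex_lock(&ctxt->lock);
        while (ctxt->cursize == 0 && !ctxt->bexit)
            pthread_cond_wait(&ctxt->cond, &ctxt->lock); // wait for queued PCM
        if (ctxt->bexit) { pthread_mutex_unlock(&ctxt->lock); break; }
        jbyteArray pcm = ctxt->bufs[ctxt->head]; // take the buffer at the head
        int        len = ctxt->lens[ctxt->head];
        ctxt->head = (ctxt->head + 1) % ctxt->bufnum;
        ctxt->cursize--;
        pthread_mutex_unlock(&ctxt->lock);

        // AudioTrack.write(byte[], int, int) blocks, which paces the playback
        env->CallIntMethod(ctxt->jobj_at, ctxt->jmid_at_write, pcm, 0, len);
    }
    return NULL;
}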

On the video side, render_video in ffrender.c calls vdev_android_lock in vdev-android.cpp to lock the ANativeWindow buffer that the decoded frame will be written into:

static void vdev_android_lock(void *ctxt, uint8_t *buffer[8], int linesize[8], int64_t pts)
{
    VDEVCTXT *c = (VDEVCTXT*)ctxt;
    if (c->status & VDEV_ANDROID_UPDATE_WIN) {
        if (c->win    ) { ANativeWindow_release(c->win); c->win = NULL; }
        if (c->surface) c->win = ANativeWindow_fromSurface(get_jni_env(), (jobject)c->surface);
        if (c->win    ) ANativeWindow_setBuffersGeometry(c->win, c->vw, c->vh, DEF_WIN_PIX_FMT);
        c->status &= ~VDEV_ANDROID_UPDATE_WIN;
    }
    if (c->win) {
        ANativeWindow_Buffer winbuf;
        if (0 == ANativeWindow_lock(c->win, &winbuf, NULL)) {
            buffer  [0] = (uint8_t*)winbuf.bits;
            linesize[0] = winbuf.stride * 4;
            linesize[6] = c->vw;
            linesize[7] = c->vh;
        }
    }
    c->cmnvars->vpts = pts;
}
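
Note that vdev_android_lock only exposes the window's backing store through buffer[0] and linesize[0]; after the decoder has written the frame into it, a matching unlock must post the buffer to the display and drive the synchronization logic. A minimal sketch of that counterpart (the real vdev-android.cpp may do extra bookkeeping):

static void vdev_android_unlock(void *ctxt)
{
    VDEVCTXT *c = (VDEVCTXT*)ctxt;
    if (c->win) ANativeWindow_unlockAndPost(c->win); // display the rendered frame
    vdev_avsync_and_complete(ctxt); // frame rate & a/v sync control, shown next
}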

Finally, audio and video synchronization is implemented in vdev_avsync_and_complete in vdev-cmn.c:

void vdev_avsync_and_complete(void *ctxt)
{
    LOGCATE("vdev_avsync_and_complete");
    VDEV_COMMON_CTXT *c = (VDEV_COMMON_CTXT*)ctxt;
    int     tickframe, tickdiff, scdiff, avdiff = -1;
    int64_t tickcur, sysclock;

    if (!(c->status & VDEV_PAUSE)) {
        //++ frame rate & av sync control ++//
        tickframe   = 100 * c->tickframe / c->speed; // c->speed is 100 by default
        tickcur     = av_gettime_relative() / 1000;  // current system time in ms
        tickdiff    = (int)(tickcur - c->ticklast);  // actual interval between two rendered frames
        c->ticklast = tickcur;

        // (tickcur - c->cmnvars->start_tick): elapsed playback time on the system clock, in ms
        sysclock= c->cmnvars->start_pts + (tickcur - c->cmnvars->start_tick) * c->speed / 100;
        scdiff  = (int)(sysclock - c->cmnvars->vpts - c->tickavdiff); // diff between system clock and video pts
        avdiff  = (int)(c->cmnvars->apts  - c->cmnvars->vpts - c->tickavdiff); // diff between audio and video pts
        avdiff  = c->cmnvars->apts <= 0 ? scdiff : avdiff; // if apts is invalid, sync video to system clock

        // tickdiff is the measured render interval; tickframe is the theoretical interval derived from the frame rate
        if (tickdiff - tickframe >  5) c->ticksleep--;
        if (tickdiff - tickframe < -5) c->ticksleep++;
        if (c->cmnvars->vpts >= 0) {
            if      (avdiff >  500) c->ticksleep -= 3;
            else if (avdiff >  50 ) c->ticksleep -= 2;
            else if (avdiff >  30 ) c->ticksleep -= 1;
            else if (avdiff < -500) c->ticksleep += 3;
            else if (avdiff < -50 ) c->ticksleep += 2;
            else if (avdiff < -30 ) c->ticksleep += 1;
        }
        if (c->ticksleep < 0) c->ticksleep = 0;
        LOGCATE("vdev_avsync_and_complete tickdiff=%d, tickframe=%d, c->ticksleep=%d", tickdiff, tickframe, c->ticksleep);
        //-- frame rate & av sync control --//
    } else {
        c->ticksleep = c->tickframe;
    }

    if (c->ticksleep > 0 && c->cmnvars->init_params->avts_syncmode != AVSYNC_MODE_LIVE_SYNC0) av_usleep(c->ticksleep * 1000);
    av_log(NULL, AV_LOG_INFO, "d: %3d, s: %3d\n", avdiff, c->ticksleep);
}
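
To put numbers on this: for a 25 fps video, c->tickframe is presumably the nominal frame interval, 40 ms, so at the default speed of 100 the target tickframe is also 40 ms. If audio runs 60 ms ahead of video (avdiff = 60), ticksleep shrinks by 2 ms on every frame, so frames are displayed slightly faster until the gap closes; if video runs ahead (say avdiff = -60), ticksleep grows by 2 ms per frame and video slows down. Independently, the tickdiff vs. tickframe comparison nudges ticksleep so that the measured render interval converges on the 40 ms target, absorbing the time spent decoding and drawing each frame.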

Those are the core code snippets of the fanplayer project.

Contact

Add the author's WeChat, byte-flow, to join the technical discussion group.