Objective of this article

What does it take to build a live-streaming player from scratch, and which technologies do you need to learn? Take a look at the final result first:

The knowledge involved in this article

  • cross-compilation
  • JNI
  • FFmpeg

How do you compile a C/C++ source file?

For example, here is test.c:

#include <stdio.h>

int main() {
    printf("execute success\n");
    return 0;
}

To get the executable on a Mac, simply run the following command:

gcc test.c -o test

After executing the above command, you will find a new executable file named test.

# Run the executable; it prints "execute success"
./test 

The compiler

What is the gcc we used above?

It is a C/C++ compiler that can compile source files into executables or into library files (.so, .a, etc.).

It has some common commands: 15 common GCC commands

There are other compilers besides GCC, such as:

  • g++
  • clang

Their use is similar to GCC, such as using clang to compile an executable as follows:

clang test.c -o test

cross-compilation

A quick question: Can the test executable run on Android?

You can try pushing the test executable to an Android phone and running it:

# The /data/local/tmp folder does not require root permission to execute files
adb push test /data/local/tmp
adb shell
cd /data/local/tmp
./test

As a result, you will see an error.

What if we switch to the NDK toolchain?

Replace gcc with the clang shipped in the NDK, for example:

/Users/wangzhen/Library/Android/sdk/android-ndk-r20b/toolchains/llvm/prebuilt/darwin-x86_64/bin/aarch64-linux-android29-clang test.c -o test_android

This time, pushing test_android to the Android device and running it succeeds.

What is cross-compilation

Compiling, on one platform, a binary that can be executed on another platform is called cross-compilation, such as compiling libraries on macOS that will run on Android.

Cross-compiling for Android

If you want to compile libraries that run on the Android platform, you need to use the NDK.

So in how many ways can you cross-compile for Android? First we need to know which source files are usually involved in cross-compilation and what the main process looks like.

Abstract the main flow of compilation

The files involved in cross-compilation are usually organized as follows: headers, libraries, and your own source files:

├── include
│   └── util.h
├── lib
│   ├── libutil.a
│   └── libextra.so
└── src
    ├── main.cpp
    └── extra.cpp

What we need to do can be abstracted into the following flow:

The following three compilations follow this process.

Manually compile

Compile source files directly into an executable: gcc lib_source.c source.c -o source

Compile a source file into a dynamic library (.so): gcc -fPIC -shared extra_source2.c -o libextrasource2.so

Compile multiple source files that reference a third-party .so: gcc extra_source1.c source.c -o source -L. -lextrasource2

-L sets the library search path for the linker; "." means the current directory

-l specifies the library to link against, e.g. extrasource2

.so naming: the file must be named libxxx.so; the actual file linked above as extrasource2 is libextrasource2.so

GCC parameter order problem:

Wrong order: gcc -L. -lextrasource2 extra_source1.c source.c -o source

Right order: gcc extra_source1.c source.c -o source -L. -lextrasource2

More questions about order can be found in StackOverflow
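
For reference, the commands above assume a handful of source files that are not shown in this article; a minimal, purely illustrative sketch of two of them could look like this (names and contents are invented for illustration):

/* extra_source2.c: compiled into libextrasource2.so (illustrative) */
#include <stdio.h>

void extra_print(void) {
    printf("hello from libextrasource2\n");
}

/* source.c: links against libextrasource2.so (illustrative) */
void extra_print(void);   /* would normally come from a header */

int main(void) {
    extra_print();
    return 0;
}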

Makefile

A Makefile enables "automated compilation": a project can contain countless source files, placed in different directories by type, function, and module. A Makefile defines a set of rules that specify which files are compiled first, which are compiled later, how they are linked, and so on. On Android, the Android.mk file is used to configure the Makefile build. Here is the simplest Android.mk:

#The location of the source file. The my-dir macro returns the path to the current directory (including the directory of the Android.mk file itself).
LOCAL_PATH := $(call my-dir)

#Import additional makefiles. The CLEAR_VARS variable points to a special GNU Makefile that clears many LOCAL_XXX variables for you
#The LOCAL_PATH variable will not be cleaned
include $(CLEAR_VARS)

#Specify the module name. If the module name already starts with lib, the build system does not prepend an extra lib prefix; instead it takes the name as is and appends the .so extension.
LOCAL_MODULE := hello
#The list of C and/or C++ source files to build into this module, separated by spaces
LOCAL_SRC_FILES := hello.c
#Building dynamic libraries
include $(BUILD_SHARED_LIBRARY)
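
The hello.c referenced above is not shown in the original; as a hedged sketch, since the module is built into libhello.so for an Android app, it could be a simple JNI function like the one below (the Java class and method names are invented for illustration):

#include <jni.h>

/* Illustrative only: the Java side would declare
 *   public native String helloFromNative();
 * in a hypothetical class com.example.hello.MainActivity. */
JNIEXPORT jstring JNICALL
Java_com_example_hello_MainActivity_helloFromNative(JNIEnv *env, jobject thiz) {
    return (*env)->NewStringUTF(env, "hello from libhello.so");
}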

Writing makefiles by hand becomes tedious when the project is very large and the directory structure is complex: you need to write different makefiles in different directories, and they can grow very large.

Google now recommends that developers use CMake instead of makefiles for cross-compilation.

CMake

CMake is a cross-platform build tool that describes installation (compilation) for all platforms in simple statements.

CMake does not build the final software directly; instead it generates build scripts for other tools (such as Makefiles), which are then used to build in that tool's own way.

Android Studio uses CMake to generate Ninja files; Ninja is a small build system focused on speed. We don't need to care about the Ninja scripts themselves; we only need to know how to configure CMake.

CMake uses a CMakeLists.txt file to describe the configuration. The simplest CMakeLists.txt looks like this:

#Set the minimum supported version of cmake
cmake_minimum_required(VERSION 3.4.1)

#Create a library
add_library( # library name, e.g. this will generate libnative-lib.so
             native-lib
             # set whether SHARED or STATIC
             SHARED
             # set the relative path of the source file
             native-lib.cpp )

#Search for a prebuilt library and store its path in a variable.
#There are already some prebuilt libraries in the NDK (e.g. log), and the NDK libraries are already configured
#as part of the cmake search path, so you could also write log directly in target_link_libraries without this step.
find_library( # set the path variable name
              log-lib
              # specify the NDK library name
              log )

#Specifies the libraries that CMake should link to the target library. You can link to multiple libraries,
#such as libraries built by this script, prebuilt third-party libraries, or system libraries.
target_link_libraries( # Specifies the target library.
                       native-lib
                       ${log-lib} )

See this blog post for more information about the CMake compilation process

Cross-compile FFmpeg for the Android platform

FFmpeg supports Makefile-based compilation, so let's start compiling it.

Download the FFmpeg source from the official site and unzip it; inside you will find a configure file:

Configure script

This file is a shell script used to generate the .mk build files. It provides a wealth of configuration options; the commonly used ones are listed below.

Toolchain options

These options are toolchain specific (and usually required):

Option                     Description
--arch=ARCH                Target architecture, such as arm
--cpu=CPU                  Target CPU, such as armv7-a
--target-os=OS             Target platform, such as android or ios
--enable-cross-compile     Enable cross-compilation
--cross-prefix=PREFIX      Location/prefix of the cross-compilation toolchain
--sysroot=PATH             Path to the headers and libraries to link against; on Android this is found under the NDK
--cc=CC                    C compiler, such as clang
--cxx=CXX                  C++ compiler
--extra-cflags=ECFLAGS     Extra flags passed to the compiler, e.g. telling it where to find header/library files

Configuration options

Option               Description
--disable-static     Do not build static libraries [static libraries are built by default]
--enable-shared      Build shared (dynamic) libraries [off by default]
--enable-small       Optimize for size instead of speed

Component options

Here you can select the features you need and turn off the ones you don’t, thus reducing the library file size.

Option                 Description
--disable-avdevice     avdevice operates capture devices such as cameras; Android does not support this, so it can be disabled
--disable-avfilter     Disable filters such as watermarks and subtitles

Individual component options

Option                   Description
--disable-encoders       Disable encoders; playback only needs decoding, not encoding
--enable-decoder=NAME    Enable a specific decoder
--enable-filter=NAME     Enable a specific filter
--disable-filter=NAME    Disable a specific filter
--disable-filters        Disable all filters
--disable-muxers         Disable muxers, i.e. no packaging (such as combining video + audio into MP4); playback only needs demuxing

Compilation steps

According to the official documentation, FFmpeg is compiled in three main steps:

  • Configure with ./configure

  • Type make to build FFmpeg

  • Type make install to install all the binaries and libraries that were built

The ./configure configuration includes the following:

  • Cross-compile toolchain related configuration, which is a must
  • Select the required modules, and precise configuration can reduce the size of the compiled product
  • Set the codec algorithm

Write a script

#!/bin/bash
#Clear the last compilation
make clean
echo ">>>>>>>>> start compiling <<<<<<<<<"
echo ">>>>>>>>> NDK version: r20b <<<<<<<<<"
#The directory of your NDK
export NDK=/Users/wangzhen/Library/Android/sdk/android-ndk-r20b
#Directory of the NDK tool chain
TOOLCHAIN=$NDK/toolchains/llvm/prebuilt/darwin-x86_64
#The android API version
API=29

function build_android
{
echo "Compiling FFmpeg for $CPU ..."
./configure \
    --prefix=$PREFIX \
    --enable-small \
    --enable-neon \
    --enable-hwaccels \
    --enable-gpl \
    --disable-postproc \
    --enable-jni \
    --disable-doc \
    --enable-ffmpeg \
    --disable-muxers \
    --disable-ffplay \
    --disable-ffprobe \
    --disable-avdevice \
    --disable-doc \
    --disable-symver \
    --disable-filters \
    --cross-prefix=$CROSS_PREFIX \
    --target-os=android \
    --arch=$ARCH \
    --cpu=$CPU \
    --cc=$CC \
    --cxx=$CXX \
    --enable-cross-compile \
    --sysroot=$SYSROOT \
    --extra-cflags="-Os -fpic $OPTIMIZE_CFLAGS"
    
make clean
make 
make install
}

#Build the armv8-a version
ARCH=arm64
CPU=armv8-a
CC=$TOOLCHAIN/bin/aarch64-linux-android$API-clang
CXX=$TOOLCHAIN/bin/aarch64-linux-android$API-clang++
SYSROOT=$NDK/toolchains/llvm/prebuilt/darwin-x86_64/sysroot
CROSS_PREFIX=$TOOLCHAIN/bin/aarch64-linux-android-
#Save path of compiled product
PREFIX=$(pwd)/android_arm_static/$CPU
OPTIMIZE_CFLAGS="-march=$CPU"
build_android

# #armv7-a
# ARCH=arm
# CPU=armv7-a
# CC=$TOOLCHAIN/bin/armv7a-linux-androideabi$API-clang
# CXX=$TOOLCHAIN/bin/armv7a-linux-androideabi$API-clang++
# SYSROOT=$NDK/toolchains/llvm/prebuilt/darwin-x86_64/sysroot
# CROSS_PREFIX=$TOOLCHAIN/bin/arm-linux-androideabi-
# PREFIX=$(pwd)/android_arm_static/$CPU
# OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=vfp -marm -march=$CPU "
# build_android

# #x86
# ARCH=x86
# CPU=x86
# CC=$TOOLCHAIN/bin/i686-linux-android$API-clang
# CXX=$TOOLCHAIN/bin/i686-linux-android$API-clang++
# SYSROOT=$NDK/toolchains/llvm/prebuilt/darwin-x86_64/sysroot
# CROSS_PREFIX=$TOOLCHAIN/bin/i686-linux-android-
# PREFIX=$(pwd)/android/$CPU
# OPTIMIZE_CFLAGS="-march=i686 -mtune=intel -mssse3 -mfpmath=sse -m32"
# build_android

Compile the product

The directory for the compiled products is as follows:

It consists of two parts: the include directory (header files) and the lib directory (library files). The role of each library is as follows:

libavformat

Used to generate and parse various audio/video container formats, including obtaining the decoding information needed to build the decoder context and reading audio/video frames. It handles the container and protocol layer of audio/video and provides independent audio or video streams for libavcodec to analyze.

libavcodec

Used for encoding and decoding audio and video. libavcodec is the core of audio/video codecs and implements most of the decoders found on the market; it is included or used by other major players and decoders such as ffdshow and MPlayer.

libavfilter

Audio/video filter development (e.g. fileIO, fps, drawtext), used for effects such as watermarks and speed changes.

libavutil

A library of common utility functions, including arithmetic and string operations.

libswresample

Raw audio resampling and format conversion.

libswscale

Raw video format conversion: video scaling and color mapping, i.e. image color-space or pixel-format conversion, e.g. between RGB565, RGB888 and YUV420.

Integrate FFmpeg with CMake

Copy the compiled product into the project

  • Copy header file

  • Copy library files

Configure CMakeLists.txt

To integrate FFmpeg via CMake in our Android project, CMakeLists.txt is configured as follows:

cmake_minimum_required(VERSION 3.4.1)

message("${CMAKE_SOURCE_DIR} \n CPU architecture: ${CMAKE_ANDROID_ARCH_ABI}")

# Collect all .cpp files in this folder into the SOURCE variable (new source files are picked up without listing them one by one)
file(GLOB SOURCE ./*.cpp )

add_library( # project name
        native-lib
        # Build as a dynamic library
        SHARED
        # Import all source files collected in SOURCE
        ${SOURCE}
        )

find_library( # Sets the name of the path variable.
        log-lib

        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log)

# import FFmpeg header files
include_directories(${CMAKE_SOURCE_DIR}/include)

# Add a library search path to cmake so that all the prebuilt static libraries can be found
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -L${CMAKE_SOURCE_DIR}/${CMAKE_ANDROID_ARCH_ABI}")

target_link_libraries( # Specifies the target library.
        native-lib
        # Add the compiled FFmpeg library file
        avformat
        avcodec
        avfilter
        avutil
        swresample
        swscale
        # Other libraries to link
        ${log-lib}
        z
        android
        OpenSLES)

Introduction to audio and video principles

The composition of a video

Image + Audio

  • Original image data (RGB/YUV)
  • Raw Audio Data (PCM)

Why compress?

Digital audio

Question: What is mp3?

A lossy compression format, with a compression ratio of roughly 4 to 12

Digital audio generation

Sampling rate: how many samples are taken per second. The human ear hears roughly 20 Hz to 20 kHz, so according to the sampling theorem the recommended value is 44.1 kHz.

Quantization: how many bits are used to represent each sample, e.g. 16 or 32 bits.

Channels: how many separate audio signals there are, e.g. mono or stereo.

PCM size

Bit rate: 44100 * 16 * 2 = 1,411,200 bit/s = 1378.125 kbps

Space required for 1 minute audio = 1378.125 x 60/8/1024 = 10.09 MB

As you can see, one minute of raw audio takes about 10 MB, so it has to be compressed.
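
The calculation above, written out as code (just a sketch of the arithmetic, using the 44.1 kHz / 16 bit / stereo parameters from this section):

#include <stdio.h>

int main(void) {
    int sample_rate = 44100;  // samples per second
    int bit_depth = 16;       // bits per sample
    int channels = 2;         // stereo

    double bitrate_kbps = sample_rate * bit_depth * channels / 1024.0;  // 1378.125 kbps
    double one_minute_mb = bitrate_kbps * 60 / 8 / 1024;                // about 10.09 MB

    printf("bit rate: %.3f kbps, 1 minute of PCM: %.2f MB\n", bitrate_kbps, one_minute_mb);
    return 0;
}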

image

Graphical representation
  • RGB

    1280 * 720 RGBA_8888 image size:

    1280 * 720 * 4 = 3.516 MB

  • YUV

    YUV allows for reduced chromaticity bandwidth, taking human perception into account when encoding photos or movies.

    “Y” denotes Luminance, Luma, and “U” and “V” denotes Chrominance, Chroma.

1280 * 720 YUV420 video frame size:

1280 * 720 * 1 + 1280 * 720 * 0.5 = 1.318 MB

Video image size

1.318 MB * 60 fps * 90 min * 60 s/min ≈ 417 GB
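
The same arithmetic for raw frames and for 90 minutes of raw video, as a sketch that reproduces the numbers above:

#include <stdio.h>

int main(void) {
    const double MB = 1024.0 * 1024.0;
    int w = 1280, h = 720;

    double rgba_frame_mb = w * h * 4 / MB;            // 4 bytes per pixel  -> ~3.516 MB
    double yuv420_frame_mb = w * h * 1.5 / MB;        // 1.5 bytes per pixel -> ~1.318 MB
    double movie_gb = yuv420_frame_mb * 60 * 90 * 60 / 1024;  // 60 fps, 90 minutes -> ~417 GB

    printf("RGBA frame: %.3f MB, YUV420 frame: %.3f MB, 90 min of raw video: %.0f GB\n",
           rgba_frame_mb, yuv420_frame_mb, movie_gb);
    return 0;
}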

Build a simple live broadcast player

The live streaming process

Live streaming is divided into pushing (publishing) and pulling (playing) the stream. When building a player, we only need to consider pulling. The main steps of pulling a stream are:

  • Demuxing: unpack the live stream and split it into a video stream and an audio stream
  • Decode the video stream, convert its format, and finally render it
  • Decode the audio stream, convert its format, and finally play it

Pull-stream playback sequence diagram, taking the video stream as an example

The main class responsibilities involved in the sequence diagram are as follows:

  • MediaManager
    • Initializes FFmpeg and sets the live stream source
    • Demuxing (shown in green)
      • The essence of demuxing is to start a child thread running a loop that continuously pulls data from the live stream and unpacks it
      • The FFmpeg library used for demuxing is avformat
  • VideoChannel
    • Used to parse the video stream; it has four main members:
      • AVPackageQueue: a queue of AVPackages; an AVPackage is an encoded packet parsed out of the live stream
      • DecodeThread: the decoding thread, shown in blue, using FFmpeg's avcodec library
      • AVFrameQueue: a queue of AVFrames, i.e. the decoded output of AVPackages
      • RenderThread: the rendering thread, shown in purple, responsible for rendering AVFrame data into the SurfaceView using FFmpeg's swscale library

A brief description of the sequence diagram:

  • MediaManager opens a looper that keeps pulling encoded packets (AVPackage) from the live stream and depositing them into AVPackageQueue.
  • DecodeThread opens a looper that keeps taking data from AVPackageQueue, decoding it into raw frames (AVFrame), and depositing the decoded AVFrames into AVFrameQueue.
  • RenderThread opens a looper that keeps taking AVFrames from AVFrameQueue, converting their format, and finally displaying them in the SurfaceView.

In essence, the above process consists of two producer-consumer pairs.
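
The two queues (AVPackageQueue and AVFrameQueue) are thread-safe blocking queues sitting between a producer and a consumer. Their real implementation is not shown in this article; a minimal sketch of such a queue (simplified: the real one also needs to support stopping and clearing when playback ends) might look like this:

#include <queue>
#include <mutex>
#include <condition_variable>

// A minimal thread-safe queue: producers push(), consumers pop() and block until data arrives.
template<typename T>
class SafeQueue {
public:
    void push(const T &value) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(value);
        cond_.notify_one();          // wake up one waiting consumer
    }

    // Blocks until an element is available, then copies it into `out`.
    void pop(T &out) {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        out = queue_.front();
        queue_.pop();
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};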

Setting the live stream source

The video source chosen here is China Central Television (CCTV) 1: ivi.bupt.edu.cn/hls/cctv1hd…

Open the live stream

    // ffmpeg initializes the network
    avformat_network_init();
    // 1. Open the multimedia stream (local file / network address)
    formatContext = nullptr;
    int result = avformat_open_input(&formatContext, dataSource, nullptr, nullptr);
    if (result != 0) {
        LOGE("Failed to open media: %s", av_err2str(result));
        mediaBridge->onError(THREAD_CHILD, result);
        return;
    }

decapsulation

What demuxing involves:

  • Iterate over all the streams contained in the live stream; a live stream contains at least a video stream and an audio stream

  • For each stream, parse its decoder information and configure an FFmpeg decoder with those parameters
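
Between opening the input (step 1 above) and iterating the streams (step 3 below), the stream information is normally probed first. Roughly, and reusing the same error handling helpers (LOGE, mediaBridge) as in the snippet above, that intermediate step 2 could look like this:

    // 2. Probe the streams in the media so that formatContext->streams gets filled in
    result = avformat_find_stream_info(formatContext, nullptr);
    if (result < 0) {
        LOGE("Failed to find stream info: %s", av_err2str(result));
        mediaBridge->onError(THREAD_CHILD, result);
        return;
    }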

    // 3. Process the contained streams, traversing them
    for (int i = 0; i < formatContext->nb_streams; ++i) {
        AVStream *stream = formatContext->streams[i];
        // Decoding information about this stream
        AVCodecParameters *codecpar = stream->codecpar;
        // Find the decoder for the stream
        AVCodec *decoder = avcodec_find_decoder(codecpar->codec_id);
        ...
        // Get the decoder context
        AVCodecContext *decoderContext = avcodec_alloc_context3(decoder);
        ...
        // Set parameters for the decoder context
        result = avcodec_parameters_to_context(decoderContext, codecpar);
        ...
        // Open the decoder
        result = avcodec_open2(decoderContext, decoder, nullptr);
        ...
        AVRational time_base = stream->time_base;
        // If it is an audio stream
        if (codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audioChannel = new AudioChannel(i, decoderContext, time_base);
        }
        // If it is a video stream
        else if (codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            // Frame rate: how many images need to be displayed per unit time
            AVRational frame_rate = stream->avg_frame_rate;
            int fps = av_q2d(frame_rate);
            videoChannel = new VideoChannel(i, decoderContext, time_base, fps);
            videoChannel->setRenderFrameCallback(callback);
        }
        ...
    }

decoding

Decode an AVPacket into an AVFrame:

void VideoChannel::decode() {
    AVPacket *packet = nullptr;
    while (isPlaying) {
        // Retrieve a packet from the queue
        int result = avPacketQueue.pop(packet);
        if (!isPlaying) {
            break;
        }
        // Failed to fetch
        if (!result) {
            continue;
        }
        // Hand the packet to the decoder
        result = avcodec_send_packet(avCodecContext, packet);
        // The packet can be released once the decoder has taken it
        releaseAvPacket(&packet);
        // try again
        if (result != 0) {
            break;
        }
        // Represents an image (the decoder's output)
        AVFrame *frame = av_frame_alloc();
        // Read a decoded frame (AVFrame) from the decoder
        result = avcodec_receive_frame(avCodecContext, frame);
        // The decoder needs more data before it can output a frame
        if (result == AVERROR(EAGAIN)) {
            continue;
        } else if (result != 0) {
            break;
        }
        // Another thread plays the frames (for smoothness)
        avFrameQueue.push(frame);
    }
    releaseAvPacket(&packet);
}

Rendering

Rendering does two main things:

  • Format conversion: YUV -> RGB in AVFrame
  • Hand RGB data to SurfaceView for display

Format conversion

// Convert the image to RGBA format
// The conversion uses the swscale library. First create a swscale context
swsContext = sws_getContext(
        avCodecContext->width, avCodecContext->height, avCodecContext->pix_fmt,
        avCodecContext->width, avCodecContext->height, AV_PIX_FMT_RGBA,
        SWS_BILINEAR, nullptr, nullptr, nullptr);
...
AVFrame *frame = nullptr;
// Array of pointers
uint8_t *dst_data[4];
int dst_linesize[4];
// sws_scale needs a destination buffer, so allocate a chunk of memory into which it will store the converted data
av_image_alloc(dst_data, dst_linesize, avCodecContext->width, avCodecContext->height, AV_PIX_FMT_RGBA, 1);
while (isPlaying) {
    int ret = avFrameQueue.pop(frame);
    if (!isPlaying) {
        break;
    }
    // frame->linesize: the number of bytes stored in each line of the source image
    sws_scale(swsContext, reinterpret_cast<const uint8_t *const *>(frame->data),
              frame->linesize, 0, avCodecContext->height, dst_data, dst_linesize);
    ...
    // Call back to render/play
    callback(dst_data[0], dst_linesize[0], avCodecContext->width, avCodecContext->height);
}

Image rendering

To render directly into the Java SurfaceView from the native side, we need ANativeWindow, a struct defined in C/C++ that is the native equivalent of the Java Surface.

  • Gets the ANativeWindow corresponding to the Surface

    ANativeWindow* ANativeWindow_fromSurface(JNIEnv* env, jobject surface);
  • Hold/release the reference to the ANativeWindow object

    void ANativeWindow_acquire(ANativeWindow* window);
    
    void ANativeWindow_release(ANativeWindow* window);
  • Write data to buffer and commit

    int32_t ANativeWindow_lock(ANativeWindow* window, ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds);
    // Between lock and unlockAndPost, the code can write data into the buffer
    int32_t ANativeWindow_unlockAndPost(ANativeWindow* window);
    
  • Get Window Surface information in width/height/pixel format

    int32_t ANativeWindow_getWidth(ANativeWindow* window); 
    int32_t ANativeWindow_getHeight(ANativeWindow* window); 
    int32_t ANativeWindow_getFormat(ANativeWindow* window);
  • Change the format and size of the Window Buffer

    int32_t ANativeWindow_setBuffersGeometry(ANativeWindow* window,
    int32_t width, int32_t height, int32_t format);

    See this blog for more information

The above callback function is defined as follows:

void render(uint8_t *data, int linesize, int w, int h) {
    pthread_mutex_lock(&mutex);
    if (!window) {
        pthread_mutex_unlock(&mutex);
        return;
    }
    // 2. Set the size and format of buffer
    ANativeWindow_setBuffersGeometry(window, w, h, WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer window_buffer;
    // 3. Populate data
    if (ANativeWindow_lock(window, &window_buffer, nullptr)) {
        ANativeWindow_release(window);
        window = 0;
        pthread_mutex_unlock(&mutex);
        return;
    }

    // Copy the RGBA data into dst_data
    auto dst_data = static_cast<uint8_t *>(window_buffer.bits);
    // Stride: number of pixels per row (RGBA) * 4 bytes
    int dst_linesize = window_buffer.stride * 4;
    // Copy line by line
    for (int i = 0; i < window_buffer.height; ++i) {
        memcpy(dst_data + i * dst_linesize, data + i * linesize, dst_linesize);
    }
    ANativeWindow_unlockAndPost(window);
    pthread_mutex_unlock(&mutex);
}

Fluency optimization

Image playback follows the producer-consumer model above: as soon as a frame is available, it is rendered. As you would expect, playback is not smooth, sometimes fast and sometimes slow. How do we optimize this?

The easiest way is to pace the RenderThread according to the FPS. For example, at 60 fps the RenderThread should show one image every 16 ms, so we simply make the RenderThread render one frame every 16 ms.

double frame_period = 1.0 / fps;
// The unit is microseconds
av_usleep(frame_period * 1000000);

FFmpeg provides parameters to calculate sleep time more accurately:

repeat_pict: indicates how much longer this frame must additionally be delayed

 double frame_period = 1.0 / fps; 
 // Extra interval time: How long the image must be delayed
 double extra_delay = frame->repeat_pict / (2 * fps); // NOLINT(bugprone-integer-division)
 // The interval that is really needed
 double delays = extra_delay + frame_period;
 av_usleep(delays * 1000000);

Parsing the audio

Sequence diagram

The sequence diagram for audio parsing is basically the same as the one for video.

Playing audio natively

OpenSL ES can be used to record and play audio on the native side, and it supports various sound effects.

For details, see GoogleNdkSample
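
As a rough sketch of what setting up OpenSL ES playback involves (initialization only; the buffer-queue callback that feeds decoded PCM is omitted, and error checks are skipped for brevity):

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

void initOpenSLES() {
    // 1. Create and realize the engine, then get the engine interface
    SLObjectItf engineObject = nullptr;
    slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr);
    (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    SLEngineItf engine = nullptr;
    (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engine);

    // 2. Create the output mix (the "speaker")
    SLObjectItf outputMixObject = nullptr;
    (*engine)->CreateOutputMix(engine, &outputMixObject, 0, nullptr, nullptr);
    (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);

    // 3. Describe the PCM source: a buffer queue with 44.1 kHz, 16-bit, stereo data
    SLDataLocator_AndroidSimpleBufferQueue locBufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM formatPcm = {SL_DATAFORMAT_PCM, 2, SL_SAMPLINGRATE_44_1,
                                  SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
                                  SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
                                  SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&locBufq, &formatPcm};
    SLDataLocator_OutputMix locOutmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&locOutmix, nullptr};

    // 4. Create the player and request the buffer-queue interface,
    //    which is later used to enqueue PCM buffers produced by the decoder
    SLObjectItf playerObject = nullptr;
    const SLInterfaceID ids[] = {SL_IID_BUFFERQUEUE};
    const SLboolean req[] = {SL_BOOLEAN_TRUE};
    (*engine)->CreateAudioPlayer(engine, &playerObject, &audioSrc, &audioSnk, 1, ids, req);
    (*playerObject)->Realize(playerObject, SL_BOOLEAN_FALSE);
}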

Audio and video synchronization

Let’s start with a question

Audio playback and video playback above run in two independent threads. What problems do you think will occur?

As you might expect, the sound and the picture go out of sync.

Three synchronization schemes

  • Use the video progress as the master clock
  • Use the audio progress as the master clock
  • Synchronize both audio and video to an external clock

Since the human ear is more sensitive than the eye and notices audio glitches more easily, the solution used here is based on the audio progress.

Audio progress based synchronization scheme

  • Video is faster than audio

    Increase the sleep time of the video thread

  • Video is slower than audio

    • Reduce the sleep time of the video
    • Active packet loss
      • Discard AVPackages; the I/P/B frame types must be taken into account
      • Discard AVFrames
// Estimate the timestamp of the frame (FFmpeg combines several heuristics)
double relativeTime = frame->best_effort_timestamp * av_q2d(time_base);
// Time difference between video and audio
double diff = relativeTime - audioChannel->relativeTime;
if (diff > 0) {
    // Video is faster than audio: sleep a little longer
    av_usleep((delays + diff) * 1000000);
} else {
    // Audio is faster than video
    // The video lags too far behind (frames pile up)
    if (fabs(diff) >= 0.05) {
        releaseAvFrame(&frame);
        // Drop packets to catch up
        frames.sync();
        continue;
    }
}

JNI main usage

There is too much to cover here; if you are interested, read: What you need to know about Android JNI development
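
As one hedged example of the kind of JNI glue such a player needs, passing the live-stream URL from Java to the native layer might look like this (class, method, and variable names are made up for illustration, not taken from the project above):

#include <jni.h>
#include <cstring>
#include <cstdlib>

// Java side (hypothetical):
//   public class NativePlayer { public native void nativeSetDataSource(String url); }
extern "C" JNIEXPORT void JNICALL
Java_com_example_player_NativePlayer_nativeSetDataSource(JNIEnv *env, jobject thiz, jstring url) {
    // Convert the Java string to a C string
    const char *src = env->GetStringUTFChars(url, nullptr);
    // Keep a copy, because the pointer is only valid until ReleaseStringUTFChars
    char *dataSource = strdup(src);
    env->ReleaseStringUTFChars(url, src);

    // ... pass dataSource to the native playback engine (e.g. avformat_open_input),
    // and free(dataSource) when it is no longer needed
    free(dataSource);  // freed immediately here only because this sketch does not start playback
}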

The source code

FFmpegTutorial