This article is published with Jianxi's exclusive authorization. Jianxi also works on Android multimedia development and walks the same road, but he focuses mainly on video compression: with FFmpeg you can do a great many things, yet few people pull them off well. Today let's look at what he has to share. Jianxi's blog is: http://blog.csdn.net/mabeijianxi.

Warm-up


Time flies. Two or three months have passed since I last bragged here; plenty of couples around me have broken up and gotten back together, gotten back together and broken up, while yours truly remains proudly single. Last time we talked about some simple FFmpeg commands and a simple way to call them from the Java layer. Afterwards many friends left me messages on GitHub or CSDN, and most of the time I chose to dodge them, because the packaged .so library was not open source and I could not change what was inside it. But this time we play big: I recompiled FFmpeg and rewrote the JNI interface functions, and everything from C to Java is fully open source. The 2.0 project took me two months of spare time and was finally finished today. Very exciting! This blog post will pour out everything the author has to say. Aren't you excited? Aren't you! In 3.0 I will try hardware encoding, or perhaps switch to H.265 encoding in a 2.0 iteration; that is a story for the future, but seeing WeChat change its pace from small videos to larger ones, it should be doable.

This article involves knowledge points:
  • Android video and audio capture

  • YUV video processing (manual cropping, rotation, mirroring, etc.)

  • PCM audio processing

  • Using the FFmpeg API to encode YUV to H.264 and PCM to AAC

  • FFmpeg encoder configuration

  • Practical application of JNI in engineering

  • Building and using an FFmpeg command-line tool on Android

  • Using the Android Studio CMake plugin in the project

Prerequisites:
  • At least you need to know what YUV, PCM, and MP4 are (see: zero-based learning methods for video and audio codec technology).

  • It's a good idea to first read FFmpeg for Android (compiling libx264 and libfdk-aac), FFmpeg for Android (compiling executable commands), and the old and new ways of playing with JNI on Android. To keep this article from getting too verbose, most of the knowledge shared in those articles won't be repeated.

  • A basic understanding of C/C++ syntax.

Environment and tools
  • System: macOS – 10.12.5

  • Compiler: Android Studio-2.3.2

  • NDK: r14

  • FFmpeg: 3.2.5

Project Summary:

1. Effect picture:

The project address is unchanged: https://github.com/mabeijianxi/small-video-record. The GIF here is still the 1.0 one, since the interface hasn't changed a bit; not every feature is wrapped up yet, but that doesn't matter for now.

2. Overall process:



3. Project directory overview:





New project


For a new project, perhaps unlike what you're used to, we need to check C++ support and select C++11 as the C++ standard, as shown below:





C++ support is a must, and there is a reason for choosing C++11: we will use some of its APIs later. Then we copy the six dynamic libraries and headers produced by compiling FFmpeg (including libx264 and libfdk-aac), together with cmdutils.c, cmdutils.h, cmdutils_common_opts.h, config.h, ffmpeg.h, ffmpeg_filter.c, and ffmpeg_opt.c, into our project's cpp directory. When you're done, your cpp directory should look like this:



You may have one more file than I do, the auto-generated native-lib.cpp; leave it there for now.

Writing JNI interfaces:

I created a new interface class, FFmpegBridge.java, and for now defined the following methods according to my requirements:

package com.mabeijianxi.smallvideorecord2.jniinterface;

import java.util.ArrayList;

/**
 * Created by jianxi on 2017/5/12.
 * https://github.com/mabeijianxi
 * [email protected]
 */
public class FFmpegBridge {

    private static ArrayList<FFmpegStateListener> listeners = new ArrayList<>();

    static {
        System.loadLibrary("avutil");
        System.loadLibrary("swresample");
        System.loadLibrary("avcodec");
        System.loadLibrary("avformat");
        System.loadLibrary("swscale");
        System.loadLibrary("avfilter");
        System.loadLibrary("jx_ffmpeg_jni");
    }

    /**
     * Recording state: everything finished
     */
    public static final int ALL_RECORD_END = 1;

    /** rotation/crop filter modes */
    public final static int ROTATE_0_CROP_LF = 0;
    public final static int ROTATE_90_CROP_LT = 1;
    public final static int ROTATE_180 = 2;
    public final static int ROTATE_270_CROP_LT_MIRROR_LR = 3;

    /**
     * @return the FFmpeg build configuration
     */
    public static native String getFFmpegConfig();

    /**
     * Run an FFmpeg command
     */
    private static native int jxCMDRun(String cmd[]);

    /**
     * Encode one frame of video data
     * @param data
     * @return
     */
    public static native int encodeFrame2H264(byte[] data);

    /**
     * Encode one frame of audio data
     * @param data
     * @return
     */
    public static native int encodeFrame2AAC(byte[] data);

    /**
     * Notify the native layer that recording has ended
     * @return
     */
    public static native int recordEnd();

    public static native void initJXFFmpeg(boolean debug, String logUrl);

    public static native void nativeRelease();

    /**
     * @param mediaBasePath video directory
     * @param mediaName     video name
     * @param filter        rotation/crop filter
     * @param in_width      input video width
     * @param in_height     input video height
     * @param out_width     output video width
     * @param out_height    output video height
     * @param frameRate     video frame rate
     * @param bit_rate      video bit rate
     * @return
     */
    public static native int prepareJXFFmpegEncoder(String mediaBasePath, String mediaName, int filter, int in_width, int in_height, int out_width, int out_height, int frameRate, long bit_rate);

    /**
     * @param cmd a complete FFmpeg command string
     */
    public static int jxFFmpegCMDRun(String cmd) {
        String regulation = "[ \\t]+"; // split on whitespace
        final String[] split = cmd.split(regulation);
        return jxCMDRun(split);
    }

    /**
     * Called back from the native layer
     * @param state
     * @param what
     */
    public static void notifyState(int state, float what) {
        for (FFmpegStateListener listener : listeners) {
            if (listener != null) {
                if (state == ALL_RECORD_END) {
                    listener.allRecordEnd();
                }
            }
        }
    }

    /**
     * Register a recording callback
     * @param listener
     */
    public static void registFFmpegStateListener(FFmpegStateListener listener) {
        if (!listeners.contains(listener)) {
            listeners.add(listener);
        }
    }

    public static void unRegistFFmpegStateListener(FFmpegStateListener listener) {
        if (listeners.contains(listener)) {
            listeners.remove(listener);
        }
    }

    public interface FFmpegStateListener {
        void allRecordEnd();
    }
}

When you create these methods they will light up red, since the native implementations don't exist yet. Don't worry about it: place the cursor on the offending method and gently press Alt+Enter.



After you confirm, the interface stub will be added on the native side. I never really liked the name native-lib.cpp, so I renamed it jx_ffmpeg_jni.cpp.
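For reference, the generated stub looks roughly like the sketch below. Returning avcodec_configuration() is my assumed example body, not necessarily what the project's real getFFmpegConfig does:

extern "C" {
#include "libavcodec/avcodec.h"
}
#include <jni.h>

extern "C"
JNIEXPORT jstring JNICALL
Java_com_mabeijianxi_smallvideorecord2_jniinterface_FFmpegBridge_getFFmpegConfig(
        JNIEnv *env, jclass type) {
    // sketch: return FFmpeg's build configuration string to the Java layer
    return env->NewStringUTF(avcodec_configuration());
}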

Writing native code


I don't write much C/C++; I'm used to Java, so I sometimes agonize over naming. What if you don't like my names? Just bear with it a little.

1. Prepare log function:

No matter what language you're working in, you can't get anything done without logs, so that's the first step. Create jx_log.cpp and jx_log.h. jx_log.h:

/**
 * Created by jianxi on 2017/6/2.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_JX_LOG_H
#define JIANXIFFMPEG_JX_LOG_H

#include <android/log.h>

extern int JNI_DEBUG;

#define LOGE(debug, format, ...) if(debug){__android_log_print(ANDROID_LOG_ERROR, "jianxi_ffmpeg", format, ##__VA_ARGS__);}
#define LOGI(debug, format, ...) if(debug){__android_log_print(ANDROID_LOG_INFO, "jianxi_ffmpeg", format, ##__VA_ARGS__);}

#endif //JIANXIFFMPEG_JX_LOG_H

jx_log.cpp:

/**
 * Created by jianxi on 2017/6/2.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#include "jx_log.h"

int JNI_DEBUG = 1;

Of course, we also define a JNI_DEBUG flag to toggle debugging.

2. Prepare FFmpeg interface to execute commands:

This step assumes you have already worked through FFmpeg for Android (compiling executable commands), because we need to make some changes to the source files we copied in earlier; otherwise this won't work. We create two new files to bridge to FFmpeg: one function for the Java layer to call, one for native code to call, plus one that initializes the debug log control. You can leave it at that for now.

jx_ffmpeg_cmd_run.h:

/**
 * Created by jianxi on 2017/6/4.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_FFMPEG_RUN_H
#define JIANXIFFMPEG_FFMPEG_RUN_H

#include <jni.h>

JNIEXPORT jint JNICALL
Java_com_mabeijianxi_smallvideorecord2_jniinterface_FFmpegBridge_jxCMDRun(JNIEnv *env, jclass type,
                                                                          jobjectArray commands);

void log_callback(void *ptr, int level, const char *fmt, va_list vl);

JNIEXPORT void JNICALL
Java_com_mabeijianxi_smallvideorecord2_jniinterface_FFmpegBridge_initJXFFmpeg(JNIEnv *env, jclass type,
                                                                              jboolean debug,
                                                                              jstring logUrl_);

int ffmpeg_cmd_run(int argc, char **argv);

#endif //JIANXIFFMPEG_FFMPEG_RUN_H

jx_ffmpeg_cmd_run.c:

/**
 * Created by jianxi on 2017/6/4.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "jx_ffmpeg_cmd_run.h"
#include "ffmpeg.h"
#include "jx_log.h"

/**
 * Run an FFmpeg command from the Java layer
 */
JNIEXPORT jint JNICALL
Java_com_mabeijianxi_smallvideorecord2_jniinterface_FFmpegBridge_jxCMDRun(JNIEnv *env, jclass type,
                                                                          jobjectArray commands) {
    int argc = (*env)->GetArrayLength(env, commands);
    char *argv[argc];
    int i;
    for (i = 0; i < argc; i++) {
        jstring js = (jstring) (*env)->GetObjectArrayElement(env, commands, i);
        argv[i] = (char *) (*env)->GetStringUTFChars(env, js, 0);
    }
    return ffmpeg_cmd_run(argc, argv);
}

int ffmpeg_cmd_run(int argc, char **argv) {
    return jxRun(argc, argv);
}

char *logUrl;

/**
 * Initialize the debug tool
 */
JNIEXPORT void JNICALL
Java_com_mabeijianxi_smallvideorecord2_jniinterface_FFmpegBridge_initJXFFmpeg(JNIEnv *env, jclass type,
                                                                              jboolean debug,
                                                                              jstring logUrl_) {
    JNI_DEBUG = debug;
    if (JNI_DEBUG && logUrl_ != NULL) {
        av_log_set_callback(log_callback);
        const char *log = (*env)->GetStringUTFChars(env, logUrl_, 0);
        logUrl = (char *) malloc(strlen(log) + 1); // +1 for the terminating NUL
        strcpy(logUrl, log);
        (*env)->ReleaseStringUTFChars(env, logUrl_, log);
    }
}

void log_callback(void *ptr, int level, const char *fmt, va_list vl) {
    FILE *fp = fopen(logUrl, "a+");
    if (fp) {
        vfprintf(fp, fmt, vl);
        fflush(fp);
        fclose(fp);
    }
}

CMake doesn't know about the files you add on its own. The correct habit is to register every new C/C++ file in CMakeLists.txt, and don't forget to click Sync.
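For illustration, the relevant lines in CMakeLists.txt might look like this (a sketch; the target name and paths are assumptions based on this project's layout, not the exact file):

# sketch: every new C/C++ source must be listed with the native target
add_library(jx_ffmpeg_jni SHARED
        src/main/cpp/jx_ffmpeg_jni.cpp
        src/main/cpp/jx_log.cpp
        src/main/cpp/jx_ffmpeg_cmd_run.c)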

3. Prepare a thread-safe queue:

The audio and video data we capture is handed to FFmpeg for a series of processing steps. Since this is software encoding, encoding speed depends heavily on the CPU: with the current x264 algorithm on today's CPUs, encoding simply can't keep up with the 20+ frames per second we capture, and encoding each frame the moment it is captured would certainly drop frames. So I decided to put frames into a queue. Because this is multi-threaded programming, the queue needs to be safe; like several men fighting over one girl, she naturally needs someone like me to protect her. This queue code I copied off the internet, nothing more to say ~~

threadsafe_queue.cpp

/**
 * Created by jianxi on 2017/5/31.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_THREADSAFE_QUEUE_CPP
#define JIANXIFFMPEG_THREADSAFE_QUEUE_CPP

#include <queue>
#include <memory>
#include <mutex>
#include <condition_variable>

// a simple thread-safe queue
template<typename T>
class threadsafe_queue {
private:
    mutable std::mutex mut;
    std::queue<T> data_queue;
    std::condition_variable data_cond;
public:
    threadsafe_queue() {}

    threadsafe_queue(threadsafe_queue const &other) {
        std::lock_guard<std::mutex> lk(other.mut);
        data_queue = other.data_queue;
    }

    void push(T new_value) {
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(new_value);
        data_cond.notify_one();
    }

    void wait_and_pop(T &value) { // blocks until an element can be popped
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this] { return !data_queue.empty(); });
        value = data_queue.front();
        data_queue.pop();
    }

    std::shared_ptr<T> wait_and_pop() {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, [this] { return !data_queue.empty(); });
        std::shared_ptr<T> res(std::make_shared<T>(data_queue.front()));
        data_queue.pop();
        return res;
    }

    bool try_pop(T &value) {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty())
            return false;
        value = data_queue.front();
        data_queue.pop();
        return true;
    }

    std::shared_ptr<T> try_pop() {
        std::lock_guard<std::mutex> lk(mut);
        if (data_queue.empty())
            return std::shared_ptr<T>();
        std::shared_ptr<T> res(std::make_shared<T>(data_queue.front()));
        data_queue.pop();
        return res;
    }

    bool empty() const {
        return data_queue.empty();
    }
};

#endif //JIANXIFFMPEG_THREADSAFE_QUEUE_CPP

This uses the C++11 standard library (std::mutex, std::condition_variable and friends), which is why we enabled C++11 earlier.
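A quick usage sketch (a hypothetical producer/consumer pair, not code from the project): the capture callback pushes frames while the encoding thread blocks on wait_and_pop.

#include <cstdint>

// hypothetical usage sketch of threadsafe_queue
threadsafe_queue<uint8_t *> frame_queue;

// called from the camera/audio capture callback
void on_frame_captured(uint8_t *frame) {
    frame_queue.push(frame);
}

// body of the encoding thread
void encode_loop() {
    for (;;) {
        uint8_t *frame = nullptr;
        frame_queue.wait_and_pop(frame); // blocks until a frame arrives
        // ... hand the frame to the encoder ...
    }
}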

4. Prepare a structure to store configuration information:

This is pretty much the native equivalent of a JavaBean, so straight to the code. Ignore the JXJNIHandler field in it for now.

jx_user_arguments.h:

/**
 * Created by jianxi on 2017/5/26.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_JX_USER_ARGUMENTS_H
#define JIANXIFFMPEG_JX_USER_ARGUMENTS_H

#include "jni.h"

class JXJNIHandler;

typedef struct UserArguments {
    const char *media_base_path;  // directory for output files
    const char *media_name;       // file name prefix
    char *video_path;             // path of the temporary video file
    char *audio_path;             // path of the temporary audio file
    char *media_path;             // path of the final MP4
    int in_width;                 // input width
    int in_height;                // input height
    int out_height;               // output height
    int out_width;                // output width
    int frame_rate;               // video frame rate
    long long video_bit_rate;     // video bit rate control
    int audio_bit_rate;           // audio bit rate control
    int audio_sample_rate;        // audio sample rate control (e.g. 44100)
    int v_custom_format;          // filter controls
    JNIEnv *env;                  // global env pointer
    JavaVM *javaVM;               // JVM pointer
    jclass java_class;            // class object of the Java interface class
    JXJNIHandler *handler;        // pointer to the global handler object
} UserArguments;

#endif //JIANXIFFMPEG_JX_USER_ARGUMENTS_H

This structure is used throughout the process.

5. Write video (YUV) coding code

This section is one of the cores of this article; the simplified flow is as follows:



Some of you might ask why I don't mux each frame right after encoding it. I tested the muxing time and it's basically a matter of milliseconds, so I didn't want to bother; we can simply use the FFmpeg command tool we built and get it done in a few lines of code.

With the code posted, now listen to me ramble about its ins and outs; this part is key ~.

1) Video encoder parameter configuration

Configuration parameters for the FFmpeg encoder (AVCodecContext); a condensed setup sketch follows the parameter list below.

size_t path_length = strlen(arguments->video_path);
char *out_file = (char *) malloc(path_length + 1);
strcpy(out_file, arguments->video_path);

With the code above we copy the video output path. It is very important that this path ends with .h264, because the later call avformat_alloc_output_context2(&pFormatCtx, NULL, NULL, out_file) checks validity and assigns pFormatCtx based on your file suffix.

  • pCodecCtx->codec_id = AV_CODEC_ID_H264 specifies the H.264 encoder.

  • pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P specifies the pixel format of the data to be encoded.

  • pCodecCtx->bit_rate = arguments->video_bit_rate specifies the video bit rate. This parameter is very important and largely determines the quality and size of your video, though it also depends on the bit rate mode; in VBR mode it fluctuates somewhat.

  • pCodecCtx->thread_count = 16 sets the number of encoding threads.

  • pCodecCtx->time_base.num = 1; pCodecCtx->time_base.den = arguments->frame_rate sets the time base, i.e. the frame rate. Note that the frame rate you set on the camera may not match the number of frames actually captured, which makes audio and video drift out of sync; this is dealt with later when interfacing with the Java layer.

  • av_opt_set(pCodecCtx->priv_data, "preset", "superfast", 0) specifies a preset encoding speed; for now I preset it to the fastest.

  • pCodecCtx->qmin and pCodecCtx->qmax set the quantizer range. Values run from 0 to 51; the smaller the value, the higher the quality and the higher the required bit rate, with 0 being lossless. For details on the encoding process and principles, read the basic principles of video and audio encoding.

  • pCodecCtx->max_b_frames = 3 allows at most 3 B-frames; set it to 0 and encoding becomes faster, because motion estimation and motion compensation operate across I-, B-, and P-frames. To quote Lei Xiaohua: an I-frame is encoded using only the data within the frame itself and needs no motion estimation or compensation; obviously its compression ratio is relatively low, since it does not remove correlation in the time direction. A P-frame uses a preceding I- or P-frame as the reference image for motion compensation, actually encoding the difference between the current image and the reference. A B-frame is encoded much like a P-frame, the only difference being that it predicts from both a preceding I- or P-frame and a following I- or P-frame. Thus each P-frame needs one reference frame while each B-frame needs two; B-frames consequently compress better than P-frames, but introduce some latency.

  • av_dict_set(&param, "profile", "baseline", 0) limits your output to a specific H.264 profile. The profiles include baseline, main, high, high10, high422, and high444; note that the profile option is incompatible with lossless encoding.
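Putting the options above together, here is a condensed sketch of the encoder setup (FFmpeg 3.x API; the qmin/qmax values are illustrative and error handling is omitted; out_file and arguments come from the snippets above):

AVFormatContext *pFormatCtx;
avformat_alloc_output_context2(&pFormatCtx, NULL, NULL, out_file); // out_file ends with .h264

AVCodec *pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
AVStream *video_st = avformat_new_stream(pFormatCtx, pCodec);
AVCodecContext *pCodecCtx = video_st->codec;

pCodecCtx->codec_id = AV_CODEC_ID_H264;
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
pCodecCtx->width = arguments->out_width;
pCodecCtx->height = arguments->out_height;
pCodecCtx->bit_rate = arguments->video_bit_rate;
pCodecCtx->thread_count = 16;
pCodecCtx->time_base.num = 1;
pCodecCtx->time_base.den = arguments->frame_rate;
pCodecCtx->qmin = 10;  // illustrative values; smaller means higher quality
pCodecCtx->qmax = 40;
pCodecCtx->max_b_frames = 3;

AVDictionary *param = NULL;
av_opt_set(pCodecCtx->priv_data, "preset", "superfast", 0);
av_dict_set(&param, "profile", "baseline", 0);

avcodec_open2(pCodecCtx, pCodec, &param);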

2) YUV data structure collected by Android camera

YUV, like RGB, is a color encoding method. Y stands for luminance (luma), i.e. the gray value; U and V stand for chrominance (chroma), which describe the color and saturation of the image and specify each pixel's color. With Y alone you get a black-and-white image. Depending on the sampling scheme, the main variants are YUV4:4:4, YUV4:2:2, and YUV4:2:0. With YUV4:4:4 sampling, each Y gets its own set of UV components. With YUV4:2:2, every two Y share one set of UV components. With YUV4:2:0, every four Y share one set of UV components. For example, given 8 pixels on screen, YUV4:4:4 has 8 Y, 8 U, and 8 V; YUV4:2:2 has 8 Y, 4 U, and 4 V; YUV4:2:0 has 8 Y, 2 U, and 2 V. We need to know the data format and layout: the old Android SDK camera can only capture two modes, YV12 and NV21, both of which belong to YUV420 but arrange their data differently. Take a look at the pictures below; note that I photoshopped the first one, because the original was wrong, and since my photoshop skills are rusty you'll just have to squint.



As you can see, the four spatially adjacent pixels Y1, Y2, Y7, and Y8 share the same U1 and V1, while the adjacent Y3, Y4, Y9, and Y10 use U2 and V2. The different colors make this feature easy to see. The number of cells is the size of the image's byte array, and the array elements are laid out in the order shown. NV21 looks like this:



As you can see, they differ only in where the U and V values are placed.

3)YV12 data processing

Either YV12 or NV21 would work; I chose YV12 when configuring the camera parameters. Next we'll write a few simple algorithms to crop and rotate the video.
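Before writing those algorithms it helps to pin down where each plane sits in a YV12 buffer. A small sketch for our 640*480 frame (these offsets are exactly what the cropping code below indexes against):

// YV12 layout for a w*h frame: Y plane first, then V, then U
int w = 640, h = 480;
unsigned char *y_plane = in_buf;                 // w*h Y values
unsigned char *v_plane = in_buf + w * h;         // (w/2)*(h/2) V values
unsigned char *u_plane = in_buf + w * h * 5 / 4; // (w/2)*(h/2) U values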

Suppose the video we capture is 640 wide and 480 high, and we want to crop it to 400 wide and 300 high. One frame holds 640*480 Y values, (1/4)*640*480 V values, and (1/4)*640*480 U values; cropping to 400*300 naturally means keeping only part of the data. We first build a model for Y. Since the frame is 640*480, view it as 480 rows of 640 Y values each, as shown below: the red marks represent the 640*480 Y values, and the yellow area is all the Y values we keep after cropping.



Pay attention to the orientation of the image. With this model we can write code that walks the arrays. Here is the code:

Cropping Y:

unsigned char *in_buf;
unsigned char *out_buf_y;
for (int i = 480 - 300; i < 480; i++) {              // loop over the rows we keep
    for (int j = 0; j < 400; j++) {                  // loop over the columns we keep
        int index = 640 * i + j;                     // index in the source frame
        unsigned char value = *(in_buf + index);     // Y value at the current position
        // write it into the target array
        *(out_buf_y + (i - (480 - 300)) * 400 + j) = value;
    }
}

Assuming in_buf is one frame of YV12 video data, after this loop runs we have the cropped Y values. Next we extract the cropped UV data. The UV model differs a little from Y's. YUV4:2:0 does not mean there is no V; the chroma is actually sampled by alternating U and V between rows: the first row samples U, the second samples V, the third samples U again, and so on. Horizontally it is subsampled too: if the first column is sampled, the second is skipped, then the third is sampled. So U is 1/2 of Y in each direction, 1/4 overall, and the same goes for V. Knowing this, we can easily build the model.



The 320*240 region is our U (or V) plane, and the 200*150 region is the target region of the cropped U (or V) values. The code is as follows:

Cropping UV:

unsigned char *in_buf;
unsigned char *out_buf_u;
unsigned char *out_buf_v;
for (int i = (480 - 300) / 2; i < 480 / 2; i++) {     // rows we keep
    for (int j = 0; j < 400 / 2; j++) {               // columns we keep
        int index = (640 / 2) * i + j;                // index within the U or V plane
        unsigned char v = *(in_buf + (640 * 480) + index);         // V plane follows Y (YV12)
        unsigned char u = *(in_buf + (640 * 480 * 5 / 4) + index); // U plane follows V
        // write into the target arrays
        *(out_buf_u + (i - (480 - 300) / 2) * 400 / 2 + j) = u;
        *(out_buf_v + (i - (480 - 300) / 2) * 400 / 2 + j) = v;
    }
}

After the operations above, the most basic crop is done. The data coming off the camera is in landscape, so if we record in portrait and do nothing, the recorded video ends up rotated 90° counterclockwise; damn it, so we rotate it 90° clockwise and it comes out right.



Here is the idea. As shown in the figure above, our for loops stay the same, because the positions we crop are unchanged; we only change where we write into the output array: the first row goes to the last column, the second row to the second-to-last column, and so on. Demo code again:

Cropping Y and rotating 90° clockwise:

unsigned char *in_buf;
unsigned char *out_buf_y;
for (int i = 480 - 300; i < 480; i++) {              // rows we keep
    for (int j = 0; j < 400; j++) {                  // columns we keep
        int index = 640 * i + j;                     // index in the source frame
        unsigned char value = *(in_buf + index);     // Y value at the current position
        // the first kept row becomes the last column of the rotated image
        *(out_buf_y + j * 300 + (300 - (i - (480 - 300)) - 1)) = value;
    }
}

With Y done, UV is genuinely easy, because we already know the rules: UV is half of Y in each direction.

Cropping UV and rotating 90° clockwise:

unsigned char *in_buf;
unsigned char *out_buf_u;
unsigned char *out_buf_v;
for (int i = (480 - 300) / 2; i < 480 / 2; i++) {     // rows we keep
    for (int j = 0; j < 400 / 2; j++) {               // columns we keep
        int index = (640 / 2) * i + j;                // index within the U or V plane
        unsigned char value_v = *(in_buf + (640 * 480) + index);         // V value (YV12: V plane first)
        unsigned char value_u = *(in_buf + (640 * 480 * 5 / 4) + index); // U value
        // same rotation as for Y, at half resolution; compare with the output layout figure
        *(out_buf_u + j * 300 / 2 + (300 / 2 - (i - (480 - 300) / 2) - 1)) = value_u;
        *(out_buf_v + j * 300 / 2 + (300 / 2 - (i - (480 - 300) / 2) - 1)) = value_v;
    }
}

Because the front camera mirrors the image, recording with it also requires mirror handling; consult the source code for the details. Beyond all this we can do more interesting operations: for instance, setting every UV value to 128 turns the image black-and-white, and you can likewise tweak brightness, tone, and so on.
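For instance, the black-and-white trick just mentioned is two memsets over the cropped chroma planes (a sketch, using the 400*300 crop target from earlier):

#include <string.h>

// sketch: neutralize chroma so the 400x300 output becomes black-and-white
memset(out_buf_u, 128, (400 / 2) * (300 / 2));
memset(out_buf_v, 128, (400 / 2) * (300 / 2));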

After the data is processed, call the FFmpeg encoding API.
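For context, feeding one processed frame to the encoder looks roughly like this with the FFmpeg 3.x API (a sketch with error checks omitted; out_buf is assumed to hold the cropped and rotated 300*400 frame, and frame_count is an assumed running counter, with names otherwise following the snippets above):

// sketch: encode one cropped/rotated YUV420P frame
av_image_fill_arrays(pFrame->data, pFrame->linesize, out_buf,
                     AV_PIX_FMT_YUV420P, 300, 400, 1); // 300x400 after rotation
pFrame->pts = frame_count++;                           // assumed frame counter
int got_packet = 0;
avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_packet);
if (got_packet) {
    av_write_frame(pFormatCtx, &pkt); // append to the .h264 file
    av_packet_unref(&pkt);
}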

6. Audio coding

As the flow chart above shows, the steps are similar to video, and the amount of data is comparatively small; with libfdk-aac the encoder can basically keep up with the capture speed. Code served first, chat after:

jx_pcm_encode_aac.h:

/**
 * Created by jianxi on 2017/5/18.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_JX_PCM_ENCODE_AAC_H
#define JIANXIFFMPEG_JX_PCM_ENCODE_AAC_H

#include "base_include.h"
#include "jx_user_arguments.h"

using namespace std;

class JXPCMEncodeAAC {
public:
    JXPCMEncodeAAC(UserArguments *arg);

public:
    int initAudioEncoder();

    static void *startEncode(void *obj);

    void user_end();

    int sendOneFrame(uint8_t *buf);

    int encodeEnd();

private:
    int flush_encoder(AVFormatContext *fmt_ctx, unsigned int stream_index);

private:
    threadsafe_queue<uint8_t *> frame_queue;
    AVFormatContext *pFormatCtx;
    AVOutputFormat *fmt;
    AVStream *audio_st;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame;
    AVPacket pkt;
    int got_frame = 0;
    int ret = 0;
    int size = 0;
    int i;
    int is_end = 0;
    UserArguments *arguments;

    ~JXPCMEncodeAAC() {}
};

#endif //JIANXIFFMPEG_JX_PCM_ENCODE_AAC_H

I haven't studied audio nearly as much, so below is only a brief introduction to the parameters. For more, see the introduction to video and audio data processing: handling PCM audio sample data.

Coding parameters:

  • pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16 sets the sample format: ours is 16-bit signed integer, which must correspond to the parameters set during Java-side audio capture.

  • pCodecCtx->sample_rate = arguments->audio_sample_rate sets the sample rate, which likewise must match the Java-side capture settings.

  • pCodecCtx->channel_layout = AV_CH_LAYOUT_MONO; pCodecCtx->channels = av_get_channel_layout_nb_channels(pCodecCtx->channel_layout) set the channel layout and channel count; these too must correspond to the Java capture parameters. There are many other options, such as AV_CH_LAYOUT_STEREO for two channels and AV_CH_LAYOUT_4POINT0 for four.

  • pCodecCtx->bit_rate = arguments->audio_bit_rate sets the audio bit rate.

After configuring the parameters, FFmpeg takes care of the rest.
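A condensed sketch of the AAC encoder setup under the same assumptions (FFmpeg 3.x API with libfdk-aac compiled in; error handling omitted):

AVCodec *pCodec = avcodec_find_encoder_by_name("libfdk_aac");
AVStream *audio_st = avformat_new_stream(pFormatCtx, pCodec);
AVCodecContext *pCodecCtx = audio_st->codec;

pCodecCtx->codec_type = AVMEDIA_TYPE_AUDIO;
pCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;
pCodecCtx->sample_rate = arguments->audio_sample_rate;
pCodecCtx->channel_layout = AV_CH_LAYOUT_MONO;
pCodecCtx->channels = av_get_channel_layout_nb_channels(pCodecCtx->channel_layout);
pCodecCtx->bit_rate = arguments->audio_bit_rate;

avcodec_open2(pCodecCtx, pCodec, NULL);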

7. Writing the video muxer class

Once the audio and video are both encoded, we need to mux them into an MP4. This is where our FFmpeg command tool comes in: we just throw the file paths at it.

jx_media_muxer.h:

/**
 * Created by jianxi on 2017/5/24.
 * https://github.com/mabeijianxi
 * [email protected]
 */
#ifndef JIANXIFFMPEG_JX_MEDIA_MUXER_H
#define JIANXIFFMPEG_JX_MEDIA_MUXER_H

#include "base_include.h"

class JXMediaMuxer {
public:
    int startMuxer(const char *video, const char *audio, const char *out_file);

private:

};

#endif //JIANXIFFMPEG_JX_MEDIA_MUXER_H
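Under the hood, the muxing boils down to a single FFmpeg stream-copy command. As a sketch (the exact command string the project assembles may differ), startMuxer can do something like:

#include <stdio.h>

// sketch: stream-copy the H.264 and AAC files into an MP4 container
char cmd[512];
snprintf(cmd, sizeof(cmd), "ffmpeg -i %s -i %s -c copy %s",
         video, audio, out_file);
// split cmd on whitespace into argv[] and hand it to ffmpeg_cmd_run(),
// just as jxCMDRun does for commands coming from the Java layer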
