This article introduces a simple implementation of stream pushing (publishing) and stream pulling (playback) in live streaming technology.

1. Test the streaming media server

First, the Kuai Live app (any other app that supports RTMP stream pushing and pulling also works) and ffplay.exe are used to test the streaming media server.

Kuai Live app download address:

apkpure.biz/cn.nodemedi…

The push and pull interface of Kuai Live:

On Windows, ffplay is used to pull the stream:

ffplay rtmp://192.168.0.0/live/test    # the IP address is the streaming media server address; "test" is the live room name
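
If no push source is at hand, the server can also be fed by pushing a local file with ffmpeg first (a sketch, assuming ffmpeg is installed, test.mp4 is an H.264/AAC file, and the address matches your server):

ffmpeg -re -i test.mp4 -c copy -f flv rtmp://192.168.0.0/live/test    # -re reads at native frame rate; RTMP requires the FLV muxer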

Test results:

2. Pushing the stream

The stream-pushing steps covered in this article are:

  • Use AudioRecord to collect audio and the Camera API to collect video data
  • Use the faac and x264 third-party libraries in the Native layer to encode audio and video respectively
  • Use the rtmpdump (librtmp) third-party library for packaging and stream pushing

Project directory:

Main JNI methods:

public class NativePush {
    public native void startPush(String url);

    public native void stopPush();

    public native void release();

    /**
     * Set video parameters
     * @param width
     * @param height
     * @param bitrate
     * @param fps
     */
    public native void setVideoOptions(int width, int height, int bitrate, int fps);

    /**
     * Set audio parameters
     * @param sampleRateInHz
     * @param channel
     */
    public native void setAudioOptions(int sampleRateInHz, int channel);

    /**
     * Pass a frame of video data
     * @param data
     */
    public native void fireVideo(byte[] data);

    /**
     * Pass a buffer of audio data
     * @param data
     * @param len
     */
    public native void fireAudio(byte[] data, int len);
}
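
For reference, a typical call sequence from the Java layer might look like the sketch below (the parameter values and library name are illustrative assumptions, not from this project):

// Sketch: typical NativePush usage, assuming the JNI library is loaded in a
// static block of NativePush, e.g. System.loadLibrary("nativepush")
void startLive() {
    NativePush nativePush = new NativePush();
    nativePush.setVideoOptions(640, 480, 480 * 1000, 25); // width, height, bitrate (bps), fps
    nativePush.setAudioOptions(44100, 1);                 // sample rate (Hz), channel count
    nativePush.startPush("rtmp://192.168.0.0/live/test");
    // ...feed captured data via fireVideo()/fireAudio(), then:
    // nativePush.stopPush();
    // nativePush.release();
}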

Video acquisition

Video capture is based on the Camera API: a SurfaceView is used for preview, and the Camera preview data is obtained through PreviewCallback.

Video preview main code implementation:

public void startPreview() {
    try {
        mCamera = Camera.open(mVideoParams.getCameraId());
        Camera.Parameters param = mCamera.getParameters();
        List<Camera.Size> previewSizes = param.getSupportedPreviewSizes();
        int length = previewSizes.size();
        for (int i = 0; i < length; i++) {
            Log.i(TAG, "SupportedPreviewSizes : " + previewSizes.get(i).width + "x" + previewSizes.get(i).height);
        }
        mVideoParams.setWidth(previewSizes.get(0).width);
        mVideoParams.setHeight(previewSizes.get(0).height);
        param.setPreviewFormat(ImageFormat.NV21);
        param.setPreviewSize(mVideoParams.getWidth(), mVideoParams.getHeight());
        mCamera.setParameters(param);
        //mCamera.setDisplayOrientation(90); // portrait orientation
        mCamera.setPreviewDisplay(mSurfaceHolder);
        buffer = new byte[mVideoParams.getWidth() * mVideoParams.getHeight() * 4];
        mCamera.addCallbackBuffer(buffer);
        mCamera.setPreviewCallbackWithBuffer(this);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Use PreviewCallback's onPreviewFrame to pass the preview data to the Native layer for encoding:

@Override
public void onPreviewFrame(byte[] bytes, Camera camera) {
    if (mCamera != null) {
        mCamera.addCallbackBuffer(buffer);
    }
    if (mIsPushing) {
        mNativePush.fireVideo(bytes);
    }
}

Audio collection

Audio collection is implemented with AudioRecord: PCM data is captured in a worker thread and continuously passed to the Native layer for encoding.

private class AudioRecordRunnable implements Runnable {
    @Override
    public void run() {
        mAudioRecord.startRecording();
        while (mIsPushing) {
            // Read PCM data from AudioRecord
            byte[] buffer = new byte[mMinBufferSize];
            int length = mAudioRecord.read(buffer, 0, buffer.length);
            if (length > 0) {
                // Pass to Native code for audio encoding
                mNativePush.fireAudio(buffer, length);
            }
        }
    }
}
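
The construction of mAudioRecord and mMinBufferSize is not shown above; a minimal sketch of that setup (the field names match the article, the parameter choices are assumptions):

// Sketch: AudioRecord setup referenced by AudioRecordRunnable (assumed parameters)
int sampleRateInHz = 44100;
mMinBufferSize = AudioRecord.getMinBufferSize(
        sampleRateInHz,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
mAudioRecord = new AudioRecord(
        MediaRecorder.AudioSource.MIC,  // capture from the microphone
        sampleRateInHz,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, // matches the 16-bit input expected by faac
        mMinBufferSize);
new Thread(new AudioRecordRunnable()).start(); // capture in a worker thread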

Encoding and pushing streams

Audio and video encoding and stream pushing are implemented in the Native layer. First, the faac, x264 and librtmp third-party libraries are added to the Android Studio project and the relevant settings are initialized. A producer-consumer model is used: the producer thread packages the encoded audio and video data into RTMPPackets and puts them into a doubly linked list; the consumer thread fetches RTMPPackets from the list and sends them to the server via the RTMP_SendPacket method.
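
The linked-list queue itself is not shown in this article; below is a minimal sketch of the producer side, assuming a list helper named queue_append_last and the same mutex/cond that push_thread (shown later) waits on:

#include <pthread.h>
#include "rtmp.h"   // librtmp

// Producer side (sketch): append an encoded RTMPPacket and wake push_thread.
// queue_append_last is an assumed linked-list helper; mutex, cond and
// is_pushing are the globals used by the consumer thread below.
void add_rtmp_packet(RTMPPacket *packet) {
    pthread_mutex_lock(&mutex);
    if (is_pushing) {
        queue_append_last(packet);
    }
    pthread_cond_signal(&cond);   // wake the consumer thread
    pthread_mutex_unlock(&mutex);
}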

x264 initialization:

JNIEXPORT void JNICALL
Java_com_haohao_live_jni_NativePush_setVideoOptions(JNIEnv *env, jobject instance,
                                                    jint width, jint height,
                                                    jint bitRate, jint fps) {
    x264_param_t param;
    // x264_param_default_preset: set the default parameters for the given preset/tune
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    // Input color space: YUV420P
    param.i_csp = X264_CSP_I420;
    param.i_width = width;
    param.i_height = height;
    y_len = width * height;
    u_len = y_len / 4;
    v_len = u_len;
    // Rate control mode: CRF (constant rate factor)
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.i_bitrate = bitRate / 1000;               // bit rate (Kbps)
    param.rc.i_vbv_max_bitrate = bitRate / 1000 * 1.2; // instantaneous maximum bit rate
    // Rate control is driven by fps rather than timebase/timestamps
    param.b_vfr_input = 0;
    param.i_fps_num = fps; // frame rate numerator
    param.i_fps_den = 1;   // frame rate denominator
    param.i_timebase_den = param.i_fps_num;
    param.i_timebase_num = param.i_fps_den;
    param.i_threads = 1;   // number of parallel encoding threads (0 = default multithreading)
    // Put SPS (Sequence Parameter Set) and PPS (Picture Parameter Set) before every keyframe
    param.b_repeat_headers = 1;
    // Set the level
    param.i_level_idc = 51;
    // Apply the profile
    x264_param_apply_profile(&param, "baseline");
    // Initialize x264_picture_t (the input picture)
    x264_picture_alloc(&pic_in, param.i_csp, param.i_width, param.i_height);
    pic_in.i_pts = 0;
    // Open the encoder
    video_encode_handle = x264_encoder_open(&param);
    if (video_encode_handle) {
        LOGI("Video encoder opened successfully");
    } else {
        throwNativeError(env, INIT_FAILED);
    }
}

faac initialization:

JNIEXPORT void JNICALL
Java_com_byteflow_live_jni_NativePush_setAudioOptions(JNIEnv *env, jobject instance,
                                                      jint sampleRateInHz, jint channel) {
    audio_encode_handle = faacEncOpen(sampleRateInHz, channel, &nInputSamples, &nMaxOutputBytes);
    if (!audio_encode_handle) {
        LOGE("Failed to open the audio encoder");
        return;
    }
    // Set the audio encoding parameters
    faacEncConfigurationPtr p_config = faacEncGetCurrentConfiguration(audio_encode_handle);
    p_config->mpegVersion = MPEG4;
    p_config->allowMidside = 1;
    p_config->aacObjectType = LOW;
    p_config->outputFormat = 0;               // raw AAC stream
    p_config->useTns = 1;                     // temporal noise shaping
    p_config->useLfe = 0;                     // no LFE channel
    p_config->inputFormat = FAAC_INPUT_16BIT; // 16-bit PCM input
    p_config->quantqual = 100;
    p_config->bandWidth = 0;
    p_config->shortctl = SHORTCTL_NORMAL;
    if (!faacEncSetConfiguration(audio_encode_handle, p_config)) {
        LOGE("%s", "Audio encoder configuration failed");
        throwNativeError(env, INIT_FAILED);
        return;
    }
    LOGI("%s", "Audio encoder configured successfully");
}

Encode and package the video data, then put it into the linked list via add_rtmp_packet:

JNIEXPORT void JNICALL
Java_com_byteflow_live_jni_NativePush_fireVideo(JNIEnv *env, jobject instance, jbyteArray buffer_) {
    // NV21 is 4:2:0, 12 bits per pixel: y_len = w*h, u_len = v_len = w*h/4.
    // NV21 stores Y followed by interleaved VU; YUV420P stores Y, then U, then V.
    jbyte *nv21_buffer = (*env)->GetByteArrayElements(env, buffer_, NULL);
    jbyte *u = pic_in.img.plane[1];
    jbyte *v = pic_in.img.plane[2];
    // Copy the Y plane as-is, then de-interleave VU into the U and V planes
    memcpy(pic_in.img.plane[0], nv21_buffer, y_len);
    int i;
    for (i = 0; i < u_len; i++) {
        *(u + i) = *(nv21_buffer + y_len + i * 2 + 1);
        *(v + i) = *(nv21_buffer + y_len + i * 2);
    }

    x264_nal_t *nal = NULL; // NAL units
    int n_nal = -1;         // number of NAL units
    if (x264_encoder_encode(video_encode_handle, &nal, &n_nal, &pic_in, &pic_out) < 0) {
        LOGE("%s", "Encoding failed");
        return;
    }
    // Send the H.264-encoded video data to the streaming media server over RTMP.
    // Frames are divided into key frames and ordinary frames; to improve the
    // error-recovery rate of the picture, key frames should carry the SPS and PPS data.
    int sps_len, pps_len;
    unsigned char sps[100];
    unsigned char pps[100];
    memset(sps, 0, 100);
    memset(pps, 0, 100);
    pic_in.i_pts += 1;
    for (i = 0; i < n_nal; i++) {
        if (nal[i].i_type == NAL_SPS) {
            // SPS: Sequence Parameter Set; strip the 4-byte start code
            sps_len = nal[i].i_payload - 4;
            memcpy(sps, nal[i].p_payload + 4, sps_len);
        } else if (nal[i].i_type == NAL_PPS) {
            // PPS: Picture Parameter Set; strip the 4-byte start code
            pps_len = nal[i].i_payload - 4;
            memcpy(pps, nal[i].p_payload + 4, pps_len);
            // Send the sequence header before the key frame
            add_264_sequence_header(pps, sps, pps_len, sps_len);
        } else {
            add_264_body(nal[i].p_payload, nal[i].i_payload);
        }
    }
    (*env)->ReleaseByteArrayElements(env, buffer_, nv21_buffer, 0);
}
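
add_264_sequence_header is not shown in this article; the sketch below shows what it might look like, wrapping the SPS/PPS into an AVC sequence header (AVCDecoderConfigurationRecord) and handing it to the queue (field choices follow common librtmp usage and are assumptions):

#include <stdlib.h>
#include <string.h>
#include "rtmp.h"   // librtmp

// Sketch (assumed implementation): package SPS/PPS as the AVC sequence header.
void add_264_sequence_header(unsigned char *pps, unsigned char *sps,
                             int pps_len, int sps_len) {
    int body_size = 16 + sps_len + pps_len;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    unsigned char *body = (unsigned char *) packet->m_body;
    int i = 0;
    body[i++] = 0x17;                  // 1: keyframe, 7: AVC codec
    body[i++] = 0x00;                  // AVC packet type: sequence header
    body[i++] = 0x00;                  // composition time (3 bytes)
    body[i++] = 0x00;
    body[i++] = 0x00;
    // AVCDecoderConfigurationRecord
    body[i++] = 0x01;                  // configurationVersion
    body[i++] = sps[1];                // AVCProfileIndication
    body[i++] = sps[2];                // profile_compatibility
    body[i++] = sps[3];                // AVCLevelIndication
    body[i++] = 0xFF;                  // lengthSizeMinusOne: 4-byte NALU lengths
    body[i++] = 0xE1;                  // number of SPS: 1
    body[i++] = (sps_len >> 8) & 0xFF; // SPS length (big-endian)
    body[i++] = sps_len & 0xFF;
    memcpy(&body[i], sps, sps_len);
    i += sps_len;
    body[i++] = 0x01;                  // number of PPS: 1
    body[i++] = (pps_len >> 8) & 0xFF; // PPS length (big-endian)
    body[i++] = pps_len & 0xFF;
    memcpy(&body[i], pps, pps_len);
    i += pps_len;

    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = i;
    packet->m_nTimeStamp = 0;          // the sequence header carries no timestamp
    packet->m_hasAbsTimestamp = 0;
    packet->m_nChannel = 0x04;         // video channel
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    add_rtmp_packet(packet);           // hand over to the consumer thread
}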

Similarly, the audio data is encoded, packaged, and put into the linked list:

JNIEXPORT void JNICALL
Java_com_byteflow_live_jni_NativePush_fireAudio(JNIEnv *env, jobject instance,
                                                jbyteArray buffer_, jint length) {
    int *pcmbuf;
    unsigned char *bitbuf;
    jbyte *b_buffer = (*env)->GetByteArrayElements(env, buffer_, 0);
    pcmbuf = (int *) malloc(nInputSamples * sizeof(int));
    bitbuf = (unsigned char *) malloc(nMaxOutputBytes * sizeof(unsigned char));
    int nByteCount = 0;
    // 16-bit PCM: the number of samples is half the number of bytes
    unsigned int nBufferSize = (unsigned int) length / 2;
    unsigned short *buf = (unsigned short *) b_buffer;
    while (nByteCount < nBufferSize) {
        int audioLength = nInputSamples;
        if ((nByteCount + nInputSamples) >= nBufferSize) {
            audioLength = nBufferSize - nByteCount;
        }
        int i;
        for (i = 0; i < audioLength; i++) {
            // Read one 16-bit quantized sample from the PCM buffer and widen it
            int s = ((int16_t *) buf + nByteCount)[i];
            pcmbuf[i] = s << 8;
        }
        nByteCount += nInputSamples;
        // pcmbuf holds the converted PCM samples; audioLength is bounded by
        // nInputSamples (obtained from faacEncOpen); bitbuf receives the encoded data
        int byteslen = faacEncEncode(audio_encode_handle, pcmbuf, audioLength,
                                     bitbuf, nMaxOutputBytes);
        if (byteslen < 1) {
            continue;
        }
        add_aac_body(bitbuf, byteslen);
    }
    if (bitbuf) free(bitbuf);
    if (pcmbuf) free(pcmbuf);
    (*env)->ReleaseByteArrayElements(env, buffer_, b_buffer, 0);
}
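
add_aac_body is likewise not shown; a sketch of what it might look like, prefixing the raw AAC frame with the two-byte FLV audio tag header and queuing it (an assumed implementation following common librtmp usage):

#include <stdlib.h>
#include <string.h>
#include "rtmp.h"   // librtmp

// Sketch (assumed implementation): package one raw AAC frame.
void add_aac_body(unsigned char *buf, int len) {
    int body_size = 2 + len;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    unsigned char *body = (unsigned char *) packet->m_body;
    body[0] = 0xAF; // FLV audio tag header: AAC, 44 kHz, 16-bit, stereo
    body[1] = 0x01; // AAC raw data (0x00 would mark the AAC sequence header)
    memcpy(&body[2], buf, len);

    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nBodySize = body_size;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time; // ms since the stream started
    packet->m_hasAbsTimestamp = 0;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    add_rtmp_packet(packet);
}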

The consumer thread keeps fetching RTMPPackets from the list and sending them to the server:

void *push_thread(void *arg) {
    JNIEnv *env;
    (*javaVM)->AttachCurrentThread(javaVM, (void **) &env, NULL);
    RTMP *rtmp = RTMP_Alloc();
    if (!rtmp) {
        LOGE("RTMP initialization failed");
        goto end;
    }
    RTMP_Init(rtmp);
    rtmp->Link.timeout = 5; // connection timeout (seconds)
    // Set the streaming media address
    RTMP_SetupURL(rtmp, rtmp_path);
    // Enable writing (publishing)
    RTMP_EnableWrite(rtmp);
    // Establish the connection
    if (!RTMP_Connect(rtmp, NULL)) {
        LOGE("%s", "RTMP connection failed");
        throwNativeError(env, CONNECT_FAILED);
        goto end;
    }
    // Record the start time
    start_time = RTMP_GetTime();
    if (!RTMP_ConnectStream(rtmp, 0)) {
        LOGE("%s", "RTMP ConnectStream failed");
        throwNativeError(env, CONNECT_FAILED);
        goto end;
    }
    is_pushing = TRUE;
    // Send the AAC sequence header first
    add_aac_sequence_header();
    while (is_pushing) {
        // Wait until a packet is available, then send it
        pthread_mutex_lock(&mutex);
        pthread_cond_wait(&cond, &mutex);
        RTMPPacket *packet = queue_get_first();
        if (packet) {
            queue_delete_first();
            packet->m_nInfoField2 = rtmp->m_stream_id; // the stream id returned by the server
            // TRUE: put into the librtmp send queue instead of sending immediately
            int i = RTMP_SendPacket(rtmp, packet, TRUE);
            if (!i) {
                LOGE("RTMP disconnected");
                RTMPPacket_Free(packet);
                pthread_mutex_unlock(&mutex);
                goto end;
            } else {
                LOGI("%s", "rtmp send packet");
            }
            RTMPPacket_Free(packet);
        }
        pthread_mutex_unlock(&mutex);
    }
end:
    LOGI("%s", "free resources");
    free(rtmp_path);
    RTMP_Close(rtmp);
    RTMP_Free(rtmp);
    (*javaVM)->DetachCurrentThread(javaVM);
    return 0;
}
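
For completeness, a sketch of what add_264_body might look like; note that, unlike the sequence header, each NALU packet is stamped with a timestamp relative to the start_time recorded in push_thread (an assumed implementation following common librtmp usage):

#include <stdlib.h>
#include <string.h>
#include "rtmp.h"   // librtmp
#include "x264.h"   // for NAL_SLICE_IDR

// Sketch (assumed implementation): package one H.264 NAL unit.
void add_264_body(unsigned char *buf, int len) {
    // Strip the Annex-B start code (00 00 00 01 or 00 00 01)
    if (buf[2] == 0x00) {        // 00 00 00 01
        buf += 4;
        len -= 4;
    } else if (buf[2] == 0x01) { // 00 00 01
        buf += 3;
        len -= 3;
    }
    int body_size = len + 9;
    RTMPPacket *packet = malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    unsigned char *body = (unsigned char *) packet->m_body;
    int type = buf[0] & 0x1F;    // NAL unit type
    body[0] = (type == NAL_SLICE_IDR) ? 0x17 : 0x27; // keyframe vs. inter frame, AVC
    body[1] = 0x01;              // AVC NALU (not a sequence header)
    body[2] = body[3] = body[4] = 0x00; // composition time
    body[5] = (len >> 24) & 0xFF; // 4-byte big-endian NALU length
    body[6] = (len >> 16) & 0xFF;
    body[7] = (len >> 8) & 0xFF;
    body[8] = len & 0xFF;
    memcpy(&body[9], buf, len);

    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time; // ms since the stream started
    packet->m_hasAbsTimestamp = 0;
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    add_rtmp_packet(packet);
}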

3. Pulling the stream

The stream can be pulled using third-party libraries such as the QLive SDK or Vitamio.

Vitamio-based stream pulling:

private void init() {
    mVideoView = (VideoView) findViewById(R.id.live_player_view);
    mVideoView.setVideoPath(SPUtils.getInstance(this).getString(SPUtils.KEY_NGINX_SER_URI));
    mVideoView.setMediaController(new MediaController(this));
    mVideoView.requestFocus();
    mVideoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mp.setPlaybackSpeed(1.0f);
        }
    });
}
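
Note that Vitamio needs its native libraries initialized before the VideoView is used; a minimal sketch (the Vitamio demo exposes Vitamio.isInitialized; verify against your Vitamio version, and the layout id here is an assumption):

// Sketch: initialize Vitamio before using its VideoView
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (!io.vov.vitamio.Vitamio.isInitialized(getApplicationContext())) {
        return; // native libraries are not ready yet
    }
    setContentView(R.layout.activity_live_player);
    init();
}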

What follows is the good stuff! I was lucky enough to get, from a ByteDance expert, his painstakingly compiled 582-page "Android NDK Seven-Module Learning Treasure", covering everything from principles to hands-on practice!

In the spirit of sharing good things, today I will show part of it; this 582-page Android NDK seven-module study guide can help you get twice the result with half the effort! It mainly covers the following areas:

  • NDK module development
  • JNI module
  • Native development tools
  • Linux programming
  • Underlying image processing
  • Audio and Video development
  • Machine learning

All the notes are shared for free; friends who need the full version can get it for free!

1. NDK module development

Main Contents:

  • Summary of C++ and C# data types
  • C and C++ memory structure and management
  • C and C++ preprocessor commands and using typedef to name existing types
  • C and C++ structs and unions
  • C and C++ pointers
  • C/C++ multithreading
  • C/C++ functions and initializer lists

2. JNI module

Main Contents:

  • Static and dynamic registration in JNI development

Static registration, dynamic registration, JNINativeMethod, data type mapping, JNI function default parameters

  • Method signatures and Java communication in JNI development

JNI type signatures and method signatures in Android NDK development, and implementing communication between Java and C/C++ through JNI

  • Local, global, and weak global references for JNI development

3. Native development tools

Main Contents:

  • Compilers, packaging tools, and analyzers

The 10 most popular editors for React Native app development and the React Native packaging process

  • Static and dynamic libraries

  • CPU architecture and precautions

ABI management, CPU processing, NEON support

  • Build scripts and build tools

Environment setup, NDK projects, CMake, Makefile

  • Cross-compilation and porting

FFmpeg compilation, FFmpeg + libx264 + FAAC cross-compilation, FFmpeg cross-compilation, x264 and FAAC cross-compilation, solving all porting problems

  • Building an NDK project in AS (Android Studio)

Configuring the NDK environment, creating the app project, generating .h header files, creating C files, implementing native methods, the jni.h file

4. Linux programming

  • Linux environment construction, system management, permission system and tool use (VIM, etc.)

Linux environment construction, Linux system management operation (25 commands)

  • Shell scripting

Shell scripting: writing simple Shell scripts, flow-control statements, scheduled-task service programs

5. Underlying image processing

  • PNG/JPEG/WEBP image processing and compression

The four image formats, several recommended image-processing sites, the Squoosh online lossless image compression tool, JPG/WebP/PNG interconversion

  • WeChat image compression

Calculating the original width and height, calculating the approximate width and height, obtaining the target image by first-pass sampling, then approaching the target size in a loop

  • GIF synthesis principle and implementation

GIF image parsing, GIF image synthesis (composing a GIF from an image sequence)

6. Audio and video development

  • Multimedia system

Camera and phone-screen capture, raw image data formats YUV420 (NV21, YV12, etc.), the audio capture and playback system, the MediaCodec codec, MediaMuxer multiplexing and MediaExtractor demultiplexing

  • FFmpeg

Introduction to the FFmpeg modules, audio and video decoding, audio-video synchronization, I-frame/B-frame/P-frame decoding principles, x264 video encoding and FAAC audio encoding, OpenGL rendering and NativeWindow rendering

  • Streaming media protocol

RTMP, and P2P audio/video calls with WebRTC

  • OpenGL ES filter development: beauty effects

Gaussian blur, high-contrast preservation, highlight processing, blending

  • Analysis and implementation of Douyin video effect

Process list, video shooting, video editing, and video export

  • Audio and video speed-change principles

Speed-change entry-point analysis, audio speed change, video speed change

7. Machine learning

  • OpenCV

  • Image preprocessing

Grayscale and binarization, erosion and dilation, face detection, ID card recognition

Finally

Due to space limitations, the document is too comprehensive and detailed to show in full, so only screenshots of some knowledge points are included as a rough introduction; each small node contains more detailed content!

In addition to the above, I have also organized the following series of advanced learning materials:

Core knowledge notes for the seven main modules of Android development

2,246 pages of the latest high-frequency Android interview questions

All the notes are shared for free; friends who need the full version can get it for free!