Preface

On Android, the images output by the Camera are generally in NV21 format (part of the YUV420SP family). When we want to process this video data, we face two problems

Problem 1

Image rotation problem

  • Rear camera: the frame needs to be rotated 90°
  • Front camera: the frame needs to be rotated 270° and then mirrored

Problem 2

When we then tried to hardware-encode the rotated camera frames to H.264 with MediaCodec, we found that the colors in the output were wrong

This is because MediaCodec's COLOR_FormatYUV420SemiPlanar format is NV12, not NV21. Although both belong to the YUV420SP family, their layouts differ: each stores all of the Y data first, but NV21 then stores the chroma as interleaved VU pairs, while NV12 stores it as interleaved UV pairs

- NV21: yyyy yyyy vu vu
- NV12: yyyy yyyy uv uv
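
To make the layout difference concrete, here is a minimal sketch (purely illustrative, not code from the original project; the function name is made up) that repacks an NV21 buffer into NV12 by copying the Y plane unchanged and swapping each interleaved V/U byte pair:

#include <cstdint>
#include <cstring>

// Repack NV21 (VU interleaved) into NV12 (UV interleaved).
// Both buffers are assumed to hold width * height * 3 / 2 bytes.
void NV21ToNV12(const uint8_t *src, uint8_t *dst, int width, int height) {
    const int y_size = width * height;
    const int vu_size = y_size / 2;
    // The Y plane is identical in both formats.
    std::memcpy(dst, src, y_size);
    // Swap every V/U pair so that VU becomes UV.
    const uint8_t *src_vu = src + y_size;
    uint8_t *dst_uv = dst + y_size;
    for (int i = 0; i < vu_size; i += 2) {
        dst_uv[i] = src_vu[i + 1];     // U
        dst_uv[i + 1] = src_vu[i];     // V
    }
}

This is exactly the kind of per-byte work that the Java-layer solutions mentioned below perform, which is why it becomes a bottleneck at 1080p.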

There are many solutions to this problem on the Internet; one is to manipulate the data in the Java layer. However, testing 1080p recording on a Samsung S7 Edge gave the following timings

  • Rotation and mirroring: 20 ms
  • NV21 to NV12 conversion: 16 ms

Together that is about 40 ms per frame, which is barely enough to record at 25 fps, and a noticeable lag appears once OpenCV face recognition or filter processing is added on top

libyuv is an open-source library from Google that is well suited to processing camera YUV data such as NV21 on mobile. It provides rotation, cropping, mirroring, scaling, format conversion and more

Let’s see how libyuv is compiled and used

I. Environment

Operating system

macOS Mojave 10.14.5

Libyuv

https://chromium.googlesource.com/libyuv/libyuv

git clone https://chromium.googlesource.com/libyuv/libyuv

NDK version

NDK r16b

CMake version

➜  ~ cmake -version
cmake version 3.14.5

II. Compile the script

libyuv ships with a CMakeLists.txt, so we can generate the Makefile directly with cmake and then build and install it with make

ARCH=arm
ANDROID_ARCH_ABI=armeabi-v7a
NDK_PATH=/Users/sharrychoo/Library/Android/ndk/android-ndk-r16b
PREFIX=`pwd`/android/${ARCH}/${ANDROID_ARCH_ABI}

# cmake parameters
cmake -G"Unix Makefiles" \
    -DANDROID_NDK=${NDK_PATH} \
    -DCMAKE_TOOLCHAIN_FILE=${NDK_PATH}/build/cmake/android.toolchain.cmake \
    -DANDROID_ABI=${ANDROID_ARCH_ABI} \
    -DANDROID_NATIVE_API_LEVEL=16 \
    -DCMAKE_INSTALL_PREFIX=${PREFIX} \
    -DANDROID_ARM_NEON=TRUE \
    ..

# Build and install the dynamic library
make
make install

The compiled .so library is output to the PREFIX directory specified above

III. Code writing

Copy the .so library and the header files into the Android Studio project, and then we can start writing code. Here we write a libyuv utility class so that it is easy to use later

1) Java code

This section uses NV21 to I420 as an example

/**
 * @author Sharry <a href="[email protected]">Contact me.</a>
 * @version 1.0
 * @since 2019-07-23
 */
public class LibyuvUtil {

    static {
        System.loadLibrary("smedia-camera");
    }

    /**
     * Convert NV21 to I420
     */
    public static native void convertNV21ToI420(byte[] src, byte[] dst, int width, int height);

    ...
}

2) Native implementation

This section shows the JNI bridge function and then, for the core implementation, again uses NV21 to I420 as the example

namespace libyuv_util {

    void convertI420ToNV12(JNIEnv *env, jclass, jbyteArray i420_src, jbyteArray nv12_dst,
                           int width, int height) {
        jbyte *src = env->GetByteArrayElements(i420_src, NULL);
        jbyte *dst = env->GetByteArrayElements(nv12_dst, NULL);
        // Perform the conversion
        LibyuvUtil::I420ToNV12(src, dst, width, height);
        // Release the Java arrays
        env->ReleaseByteArrayElements(i420_src, src, 0);
        env->ReleaseByteArrayElements(nv12_dst, dst, 0);
    }

}

void LibyuvUtil::NV21ToI420(jbyte *src, jbyte *dst, int width, int height) {
    // NV21 source layout
    jint src_y_size = width * height;
    jbyte *src_y = src;
    jbyte *src_vu = src + src_y_size;
    // I420 destination layout
    jint dst_y_size = width * height;
    jint dst_u_size = dst_y_size >> 2;
    jbyte *dst_y = dst;
    jbyte *dst_u = dst + dst_y_size;
    jbyte *dst_v = dst + dst_y_size + dst_u_size;
    /**
     * <pre>
     * int NV21ToI420(const uint8_t* src_y, int src_stride_y,
     *                const uint8_t* src_vu, int src_stride_vu,
     *                uint8_t* dst_y, int dst_stride_y,
     *                uint8_t* dst_u, int dst_stride_u,
     *                uint8_t* dst_v, int dst_stride_v,
     *                int width, int height);
     * </pre>
     * stride_y: the number of bytes one row of Y occupies, i.e. width
     * stride_u: YUV420 samples Y:U:V at 4:1:1, so overall there are four times as many Y
     *           samples as U (or V) samples, but within a single row Y spans width while
     *           U only spans width/2
     * stride_v: same as U, width/2
     */
    libyuv::NV21ToI420(
            (uint8_t *) src_y, width,
            (uint8_t *) src_vu, width,
            (uint8_t *) dst_y, width,
            (uint8_t *) dst_u, width >> 1,
            (uint8_t *) dst_v, width >> 1,
            width, height
    );
}
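
The JNI bridge above calls LibyuvUtil::I420ToNV12, whose body is not shown in this article. As a reference, here is a minimal sketch (an assumption, not the author's exact code) of what that conversion can look like when built directly on libyuv::I420ToNV12, following the same buffer layout conventions as the NV21ToI420 example:

void LibyuvUtil::I420ToNV12(jbyte *src, jbyte *dst, int width, int height) {
    // I420 source: Y plane, then U plane, then V plane
    jint src_y_size = width * height;
    jint src_u_size = src_y_size >> 2;
    jbyte *src_y = src;
    jbyte *src_u = src + src_y_size;
    jbyte *src_v = src + src_y_size + src_u_size;
    // NV12 destination: Y plane, then one interleaved UV plane
    jbyte *dst_y = dst;
    jbyte *dst_uv = dst + src_y_size;
    // The interleaved UV plane holds width bytes per row (width/2 U + width/2 V),
    // so its stride is the full width rather than width/2
    libyuv::I420ToNV12(
            (uint8_t *) src_y, width,
            (uint8_t *) src_u, width >> 1,
            (uint8_t *) src_v, width >> 1,
            (uint8_t *) dst_y, width,
            (uint8_t *) dst_uv, width,
            width, height
    );
}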

As you can see, the method is simple to call: you just pass in the relevant parameters. One of them is particularly important: the stride (span), which describes the number of bytes that one row of pixels occupies for a given color component (the rotation sketch after the list below shows how these strides are passed in practice)

  • YUV420 series
    • NV21
      • Y: the stride is width
      • VU: the stride is width
    • I420P (YU12)
      • Y: the stride is width
      • U: the stride is width/2
      • V: the stride is width/2
  • ABGR: the stride is 4 * width
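
Here is the rotation sketch referred to above (an illustration with made-up function names, not the author's exact code): it shows how these strides are passed to libyuv::I420Rotate and libyuv::I420Mirror, which cover the 90° rotation and the front-camera mirroring described in the preface. Note that after a 90° (or 270°) rotation the output frame is height x width, so the destination strides are based on height.

#include "libyuv.h"

// Rotate an I420 frame by 90 degrees; src is width x height, dst becomes height x width.
void rotateI420By90(const uint8_t *src, uint8_t *dst, int width, int height) {
    const int y_size = width * height;
    const int u_size = y_size >> 2;
    libyuv::I420Rotate(src, width,                          // Y:  stride = width
                       src + y_size, width >> 1,            // U:  stride = width / 2
                       src + y_size + u_size, width >> 1,   // V:  stride = width / 2
                       dst, height,                         // Y': stride = height
                       dst + y_size, height >> 1,           // U': stride = height / 2
                       dst + y_size + u_size, height >> 1,  // V': stride = height / 2
                       width, height, libyuv::kRotate90);
}

// Horizontally mirror an I420 frame (front-camera case); same plane layout on both sides.
void mirrorI420(const uint8_t *src, uint8_t *dst, int width, int height) {
    const int y_size = width * height;
    const int u_size = y_size >> 2;
    libyuv::I420Mirror(src, width,
                       src + y_size, width >> 1,
                       src + y_size + u_size, width >> 1,
                       dst, width,
                       dst + y_size, width >> 1,
                       dst + y_size + u_size, width >> 1,
                       width, height);
}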

If you are not familiar with the common color space, please click here to check it out

Conclusion

Using libyuv for rotation, format conversion and the other operations, the timings are as follows

  • Rotation and mirroring: 5~8 ms
  • NV21 to NV12 conversion: 0~3 ms

As you can see, it’s almost three times faster than Java code, which is good enough for smooth recording

The author has packaged the commonly used YUV operations into a demo (click to view); if needed, you can copy the code and use it directly