The previous chapter introduced the RGB and YUV formats. This chapter introduces how to capture YUV data

I. Video-related concepts

1.1 Resolution

The size of an image is measured by its number of horizontal and vertical pixels. For example, in 1080*960, 1080 is the number of horizontal pixels and 960 is the number of vertical pixels

1.2 Frame rate

Frame rate is the number of images displayed per second. For example, 30fps means 30 images, that is 30 frames, per second; 60fps means 60 frames per second; even higher rates such as 90fps and 120fps exist. The higher the frame rate, the smoother the picture

Because of the persistence-of-vision characteristic of the human eye, 24fps already looks smooth, while higher frame rates feel more immersive

Of course, a higher frame rate is not always better: the display's refresh rate must also be considered, and a frame rate that exceeds it simply wastes resources

1.3 Bit rate

Bit rate, also called bitrate, is measured in bps (bits per second) and represents the number of bits used per second of video. Since file sizes are counted in bytes while bit rate counts bits, the calculation includes a factor of 8:

Bit rate (kbps) = File size (KB) × 8 / Duration (s)
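As a quick sanity check of the formula, here is a minimal sketch of the calculation; the 3000 KB file size and 60-second duration are made-up example numbers:

```java
public class BitrateCalc {
    // Bit rate in kbps = file size in KB * 8 bits per byte / duration in seconds
    static int bitrateKbps(int fileSizeKb, int durationS) {
        return fileSizeKb * 8 / durationS;
    }

    public static void main(String[] args) {
        // A hypothetical 3000 KB file lasting 60 seconds -> 400 kbps
        System.out.println(bitrateKbps(3000, 60) + " kbps");
    }
}
```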

II. Video capture

On a phone, video is captured through the camera, and Android provides several classes for operating it

  • Camera

    Camera is relatively simple to use and supports many operations, such as opening and closing the camera, zooming, and taking pictures. However, Google has deprecated it, and instead recommends using CameraX or Camera2

  • Camera2

    Camera2 adds many features compared to Camera: its API architecture has been greatly reworked, it is more capable, and it can retrieve information about each frame. However, Camera2 is also more complex to use than Camera

  • CameraX

    As a member of the Jetpack family, CameraX is a newer camera framework than Camera2. Google apparently decided that while Camera2 is powerful, it is genuinely cumbersome to use, so it launched CameraX to make camera operations simpler while keeping the functionality

Although Google recommends CameraX or Camera2, this video series uses Camera, purely for simplicity

2.1 Camera

Camera operations can be moved to a worker thread

The first step, of course, is to open the camera

Open the camera

camera = Camera.open(id);

The camera is opened with Camera's static open() method, which takes an int id with two possible values

  • Camera.CameraInfo.CAMERA_FACING_BACK

    The rear camera

  • Camera.CameraInfo.CAMERA_FACING_FRONT

    The front camera

Note that obtaining the Camera instance does not yet start the camera

Next, you need to configure some parameters of the camera, such as the preview size

Get camera parameters

Camera.Parameters parameters = camera.getParameters();

To configure the camera, first call this method, then set the desired values on the returned Parameters object

Get the supported preview sizes

List<Camera.Size> previewSizeList = parameters.getSupportedPreviewSizes();

Calling parameters.getSupportedPreviewSizes() returns the preview sizes supported by the current camera; the preview size we pass in must be one the camera supports

In general, we want the preview to match the screen size, or at least the screen's aspect ratio, so it displays well, so we use an algorithm to find the best size

/**
 * Find the preview size whose aspect ratio best matches the screen.
 */
public static Camera.Size findTheBestSize(List<Camera.Size> sizeList, int screenW, int screenH) {
    if (sizeList == null || sizeList.isEmpty()) {
        throw new IllegalArgumentException();
    }
    Camera.Size bestSize = sizeList.get(0);
    for (Camera.Size size : sizeList) {
        // Camera sizes are landscape, so swap width and height
        // to compare against a portrait screen
        int width = size.height;
        int height = size.width;
        float ratioW = (float) width / screenW;
        float ratioH = (float) height / screenH;
        // Compare with a small tolerance: exact float equality rarely holds
        if (Math.abs(ratioW - ratioH) < 0.01f) {
            bestSize = size;
            break;
        }
    }
    return bestSize;
}

The first parameter is the list of preview sizes supported by the camera, and the next two are the screen width and height

The algorithm iterates over each preview size; if a size's aspect ratio matches the screen's, it is returned. If no match is found, the first size supported by the camera is returned

In this way, an optimal size is obtained
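The aspect-ratio comparison can be checked independently of the Android API. Camera.Size cannot be instantiated off-device, so this hypothetical helper takes plain ints; note the swapped width and height, since camera sizes are landscape while the screen here is portrait:

```java
public class AspectMatch {
    // True if a landscape camera size (sizeW x sizeH) matches
    // a portrait screen's aspect ratio within a small tolerance
    static boolean matchesScreen(int sizeW, int sizeH, int screenW, int screenH) {
        float ratioW = (float) sizeH / screenW; // camera height maps to screen width
        float ratioH = (float) sizeW / screenH; // camera width maps to screen height
        return Math.abs(ratioW - ratioH) < 0.01f;
    }

    public static void main(String[] args) {
        // 1920x1080 camera size vs a 1080x1920 screen: both 16:9 -> match
        System.out.println(matchesScreen(1920, 1080, 1080, 1920));
        // 1280x720 (16:9) vs a 19.5:9 screen (1080x2340) -> no match
        System.out.println(matchesScreen(1280, 720, 1080, 2340));
    }
}
```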

Set the preview size

parameters.setPreviewSize(width, height);

Since the goal of this chapter is to capture video data, we also need to set the format of the camera's preview callback data

Set the preview format

parameters.setPreviewFormat(ImageFormat.NV21);

Set the parameters on the camera

camera.setParameters(parameters);

After configuring the Parameters object, apply it back to the camera

Set the rotation angle

camera.setDisplayOrientation(CameraUtils.getDisplayOrientation(activity, id));

The rotation angle is computed with the CameraUtils.getDisplayOrientation(activity, id) method

/**
 * Get the camera rotation angle.
 */
public static int getDisplayOrientation(Activity activity, int facing) {
    Camera.CameraInfo info = new Camera.CameraInfo();
    Camera.getCameraInfo(facing, info);
    int rotation = activity.getWindowManager().getDefaultDisplay()
            .getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
    }
    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        result = (360 - result) % 360;  // compensate the mirror
    } else {  // back-facing
        result = (info.orientation - degrees + 360) % 360;
    }
    return result;
}
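The orientation math itself is pure arithmetic and can be checked off-device. Here is a sketch of the same formula as a standalone function; the sensor orientations used in the example (90 for a back camera, 270 for a front camera) are typical values, not guaranteed for every device:

```java
public class OrientationCalc {
    // orientation: the camera sensor's CameraInfo.orientation
    // degrees: the current display rotation (0 / 90 / 180 / 270)
    static int displayOrientation(int orientation, int degrees, boolean frontFacing) {
        if (frontFacing) {
            int result = (orientation + degrees) % 360;
            return (360 - result) % 360; // compensate for the mirror
        }
        return (orientation - degrees + 360) % 360;
    }

    public static void main(String[] args) {
        // Typical back camera (sensor orientation 90) on an upright phone
        System.out.println(displayOrientation(90, 0, false));
        // Typical front camera (sensor orientation 270) on an upright phone
        System.out.println(displayOrientation(270, 0, true));
    }
}
```

Both cases come out to 90, which is why portrait apps usually rotate the preview by 90 degrees.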

Set up the data buffer and callback

The callback format for the preview data was previously set to NV21, which is a YUV420 format

Camera provides the addCallbackBuffer and setPreviewCallbackWithBuffer methods for receiving preview data and setting the callback; reusing a buffer this way reduces object allocation

camera.addCallbackBuffer(nv21Data);
camera.setPreviewCallbackWithBuffer(this);
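The buffer handed to addCallbackBuffer must hold exactly one NV21 frame: width × height bytes for the Y plane plus width × height / 2 bytes for the interleaved VU plane, i.e. width × height × 3/2 in total. A sketch of that calculation (1280×720 is just an example size):

```java
public class Nv21Buffer {
    // One NV21 frame: full-resolution Y plane + half-size interleaved VU plane
    static int frameSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        // This is the size nv21Data would be allocated with for a 1280x720 preview
        byte[] nv21Data = new byte[frameSize(1280, 720)];
        System.out.println(nv21Data.length + " bytes per frame");
    }
}
```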

After setting the callback, we need to override the onPreviewFrame method

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Log.d(TAG, "onPreviewFrame: " + nv21Data.length);
    // Hand the buffer back to the camera so it can be reused
    camera.addCallbackBuffer(nv21Data);
    if (fos == null) {
        return;
    }
    try {
        fos.write(nv21Data);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Each time the preview callback delivers data, this method first calls addCallbackBuffer again so that the same buffer object is reused

The data is then written to the file
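Writing raw NV21 straight to disk produces large files quickly, which is one reason video is normally encoded rather than stored raw. A rough sketch of the data rate (the resolution and frame rate are example numbers):

```java
public class RawRate {
    // Bytes written per second of raw NV21 capture:
    // one frame is width * height * 3/2 bytes, written fps times per second
    static long bytesPerSecond(int width, int height, int fps) {
        return (long) width * height * 3 / 2 * fps;
    }

    public static void main(String[] args) {
        // 1280x720 at 30fps: over 40 MB of raw YUV every second
        System.out.println(bytesPerSecond(1280, 720, 30) + " bytes/s");
    }
}
```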

Set a SurfaceTexture and start the preview

A SurfaceTexture must be set before the preview can start, and the preview callback only receives data once previewing begins

try {
    camera.setPreviewTexture(new SurfaceTexture(0));
} catch (IOException e) {
    e.printStackTrace();
}
camera.startPreview();

Here a new SurfaceTexture is set directly, which captures video data without showing a preview

You can also use SurfaceView or TextureView to display a preview

Stop the preview and release resources

When we stop recording, we need to stop previewing and release resources

private void release() {
    if (camera != null) {
        camera.stopPreview();
        camera.release();
        camera = null;
    }
    if (fos != null) {
        try {
            fos.close();
            fos = null;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

III. Source code

YuvRecord

YuvActivity