Recently, a new library joined Google’s Jetpack components: CameraX.

CameraX is the official Jetpack library for camera development and will be maintained and updated by Google. This is really good news for camera developers.

Original address: glumes.com/post/androi…

CameraX introduction

After forking the official sample, I added an OpenGL-rendered black-and-white filter. The repository is here:

Github.com/glumes/came…

The official documentation does not mention how to do OpenGL thread rendering with CameraX. Read on to find out.

For more on CameraX, I recommend watching the Google I/O session video rather than reading the documentation:

www.youtube.com/watch?v=kuv…

As mentioned in the video, many applications are already adopting CameraX, such as Camera360 and Tik Tok.

Brief Introduction to Camera Development

Regarding camera development, I have written related articles before 🤔

Size and orientation issues in Android camera development

Android Camera model and API interface evolution

For a simple working Camera application (Demo level), focus on two aspects: preview and shooting.

Both the preview and the captured image are affected by resolution and orientation. A camera app must provide two sets of resolutions, one for preview and one for capture, so they need to be configured for their respective use cases.

As a rule of thumb, for the best image quality, choose the highest resolution available at the desired aspect ratio.

The preview image is ultimately displayed on an Android Surface, so when choosing the resolution we should match the aspect ratio of the Surface, to avoid the image being stretched by a mismatched ratio.
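As a minimal sketch of this selection logic (a hypothetical helper, not part of the CameraX API — CameraX does this matching internally), picking the largest candidate size that matches the Surface's aspect ratio could look like:

```kotlin
// Hypothetical helper: pick the largest candidate size whose aspect
// ratio matches the target Surface, to avoid a stretched preview.
data class Size(val width: Int, val height: Int)

fun chooseBestSize(candidates: List<Size>, targetW: Int, targetH: Int): Size? {
    val targetRatio = targetW.toDouble() / targetH
    return candidates
        // Keep only sizes whose aspect ratio matches the target (within a tolerance)
        .filter { kotlin.math.abs(it.width.toDouble() / it.height - targetRatio) < 0.01 }
        // Among the matches, take the highest resolution for best quality
        .maxByOrNull { it.width * it.height }
}
```

For a 16:9 Surface this would pick 1920×1080 over 1280×720, and skip 4:3 sizes like 640×480 entirely.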

In addition, if you want to build beauty or filter apps, you need to feed the camera preview image into an OpenGL rendering thread and let OpenGL do all the image-related processing; the camera itself is no longer involved at that point. When a picture is taken, the image content can be obtained either from OpenGL or from the camera, and then processed off-screen with OpenGL ~~~

As for other camera features, such as focus, exposure, white balance, and HDR, not every device supports them, but building them on top of the basics above is not difficult. These are the advanced topics a camera development engineer should master ~~

CameraX development practices

CameraX is currently 1.0.0-alpha01, with the following dependencies:

    // CameraX
    def camerax_version = "1.0.0-alpha01"
    implementation "androidx.camera:camera-core:${camerax_version}"
    implementation "androidx.camera:camera-camera2:${camerax_version}"

CameraX is backward compatible to Android 5.0 (API level 21) and is built on top of the Camera2 API, which resolves the compatibility problems of most phones on the market.

CameraX is much simpler than Camera2: we only care about what we need, and unlike Camera2 we do not have to maintain a capture session ourselves. And because CameraX is tied to Jetpack’s flagship Lifecycle, it knows when the camera should be opened and when it should be released.

Getting started, CameraX focuses on three areas:

  • Image preview
  • Image analysis
  • Image capture

Preview

For preview, image analysis, and capture alike, CameraX uses the builder pattern to construct a Config class. The Config class creates the preview, analyzer, and capture use cases, which are then bound to a lifecycle.

// Apply declared configs to CameraX using the same lifecycle owner
CameraX.bindToLifecycle(this, preview, imageCapture, imageAnalyzer)

The lifecycle owner can be either an Activity or a Fragment.

When unbinding is required:

// Unbinds all use cases from the lifecycle and removes them from CameraX.
 CameraX.unbindAll()

If you’ve read the previous article on size and orientation issues in Android camera development, you’ll be familiar with the preview configuration.

CameraX checks what the current camera supports and selects the closest match for the requested configuration.

fun buildPreviewUseCase(): Preview {
    val previewConfig = PreviewConfig.Builder()
        // Aspect ratio
        .setTargetAspectRatio(aspectRatio)
        // Rotation
        .setTargetRotation(rotation)
        // Resolution
        .setTargetResolution(resolution)
        // Front or rear camera
        .setLensFacing(lensFacing)
        .build()

    // Create the Preview object
    val preview = Preview(previewConfig)
    // Set the listener
    preview.setOnPreviewOutputUpdateListener { previewOutput ->
        // PreviewOutput returns a SurfaceTexture
        cameraTextureView.surfaceTexture = previewOutput.surfaceTexture
    }

    return preview
}

The Preview object is created via the builder pattern, and an OnPreviewOutputUpdateListener callback must be set on it.

The camera preview image stream is delivered through a SurfaceTexture. In the sample project, the SurfaceTexture returned by CameraX replaces the one held by a TextureView, which lets the TextureView display the camera preview.

In addition, the orientation of the device needs to be taken into account: when the device rotates between landscape and portrait, the TextureView must be transformed accordingly.

preview.setOnPreviewOutputUpdateListener { previewOutput ->
    cameraTextureView.surfaceTexture = previewOutput.surfaceTexture

    // Compute the center of preview (TextureView)
    val centerX = cameraTextureView.width.toFloat() / 2
    val centerY = cameraTextureView.height.toFloat() / 2

    // Correct preview output to account for display rotation
    val rotationDegrees = when (cameraTextureView.display.rotation) {
        Surface.ROTATION_0 -> 0
        Surface.ROTATION_90 -> 90
        Surface.ROTATION_180 -> 180
        Surface.ROTATION_270 -> 270
        else -> return@setOnPreviewOutputUpdateListener
    }

    val matrix = Matrix()
    matrix.postRotate(-rotationDegrees.toFloat(), centerX, centerY)

    // Finally, apply transformations to TextureView
    cameraTextureView.setTransform(matrix)
}

The TextureView rotation is applied inside the same OnPreviewOutputUpdateListener callback.

Image analysis

The imageAnalyzer parameter of the bindToLifecycle method is optional.

ImageAnalysis can help us analyze image quality; we need to implement the analyze method of the ImageAnalysis.Analyzer interface.

fun buildImageAnalysisUseCase(): ImageAnalysis {
    // Builder for the analyzer's Config
    val analysisConfig = ImageAnalysisConfig.Builder()
        // Aspect ratio
        .setTargetAspectRatio(aspectRatio)
        // Rotation
        .setTargetRotation(rotation)
        // Resolution
        .setTargetResolution(resolution)
        // Image reader mode
        .setImageReaderMode(readerMode)
        // Image queue depth
        .setImageQueueDepth(queueDepth)
        // Handler (thread) for the callback
        .setCallbackHandler(handler)
        .build()

    // Create the ImageAnalysis object
    val analysis = ImageAnalysis(analysisConfig)

    // setAnalyzer takes an implementation of the Analyzer interface
    analysis.setAnalyzer { image, rotationDegrees ->
        // Image information is available via the ImageProxy methods
        val rect = image.cropRect
        val format = image.format
        val width = image.width
        val height = image.height
        val planes = image.planes
    }

    return analysis
}

The image analyzer configuration also has ImageReaderMode and ImageQueueDepth settings.

ImageQueueDepth specifies the number of images available to the camera pipeline; increasing it affects the camera’s performance and memory usage.

ImageReaderMode has two modes:

  • ACQUIRE_LATEST_IMAGE
    • In this mode, the latest image in the image queue is obtained, and the queue is emptied of existing old images.
  • ACQUIRE_NEXT_IMAGE
    • In this mode, images are obtained one at a time from the queue, in arrival order.
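A way to picture the difference is the following plain-Kotlin simulation (this is an illustration of the queue semantics, not CameraX code — the deque stands in for the image buffer queue in the camera pipeline):

```kotlin
// Simulation of the two reader modes (not the CameraX implementation).

// ACQUIRE_LATEST_IMAGE: take the newest image and discard the stale ones.
fun <T> acquireLatest(queue: ArrayDeque<T>): T? {
    val latest = queue.removeLastOrNull() ?: return null
    queue.clear() // drop the older frames still in the queue
    return latest
}

// ACQUIRE_NEXT_IMAGE: take images one at a time, in arrival order.
fun <T> acquireNext(queue: ArrayDeque<T>): T? = queue.removeFirstOrNull()
```

With frames 1..3 queued, ACQUIRE_LATEST_IMAGE hands back frame 3 and empties the queue, while ACQUIRE_NEXT_IMAGE hands back frame 1 and leaves the rest for later calls.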

In the analyze method, the ImageProxy gives access to image information on which the analysis can be based.

Shooting

Shooting also has a Config builder class. Its parameters are much the same as for preview: aspect ratio, rotation, and resolution, plus a flash mode setting.

fun buildImageCaptureUseCase(): ImageCapture {
    val captureConfig = ImageCaptureConfig.Builder()
        .setTargetAspectRatio(aspectRatio)
        .setTargetRotation(rotation)
        .setTargetResolution(resolution)
        .setFlashMode(flashMode)
        // Capture mode
        .setCaptureMode(captureMode)
        .build()

    // Create an ImageCapture object
    val capture = ImageCapture(captureConfig)
    cameraCaptureImageButton.setOnClickListener {
        // Create a temporary file
        val fileName = System.currentTimeMillis().toString()
        val fileFormat = ".jpg"
        val imageFile = createTempFile(fileName, fileFormat)

        // Store the captured image in the temporary file
        capture.takePicture(imageFile, object : ImageCapture.OnImageSavedListener {
            override fun onImageSaved(file: File) {
                // You may display the image, for example using its path file.absolutePath
            }

            override fun onError(useCaseError: ImageCapture.UseCaseError, message: String, cause: Throwable?) {
                // Display an error message
            }
        })
    }

    return capture
}

In the configuration related to image shooting, there is also a CaptureMode setting.

It has two options:

  • MIN_LATENCY
    • In this mode, the shooting speed is relatively fast, but the image quality is compromised
  • MAX_QUALITY
    • In this mode, the shooting speed is a bit slower, but the image quality is good

OpenGL rendering

The above covers the basic usage of CameraX; what I care more about is how to use CameraX with OpenGL rendering to implement beauty, filter, and other effects.

Recall from the preview section that the setOnPreviewOutputUpdateListener callback returns a SurfaceTexture, and the camera image stream is delivered through it.

To render on an OpenGL thread, first create an EGL-based OpenGL rendering environment, then call SurfaceTexture’s attachToGLContext method to attach the SurfaceTexture to the OpenGL thread.

attachToGLContext takes a texture ID, which must be an OES (GL_TEXTURE_EXTERNAL_OES) texture.

Then draw that texture onto the Surface that OpenGL renders to. This can be seen as two threads running side by side: a camera preview thread and an OpenGL drawing thread.
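A simplified Kotlin sketch of this hand-off (Android framework APIs; it assumes an EGL context has already been made current on the render thread and that a draw loop exists elsewhere — the linked repository has the full version):

```kotlin
// Sketch only: attach the camera's SurfaceTexture to the GL thread.
// Assumes this runs on the render thread with an EGL context current.
fun attachPreviewToGL(surfaceTexture: SurfaceTexture): Int {
    // 1. Generate a texture ID of the OES type required by SurfaceTexture
    val tex = IntArray(1)
    GLES20.glGenTextures(1, tex, 0)
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0])

    // 2. Attach the camera's SurfaceTexture to the current GL context
    surfaceTexture.attachToGLContext(tex[0])
    return tex[0]
}

// Then, once per frame on the same GL thread:
//   surfaceTexture.updateTexImage()          // pull the latest camera frame
//   surfaceTexture.getTransformMatrix(mat)   // texture-coordinate transform
//   ...draw the texture with a shader sampling samplerExternalOES,
//   then swap the EGL buffers to present the frame.
```

Note that updateTexImage must be called on the thread that owns the GL context the texture is attached to, otherwise it will throw.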

If this is not quite clear, I suggest looking at the code at the address provided above:

Github.com/glumes/came…

You can also follow my WeChat official account [paper Talk], which has articles on OpenGL learning and practice ~~~

The expansion of CameraX

If you watched the Google I/O video, you will know about CameraX’s extension mechanism.

The video mentions that Google is also working with Huawei, Samsung, LG, Motorola, and others to expose some of their system camera capabilities, such as HDR.

But considering the current situation, it may be difficult to continue the cooperation with Huawei…

But we can expect more new features from CameraX.

References

  1. www.youtube.com/watch?v=kuv…
  2. Proandroiddev.com/android-cam…

Recommended articles

  1. Understand the sampling and format of YUV

  2. OpenGL EGL usage practices

  3. Notes on OpenGL depth testing and precision values

If you found this article useful, you are welcome to follow and share my WeChat official account [paper shallow talk] to get the latest articles ~~~