Introduction to CameraX

  • CameraX is a Jetpack support library designed to simplify camera application development.
  • It provides a consistent, easy-to-use API surface that works on most Android devices and is backward compatible to Android 5.0 (API level 21).
  • It is built on top of Camera2; essentially it is a wrapper around the Camera2 API (see the dependency sketch below).
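
To use it, the CameraX artifacts are added to the module's Gradle file. A minimal sketch in the Kotlin DSL; the version number is an assumption, so check the CameraX release notes for the current stable release:

dependencies {
    val cameraxVersion = "1.1.0" // assumed; use the latest stable version
    implementation("androidx.camera:camera-core:$cameraxVersion")
    implementation("androidx.camera:camera-camera2:$cameraxVersion")
    implementation("androidx.camera:camera-lifecycle:$cameraxVersion")
    implementation("androidx.camera:camera-view:$cameraxVersion") // provides PreviewView
}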

Simple design ideas

CameraX divides the whole camera workflow into three parts, each corresponding to a use case:

  • Preview
  • ImageAnalysis

Instead of consuming the preview data stream, this use case uses a relatively lightweight ImageReader to read YUV data for computation; a common application scenario is extracting text from the live camera feed

  • ImageCapture

One other thing worth mentioning is the concept of a session, which actually comes from Camera2

  • Session

CameraX hooks into Lifecycle to provide default lifecycle callbacks: the session is (re)opened in onStart (openSession()) and closed in onStop
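
In practice this means that once the use cases are bound to a LifecycleOwner there is no manual session management. A minimal sketch, assuming a fragment (viewLifecycleOwner), an already obtained ProcessCameraProvider and a preview use case:

// Bind to the owner's lifecycle: the session opens when the owner starts and closes when it stops
cameraProvider.bindToLifecycle(viewLifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview)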

Let’s look at how these three use cases are used

Implementation steps

  • Write XML
 <PreviewView
            android:id="@+id/camera_view"
            android:focusable="true"
            android:clickable="true"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

PreviewView is the view used to display the preview; internally it is backed by a Surface (via a SurfaceView or TextureView)

  • Create and configure use cases
private fun createCameraXUseCases() {
    MyLog.d("yyyyyy", "createCameraUseCases start")
    val metrics = DisplayMetrics().also { mBinding.cameraView.display.getRealMetrics(it) }
    val ratio = 1f
    val applyWidthPixels = (metrics.widthPixels * ratio).toInt()
    val applyHeightPixel = (metrics.heightPixels * ratio).toInt()
    // aspectRatio() is a helper that picks the closest standard ratio; see the sketch below
    val screenAspectRatio = aspectRatio(applyWidthPixels, applyHeightPixel)
    val resolution = Size(applyWidthPixels, applyHeightPixel)
    val rotation = (context?.getSystemService(Context.WINDOW_SERVICE) as WindowManager).defaultDisplay.rotation
    // Preview use-case configuration; extensive testing shows this to be the most compatible setup
    mPreview = Preview.Builder()
        .setMaxResolution(resolution)
        // .setTargetAspectRatio(screenAspectRatio)
        .setTargetResolution(resolution)
        .setTargetRotation(rotation)
        .build()
    mImageCapture = ImageCapture.Builder()
        .setMaxResolution(resolution)
        // Try the screen aspect ratio; fixes preview stretching on the SM-J600G
        .setTargetAspectRatio(screenAspectRatio)
        .setTargetRotation(rotation)
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
        .build()
    // Crop so that the captured image matches the preview
    mImageCapture?.setCropAspectRatio(Rational(applyWidthPixels, applyHeightPixel))
    mImageAnalysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .setMaxResolution(resolution)
        // .setTargetAspectRatio(screenAspectRatio)
        .setTargetResolution(resolution)
        .setTargetRotation(rotation)
        .build()
}
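The aspectRatio() helper above is not a CameraX API but the author's own function. A minimal sketch, following the pattern used in Google's official CameraX sample, which returns whichever standard ratio is closer to the screen's ratio:

// Assumes imports: androidx.camera.core.AspectRatio, kotlin.math.abs, kotlin.math.max, kotlin.math.min
private fun aspectRatio(width: Int, height: Int): Int {
    val previewRatio = max(width, height).toDouble() / min(width, height)
    return if (abs(previewRatio - 4.0 / 3.0) <= abs(previewRatio - 16.0 / 9.0)) {
        AspectRatio.RATIO_4_3
    } else {
        AspectRatio.RATIO_16_9
    }
}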
  • Bind the camera
private fun bindCameraXUseCases(owner: LifecycleOwner, selector: CameraSelector, vararg useCases: UseCase?) {
    try {
        val processCameraProvider = mCameraProviderFuture.get() as ProcessCameraProvider
        processCameraProvider.unbindAll()
        mCamera = processCameraProvider.bindToLifecycle(owner, selector, *useCases)
        // This is where the PreviewViewImplementation gets instantiated
        mPreview?.setSurfaceProvider(mBinding.cameraView.surfaceProvider)
    } catch (e: Exception) {
        mCameraSelector = null
        e.printStackTrace()
    }
}
  • Call these functions
private fun startLaunchCamera() {
    ProcessCameraProvider.getInstance(requireContext())
        .also { mCameraProviderFuture = it }
        .addListener(Runnable {
            // Use case creation must happen in this callback
            createCameraXUseCases()
            launchCamera()
        }, ContextCompat.getMainExecutor(requireContext()))
}

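launchCamera() is not shown in the original; a plausible minimal sketch, assuming a fragment and the back camera, wired to the binding helper defined above:

private fun launchCamera() {
    // Assumption: the back camera is used; the selector could equally come from user input
    val selector = CameraSelector.DEFAULT_BACK_CAMERA
    bindCameraXUseCases(viewLifecycleOwner, selector, mPreview, mImageCapture, mImageAnalysis)
}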

Let’s talk about how these steps work

How it works

PreviewView (the "write XML" step)

A PreviewView is a custom View whose only job is to display the camera feed. It has two internal implementations, SurfaceView and TextureView. The default is the more efficient SurfaceView; if the device cannot use a SurfaceView, a TextureView is chosen instead. You can also specify the implementation yourself via PreviewView.setPreferredImplementationMode(ImplementationMode). The logic is shown below:

One conclusion can be drawn from the figure above: when the preferred mode is set to SURFACE_VIEW, PreviewView follows your setting as far as possible (using a SurfaceView); when the preferred mode is set to TEXTURE_VIEW, PreviewView guarantees that a TextureView is always used. The concrete implementation instance (a PreviewViewImplementation, held in mImplementation) is only assigned once Preview.setSurfaceProvider(PreviewView.getSurfaceProvider()) is invoked.
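
A one-line sketch of pinning the preferred mode, to be called before the surface provider is set (the exact enum location and name depend on your camera-view version; newer releases renamed this API to setImplementationMode with PERFORMANCE/COMPATIBLE values):

// Force TextureView, e.g. when the preview needs to be animated or clipped
mBinding.cameraView.setPreferredImplementationMode(PreviewView.ImplementationMode.TEXTURE_VIEW)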

In addition to the above points, PreviewView has two other features that I won’t cover here

  • Camera focus: call the API directly
  • Preview screen drag

Use case configuration and rationale

All three use cases need to be configured, and they share almost the same set of builder-pattern methods. The common methods are shown in the figure below

For each of the three use cases you can set a maximum resolution, a target aspect ratio, a target resolution, and a target rotation angle. CameraX needs this information to calculate the final output resolution and rotation for each use case. The source-level logic is shown below

Preview

Configuring the preview correctly is very important; improper configuration often leads to the following problems

  • Preview distortion, such as stretching
  • The photographed image is not consistent with the preview image

How to configure it depends on your requirements, such as the aspect ratio of the preview box. Our product needs a full-screen preview box, so I used setTargetResolution with the screen resolution as the parameter. The actual calculation happens in SupportedSurfaceCombination.getSupportedOutputSizes(); the calculation logic is diagrammed here

From the figure above we can draw some practical conclusions

  • If you do not set an AspectRatio, CameraX derives the target aspect ratio from the target resolution you supply, and finally sorts the candidate sizes using that aspect ratio as the axis, as shown in the figure below

  • When calculating the appropriate resolution for each use case, CameraX works with the Cartesian product of the use cases' supported output sizes. If the use cases support too many output sizes, that Cartesian product becomes very large, so we can set a max size to keep this space small
  • The final step is to filter this Cartesian product; each element of it is a combination of possible configurations for the use cases, as shown below

These combinations are the basic unit of the filtering process, so the use cases are in fact coupled: even if the preview and image analysis configurations stay unchanged, merely changing the capture use case's target aspect ratio or target resolution may change the output resolution of the preview and the image analysis.

ImageAnalysis

Image analysis acquires a real-time byte stream of frames for analysis. Common scenarios include AR, flower recognition, and on-screen text extraction. The parameters of this use case can be configured the same way as the Preview use case; the main points I want to cover are the following

  • Producer-consumer issues:

Since frames are produced in real time, there is inevitably a producer-consumer problem: each frame's byte stream is the producer, and the analysis of those frames (a time-consuming operation) is the consumer. How do you balance the rate of production against the rate of consumption? Google provides the two common strategies of the producer-consumer model, blocking and non-blocking: the former takes the oldest frame, the latter takes the latest frame, and which to use depends on your needs. This is configured via setBackpressureStrategy, as shown in the following code:

val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    // Configure the mode here: STRATEGY_BLOCK_PRODUCER or STRATEGY_KEEP_ONLY_LATEST
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
  • The image format returned by CameraX image analysis is YUV_420_888, not JPEG, and it is not configurable (there is a hook-style workaround, which matters because YUV_420_888 is poorly supported on some Android 5.0.1 phones; we will come back to it). First, let's use the imageAnalysis instance configured in point 1
imageAnalysis.setAnalyzer(mCameraExecutor, image -> {
    // Get an image of type ImageProxy
    image.close();
});

Following the source code, it is easy to find the following class

final class ImageAnalysisBlockingAnalyzer extends ImageAnalysisAbstractAnalyzer {
    @Override
    public void onImageAvailable(@NonNull ImageReaderProxy imageReaderProxy) {
        // 1. The default is AndroidImageReaderProxy
        ImageProxy image = imageReaderProxy.acquireNextImage();
        if (image == null) {
            return;
        }
        // 2. Analyze the image
        ListenableFuture<Void> analyzeFuture = analyzeImage(image);

        // Callback to close the image only after analysis complete regardless of success
        Futures.addCallback(analyzeFuture, new FutureCallback<Void>() {
            @Override
            public void onSuccess(Void result) {
                // No-op. Keep blocking the image reader until user closes the current one.
            }

            @Override
            public void onFailure(Throwable t) {
                image.close();
            }
        }, CameraXExecutors.directExecutor());
    }
}
  • At note 1, ImageReaderProxy is an interface with four implementations in CameraX, as shown below

Which implementation is used is decided in createPipeline() in ImageAnalysis.java, which contains the following key code

 SessionConfig.Builder createPipeline(@NonNull String cameraId,
            @NonNull ImageAnalysisConfig config, @NonNull Size resolution) {
        // ...
        // 1. Determine the maximum number of imageReader queues to be cached
        int imageQueueDepth =
                getBackpressureStrategy() == STRATEGY_BLOCK_PRODUCER ? getImageQueueDepth()
                        : NON_BLOCKING_IMAGE_DEPTH;
        SafeCloseImageReaderProxy imageReaderProxy;
        // 2. Determine which ImageReaderProxy to use
        if (config.getImageReaderProxyProvider() != null) {
            imageReaderProxy = new SafeCloseImageReaderProxy(
                    config.getImageReaderProxyProvider().newInstance(
                            resolution.getWidth(), resolution.getHeight(), getImageFormat(),
                            imageQueueDepth, 0));
        } else {
            // 3. Otherwise fall back to the default reader
            imageReaderProxy =
                    new SafeCloseImageReaderProxy(ImageReaderProxys.createIsolatedReader(
                            resolution.getWidth(),
                            resolution.getHeight(),
                            getImageFormat(),
                            imageQueueDepth));
        }
        // ...
        return sessionConfigBuilder;
    }


Note 1 determines the maxImages parameter for the ImageReader; it can be configured on the use case via setImageQueueDepth(n: Int). Note 2 decides which ImageReaderProxy is used: you can supply your own ImageReaderProxy implementation here, and this is where the hook workaround I mentioned earlier lives. So the configuration could look like this

val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    // Configure the mode here: STRATEGY_BLOCK_PRODUCER or STRATEGY_KEEP_ONLY_LATEST
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .setImageQueueDepth(4)
    .setImageReaderProxyProvider(object : ImageReaderProxyProvider {
        override fun newInstance(width: Int, height: Int, format: Int, queueDepth: Int, usage: Long): ImageReaderProxy {
            // Return your own ImageReaderProxy here, e.g. one that hooks the image format
            TODO("Provide a custom ImageReaderProxy implementation")
        }
    }).build()

Of course, if you don't provide a custom ImageReaderProxy, the code at note 3 defaults to AndroidImageReaderProxy, a class that re-exposes all of ImageReader's functionality using the simplest proxy pattern. The other proxy implementations are used by the capture use case

Now let's go back to the ImageAnalysisBlockingAnalyzer code:

final class ImageAnalysisBlockingAnalyzer extends ImageAnalysisAbstractAnalyzer {
    @Override
    public void onImageAvailable(@NonNull ImageReaderProxy imageReaderProxy) {
        // 1. The default is AndroidImageReaderProxy
        ImageProxy image = imageReaderProxy.acquireNextImage();
        if (image == null) {
            return;
        }
        // 2. Analyze the image
        ListenableFuture<Void> analyzeFuture = analyzeImage(image);

        // Callback to close the image only after analysis complete regardless of success
        Futures.addCallback(analyzeFuture, new FutureCallback<Void>() {
            @Override
            public void onSuccess(Void result) {
                // No-op. Keep blocking the image reader until user closes the current one.
            }

            @Override
            public void onFailure(Throwable t) {
                image.close();
            }
        }, CameraXExecutors.directExecutor());
    }
}

Note 2 marks the callback invoked once an image is obtained. The image type here is ImageProxy, whose default implementation is, correspondingly, AndroidImageProxy (which uses the decorator pattern). After analyzeImage runs, what is finally handed back to the developer is a SettableImageProxy, as shown in the following code

ListenableFuture<Void> analyzeImage(ImageProxy imageProxy) {
    // ...
    future = CallbackToFutureAdapter.getFuture(completer -> {
        executor.execute(() -> {
            if (!isClosed()) {
                // Collect the metadata for this frame
                ImageInfo imageInfo = ImmutableImageInfo.create(
                        imageProxy.getImageInfo().getTagBundle(),
                        imageProxy.getImageInfo().getTimestamp(),
                        mRelativeRotation);
                // Hand the frame to the developer's analyzer, wrapped in a SettableImageProxy
                analyzer.analyze(new SettableImageProxy(imageProxy, imageInfo));
                completer.set(null);
            } else {
                completer.setException(new OperationCanceledException("Closed before analysis"));
            }
        });
        return "analyzeImage";
    });
    // ...
    return future;
}

SettableImageProxy carries extra ImageInfo compared with AndroidImageProxy, and developers will need that information

  • Using the returned Image (see the sketch after this list)
    • Cropping: use ImageProxy.setCropRect(Rect rect); setCropRect is also what CameraX uses internally
    • Converting to a bitmap/byte stream: the CameraX source provides a utility class for this, androidx.camera.core.internal.utils.ImageUtil
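
A minimal sketch of both uses inside an analyzer. Note that ImageUtil is an internal CameraX class (annotated @RestrictTo), so relying on it is an assumption made at your own risk:

// Assumes imports: android.graphics.Rect, androidx.camera.core.ImageProxy,
// androidx.camera.core.internal.utils.ImageUtil
imageAnalysis.setAnalyzer(mCameraExecutor) { image: ImageProxy ->
    // Cropping: restrict the region downstream consumers should care about
    image.setCropRect(Rect(0, 0, image.width / 2, image.height / 2))
    // Conversion: turn the frame into a JPEG byte array via the internal utility
    val jpegBytes = ImageUtil.imageToJpegByteArray(image)
    // ... consume jpegBytes ...
    image.close()
}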
Taking pictures
Usage
// Use the previously constructed use case
mImageCapture?.takePicture(mCameraExecutor, object : OnImageCapturedCallback() {
    override fun onCaptureSuccess(image: ImageProxy) {
        // As with the image analysis use case, an ImageProxy is returned
    }

    override fun onError(exception: ImageCaptureException) {
        super.onError(exception)
    }
})

Two notes:

  • If you want YUV_420_888 output, the effect on capture speed is quite noticeable, because producing the raw byte stream is relatively fast. You can configure this via setBufferFormat, but it carries some risk.
  • There is also the mCameraExecutor parameter, a thread-pool executor on which the time-consuming work inside takePicture runs.
Cropping

In addition, to keep the preview and the captured photo consistent, cropping needs to be configured as follows

  // Crop so that the captured image matches the preview
mImageCapture.setCropAspectRatio(Rational(applyWidthPixels, applyHeightPixel))

This means that if the captured image comes back at a resolution different from the one you set, it is cropped to the same aspect ratio.

Rotation

Because camera hardware varies, on some phones the captured image may not match the orientation you asked for (the target rotation in your use-case configuration). That's fine: the image CameraX returns carries how many degrees you should rotate it to reach the desired orientation, as shown in the following code:

val bitmap = BitmapUtils
    .decodeByteArrayInBitmap(ImageUtil.imageToJpegByteArray(image), sCameraBitmap, inSampleSize)
    .rotateBitmap(image.imageInfo.rotationDegrees.toFloat())

The rotation comes from the image metadata: image.imageInfo.rotationDegrees.
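
rotateBitmap above is the author's own extension function, not a CameraX or Android API. A minimal sketch of what such a helper might look like, using android.graphics.Matrix:

// Assumes imports: android.graphics.Bitmap, android.graphics.Matrix
fun Bitmap.rotateBitmap(degrees: Float): Bitmap {
    if (degrees == 0f) return this
    val matrix = Matrix().apply { postRotate(degrees) }
    // Creates a new bitmap rotated by the requested number of degrees
    return Bitmap.createBitmap(this, 0, 0, width, height, matrix, true)
}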