Preface

I have always wanted to study image processing and build some simple camera preview and photo-taking features, learning CameraX along the way. Hence this article: CameraX + OpenGL.

The code in this article uses CameraX version 1.0.0-rc03, with an OPPO R15 as the test device.

Using CameraX

For an introduction to using CameraX, see the official documentation or Getting Started with CameraX, which cover it in more detail. Since configuring CameraX is not complex, this article simply posts the code for this part, paving the way for the OpenGL extension later.

Configure a rear-camera preview with a resolution of 640 * 480:

// MainActivity.kt
private fun setUpCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        val preview: Preview = Preview.Builder()
            .setTargetResolution(Size(480, 640))
            .setTargetRotation(this.display.rotation)
            .build()

        // Used when taking photos
        val imageCapture = ImageCapture.Builder()
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .setTargetRotation(this.display.rotation)
            .build()

        preview.setSurfaceProvider(previewView.surfaceProvider)

        cameraProvider.unbindAll()

        val camera = cameraProvider.bindToLifecycle(
            this as LifecycleOwner,
            CameraSelector.DEFAULT_BACK_CAMERA,
            imageCapture,
            preview)

        // Used to control the flash, switch cameras, etc.
        val cameraInfo = camera.cameraInfo
        val cameraControl = camera.cameraControl
    }, ContextCompat.getMainExecutor(this))
}

The preview stream is bound to the view at this call:

preview.setSurfaceProvider(previewView.surfaceProvider)

PreviewView is provided by Jetpack; it is the official wrapper that handles camera-preview compatibility for developers:

<androidx.camera.view.PreviewView
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintRight_toRightOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

Looking at its source, PreviewView internally implements Preview.SurfaceProvider on top of a SurfaceView or TextureView.

Back to the subject of this article: camera preview with OpenGL, which means rendering the preview frames with OpenGL. The key is to implement the Preview.SurfaceProvider interface ourselves, connecting OpenGL with CameraX.

What does OpenGL do in each lifecycle?

OpenGL initialization

init {
    // Use OpenGL ES 2.0
    setEGLContextClientVersion(2)
    setRenderer(cameraRender)
    // Only draw when requestRender() is called
    renderMode = RENDERMODE_WHEN_DIRTY
}

Let's take a look at OpenGL's lifecycle: what is done in each of the GLSurfaceView.Renderer callbacks?

public interface Renderer {
    void onSurfaceCreated(GL10 gl, EGLConfig config);

    void onSurfaceChanged(GL10 gl, int width, int height);

    void onDrawFrame(GL10 gl);
}

onSurfaceCreated

A SurfaceTexture needs to be created in onSurfaceCreated using OpenGL's API; it is what will be drawn later.

override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
    gl?.let {
        it.glGenTextures(textures.size, textures, 0)
        surfaceTexture = SurfaceTexture(textures[0])
        screenFilter = ScreenFilter(context)
    }
}
  • textures[0] serves as the unique identifier of the OpenGL texture.
  • A SurfaceTexture is created from that texture.
  • ScreenFilter encapsulates the binding logic between OpenGL and the GLSL scripts. Its initialization allocates memory for the vertex and texture coordinates, creates the vertex- and fragment-shader program, and looks up the variable locations inside the GLSL.
// ScreenFilter.kt
init {
    // 4 vertices * 2 floats * 4 bytes = 32 bytes
    vertexBuffer = ByteBuffer.allocateDirect(4 * 4 * 2)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
    vertexBuffer.clear()
    vertexBuffer.put(VERTEX)

    textureBuffer = ByteBuffer.allocateDirect(4 * 2 * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
    textureBuffer.clear()
    textureBuffer.put(TEXTURE)

    val vertexShader = OpenGLUtils.readRawTextFile(context, R.raw.camera_vert)
    val textureShader = OpenGLUtils.readRawTextFile(context, R.raw.camera_frag)

    program = OpenGLUtils.loadProgram(vertexShader, textureShader)

    vPosition = GLES20.glGetAttribLocation(program, "vPosition")
    vCoord = GLES20.glGetAttribLocation(program, "vCoord")
    vTexture = GLES20.glGetUniformLocation(program, "vTexture")
    vMatrix = GLES20.glGetUniformLocation(program, "vMatrix")
}

onSurfaceChanged

In onSurfaceChanged, since the width and height are now determined, we can start the camera preview mentioned earlier and set the OpenGL viewport.

override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
    setUpCamera()
    gl?.glViewport(0, 0, width, height)
}

onDrawFrame

onDrawFrame, as the name implies, draws the latest preview frame with OpenGL.

override fun onDrawFrame(gl: GL10?) {
    val surfaceTexture = this.surfaceTexture
    if (gl == null || surfaceTexture == null) return
    gl.glClearColor(0f, 0f, 0f, 0f)
    gl.glClear(GLES20.GL_COLOR_BUFFER_BIT)
    surfaceTexture.updateTexImage()
    screenFilter?.onDrawFrame(textures[0])
}

// ScreenFilter.kt
fun onDrawFrame(textureId: Int): Int {
    // Use the shader program
    GLES20.glUseProgram(program)
    // Pass values to the shader program
    // Vertex coordinate data
    vertexBuffer.position(0)
    GLES20.glVertexAttribPointer(vPosition, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer)
    // Enable the attribute
    GLES20.glEnableVertexAttribArray(vPosition)
    // Texture coordinate data
    textureBuffer.position(0)
    GLES20.glVertexAttribPointer(vCoord, 2, GLES20.GL_FLOAT, false, 0, textureBuffer)
    GLES20.glEnableVertexAttribArray(vCoord)

    // Bind the sampler in the fragment shader
    // Activate texture unit 0
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
    // Bind the image data (an external OES texture)
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId)
    // Point the sampler at texture unit 0
    GLES20.glUniform1i(vTexture, 0)

    // With all parameters passed, tell OpenGL to start drawing
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)

    // Unbind
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0)
    return textureId
}
  • surfaceTexture.updateTexImage(): updates to the latest frame in the image stream.
  • screenFilter?.onDrawFrame(textures[0]): passes the parameters (including the latest frame's texture) to the shader program and tells it to draw.

Conclusion:

  • onSurfaceCreated: create the SurfaceTexture, bind it to the OpenGL environment, and initialize the shader program.
  • onSurfaceChanged: initialize the camera (PS: this could also be done earlier) and set the OpenGL viewport (PS: the width and height could also be saved here and applied later when drawing).
  • onDrawFrame: refresh the SurfaceTexture and draw it with OpenGL.

SurfaceTexture is bound with Preview

Back to implementing the Preview.SurfaceProvider interface, which binds the camera's Preview output. The next step is to bind the SurfaceTexture created in onSurfaceCreated to the Preview, so that it always holds the live preview frame. PS: this is similar to using a TextureView as the camera preview.

Implement Preview.SurfaceProvider and override onSurfaceRequested:

override fun onSurfaceRequested(request: SurfaceRequest) {
    surfaceTexture?.setOnFrameAvailableListener(this)
    val surface = Surface(surfaceTexture)
    request.provideSurface(surface, executor) {
        surface.release()
        surfaceTexture?.release()
    }
}

The surfaceTexture here is the one created in onSurfaceCreated. From this point on, CameraX binds the camera's Preview output to the SurfaceTexture, and preview-frame updates are reflected in it.

We also set the SurfaceTexture's OnFrameAvailableListener here, so that whenever a new preview frame arrives, OpenGL redraws promptly:

override fun onFrameAvailable(surfaceTexture: SurfaceTexture?) {
    cameraView.requestRender()
}

Shaders

  • Vertex shader
// Vertex coordinates
attribute vec4 vPosition;
// Texture coordinates
attribute vec4 vCoord;
// Texture coordinate passed to the fragment shader
varying vec2 aCoord;

void main() {
    gl_Position = vPosition;
    aCoord = vCoord.xy;
}
  • Fragment shader
#extension GL_OES_EGL_image_external : require
// Required when sampling a SurfaceTexture (external OES texture)
// Default precision for float data
precision mediump float;
// Texture coordinate of the sampling point
varying vec2 aCoord;
// The external OES sampler
uniform samplerExternalOES vTexture;

void main() {
    // texture2D: sample the pixel value at aCoord
    gl_FragColor = texture2D(vTexture, aCoord);
}

At this point the preview works and looks like the image below, with two obvious problems:

  1. The preview is rotated incorrectly: the camera outputs landscape frames by default, and the preview does not rotate them.
  2. The preview resolution is low.

Preview screen rotation

Image rotation can be achieved by changing the texture coordinates; a simple and crude way is to reorder them. The more common approach, however, is to use the transform matrix obtained from the SurfaceTexture.
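As an illustration of the "simple and crude" option: rotating the sampled image by 90° amounts to permuting the four texture-coordinate pairs one step around the quad's perimeter. The strip order and helper below are illustrative, not taken from the demo:

```kotlin
// Texture coordinates of a full-screen quad in GL_TRIANGLE_STRIP order:
// index 0 = bottom-left, 1 = bottom-right, 2 = top-left, 3 = top-right.
val TEXTURE = floatArrayOf(
    0f, 1f,  // bottom-left
    1f, 1f,  // bottom-right
    0f, 0f,  // top-left
    1f, 0f   // top-right
)

// Rotate the mapping 90 degrees by moving each (s, t) pair one step around
// the perimeter (the direction depends on the coordinate convention in use).
fun rotate90(c: FloatArray): FloatArray {
    require(c.size == 8) { "expected 4 (s, t) pairs" }
    // new[i] takes old[from[i]], expressed in strip-order indices
    val from = intArrayOf(1, 3, 0, 2)
    val out = FloatArray(8)
    for (i in 0..3) {
        out[2 * i] = c[2 * from[i]]
        out[2 * i + 1] = c[2 * from[i] + 1]
    }
    return out
}

fun main() {
    // Applying the permutation four times restores the original mapping
    var c = TEXTURE
    repeat(4) { c = rotate90(c) }
    println(c.contentEquals(TEXTURE))  // true
}
```

The reordered array would simply be uploaded to textureBuffer instead of TEXTURE; the drawback is that each rotation case must be hard-coded, which is why the transform matrix is preferred.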

In onDrawFrame, the transform matrix can be obtained after updateTexImage. In the vertex shader, the matrix is multiplied with the texture coordinate and the result is passed to the fragment shader:

// onDrawFrame
surfaceTexture.updateTexImage()
surfaceTexture.getTransformMatrix(textureMatrix)
screenFilter?.setTransformMatrix(textureMatrix)

// ScreenFilter#onDrawFrame
GLES20.glUniformMatrix4fv(vMatrix, 1, false, mtx, 0)

Vertex shaders:

// Vertex coordinates
attribute vec4 vPosition;
// Texture coordinates
attribute vec4 vCoord;
// Transform matrix from the SurfaceTexture
uniform mat4 vMatrix;
// Texture coordinate passed to the fragment shader
varying vec2 aCoord;

void main() {
    gl_Position = vPosition;
    aCoord = (vMatrix * vec4(vCoord.x, vCoord.y, 1.0, 1.0)).xy;
}
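To see what that multiplication does, here is the same operation in plain Kotlin. GLSL matrices are column-major, and a typical SurfaceTexture transform simply flips the t axis (t' = 1 - t). The matrix values below are a hand-written example, not output captured from a device:

```kotlin
// Column-major 4x4 matrix * vec4, the same as GLSL's (vMatrix * v).
fun transform(m: FloatArray, v: FloatArray): FloatArray {
    val out = FloatArray(4)
    for (row in 0..3) {
        for (col in 0..3) {
            out[row] += m[col * 4 + row] * v[col]
        }
    }
    return out
}

// A common SurfaceTexture transform: flip vertically, t' = 1 - t.
val flipT = floatArrayOf(
    1f,  0f, 0f, 0f,  // column 0
    0f, -1f, 0f, 0f,  // column 1
    0f,  0f, 1f, 0f,  // column 2
    0f,  1f, 0f, 1f   // column 3 (translation)
)

fun main() {
    val coord = transform(flipT, floatArrayOf(0.25f, 0.75f, 1f, 1f))
    println("${coord[0]}, ${coord[1]}")  // 0.25, 0.25
}
```

A rotation case works the same way: the matrix swaps and/or negates the s and t components, and the shader only keeps the resulting xy.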

After rotation:

Preview resolution adjustment

The Preview.SurfaceProvider documentation shows sample code using SurfaceTexture together with OpenGL, and I found an example of a CameraX + OpenGL preview on Android Code Search. There, SurfaceTexture#setDefaultBufferSize is called to set the buffer size. So we have:

override fun onSurfaceRequested(request: SurfaceRequest) {
    // request.resolution is the preview resolution set earlier (640 * 480)
    val resetTexture = resetPreviewTexture(request.resolution) ?: return
    val surface = Surface(resetTexture)
    request.provideSurface(surface, executor) {
        surface.release()
        surfaceTexture?.release()
    }
}

@WorkerThread
private fun resetPreviewTexture(size: Size): SurfaceTexture? {
    return this.surfaceTexture?.let { surfaceTexture ->
        surfaceTexture.setOnFrameAvailableListener(this)
        surfaceTexture.setDefaultBufferSize(size.width, size.height)
        surfaceTexture
    }
}

Final effect (PS: the improvement in clarity may not be obvious from a screenshot).

A note on aspect ratio

One other point worth noting: since the preview resolution is 4:3, the GLSurfaceView should also keep a 4:3 aspect ratio, which can be set in onMeasure or in the XML layout. Otherwise the picture will look stretched.
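Reduced to the pure measurement math (the helper name is mine; in a real view this would live inside onMeasure), keeping the ratio for the portrait-oriented 640 * 480 preview looks like:

```kotlin
// Given the measured width, compute the height that preserves the preview's
// aspect ratio so the image is not stretched. A 640 * 480 (4:3) frame shown
// in portrait needs a view whose width : height is 3 : 4.
fun heightForWidth(width: Int, ratioW: Int = 3, ratioH: Int = 4): Int =
    width * ratioH / ratioW

fun main() {
    println(heightForWidth(1080))  // 1440
}
```

In onMeasure, the result would be passed to setMeasuredDimension(width, heightForWidth(width)).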

Wrapping up

To summarize: this article used CameraX together with OpenGL for the camera preview; the Camera/Camera2 APIs would work just as well. The key points are obtaining the preview frames and binding them for rendering. Finally, the demo: xcyoung/Opengl-camera.

Reference articles

The official CameraX documentation

Getting started with CameraX (taking photos, storing and displaying them, switching front/rear cameras, flashlight, flash, pinch-to-zoom, double-tap zoom)

CameraX and OpenGL fusion (rendering CameraX preview data with OpenGL)

TikTok split-screen effect: CameraX combined with OpenGL

OpenGL from Getting Started to Giving Up, part 3: showing the camera preview with OpenGL