Welcome to follow the public account Sumsmile: focused on image processing and mobile development.

SurfaceTexture in detail (SurfaceTexture, camera preview, OpenGL integration, EGL, framebuffer, shader)

The effect of the demo:

The algorithm is adapted from a Shadertoy effect [1].

SurfaceTexture introduction

SurfaceTexture, SurfaceView, GLSurfaceView, TextureView

SurfaceTexture is easily confused with three related classes if you are not careful. Remember that SurfaceTexture cannot be displayed directly; think of it simply as "an intermediate tool for producing textures", usually used in combination with other views or functional modules.
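A minimal sketch of that role (texId here is assumed to be a texture name already created with glGenTextures() on a thread holding a GL context):

// The SurfaceTexture consumes frames into the GL texture behind texId.
SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
// Wrapping it in a Surface gives producers (Camera, MediaPlayer, MediaCodec, ...)
// something to draw into.
Surface producerSurface = new Surface(surfaceTexture);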

Note the difference between SurfaceView and TextureView

All views in a window are eventually composited into one image and drawn on the screen, except SurfaceView, which has its own independent window. SurfaceView therefore performs better, but it is not as convenient to operate as TextureView and does not support transformations such as rotation and translation.
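For example, a TextureView's content can be transformed with an arbitrary matrix, which SurfaceView cannot do. A minimal sketch:

// Rotate a TextureView's content around its center; SurfaceView has no equivalent.
void rotatePreview(TextureView textureView, float degrees) {
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees, textureView.getWidth() / 2f, textureView.getHeight() / 2f);
    textureView.setTransform(matrix);
}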

For details, refer to the developer training materials [3], the SurfaceFlinger write-up [4], and the analysis of SurfaceView's implementation principle [2].

SurfaceView vs. TextureView

For details on the differences, see SurfaceView, GLSurfaceView, SurfaceTexture, and TextureView [5].

How SurfaceTexture works

The Android developer documentation gives a brief description of the SurfaceTexture workflow:

Images captured by the camera are processed through SurfaceTexture and handed to OpenGL ES; after GL processing they are sent to a SurfaceView for display and finally composited by SurfaceFlinger. Alternatively, after OpenGL ES processing, frames can be sent to MediaCodec and encoded into a video file.

The core logic of SurfaceTexture is extracted in the figure below. SurfaceTexture maintains a queue that stores image buffers, converts data from the Camera, a decoder, and other sources into textures, and hands them to OpenGL for processing. This is a typical producer-consumer pattern.
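A minimal sketch of that producer-consumer handshake (glSurfaceView here is illustrative, assuming a GLSurfaceView in RENDERMODE_WHEN_DIRTY): the producer enqueues a buffer, onFrameAvailable() fires, and the GL thread later dequeues it with updateTexImage().

// The callback may run on any thread, so it must not touch GL directly;
// it only schedules a render pass on the GL thread.
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        glSurfaceView.requestRender(); // GL thread will call updateTexImage() in onDrawFrame()
    }
});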

Note: the key to handling this texture in a GL environment is GLES11Ext.GL_TEXTURE_EXTERNAL_OES. So what is TEXTURE_EXTERNAL_OES?

As shown in the figure above, image data captured by the camera and similar sources cannot be used directly in OpenGL; it must go through the extension-texture path, which under the hood is converted via EGLImageKHR, work done by EGL. My understanding is that the images are produced outside the GL thread, so an ordinary texture could not be shared with the OpenGL thread. In addition, the captured image data is in YUV format and needs to be converted to ordinary RGB.

See the EGLImage write-up [7] for details on EGLImageKHR.
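A sketch of creating such an external texture on the GL thread; note that the target is GLES11Ext.GL_TEXTURE_EXTERNAL_OES rather than GL_TEXTURE_2D:

int createOesTexture() {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    // External textures support only non-mipmap filtering and clamp-to-edge wrapping.
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    return tex[0];
}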

SurfaceTexture code analysis

The code analysis below references "Talking about Android SurfaceTexture" [8].

Take a look at the SurfaceTexture code to get a feel for it.

The SurfaceTexture constructor calls nativeInit to initialize:

// Constructor.
// singleBufferMode: whether to use a single buffer; defaults to false.
public SurfaceTexture(int texName, boolean singleBufferMode) {
    mCreatorLooper = Looper.myLooper();
    mIsSingleBuffered = singleBufferMode;
    // nativeInit is a native method
    nativeInit(false, texName, singleBufferMode, new WeakReference<SurfaceTexture>(this));
}

The focus is the nativeInit method, which creates the producer and the consumer and sets up the image BufferQueue. Calling updateTexImage reads data from the buffer queue and binds it into the texture memory behind the texture ID.

// frameworks\base\core\jni\SurfaceTexture.cpp
// texName: the texture name created by the application
// weakThiz: a weak reference to the SurfaceTexture object
static void SurfaceTexture_init(JNIEnv* env, jobject thiz, jboolean isDetached, jint texName, jboolean singleBufferMode, jobject weakThiz)
{
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    // Create the IGraphicBufferProducer and IGraphicBufferConsumer
    BufferQueue::createBufferQueue(&producer, &consumer);

    if (singleBufferMode) {
        consumer->setMaxBufferCount(1);
    }

    sp<GLConsumer> surfaceTexture;
    // isDetached is false here
    if (isDetached) {
        ...
    } else {
        // Wrap the consumer and texName into a GLConsumer object, surfaceTexture
        surfaceTexture = new GLConsumer(consumer, texName,
                GL_TEXTURE_EXTERNAL_OES, true, !singleBufferMode);
    }
    ...
    // ctx->onFrameAvailable() is triggered when frame data arrives
    surfaceTexture->setFrameAvailableListener(ctx);
    SurfaceTexture_setFrameAvailableListener(env, thiz, ctx);
}


That's enough theory for now. Let's put it into practice, combine what we've learned so far, and implement a dynamic effect.

SurfaceTexture preview + GL effects

Complete project: CameraDemo [9]

github.com/ChinaZeng/O…

Note: replace the logic in fragment_shader_screen.glsl yourself; by default it applies no effect.

Code structure:

Code description:

Referring to the figure above, here are some explanations to help you understand.

CameraEglSurfaceView

CameraEglSurfaceView inherits from GLSurfaceView and serves as the default on-screen render target.
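As a sketch of what such a subclass typically looks like (the constructor wiring and renderer construction here are illustrative, not the demo's exact code):

public class CameraEglSurfaceView extends GLSurfaceView {
    public CameraEglSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        setEGLContextClientVersion(2);          // request an OpenGL ES 2.0 context
        setRenderer(new CameraRender(context)); // hypothetical renderer construction
        setRenderMode(RENDERMODE_WHEN_DIRTY);   // draw only when a new frame arrives
    }
}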

CameraFboRender

Creates the surfaceTexture and binds the texture ID cameraRenderTextureId in the renderer.


In the onSurfaceListener() callback, the camera preview is started, and the generated preview data is bound to the surfaceTexture.
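A sketch of that binding with the legacy android.hardware.Camera API (the demo's own wiring may differ in detail):

void startPreview(SurfaceTexture surfaceTexture) throws IOException {
    Camera camera = Camera.open();            // open the default back camera
    camera.setPreviewTexture(surfaceTexture); // preview frames feed the BufferQueue
    camera.startPreview();
}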


The onDrawFrame() callback updates the latest frame into cameraRenderTextureId.

First draw: call the OpenGL ES API to draw the cameraRenderTextureId texture into the FrameBuffer, which actually draws into the texture attached to the FrameBuffer (fboTextureId). Note that this is a pass-through draw: nothing special is done in the shader. If you have a creative idea, you can modify this shader to realize it. The FBO setup is sketched below.
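A sketch of creating that FrameBuffer and attaching fboTextureId as its color buffer (the size parameters are illustrative):

int createFbo(int fboTextureId, int width, int height) {
    int[] fbo = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);

    // Allocate storage for the color attachment and attach it.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTextureId);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    // Non-mipmap filtering, otherwise the attachment is incomplete.
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, fboTextureId, 0);

    if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
            != GLES20.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("FrameBuffer is not complete");
    }
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default target
    return fbo[0];
}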

The frames sit in the SurfaceTexture's BufferQueue and are latched into the texture by a call to updateTexImage(), as sketched below.
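A sketch of the consumer side inside onDrawFrame() (field and variable names are illustrative); getTransformMatrix() matters because camera buffers are usually rotated or flipped relative to screen coordinates:

private final float[] stMatrix = new float[16];

@Override
public void onDrawFrame(GL10 gl) {
    surfaceTexture.updateTexImage();             // latch the newest buffer into the OES texture
    surfaceTexture.getTransformMatrix(stMatrix); // apply this to the texture coordinates
    // ... bind the program, upload stMatrix as a uniform, then draw ...
}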

So there are two draws here: the first into the FrameBuffer (off-screen rendering) and the second onto the GLSurfaceView. This demonstrates that you can take a texture and render it into different buffers.

In fact, you can chain as many FrameBuffer passes in the middle as you like, like coupling train cars, and each pass can apply its own processing (see the sketch below). This is the core logic of GPUImage: it chains different effect filters together so you can assemble any cool effect like building blocks, which is great for decoupling and reuse. Of course, splitting effects into separate filters also has a performance cost: each extra filter means one extra draw.
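A sketch of that train-car chaining; FilterPass is a hypothetical wrapper (one shader program plus one FBO/texture pair), not a class from the demo:

int applyFilterChain(List<FilterPass> passes, int inputTextureId) {
    int tex = inputTextureId;
    for (FilterPass pass : passes) {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, pass.fboId());
        pass.draw(tex);               // one extra draw call per filter: the cost noted above
        tex = pass.outputTextureId(); // this pass's output feeds the next pass
    }
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the on-screen target
    return tex;                       // final texture for the on-screen draw
}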

CameraRender

For the second draw, the fboTextureId produced by the first draw is used as input and drawn to the default target, the CameraEglSurfaceView, so the rendered effect becomes visible.

Now add the special effects. The shader implementation is stored in the project's raw resource directory and can easily be read with the Android resource APIs, as sketched below.
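A minimal sketch of that resource read (the resource name is illustrative):

String readRawShader(Context context, int rawResId) {
    StringBuilder sb = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(context.getResources().openRawResource(rawResId)))) {
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    return sb.toString();
}
// Usage: String fragSrc = readRawShader(context, R.raw.fragment_shader_screen);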

The core logic is in the fragment shader; the GLSL implementation references: www.shadertoy.com/view/7dXXWs [1]

Porting it into your own project requires some modifications.

Note that the Shadertoy implementation is based on OpenGL ES 3.0; migrating to ES 2.0 involves a few API differences (a full ES 2.0 fragment shader for an external texture is sketched a little further below):

  • FragColor (3.0) → gl_FragColor (2.0)
  • no in/out qualifiers in 2.0 (use attribute/varying instead)
  • texture() (3.0) → texture2D() (2.0)
  • …

GLES external textures must be declared in the shader:

// declare the extension
#extension GL_OES_EGL_image_external : require
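Putting the pieces together, a minimal ES 2.0 fragment shader for an external texture might look like this (held here as a Java string; the varying and uniform names are illustrative):

static final String EXTERNAL_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +               // ES 2.0 uses varying, not in/out
        "uniform samplerExternalOES sTexture;\n" +  // sampler type for external textures
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" + // texture2D + gl_FragColor in ES 2.0
        "}\n";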

I'll leave the remaining code details for you, dear readers, to analyze. Read the code patiently and I believe you will gain a lot from it.


References

[1] Shadertoy effect: www.shadertoy.com/view/7dXXWs

[2] SurfaceView implementation principle analysis: juejin.cn/post/684490…

[3] Developer training: google-developer-training.github.io/android-dev…

[4] SurfaceFlinger details: www.cnblogs.com/blogs-of-lx…

[5] SurfaceView, GLSurfaceView, SurfaceTexture, TextureView: blog.csdn.net/qq_34712399…

[6] Android SurfaceTexture: source.android.com/devices/gra…

[7] EGLImage in SurfaceFlinger: waynewolf.github.io/2014/05/17/…

[8] Talking about Android SurfaceTexture: mp.weixin.qq.com/s/7kwrlxyct…

[9] CameraDemo: github.com/ChinaZeng/O…