One. Foreword

In this article, let’s take a look at cats-oss/android-gpuimage. According to the author, the idea for the project came from the iOS GPUImage library. What GPUImage does is use OpenGL to implement basic image processing for us, such as Gaussian blur, brightness, saturation, white balance, and other basic filters. On top of that, GPUImage provides a framework that lets us ignore the various complicated steps involved in using OpenGL: as long as we focus on our own business logic, we can implement the features we need by subclassing GPUImageFilter or by combining other filters, for example portrait beautification such as skin smoothing and whitening. With that said, let’s look at an example image first.


The original image

Invert filter

Of course, limited by the author’s level and energy, this article will not analyze the details of the filter algorithms; it focuses mainly on the architecture and logic of the framework itself.

Two. Basic usage

This section is mainly a brief walkthrough of the official README.

1. Adding the dependency

The latest version is 2.0.3

repositories {
    jcenter()
}

dependencies {
    implementation 'jp.co.cyberagent.android:gpuimage:2.0.3'
}

2. With a preview view

Typically this is used together with the camera to achieve real-time filtering.

@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);

    Uri imageUri = ...;
    gpuImage = new GPUImage(this);
    gpuImage.setGLSurfaceView((GLSurfaceView) findViewById(R.id.surfaceView));
    gpuImage.setImage(imageUri); // this loads image on the current thread, should be run in a thread
    gpuImage.setFilter(new GPUImageSepiaFilter());

    // Later when image should be saved:
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}
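For the real-time camera case, the README does not spell out the camera wiring; the following is a minimal sketch of how legacy Camera preview frames could be fed into GPUImage, using the updatePreviewFrame() API discussed later in this article (the callback wiring here is an assumption, not official sample code):

camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // data is NV21 (YUV); GPUImage converts it to RGBA and uploads it as a texture
        gpuImage.updatePreviewFrame(data, size.width, size.height);
    }
});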

3. Use GPUImageView

GPUImageView extends FrameLayout; it is essentially a helper class that integrates GPUImageFilter with a SurfaceView/TextureView for us. The XML layout:

<jp.co.cyberagent.android.gpuimage.GPUImageView
    android:id="@+id/gpuimageview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:gpuimage_show_loading="false"
    app:gpuimage_surface_type="texture_view" /> <!-- surface_view or texture_view -->

Java code:

@Override
public void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity);

    Uri imageUri = ...;
    gpuImageView = findViewById(R.id.gpuimageview);
    gpuImageView.setImage(imageUri); // this loads image on the current thread, should be run in a thread
    gpuImageView.setFilter(new GPUImageSepiaFilter());

    // Later when image should be saved:
    gpuImageView.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}

4. Without a preview view

In contrast to rendering with a preview view, the technical name for this is off-screen rendering; it will be explained in more detail later when we analyze the code.

public void onCreate(final Bundle savedInstanceState) {
    Uri imageUri = ...;
    gpuImage = new GPUImage(context);
    gpuImage.setFilter(new GPUImageSobelEdgeDetection());
    gpuImage.setImage(imageUri);
    gpuImage.saveToPictures("GPUImage", "ImageWithFilter.jpg", null);
}

Using OpenGL directly is very verbose and involves a lot of boilerplate steps, and Android itself does not provide a higher-level SDK to improve the ecosystem and reduce developers’ workload.

Three. Source code analysis

1. Overview of the framework

(Figure: framework overview)

The above is a block diagram drawn from the perspective of input, processing, and output. Although GPUImage involves some fairly difficult graphics knowledge such as OpenGL, the framework it encapsulates is relatively simple. As shown above, the input can be a Bitmap or YUV data (usually the camera’s raw data format), which is then rendered by the GPUImageRenderer inside the GPUImage module. Before rendering, the data is processed by a GPUImageFilter. The result is then either rendered onto the screen via a GLSurfaceView/GLTextureView, or, with off-screen rendering, rendered into a buffer and saved to a Bitmap.

The framework class diagram


(Figure: class diagrams of GPUImage, GPUImageRenderer, and GPUImageFilter)

With the framework diagram and class diagrams above, we should have an overall picture of GPUImage. Next, we will analyze the implementation in more detail, following the flow of the demo that uses a preview view.

2. Rendering with a preview view

Initialization – constructing GPUImage


(Figure: GPUImage initialization sequence diagram)

/**
 * Instantiates a new GPUImage object.
 *
 * @param context the context
 */
public GPUImage(final Context context) {
    if (!supportsOpenGLES2(context)) {
        throw new IllegalStateException("OpenGL ES 2.0 is not supported on this phone.");
    }
    this.context = context;
    filter = new GPUImageFilter();
    renderer = new GPUImageRenderer(filter);
}

The construction of GPUImage is very simple: GPUImageFilter and GPUImageRenderer are constructed in turn. GPUImageFilter is the base class of all filters; by itself it applies no filter effect. It also defines several hook methods that make up its initialization, processing, and destruction lifecycle, as shown in the figure below.


(Figure: GPUImageFilter lifecycle)
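Since the diagram itself cannot be shown here, the lifecycle can be summarized by the following outline of GPUImageFilter’s hook methods (signatures paraphrased from the library source; this is a sketch, not the complete class):

import java.nio.FloatBuffer;

// Outline of GPUImageFilter's lifecycle hooks (paraphrased, bodies omitted)
public class GPUImageFilter {
    public void ifNeedInit() { /* runs onInit() and onInitialized() once */ }
    public void onInit() { /* compile and link the shaders, look up attribute/uniform handles */ }
    public void onInitialized() { /* subclasses push their initial uniform values here */ }
    public void onOutputSizeChanged(int width, int height) { /* viewport size changed */ }
    public void onDraw(int textureId, FloatBuffer cubeBuffer, FloatBuffer textureBuffer) { /* draw a frame */ }
    protected void onDrawArraysPre() { /* hook invoked right before glDrawArrays() */ }
    public final void destroy() { /* deletes the GL program, then calls onDestroy() */ }
    public void onDestroy() { /* subclasses release their own resources */ }
}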



The two parameters are the vertex shader script and the fragment shader script:

public GPUImageFilter() {
        this(NO_FILTER_VERTEX_SHADER, NO_FILTER_FRAGMENT_SHADER);
    }

    public GPUImageFilter(final String vertexShader, final String fragmentShader) {
        runOnDraw = new LinkedList<>();
        this.vertexShader = vertexShader;
        this.fragmentShader = fragmentShader;
    }

Shaders are written in GLSL, a language similar in style to C; details can be found on the OpenGL wiki. The two shaders are used to compute vertex positions and fragment colors in the OpenGL pipeline. If you have never touched OpenGL, this may be a little confusing, but don’t worry; just keep the concept in mind for now.
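For reference, the default no-op vertex shader that GPUImageFilter uses looks roughly like the following (quoted from the library source as I remember it, so treat it as illustrative; the matching default fragment shader is shown later in this article):

public static final String NO_FILTER_VERTEX_SHADER = "" +
        "attribute vec4 position;\n" +
        "attribute vec4 inputTextureCoordinate;\n" +
        " \n" +
        "varying vec2 textureCoordinate;\n" +
        " \n" +
        "void main()\n" +
        "{\n" +
        "    gl_Position = position;\n" +
        "    textureCoordinate = inputTextureCoordinate.xy;\n" +
        "}";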

Next, the GPUImageRenderer is created; let’s look at its constructor.

public GPUImageRenderer(final GPUImageFilter filter) {
    // Receive the filter
    this.filter = filter;
    runOnDraw = new LinkedList<>();
    runOnDrawEnd = new LinkedList<>();

    // Create the vertex buffer and fill it
    glCubeBuffer = ByteBuffer.allocateDirect(CUBE.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    glCubeBuffer.put(CUBE).position(0);

    // Create the texture coordinate buffer
    glTextureBuffer = ByteBuffer.allocateDirect(TEXTURE_NO_ROTATION.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    // Set the rotation direction
    setRotation(Rotation.NORMAL, false, false);
}

The GPUImageRenderer constructor basically sets up its own runtime environment: it creates the vertex buffer, creates the texture coordinate buffer, and sets the rotation direction. The buffer allocation here uses Java NIO, and the allocated memory lives in the native layer. The multiplication by 4 is there because a float takes 4 bytes.

Let’s start with the definition of CUBE

public static final float CUBE[] = {
        -1.0f, -1.0f, // bottom left
         1.0f, -1.0f, // bottom right
        -1.0f,  1.0f, // top left
         1.0f,  1.0f, // top right
};

This is not just a bunch of meaningless numbers; it is an array of four 2D vertices. 2D means each vertex is a point (x, y) in a two-dimensional coordinate system, and there are four of them. Notice that all the values lie between -1 and 1, which is related to OpenGL’s coordinate system. OpenGL’s coordinate system is three-dimensional, centered at the origin (0, 0, 0), with three axes (x, y, z). The vertices defined here have no z coordinate, so the depth is 0. The reason the values lie between -1 and 1 is normalization: at the end of the OpenGL pipeline, the NDC (normalized device coordinates) step maps all coordinates into the range -1 to 1. Below is a common 3-dimensional coordinate system.




(Figure: a common 3D coordinate system)


The vertices defined here can be viewed in the following coordinate system.




(Figure: the four CUBE vertices in a 2D coordinate system)

Eventually these four vertices are used to construct two triangles that form a quad, and the image is pasted onto that quad as a texture, as sketched below.
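To make this concrete, here is a sketch of how the four CUBE vertices become two triangles when drawn with GL_TRIANGLE_STRIP (the actual draw call appears in onDraw(), covered later):

// CUBE vertex order: v0(-1,-1) bottom-left, v1(1,-1) bottom-right,
//                    v2(-1, 1) top-left,    v3(1, 1) top-right
//
//   v2 ---------- v3
//    |  \          |
//    |     \       |
//   v0 ---------- v1
//
// A strip of 4 vertices produces two triangles sharing the edge v1-v2,
// which together cover the quad that the texture is mapped onto.
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);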

Now take a look at TEXTURE_NO_ROTATION and the texture coordinates for the other rotation angles.

public static final float TEXTURE_NO_ROTATION[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
};

public static final float TEXTURE_ROTATED_90[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
};

public static final float TEXTURE_ROTATED_180[] = {
        1.0f, 0.0f,
        0.0f, 0.0f,
        1.0f, 1.0f,
        0.0f, 1.0f,
};

public static final float TEXTURE_ROTATED_270[] = {
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        1.0f, 1.0f,
};

Texture coordinates are yet another coordinate system. On Android we are used to screen coordinates with the origin at the top-left corner, while texture coordinates have their origin at the bottom-left corner of the texture and range from 0 to 1: regardless of the original image’s width and height, all coordinates are mapped into [0, 1]. Compare the texture coordinates below: with no rotation the coordinates are TEXTURE_NO_ROTATION, and when rotated 90 degrees counterclockwise they are TEXTURE_ROTATED_90; the other two follow the same pattern.




(Figure: texture coordinates at each rotation)

There are many coordinate systems in OpenGL, and they cannot be explained in a few sentences. Here we have only described where the vertex and texture coordinate systems place their origin, expanding just enough for this article without going into detail. OpenGL coordinates will come up again later.

Then look at setRotation(), which takes two additional parameters indicating whether to flip horizontally and vertically; this relates to the camera angle and imaging principle, so I won’t go into it here. Look at its further call to adjustImageScaling().

private void adjustImageScaling() {
    float outputWidth = this.outputWidth;
    float outputHeight = this.outputHeight;
    // Swap width and height when rotated 90/270 degrees
    if (rotation == Rotation.ROTATION_270 || rotation == Rotation.ROTATION_90) {
        outputWidth = this.outputHeight;
        outputHeight = this.outputWidth;
    }

    // Compute the scale ratios between viewport and image
    float ratio1 = outputWidth / imageWidth;
    float ratio2 = outputHeight / imageHeight;
    float ratioMax = Math.max(ratio1, ratio2);
    int imageWidthNew = Math.round(imageWidth * ratioMax);
    int imageHeightNew = Math.round(imageHeight * ratioMax);

    float ratioWidth = imageWidthNew / outputWidth;
    float ratioHeight = imageHeightNew / outputHeight;

    // Start from the default vertex data
    float[] cube = CUBE;
    // Get the texture coordinates for the current angle, flipping as requested
    float[] textureCords = TextureRotationUtil.getRotation(rotation, flipHorizontal, flipVertical);
    // Depending on the scaleType, adjust either the texture coordinates or the vertex coordinates
    if (scaleType == GPUImage.ScaleType.CENTER_CROP) {
        float distHorizontal = (1 - 1 / ratioWidth) / 2;
        float distVertical = (1 - 1 / ratioHeight) / 2;
        textureCords = new float[]{
                addDistance(textureCords[0], distHorizontal), addDistance(textureCords[1], distVertical),
                addDistance(textureCords[2], distHorizontal), addDistance(textureCords[3], distVertical),
                addDistance(textureCords[4], distHorizontal), addDistance(textureCords[5], distVertical),
                addDistance(textureCords[6], distHorizontal), addDistance(textureCords[7], distVertical),
        };
    } else {
        cube = new float[]{
                CUBE[0] / ratioHeight, CUBE[1] / ratioWidth,
                CUBE[2] / ratioHeight, CUBE[3] / ratioWidth,
                CUBE[4] / ratioHeight, CUBE[5] / ratioWidth,
                CUBE[6] / ratioHeight, CUBE[7] / ratioWidth,
        };
    }

    glCubeBuffer.clear();
    glCubeBuffer.put(cube).position(0);
    glTextureBuffer.clear();
    glTextureBuffer.put(textureCords).position(0);
}

Assume the scaleType here is CENTER_CROP, the image is 80 * 200, and the viewport is 100 * 200; the resulting effect is shown below. Note that the part of the image outside the orange frame is not actually visible; it is drawn only to illustrate the effect.




(Figure: CENTER_CROP effect)
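Plugging the numbers of this example into adjustImageScaling() gives the following hand-worked values (my own arithmetic, shown for illustration):

// image 80 x 200, viewport 100 x 200, scaleType = CENTER_CROP
// ratio1   = 100 / 80  = 1.25
// ratio2   = 200 / 200 = 1.0
// ratioMax = 1.25                          -> scaled image: 100 x 250
// ratioWidth  = 100 / 100 = 1.0
// ratioHeight = 250 / 200 = 1.25
// distHorizontal = (1 - 1 / 1.0)  / 2 = 0
// distVertical   = (1 - 1 / 1.25) / 2 = 0.1
// So the texture coordinates are pulled in by 0.1 at the top and bottom:
// roughly 10% of the image is cropped at each vertical edge, which matches the figure.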

If CENTER_INSIDE is used instead of CENTER_CROP, it is the vertex positions that change instead. The rendering looks like the following; if you’re interested, you can work through it yourself.




(Figure: CENTER_INSIDE effect)

The most important point here is that adjustImageScaling() finally determines the vertex coordinates and texture coordinates and writes them into the corresponding buffers; the numbers in these two buffers are what gets sent into the OpenGL pipeline for rendering.

Create association with GLSurfaceView — GPUImage#setGLSurfaceView()

/**
     * Sets the GLSurfaceView which will display the preview.
     *
     * @param view the GLSurfaceView
     */
    public void setGLSurfaceView(final GLSurfaceView view) {
        surfaceType = SURFACE_TYPE_SURFACE_VIEW;
        glSurfaceView = view;
        glSurfaceView.setEGLContextClientVersion(2);
        glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
        glSurfaceView.setRenderer(renderer);
        glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        glSurfaceView.requestRender();
    }

This method sets the OpenGL ES version, pixel format, render mode, and so on, and, most importantly, sets the GPUImageRenderer on the GLSurfaceView; data is rendered into the GLSurfaceView’s buffer inside the renderer’s onDrawFrame() hook.

GLSurfaceView is provided by Android itself. The framework also defines a GLTextureView, which extends TextureView; its main job is to emulate GLSurfaceView by creating a GL thread and calling the renderer’s onDrawFrame() to refresh the view. Here is a brief comparison of SurfaceView and TextureView:

SurfaceView is a View with its own Surface, and its rendering can happen on a separate thread rather than the main thread. As a View in the app process it still sits in the view hierarchy, but in the system’s WindowManagerService and SurfaceFlinger it has its own WindowState and Surface; put simply, it has its own canvas and buffer. The trade-off is that it does not support transforms or animation. TextureView, like an ordinary View, shares the view hierarchy, WindowState, and Surface of the app process within WindowManagerService and SurfaceFlinger. It was introduced in Android 4.0; early on it was rendered by the main thread, and only after 5.0, when the render thread was added, is it rendered exclusively by the render thread. Like a normal View it supports transforms and animation. More importantly, it must be rendered in a window that supports hardware acceleration, otherwise it will be blank.

glSurfaceView.requestRender() wakes up the GL thread for subsequent rendering.

Set/update the image source — GPUImage#setImage()/updatePreviewFrame()

Set the image source, which can be a Bitmap, File, or URI. A more common scenario is the camera’s preview frame, i.e. YUV raw data. Of course, YUV data also needs to be converted to the usual RGB data before it is handed to the renderer. For details about YUV, see the YUV data formats and video format references. You can also look at the picture below for an intuitive feel: Y stands for Luminance (Luma), and U and V stand for Chrominance (Chroma).


(Figure: YUV data layout)
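The actual conversion happens in native code, but as a rough idea of what turning an NV21 preview frame into RGB involves, a plain-Java sketch might look like this (illustrative only, with approximate coefficients; it is not the library’s implementation):

// Illustrative NV21 (YUV420SP) -> ARGB conversion, not the library's native code.
static void yuvNv21ToArgb(byte[] nv21, int width, int height, int[] out) {
    int frameSize = width * height;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int Y = nv21[y * width + x] & 0xFF;
            // One V/U pair is shared by every 2x2 block of Y samples
            int uvIndex = frameSize + (y / 2) * width + (x & ~1);
            int V = (nv21[uvIndex] & 0xFF) - 128;
            int U = (nv21[uvIndex + 1] & 0xFF) - 128;
            int r = clamp((int) (Y + 1.370705f * V));
            int g = clamp((int) (Y - 0.698001f * V - 0.337633f * U));
            int b = clamp((int) (Y + 1.732446f * U));
            out[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
}

static int clamp(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}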

Whether an image is set directly or raw YUV data arrives, the result is bound to a texture ID in OpenGL. Let’s take a look at onPreviewFrame().

public void onPreviewFrame(final byte[] data, final int width, final int height) {
    if (glRgbBuffer == null) {
        glRgbBuffer = IntBuffer.allocate(width * height);
    }
    if (runOnDraw.isEmpty()) {
        runOnDraw(new Runnable() {
            @Override
            public void run() {
                // Convert the YUV preview frame to RGBA
                GPUImageNativeLibrary.YUVtoRBGA(data, width, height, glRgbBuffer.array());
                // Upload the RGBA data as an OpenGL texture
                glTextureId = OpenGlUtils.loadTexture(glRgbBuffer, width, height, glTextureId);
                if (imageWidth != width) {
                    imageWidth = width;
                    imageHeight = height;
                    adjustImageScaling();
                }
            }
        });
    }
}

We will skip GPUImageNativeLibrary.YUVtoRBGA() for now and look at OpenGlUtils.loadTexture().

public static int loadTexture(final IntBuffer data, final int width, final int height, final int usedTexId) {
    int textures[] = new int[1];
    // Generate a texture ID
    GLES20.glGenTextures(1, textures, 0);
    if (usedTexId == NO_TEXTURE) {
        // Bind the 2D texture target to the new texture ID
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        // Set filtering and wrapping parameters
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        // Upload the pixel data to the bound texture
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, data);
    } else {
        // Reuse an existing texture ID and only update its contents
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, usedTexId);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, data);
        textures[0] = usedTexId;
    }
    return textures[0];
}

Refer to the comments for the details of this method. Its main purpose is to feed the image into OpenGL’s sampler2D sampler, which is declared in the fragment shader script as inputImageTexture in the definition below.

public static final String NO_FILTER_FRAGMENT_SHADER = "" +
        "varying highp vec2 textureCoordinate;\n" +
        " \n" +
        "uniform sampler2D inputImageTexture;\n" +
        " \n" +
        "void main()\n" +
        "{\n" +
        "     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);\n" +
        "}";

Set the filter – GPUImage#setFilter()

/**
 * Sets the filter which should be applied to the image which was (or will
 * be) set by setImage(...).
 *
 * @param filter the new filter
 */
public void setFilter(final GPUImageFilter filter) {
    this.filter = filter;
    renderer.setFilter(this.filter);
    requestRender();
}

The renderer’s setFilter() is called and another render is requested. Let’s take a closer look.

public void setFilter(final GPUImageFilter filter) {
    runOnDraw(new Runnable() {
        @Override
        public void run() {
            final GPUImageFilter oldFilter = GPUImageRenderer.this.filter;
            GPUImageRenderer.this.filter = filter;
            // Destroy the old filter if there is one
            if (oldFilter != null) {
                oldFilter.destroy();
            }
            // Then call filter.ifNeedInit() to initialize the new filter
            GPUImageRenderer.this.filter.ifNeedInit();
            // Make the new filter's program the one used by the OpenGL context
            GLES20.glUseProgram(GPUImageRenderer.this.filter.getProgram());
            // Update the viewport size
            GPUImageRenderer.this.filter.onOutputSizeChanged(outputWidth, outputHeight);
        }
    });
}

The main flow here corresponds to the GPUImageFilter lifecycle shown earlier. It first checks whether there is an old filter and destroys it if so: destroy() calls GLES20.glDeleteProgram(glProgId) to delete the program ID the old filter was using in OpenGL, and then, through the onDestroy() hook, tells subclasses of GPUImageFilter to release any other resources they used. The important thing to understand here is the initialization process.

ifNeedInit() essentially just calls onInit().

public void onInit() {
        glProgId = OpenGlUtils.loadProgram(vertexShader, fragmentShader);
        glAttribPosition = GLES20.glGetAttribLocation(glProgId, "position");
        glUniformTexture = GLES20.glGetUniformLocation(glProgId, "inputImageTexture");
        glAttribTextureCoordinate = GLES20.glGetAttribLocation(glProgId, "inputTextureCoordinate");
        isInitialized = true;
    }

This creates the program ID and looks up the vertex position attribute "position", the texture coordinate attribute "inputTextureCoordinate", and the uniform variable "inputImageTexture". loadProgram() loads the vertex and fragment shaders, creates the program, attaches the shaders, and links the program. These are steps every OpenGL program must go through, so we will only glance at them; for the completeness of the article, the relevant code is included below.

public static int loadProgram(final String strVSource, final String strFSource) {
        int iVShader;
        int iFShader;
        int iProgId;
        int[] link = new int[1];
        iVShader = loadShader(strVSource, GLES20.GL_VERTEX_SHADER);
        if (iVShader == 0) {
            Log.d("Load Program", "Vertex Shader Failed");
            return 0;
        }
        iFShader = loadShader(strFSource, GLES20.GL_FRAGMENT_SHADER);
        if (iFShader == 0) {
            Log.d("Load Program", "Fragment Shader Failed");
            return 0;
        }

        iProgId = GLES20.glCreateProgram();

        GLES20.glAttachShader(iProgId, iVShader);
        GLES20.glAttachShader(iProgId, iFShader);

        GLES20.glLinkProgram(iProgId);

        GLES20.glGetProgramiv(iProgId, GLES20.GL_LINK_STATUS, link, 0);
        if (link[0] <= 0) {
            Log.d("Load Program", "Linking Failed");
            return 0;
        }
        GLES20.glDeleteShader(iVShader);
        GLES20.glDeleteShader(iFShader);
        return iProgId;
    }
public static int loadShader(final String strSource, final int iType) {
        int[] compiled = new int[1];
        int iShader = GLES20.glCreateShader(iType);
        GLES20.glShaderSource(iShader, strSource);
        GLES20.glCompileShader(iShader);
        GLES20.glGetShaderiv(iShader, GLES20.GL_COMPILE_STATUS, compiled, 0);
        if (compiled[0] == 0) {
            Log.d("Load Shader Failed", "Compilation\n" + GLES20.glGetShaderInfoLog(iShader));
            return 0;
        }
        return iShader;
    }

At this point the environment for rendering the image has been established: the vertex coordinates have been determined, scaling has been handled, and the OpenGL rendering environment has been created. Now let’s see how the drawing itself is done.

Rendering – drawing with the filter

GLSurfaceView refreshes the view by having its GLThread call the renderer’s onDrawFrame(). So let’s look at GPUImageRenderer’s onDrawFrame().

@Override
    public void onDrawFrame(final GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        runAll(runOnDraw);
        filter.onDraw(glTextureId, glCubeBuffer, glTextureBuffer);
        runAll(runOnDrawEnd);
        if (surfaceTexture != null) {
            surfaceTexture.updateTexImage();
        }
    }

As you can probably imagine, the main thing is to render by calling filter’s onDraw().

public void onDraw(final int textureId, final FloatBuffer cubeBuffer,
                   final FloatBuffer textureBuffer) {
    // Use this filter's program ID
    GLES20.glUseProgram(glProgId);
    runPendingOnDrawTasks();
    if (!isInitialized) {
        return;
    }

    // Send the vertex buffer data to the attribute "position" and enable it.
    // The 2 below is the size of each vertex: a coordinate here only needs two
    // components (x, y); 3 would mean (x, y, z).
    cubeBuffer.position(0);
    GLES20.glVertexAttribPointer(glAttribPosition, 2, GLES20.GL_FLOAT, false, 0, cubeBuffer);
    GLES20.glEnableVertexAttribArray(glAttribPosition);
    // Send the texture buffer data to the attribute "inputTextureCoordinate" and enable it
    textureBuffer.position(0);
    GLES20.glVertexAttribPointer(glAttribTextureCoordinate, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
    GLES20.glEnableVertexAttribArray(glAttribTextureCoordinate);
    if (textureId != OpenGlUtils.NO_TEXTURE) {
        // Activate texture unit 0, bind our texture to it and point the sampler at it
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glUniform1i(glUniformTexture, 0);
    }
    onDrawArraysPre();
    // Draw the two triangles that make up the quad
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    GLES20.glDisableVertexAttribArray(glAttribPosition);
    GLES20.glDisableVertexAttribArray(glAttribTextureCoordinate);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}

The rendering process is just a handful of OpenGL calls, whose meaning is commented in the code. Filter subclasses are drawn through this same onDraw(); what actually determines each filter’s effect is the vertex shader and fragment shader that each filter defines.

At this point, the process of rendering an image through GPUImageFilter onto a GLSurfaceView has been analyzed. As mentioned earlier, with the GPUImage framework we don’t need to deal with the tedious details of OpenGL: generally we only need to write our own shader, and GPUImage takes care of the rest, as the sketch below illustrates.
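As an illustration of "just write your own shader", a minimal custom filter modeled loosely on the library’s brightness filter might look like the following (a sketch under the assumption that onInit()/onInitialized()/setFloat() behave as described above, not code copied from the library):

import android.opengl.GLES20;

// A minimal custom filter: adds a constant brightness offset to every pixel.
public class MyBrightnessFilter extends GPUImageFilter {
    private static final String BRIGHTNESS_FRAGMENT_SHADER = "" +
            "varying highp vec2 textureCoordinate;\n" +
            "uniform sampler2D inputImageTexture;\n" +
            "uniform lowp float brightness;\n" +
            "void main()\n" +
            "{\n" +
            "    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);\n" +
            "    gl_FragColor = vec4(color.rgb + vec3(brightness), color.w);\n" +
            "}";

    private int brightnessLocation;
    private float brightness;

    public MyBrightnessFilter(float brightness) {
        super(NO_FILTER_VERTEX_SHADER, BRIGHTNESS_FRAGMENT_SHADER);
        this.brightness = brightness;
    }

    @Override
    public void onInit() {
        super.onInit();
        // Look up the extra uniform declared in our fragment shader
        brightnessLocation = GLES20.glGetUniformLocation(getProgram(), "brightness");
    }

    @Override
    public void onInitialized() {
        super.onInitialized();
        // setFloat() queues the glUniform1f call to run on the GL thread
        setFloat(brightnessLocation, brightness);
    }
}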

3. Off-screen rendering

So-called off-screen rendering means that what the renderer draws is not sent to a GLSurfaceView but saved into a dedicated buffer. Let’s look at the sequence diagram.




(Figure: off-screen rendering sequence diagram)

Initialize the environment for off-screen rendering

Steps 1-4 are straightforward, so I won’t expand on them. We start from getBitmapWithFilterApplied().

/**
 * Gets the given bitmap with current filter applied as a Bitmap.
 *
 * @param bitmap  the bitmap on which the current filter should be applied
 * @param recycle recycle the bitmap or not.
 * @return the bitmap with filter applied
 */
public Bitmap getBitmapWithFilterApplied(final Bitmap bitmap, boolean recycle) {
    ......

    GPUImageRenderer renderer = new GPUImageRenderer(filter);
    renderer.setRotation(Rotation.NORMAL,
            this.renderer.isFlippedHorizontally(), this.renderer.isFlippedVertically());
    renderer.setScaleType(scaleType);
    PixelBuffer buffer = new PixelBuffer(bitmap.getWidth(), bitmap.getHeight());
    buffer.setRenderer(renderer);
    renderer.setImageBitmap(bitmap, recycle);
    Bitmap result = buffer.getBitmap();
    filter.destroy();
    renderer.deleteImage();
    buffer.destroy();

    this.renderer.setFilter(filter);
    if (currentBitmap != null) {
        this.renderer.setImageBitmap(currentBitmap, false);
    }
    requestRender();

    return result;
}

The omitted part is related to the GLSurfaceView, mainly tear-down work. Constructing GPUImageRenderer was also analyzed earlier, so only the PixelBuffer-related calls are analyzed here. First, look at its constructor.

public PixelBuffer(final int width, final int height) {
    this.width = width;
    this.height = height;

    int[] version = new int[2];
    int[] attribList = new int[]{
            EGL_WIDTH, this.width,
            EGL_HEIGHT, this.height,
            EGL_NONE
    };

    // No error checking performed, minimum required code to elucidate logic
    // Get the EGL entry point
    egl10 = (EGL10) EGLContext.getEGL();
    // Get the default display and initialize EGL on it
    eglDisplay = egl10.eglGetDisplay(EGL_DEFAULT_DISPLAY);
    egl10.eglInitialize(eglDisplay, version);
    eglConfig = chooseConfig(); // Choosing a config is a little more complicated

    int EGL_CONTEXT_CLIENT_VERSION = 0x3098;
    int[] attrib_list = {
            EGL_CONTEXT_CLIENT_VERSION, 2,
            EGL10.EGL_NONE
    };
    // Create the OpenGL ES 2.0 context
    eglContext = egl10.eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, attrib_list);

    // Create a buffer in video memory; the rendered image will be stored here
    eglSurface = egl10.eglCreatePbufferSurface(eglDisplay, eglConfig, attribList);
    egl10.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

    gl10 = (GL10) eglContext.getGL();

    // Record the thread that owns the OpenGL context
    mThreadOwner = Thread.currentThread().getName();
}

The key steps are noted in the comments. The main focus here is the call to eglCreatePbufferSurface(), which creates a buffer in video memory that is not associated with any window on the screen. Does GLSurfaceView, by contrast, create its buffer in an on-screen window?

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                EGLConfig config, Object nativeWindow) {
            EGLSurface result = null;
            try {
                result = egl.eglCreateWindowSurface(display, config, nativeWindow, null);
            } catch (IllegalArgumentException e) {
                ......
            }
            return result;
        }

As GLSurfaceView’s code for creating its EGLSurface shows, it does, except that it calls a different method, eglCreateWindowSurface(). Note the nativeWindow argument passed here, which is really just the SurfaceHolder.

At this point the environment needed for off-screen rendering has been created. Then, as before, the image and filter are set on the GPUImageRenderer to prepare it for rendering.

Get render results

getBitmap() is called first; it in turn calls the renderer’s onDrawFrame() so that the image, processed by the filter, is rendered into the EGLSurface created in PixelBuffer. The convertToBitmap() method is then called to convert the contents of the EGLSurface’s buffer into a Bitmap.
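Roughly paraphrased, PixelBuffer.getBitmap() looks like this (simplified, with error handling trimmed; treat it as a sketch rather than the exact library code):

public Bitmap getBitmap() {
    // Rendering must happen on the thread that created the EGL context
    if (!Thread.currentThread().getName().equals(mThreadOwner)) {
        return null;
    }
    // Draw the filtered image into the pbuffer surface
    // (the library calls this twice; some filters do not work if it is called only once)
    renderer.onDrawFrame(gl10);
    renderer.onDrawFrame(gl10);
    // Copy the pbuffer contents into the bitmap field
    convertToBitmap();
    return bitmap;
}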

convertToBitmap() is itself just a thin wrapper; it calls the native function GPUImageNativeLibrary.adjustBitmap(bitmap) to perform the conversion.

JNIEXPORT void JNICALL
Java_jp_co_cyberagent_android_gpuimage_GPUImageNativeLibrary_adjustBitmap(JNIEnv *jenv,
        jclass thiz, jobject src) {
    unsigned char *srcByteBuffer;
    int result = 0;
    int i, j;
    AndroidBitmapInfo srcInfo;

    // Get the bitmap info (width, height, format)
    result = AndroidBitmap_getInfo(jenv, src, &srcInfo);
    if (result != ANDROID_BITMAP_RESULT_SUCCESS) {
        return;
    }
    // Lock the bitmap pixels and point srcByteBuffer at them
    result = AndroidBitmap_lockPixels(jenv, src, (void **) &srcByteBuffer);
    if (result != ANDROID_BITMAP_RESULT_SUCCESS) {
        return;
    }

    int width = srcInfo.width;
    int height = srcInfo.height;

    // Read the image data out of the current EGL surface into srcByteBuffer
    glReadPixels(0, 0, srcInfo.width, srcInfo.height, GL_RGBA, GL_UNSIGNED_BYTE, srcByteBuffer);

    int *pIntBuffer = (int *) srcByteBuffer;

    // OpenGL's rows run bottom-up while an Android Bitmap's rows run top-down,
    // so swap the rows around the middle of the image.
    for (i = 0; i < height / 2; i++) {
        for (j = 0; j < width; j++) {
            int temp = pIntBuffer[(height - i - 1) * width + j];
            pIntBuffer[(height - i - 1) * width + j] = pIntBuffer[i * width + j];
            pIntBuffer[i * width + j] = temp;
        }
    }

    AndroidBitmap_unlockPixels(jenv, src);
}

This code may look familiar. When implementing a screenshot feature, if a video is playing, an ordinary screenshot comes out black; there are various tools that solve this, and the underlying principle is the same as here: read the image data out of the current context’s buffer and save it into a Bitmap (or create a new one). Since OpenGL’s buffer has its origin at the bottom-left while an Android Bitmap’s rows start at the top-left, the rows need to be swapped around the middle of the image.
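For comparison, the same idea written in plain Java (an illustrative sketch, not code from the library) would be:

// Read the current framebuffer and flip the rows so they match the
// top-to-bottom order that android.graphics.Bitmap expects.
IntBuffer pixelBuffer = IntBuffer.allocate(width * height);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);

int[] original = pixelBuffer.array();
int[] flipped = new int[width * height];
for (int row = 0; row < height; row++) {
    // Row 0 of the GL framebuffer is the bottom row of the image
    System.arraycopy(original, row * width, flipped, (height - 1 - row) * width, width);
}

Bitmap screenshot = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
screenshot.copyPixelsFromBuffer(IntBuffer.wrap(flipped));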

That, in broad strokes, is the analysis of off-screen rendering.

Four. Afterword

Thank you for reading this article, and I hope you found it useful. Of course, analyzing and reading GPUImage requires some OpenGL background; otherwise the concepts in it may feel numerous and abstract. In addition, this article mainly walks through how GPUImage renders with a filter to the screen or off-screen. Since I am only a beginner in the field of graphics and know little about image-processing algorithms, I did not analyze the concrete algorithm implementations of the filters. If anything in the analysis is wrong or unclear, feel free to leave a comment to discuss it; I would be very grateful.