Android OpenGL: a summary of key terms


Between fields of knowledge the boundaries are wide: the details are subtle, the scope is vast, and depth and breadth must coexist. Go deep in without ever stepping back out and your view becomes narrow; stay outside without ever going in and your understanding stays shallow. A good learner is therefore good at deep thinking and summarizing. Learning is like this, creation is like this, and life is like this.

Learning is not only about absorbing new knowledge; it is also about summarizing well, turning what you learn into your own knowledge so that you can apply it. So here is a summary of the terms I encountered while learning OpenGL.

OpenGL ES library


OpenGL ES is a version of the OpenGL library tailored for embedded devices. On Android it ships as part of the NDK. It can be used for GPU-accelerated image development and for turning data into images.

EGL


Different manufacturers implement the underlying graphics drivers differently. So that OpenGL can run successfully on devices from any manufacturer, EGL serves as a bridge in the middle that provides a rendering environment for OpenGL. EGL abstracts the display, the rendering context, and the creation of surfaces (of which there are three kinds: window, pbuffer, and pixmap). Every OpenGL rendering operation needs an EGL environment behind it. Android's common GLSurfaceView requires no extra work because it sets up the EGL environment internally, but SurfaceView and TextureView require you to implement the EGL environment yourself.

GLSL


GLSL is the programming language used to drive the GPU's programmable pipeline, with syntax similar to C. It mainly tells the GPU pipeline how to draw primitives and how to color fragments. After writing a shader program, you submit it to OpenGL to be compiled and linked before use. In practice we mostly write vertex shaders and fragment shaders.
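As a concrete illustration, here is a minimal GLSL ES 2.0 vertex/fragment shader pair of the kind described above: the vertex shader forwards positions and texture coordinates, and the fragment shader samples a texture. The attribute and uniform names (aPosition, aTexCoord, uTexture) are illustrative, not from any particular project.

```glsl
// Vertex shader: pass the position through and hand texture coordinates to the fragment stage.
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main() {
    gl_Position = aPosition;
    vTexCoord = aTexCoord;
}
```

```glsl
// Fragment shader: sample the bound texture at the interpolated coordinate.
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uTexture;
void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord);
}
```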

texture


A texture is mainly used to hold image data. After creating a texture, bind it and upload the image data with glTexImage2D(). Finally, at render time, first activate the texture; the fragment shader's sampler then fetches colors from the texture data according to the texture coordinates passed in.
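A minimal sketch of that flow with Android's GLES20 API. The bitmap and the uTextureLocation uniform handle are assumed to exist elsewhere; this is an illustration, not a complete renderer.

```java
// Sketch: create a 2D texture, upload a Bitmap, then bind it for the fragment shader's sampler.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);   // upload the image data

// At render time: activate a texture unit and point the sampler at it.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glUniform1i(uTextureLocation, 0);                  // sampler reads from unit 0
```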

FBO


FBO stands for Frame Buffer Object. The last step of getting anything on screen is to submit all rendering work to an FBO, which outputs it to the display. In daily development we never create an FBO explicitly and yet things still appear on screen; that is because the FBO that is active by default has ID 0, and ID 0 is the screen's FBO. So by creating an FBO with a non-zero ID, whatever we render into it is not displayed on the screen.

So what’s the FBO for?

  • Images can be preprocessed off-screen, for example applying a beauty filter, and then rendered to the screen after processing
  • An FBO breaks free of the screen size and is no longer limited by the default FBO's screen dimensions

How do you use an FBO? An FBO provides color attachments and depth attachments; we can attach textures or renderbuffers to it. Once we bind the FBO, normal rendering operations are output into the FBO, and afterwards we can extract the image from the previously attached texture or renderbuffer.
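A sketch of the texture-as-color-attachment case with GLES20, assuming a textureId plus width and height already exist. This is an illustrative fragment, not a full off-screen pipeline.

```java
// Sketch: create a non-zero FBO and attach a texture as its color attachment.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);

// Allocate storage for the attachment texture (no pixel data yet).
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, textureId, 0);
if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    throw new RuntimeException("FBO incomplete");
}

// ... normal draw calls here: output lands in textureId, not on screen ...

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default (screen) FBO
```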

Android components related to OpenGL


SurfaceView


SurfaceView supports drawing from a child thread with a double-buffering mechanism. Implement SurfaceHolder.Callback and use its surfaceCreated and surfaceDestroyed callbacks to start and stop the drawing thread. lockCanvas() obtains the Canvas for drawing, and unlockCanvasAndPost() submits the update. A pseudocode example:

public class MyView extends SurfaceView
        implements SurfaceHolder.Callback, Runnable {

    private volatile boolean running;
    private SurfaceHolder surfaceHolder;

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        surfaceHolder = holder;
        running = true;
        new Thread(this).start();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {}

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            Canvas canvas = surfaceHolder.lockCanvas();
            // ... drawing ...
            surfaceHolder.unlockCanvasAndPost(canvas);
        }
    }
}

GLSurfaceView


GLSurfaceView extends SurfaceView and inherits its features. The difference is that it renders with the OpenGL environment, and it implements the EGL environment internally. Its use mainly comes down to:

  • Set the Render renderer
  • Set rendering mode
glSurfaceView.setRenderer(renderer);
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);

RENDERMODE_WHEN_DIRTY renders only when requestRender() is called manually; RENDERMODE_CONTINUOUSLY renders continuously at a fixed interval.

  • Implement the Renderer, which carries the concrete rendering steps
public class GLRender implements GLSurfaceView.Renderer {
    // Called when the surface is created
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {}

    // Called when the surface changes size
    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {}

    // Called back each time a frame is drawn
    @Override
    public void onDrawFrame(GL10 gl) {}
}

TextureView


TextureView can be used to display streams of images, whether from the camera or rendered by OpenGL. You obtain its SurfaceTexture mainly through the surface parameter of TextureView's onSurfaceTextureAvailable callback.

TextureView working with the Camera


@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    mCamera = Camera.open();
    try {
        mCamera.setPreviewTexture(surface);
        mCamera.startPreview();
    } catch (IOException ioe) {
        // Something bad happened
    }
}

TextureView works with OpenGL


  • Use EGL to create a window surface and the OpenGL rendering environment
  • Bind the context with eglMakeCurrent, using TextureView’s SurfaceTexture as the window surface
  • Render with OpenGL as usual
  • Call eglSwapBuffers to display the result
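The steps above might be sketched with the EGL14 API roughly like this, where surfaceTexture is the one delivered by onSurfaceTextureAvailable. The config attributes are a typical ES 2.0 choice, not the only valid one.

```java
// Step 1: display, config, context, and a window surface backed by the SurfaceTexture.
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] attribs = {
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] num = new int[1];
EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, num, 0);

int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);
EGLSurface surface = EGL14.eglCreateWindowSurface(display, configs[0], surfaceTexture,
        new int[]{ EGL14.EGL_NONE }, 0);

// Step 2: bind the context and surface to this thread.
EGL14.eglMakeCurrent(display, surface, surface, context);

// Step 3: ... normal OpenGL rendering here ...

// Step 4: swap buffers so the frame shows up in the TextureView.
EGL14.eglSwapBuffers(display, surface);
```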

SurfaceTexture


SurfaceTexture is essentially a wrapper class around a texture. Its internal texture can be used as the output target of an image stream, whether from the camera or from a decoded video stream.
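A sketch of the typical usage: create an OES texture, wrap it in a SurfaceTexture, and latch each new frame on the GL thread. The glSurfaceView used to trigger rendering is an assumption for illustration.

```java
// Sketch: an OES texture wrapped by a SurfaceTexture that receives an image stream.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
// When a new frame from the stream arrives, schedule a render pass.
surfaceTexture.setOnFrameAvailableListener(st -> glSurfaceView.requestRender());

// Each frame, on the GL thread:
surfaceTexture.updateTexImage();              // latch the newest frame into the OES texture
float[] transform = new float[16];
surfaceTexture.getTransformMatrix(transform); // pass to the shader as a texture-coordinate matrix
```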

quiz


Do the explanations above make sense? Confident enough to keep teaching yourself OpenGL? Let's have a quiz: how is a camera beauty-filter pipeline implemented? A general outline is enough, no need for detail, and don't peek at the answer below first!

Problem analysis:

First, we need to get preview data from the camera, and it must not be displayed on the screen right away, which requires an FBO. Then, the camera data goes through the beauty processing, which here is a GLSL beauty-filter algorithm. Finally, the processed data is drawn to the default screen FBO. Note also that camera preview data is generally in YUV format, while OpenGL can only display RGB images, so we bind the SurfaceTexture to a texture of the special OES format, which performs the conversion internally and automatically.
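For intuition about what that conversion does per pixel, here is a pure-Java sketch of a BT.601 full-range YUV-to-RGB formula. This is only an illustration of the math: the OES path performs the conversion on the GPU, and the exact coefficients vary by color standard.

```java
public final class YuvConverter {
    // Convert one YUV pixel (BT.601 full-range) to a packed ARGB int.
    public static int yuvToArgb(int y, int u, int v) {
        int d = u - 128, e = v - 128;
        int r = clamp(Math.round(y + 1.402f * e));
        int g = clamp(Math.round(y - 0.344136f * d - 0.714136f * e));
        int b = clamp(Math.round(y + 1.772f * d));
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }
}
```

For example, Y=U=V=128 yields mid-gray, and U=V=128 with any Y gives a pure grayscale value.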

My project address

Follow the public account, and let's grow together!