This article was first published on my WeChat public account: interesting things in the world. Please credit the source when reprinting; unauthorized reprints will be pursued for copyright infringement. WeChat account: A1018998632, QQ group: 859640274


Hello everyone. The new year has officially begun, so here is a belated New Year greeting. My recent articles involve a lot of background knowledge, so updates have been slow; please bear with me. I recently registered a WeChat public account, interesting things in the world (the QR code is at the bottom of the article), where my articles will be published first; Jianshu and Juejin will be synchronized a few hours later, so I hope you will follow the account, where a lot of solid content awaits you. Send the message "Android drawing mechanism and Surface family source code full parsing" to the account to get the PDF version of this article.

This article is divided into the following sections; read them on demand:

  • 1. Overview of the Android drawing mechanism
  • 2. Source code analysis of the Android drawing mechanism
  • 3. Full analysis of Surface family source code
  • 4. Summary

Reading notes

  • 1. Send the message "Android drawing mechanism and Surface family source code full parsing" to my WeChat public account, interesting things in the world, to get the PDF version of this article.
  • 2. The source code version used in this article is Android 7.0. It is recommended to read this article alongside the source code.
  • 3. A recommended Android source code reading website: Android source reading website.
  • 4. Since many Java-layer and C++-layer classes share the same names, Surface.lockCanvas denotes a Java-layer call and Surface::lockCanvas denotes a C++-layer call.
  • 5. Some abbreviations: SF -> SurfaceFlinger, WMS -> WindowManagerService, MQ -> MessageQueue, GL -> OpenGL, BQ -> BufferQueue, ST -> SurfaceTexture, TV -> TextureView, SV -> SurfaceView.
  • 6. This article is important prerequisite knowledge for video editing SDK development, so it is recommended to read it thoroughly.

1. Overview of the Android drawing mechanism

Before diving into the various source files, we need to understand the overall architecture of the Android drawing mechanism, as well as some related concepts.

1. Android screen refresh mechanism

Figure 1 is an abstract representation of the Android screen refresh process. Let me explain it here:

  • 1. First of all, the horizontal axis of the figure is time, while the vertical axis, from bottom to top, represents CPU processing, GPU processing, and screen display. These three steps form the path from the code we write to the image displayed on the screen.
  • 2. We all know that one obvious criterion for a phone that feels smooth is that the screen displays the next frame every fixed number of milliseconds. This interval is controlled by the bottom layer; it is the interval between two VSync signals in the figure, 16 ms.
  • 3. Android introduces the following mechanisms to ensure the data on the screen is refreshed every 16 ms:
    • 1. A fixed pulse signal, VSync, guaranteed by the bottom layer. Every time the VSync signal arrives, the CPU runs the drawing code (such as View.draw), then hands the resulting data to the GPU, which renders the image into a memory buffer. Once the GPU has rendered it, the image is displayed on the screen.
    • 2. Three buffers. A, B, and C in the figure represent three memory buffers. Not every round of CPU and GPU processing can finish within 16 ms, so in order not to waste time or discard completed work, the CPU and GPU write their results into buffers A, B, and C in turn, and the screen takes whichever buffer is currently ready. The drawback of triple buffering compared with double buffering is a 16 ms delay before our actions finally show up on the screen, which may be one reason Android does not feel as responsive as iOS.
  • 4. Two simple conclusions can be drawn (see the frame-timing sketch after this list):
    • 1. If the UI thread is so busy that the CPU cannot finish processing the data within 16 ms, frames are dropped.
    • 2. If the image to be drawn is so complex that the GPU cannot finish rendering it within 16 ms, frames are also dropped.
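To make the VSync pulse concrete, here is a minimal sketch that watches frame timing with the public Choreographer API and logs dropped frames. It is an illustration only: the class name FrameWatcher and the hard-coded 60 Hz interval are my own assumptions, and start() must be called on a thread with a Looper (e.g. the UI thread).

```java
import android.util.Log;
import android.view.Choreographer;

/** Hypothetical helper: logs a warning whenever a frame takes longer than one VSync period. */
public class FrameWatcher implements Choreographer.FrameCallback {
    private static final long FRAME_INTERVAL_NANOS = 16_666_667L; // ~16.6 ms at 60 Hz (assumed)
    private long lastFrameTimeNanos;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0) {
            long dropped = (frameTimeNanos - lastFrameTimeNanos) / FRAME_INTERVAL_NANOS - 1;
            if (dropped > 0) {
                Log.w("FrameWatcher", "Dropped about " + dropped + " frame(s)");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        Choreographer.getInstance().postFrameCallback(this); // re-register for the next VSync
    }
}
```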

2. Android image rendering methods

Here is a question: what tools can we use to draw images on the screen during development? You could list quite a few: View, Drawable, XML, SV, GLSurfaceView, TV, Canvas, and so on. In fact there are only two commonly used drawing mechanisms on Android, and everything listed above evolved from those two. I will summarize them in this section.

There are two common drawing mechanisms on Android:

  • 1. Skia graphics library (see the official Skia website): Skia is an open source two-dimensional graphics library that provides a variety of commonly used APIs and runs on many software and hardware platforms. When hardware acceleration is not enabled, Canvas calls go into the Skia library underneath. Among the methods listed above, View, Drawable, XML, SV, and Canvas all end up using Skia. Skia ultimately uses the CPU to draw the final image, so its efficiency is relatively low.
  • 2. GL: GL is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. When hardware acceleration is enabled, any Canvas usage eventually calls into the GL ES library. Whether or not hardware acceleration is enabled, GLSurfaceView and TV ultimately use GL ES. GL uses the GPU to draw images, so its efficiency is relatively high. (A minimal GL ES example follows this list.)
  • 3. In the following chapters I will analyze the call chains of the above drawing methods from the source code.
  • 4. Vulkan was added to Android in 7.0. Vulkan is a low-overhead, cross-platform API for high-performance 3D graphics. Like GL ES, it offers a variety of tools for creating high-quality real-time graphics in applications. But it is rarely used, so we will not discuss it in this article.
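As referenced above, here is a minimal GLSurfaceView.Renderer sketch to make the GL path concrete. The renderer callbacks run on a dedicated GL thread, not the UI thread; the class name ClearRenderer is my own, and the example only clears the screen.

```java
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

/** Hypothetical renderer: clears the screen each frame via GL ES on the GL thread. */
public class ClearRenderer implements GLSurfaceView.Renderer {
    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);   // set the clear color once
    }
    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height); // match the viewport to the surface size
    }
    @Override public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // executed by the GPU
    }
}

// Typical usage inside an Activity:
// GLSurfaceView view = new GLSurfaceView(this);
// view.setEGLContextClientVersion(2);
// view.setRenderer(new ClearRenderer());
// setContentView(view);
```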

3. Producers and consumers in Android drawing

There are a number of producers and consumers in Android's drawing mechanism; in this section I will introduce the concepts associated with this model.

  • 1. BQ: As shown in Figure 2, BQ is a queue that stores blocks of image memory. (A schematic sketch of its four operations follows this list.)
    • 1. Producer: it can use two APIs, dequeue and queue.
      • 1. dequeue: when a block of memory is needed to draw an image, a free block can be taken from the tail of the queue.
      • 2. queue: when the image has been drawn, the block can be added to the head of the queue.
    • 2. Consumer: it can also use two APIs, acquire and release.
      • 1. acquire: when a block of memory that has already been drawn is needed for further processing, a block can be taken from the head of the queue.
      • 2. release: when the image memory has been processed, the block can be reset and put back at the tail of the queue.

  • 2. Producers of image memory:
    • 1. Surface: Surface is a producer for a BQ. When we use lockCanvas and unlockCanvasAndPost, we first take a block of memory out of the BQ, then call the Canvas/GL APIs to draw into that memory, and finally put it back into the BQ.
    • 2. Since Surface is a producer, things that directly or indirectly use a Surface, such as View, SV, and GLSurfaceView, are all producers.
  • 3. Consumers of image memory:
    • 1. SF: it runs in its own process, which is created when the Android system starts up, just like any other system service. Its job is to take memory buffers from multiple sources, composite them, and send them to the display device. Most apps typically have three layers on screen: a status bar at the top, a navigation bar at the bottom or side, and the app's interface. The memory produced by an application's Surface is consumed by SF.
    • 2. ST: this is most often seen inside TV, but it can also be used directly in our applications. It creates its own BQ at creation time. We can use an ST to create a Surface and then use that Surface to supply image memory to the BQ; the ST then consumes that image memory. It can reprocess the consumed image memory with GL, and the processed result can be displayed on the screen via something like a GLSurfaceView.
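To summarize the four BQ operations and the roles above, here is a schematic Java interface of my own. It is not a real API; the real implementation is the native BufferQueue.cpp, with Surface on the producer side and SF or GLConsumer on the consumer side.

```java
/** Schematic model of BQ; the real implementation is BufferQueue.cpp in the native layer. */
interface BufferQueueModel<Buf> {
    // Producer side (exposed to apps through Surface)
    Buf dequeueBuffer();          // take a free buffer from the tail to draw into
    void queueBuffer(Buf drawn);  // put the drawn buffer at the head for consumption

    // Consumer side (used by SF or GLConsumer)
    Buf acquireBuffer();          // take the oldest drawn buffer from the head
    void releaseBuffer(Buf used); // reset the buffer and return it to the tail
}
```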

2. Source code analysis of the Android drawing mechanism

In this chapter we analyze from the source code how a View is drawn onto the screen. I will not focus on the framework-layer parts such as measure, layout, and draw that have been analyzed many times before; I will mainly analyze the C++-layer parts.

The main source flow is shown in Figure 3; what follows complements Figure 3 with explanations. It is strongly recommended to read this section alongside the Android source code.

  • 1. Our entry point is ViewRootImpl.scheduleTraversals. Those of you who have read the source code will know that operations such as invalidate and requestLayout, which need to change how a View is displayed, eventually call into this method.
    • 1. The main call made in this method is Choreographer.postCallback, passing in mTraversalRunnable, which means this Runnable's run() will be invoked at some point. We talked about VSync in chapter 1, and Choreographer is the code-level embodiment of VSync.
      • 1. The chain Choreographer.scheduleFrameLocked -> Choreographer.scheduleVsyncLocked -> DisplayEventReceiver.scheduleVsync ends up calling DisplayEventReceiver.nativeScheduleVsync to request the next VSync signal from the native layer.
  • 2. Up to 16 ms later, the VSync signal is propagated up from the native layer, and a message is added to the UI thread's Looper. The call chain is: DisplayEventReceiver.dispatchVsync -> DisplayEventReceiver.onVsync -> Choreographer.doFrame.
    • 1. The run() of the mTraversalRunnable passed to postCallback is invoked inside doFrame. We know that ViewRootImpl.doTraversal is called in run().
      • 1. This calls into ViewRootImpl.performTraversals. You should be familiar with this method; it is where measure, layout, and draw are called. I will not repeat what has already been analyzed many times elsewhere.
      • 2. Let's look directly at the ViewRootImpl.draw method, where there are two drawing paths, as we said in the first chapter. If hardware acceleration is not enabled, the Skia library draws the image on the CPU, and the entry method is drawSoftware. If hardware acceleration is enabled, GL ES draws the image on the GPU, and the entry method is ThreadedRenderer.draw.
        • 1. drawSoftware: put simply, this creates a Surface, obtains a Canvas via lockCanvas, passes that Canvas down through every level of the View tree (ultimately calling the Skia library) to draw the image into the memory provided by the Surface, and finally calls unlockCanvasAndPost to commit the image memory. I will analyze the detailed process in chapter 3 when I analyze Surface.
        • 2. ThreadedRenderer.draw: this method is the entry point for hardware-accelerated image rendering, which ultimately calls the GL APIs. Before diving in, let's look at the stages of hardware-accelerated drawing:
          • 1. The five stages of hardware-accelerated drawing:
            • 1. The app uses Canvas recursively on the UI thread to construct the commands and data required for GL rendering.
            • 2. The CPU shares the prepared data with the GPU.
            • 3. The CPU tells the GPU to render. It generally does not block waiting for the GPU to finish, because that would be inefficient; after notifying the GPU, it returns and continues with other tasks. Of course, glFinish can be used to block until rendering completes.
            • 4. swapBuffers is called, and the SF layer is notified to composite.
            • 5. SF starts compositing the layers. If a previously submitted GPU rendering task has not finished, SF waits for the GPU to complete before compositing (the Fence mechanism).
          • 2. Hardware-accelerated drawing code analysis:
            • 1. Let's look at the updateRootDisplayList call made from draw():
              • 1. The first call chain in this method is updateViewTreeDisplayList -> View.updateDisplayListIfDirty -> View.draw. As Figure 4 shows, this is a recursive operation: a View's draw recurses into its child Views. Each View calls the Canvas APIs, and the drawing operations are stored in the Canvas. So where does this Canvas come from? A RenderNode is created internally when each View is created, and this object can create a DisplayListCanvas to serve as the Canvas each View draws on. When a child View's draw(Canvas, ViewGroup, long) is called, it receives the DisplayListCanvas passed down from its parent View; after this View's draw(Canvas) call, the DisplayListCanvas operations are stored in this View's RenderNode. Finally, the parent View's DisplayListCanvas.drawRenderNode stores this View's RenderNode into the parent View's RenderNode. This recurses until all drawing operations are stored in the RenderNode of the root View, which then contains a complete DrawOp tree. (See the RenderNode sketch after this list.)
              • 2. Back in updateRootDisplayList, the DrawOp tree of the root View is then handed to the RenderNode of ViewRootImpl for later operations.
            • 2. Back in draw(), the next important method called is nSyncAndDrawFrame:
              • 1. This method eventually calls the C++-layer RenderProxy::syncAndDrawFrame method. Before looking at that call chain, let me introduce the concept of the render thread, added in Android 5.0.
                • 1. RenderThread.cpp is an event-driven loop similar to the UI thread. It also has a queue, which stores DrawFrameTask.cpp objects. RenderProxy.cpp is the proxy for RenderThread.cpp: ThreadedRenderer sends all requests destined for the render thread to RenderProxy.cpp, which then submits tasks to RenderThread.cpp.
                • 2. With the render thread concept in hand, the call chain is: syncAndDrawFrame -> DrawFrameTask::drawFrame -> DrawFrameTask::postAndWait. What happens here is simply that a DrawFrameTask.cpp is inserted into RenderThread.cpp's queue and the current UI thread is blocked.
            • 3. After the UI thread is blocked, the render thread calls the run() method of the DrawFrameTask that was just inserted into the queue.
              • 1. run() starts with a call to syncFrameState, which synchronizes various data from the Java layer.
                • 1. The first step is to use mLayers::apply to synchronize data. This mLayers corresponds to TV in the Java layer. We will focus on this analysis in chapter 3, so it is skipped here.
                • 2. The second step is to call CanvasContext::prepareTree to synchronize the DrawOp tree built earlier in the Java layer down to the C++ layer, so that GL commands can be run later. The key call chain here is: CanvasContext::prepareTree -> RenderNode::prepareTree -> RenderNode::prepareTreeImpl. RenderNode.java has already built a DrawOp tree, but the earlier call to RenderNode::setStagingDisplayList only stored it temporarily in RenderNode::mStagingDisplayListData, because while the Java layer is running, several more measure/layout passes may occur and the data may change. By the time we get here, the data is final, so synchronization can begin. prepareTreeImpl synchronizes data in three main steps:
                  • 1. Call pushStagingDisplayListChanges to synchronize the current RenderNode.cpp's attributes, i.e. assign mStagingDisplayListData to mDisplayListData.
                  • 2. Call prepareSubTree to recursively process the child RenderNode.cpp objects.
                  • 3. Synchronization can succeed or fail. Generally, the data synchronizes successfully, but one step in RenderNode::prepareSubTree turns the Bitmaps used by the RenderNode into textures, and if a Bitmap is too large, or there are too many of them, synchronization fails. Note that synchronization failure relates only to Bitmaps; all other DrawOp data is synchronized regardless.
                • 3. If synchronization succeeds, the UI thread is woken up here; otherwise it is not.
              • 2. After run() has synchronized the data, it calls CanvasContext::draw, which has three main operations:
                • 1. EglManager::beginFrame, which marks the current context and requests memory. A process may have multiple windows, i.e. multiple EGLSurfaces, so we first need to mark which one is being drawn, then request memory from SF. Here the EGLSurface is the producer and SF is the consumer.
                • 2. Based on the RenderNode's DrawOp tree, recursively call the GL APIs in OpenGLRenderer to draw.
                • 3. swapBuffers submits the drawn data to SF for compositing. It is worth noting that the GPU may not have completed the current rendering task at this point, but for efficiency the render thread is not blocked here.
              • 3. Once all drawing operations have been submitted to the GPU through GL, if the earlier data synchronization failed, the UI thread is woken up at this point.
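As referenced in the DrawOp discussion above, RenderNode and DisplayListCanvas are hidden classes in Android 7.0, but since API 29 Android exposes a public android.graphics.RenderNode. Here is a minimal sketch of the same record-then-replay idea using that public API; the method name buildDisplayList and the bounds are my own assumptions.

```java
import android.graphics.Color;
import android.graphics.RecordingCanvas;
import android.graphics.RenderNode;

/** Hypothetical helper (API 29+): records DrawOps into a display list without drawing yet. */
RenderNode buildDisplayList() {
    RenderNode node = new RenderNode("demo");
    node.setPosition(0, 0, 500, 500);               // the node's bounds (assumed values)
    RecordingCanvas canvas = node.beginRecording(); // a Canvas that records instead of draws
    try {
        canvas.drawColor(Color.RED);                // stored as a DrawOp in the display list
    } finally {
        node.endRecording();                        // the DrawOp tree now lives in the node
    }
    // A hardware-accelerated Canvas can later replay it with canvas.drawRenderNode(node).
    return node;
}
```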

3. Full analysis of Surface family source code

In the previous chapter, we covered the entire drawing mechanism of Android, but the Surface part of it was only briefly covered. So in this chapter I will parse the source code of each member of the Surface family.

1. Creation and drawing of Surface

(1). The creation of a Surface

Here we take the drawing flow of a View as an example to describe how a Surface is created in that process. The Surface creation process is shown in Figure 5.

  • 1. The first thing to know is that ViewRootImpl, when created, uses new Surface() to create an empty Java-layer shell, which is initialized later.
  • 2. The entry call chain looks like this: ViewRootImpl.performTraversals -> ViewRootImpl.relayoutWindow -> WindowManagerService.relayoutWindow -> WindowManagerService.createSurfaceControl.
    • 1. createSurfaceControl first goes through WindowStateAnimator.createSurfaceLocked -> new WindowSurfaceController -> new SurfaceControlWithBackground -> new SurfaceControl -> SurfaceControl.nativeCreate -> android_view_SurfaceControl::nativeCreate to create a SurfaceControl.cpp, which holds the parameters used to initialize the Surface.
      • 1. The first step of nativeCreate is to create a SurfaceComposerClient.cpp, which is effectively the in-process agent of SF; through this class we can operate on SF. Invoking the new SurfaceComposerClient.cpp constructor first triggers onFirstRef, where ComposerService::getComposerService() obtains the SF service. Then the remote call ComposerService::createConnection, i.e. SF::createConnection, creates an ISurfaceComposerClient as the connection between SurfaceComposerClient and SF. The Binder mechanism is used here.
      • 2. Once the SurfaceComposerClient is created, SurfaceComposerClient::createSurface -> Client::createSurface can be invoked to ask the SF process to create a Surface. The SF process is also event-driven, so Client::createSurface calls SF::postMessageSync to post a message to the event queue that invokes SF::createLayer, which eventually calls createNormalLayer. This method returns an IGraphicBufferProducer to the SurfaceControl.cpp. Remember the producer-consumer model we talked about earlier? The IGraphicBufferProducer here is the producer end split off from SF's BQ, and through it the memory of images drawn on the Surface can be added to SF's BQ.
    • 2. After SurfaceControl.java has been created in createSurfaceControl, the next step is to initialize Surface.java. This part is easier, via the call chain: WindowSurfaceController.getSurface(Surface) -> Surface.copyFrom(SurfaceControl) -> Surface.nativeCreateFromSurfaceControl -> SurfaceControl::getSurface. A Surface.cpp is created with the IGraphicBufferProducer taken from the SurfaceControl.cpp as an argument, and it is handed to Surface.java. (A sketch of the modern public SurfaceControl counterpart follows this list.)
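The SurfaceControl analyzed above is hidden API in Android 7.0, but since API 29 a public android.view.SurfaceControl exists. As a rough, hedged illustration of the same idea (a client-side handle to an SF layer whose state changes are sent to SurfaceFlinger), here is a minimal sketch with that public API; the method name createLayer, the layer name, and the sizes are my own assumptions.

```java
import android.view.SurfaceControl;

/** Hypothetical sketch (API 29+): create a layer handle and push state to SurfaceFlinger. */
void createLayer() {
    SurfaceControl control = new SurfaceControl.Builder()
            .setName("demo-layer")   // assumed name
            .setBufferSize(512, 512) // assumed size
            .build();

    // Changes are batched in a Transaction and applied atomically to SF.
    try (SurfaceControl.Transaction t = new SurfaceControl.Transaction()) {
        t.setVisibility(control, true)
         .setLayer(control, 1)       // z-order among sibling layers
         .apply();
    }
    control.release();
}
```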

(2). Drawing on a Surface

In chapter 2 we said that, depending on whether hardware acceleration is enabled, drawing a View divides into software drawing and hardware drawing. We analyzed hardware drawing there but skipped software drawing. The difference between the two lies in whether the rendering computation is done by the CPU or the GPU. The Surface drawing in this section is software drawing, i.e. the contents of ViewRootImpl.drawSoftware.

We all know that a Surface can be drawn on with a Canvas through the lockCanvas and unlockCanvasAndPost APIs. In this section I will walk through the Surface drawing process via these two APIs. The entire process is shown in Figure 6, and a minimal usage sketch follows the list below.

  • 1. First, the call chain from the lockCanvas entry: lockCanvas -> nativeLockCanvas -> Surface::lock -> Surface::dequeueBuffer. This ultimately uses the BufferQueueProducer obtained when the Surface was created to ask SF for a blank chunk of image memory. After the image memory is obtained, it is passed into new SkBitmap.cpp to create the corresponding object, since the Skia library is what draws the image. The SkBitmap.cpp is handed to an SkCanvas.cpp, and that SkCanvas.cpp object is what Canvas.java operates on in the C++ layer.
  • 2. Above we obtained a Canvas object through lockCanvas. When we call the various Canvas APIs, the C++-layer Skia library ultimately draws into the image memory using the CPU.
  • 3. When we finish drawing, we call unlockCanvasAndPost to notify SF to composite the image. The call chain is: unlockCanvasAndPost -> nativeUnlockCanvasAndPost -> Surface::queueBuffer. In contrast to lockCanvas, here BufferQueueProducer::queueBuffer finally stores the image memory back into the queue. In addition, IConsumerListener::onFrameAvailable is called to notify the SF process to refresh the image; the call chain is: onFrameAvailable -> SF.signalLayerUpdate -> MQ.invalidate -> MQ.Handler.dispatchInvalidate -> MQ.Handler.handleMessage -> SF.onMessageReceived. Since the SF process is event-driven, SurfaceFlinger is triggered to refresh the image by a looper plus events, similar to the UI thread. Note: the IConsumerListener here is the Layer.cpp created in createNormalLayer.
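As promised above, here is a minimal usage sketch of the two APIs. The method name drawFrame is my own, and the Surface is assumed to come from a valid producer (an SV, an ST, and so on).

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.Surface;

/** Hypothetical helper: draws one solid-color frame into a Surface; works on any thread. */
void drawFrame(Surface surface) {
    if (!surface.isValid()) return;
    Canvas canvas = surface.lockCanvas(null); // dequeue a blank buffer from the BQ
    try {
        canvas.drawColor(Color.RED);          // software drawing via Skia on the CPU
    } finally {
        surface.unlockCanvasAndPost(canvas);  // queue the buffer and notify the consumer
    }
}
```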

2. Creation and use of SurfaceView

SV.updateSurface creates or updates its Surface, and drawing on SV also goes through Surface's two APIs, so I will just briefly compare View with SV here. (A minimal SV sketch follows this list.)

  • 1. Principle: a typical Activity contains multiple Views that form a View Hierarchy tree, and only the top-level DecorView, the root View, is visible to WMS. This DecorView has a corresponding WindowState in WMS and, correspondingly, a Layer in SF. SV comes with its own Surface, which has its own WindowState in WMS and its own Layer in SF.
  • 2. Benefits: on the app side the Surface still sits in the View Hierarchy, but on the server side (WMS and SF) it is separated from the host window. The advantage is that rendering of this Surface can be done in a separate thread with its own GL context. This is useful for performance-sensitive applications such as games and video, because it does not affect the main thread's response to events.
  • 3. Disadvantages: because the Surface is not in the View Hierarchy, its display is not controlled by the View's attributes, so it cannot be translated or scaled like other Views, nor placed in arbitrary ViewGroups, and some View features cannot be used.
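As referenced above, here is a minimal SV sketch showing its defining capability: drawing on a worker thread. The class name DrawingSurfaceView and the one-shot background thread are my own simplifications; real code would manage the thread's lifecycle across surfaceDestroyed.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

/** Hypothetical SV subclass: renders on a worker thread, which an ordinary View cannot do. */
public class DrawingSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    public DrawingSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this); // get told when the Surface is ready
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        new Thread(() -> {
            Canvas canvas = holder.lockCanvas();    // dequeue a buffer
            if (canvas != null) {
                canvas.drawColor(Color.BLUE);
                holder.unlockCanvasAndPost(canvas); // queue it for SF to composite
            }
        }).start();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}
    @Override public void surfaceDestroyed(SurfaceHolder holder) {}
}
```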

3. Creation and use of SurfaceTexture

We may not use ST directly very often, but it is the core component of TV. It can convert the image stream provided by a Surface into a GL texture and then reprocess the content, so that the final result can be handed to other consumers for consumption. In this section we parse ST at the source level.

Figure 7 is an overview of ST combined with Surface, SV, TV, and other components, which I will briefly explain here:

  • 1. The far left shows the sources of the original image stream: Video (a local or network video stream), Camera (the video stream shot by the camera), and Software/Hardware Render (an image stream drawn by Skia/GL).
  • 2. The original image stream on the far left is delivered to a Surface, which is the producer of a BQ, while GLConsumer (embodied in the Java layer as ST) is the consumer of that BQ.
  • 3. GLConsumer takes the original image stream from the Surface and converts it into a GL texture, which is eventually consumed in some suitable way.
  • 4. The ways to consume the texture in GLConsumer include using SV, TV, and the like to display it on the screen or elsewhere.

I will follow the flow in Figure 8 to explain the creation and use of ST. (A minimal end-to-end sketch follows the list below.)

  • 1. Let's start with the creation of ST.java, the yellow box in the figure. Note that creating an ST requires passing in the id of a texture created by GLES20.glGenTextures; the image memory provided by the Surface will ultimately be attached to this texture.

    • 1. The creation call chain is: SurfaceTexture.nativeInit -> SurfaceTexture::SurfaceTexture_init.
    • 2. The key steps of SurfaceTexture_init are as follows:
      • 1. Create an IGraphicBufferProducer and an IGraphicBufferConsumer. These two classes are the producer and consumer of the BQ.
      • 2. Create a BQ by calling BQ::createBufferQueue, passing in the producer and consumer created above.
      • 3. Create a GLConsumer, which is effectively a wrapper class around the IGraphicBufferConsumer and is ST's final implementation class in the C++ layer.
      • 4. Call SurfaceTexture_setSurfaceTexture to assign the GLConsumer created above to ST.java's mSurfaceTexture field.
      • 5. Call SurfaceTexture_setProducer to likewise assign ST's mProducer field.
      • 6. Call SurfaceTexture_setFrameAvailableListener to add a Java-layer OnFrameAvailableListener to the GLConsumer as the callback monitoring the image stream. This OnFrameAvailableListener is the one added via ST.setOnFrameAvailableListener.
  • 2. After ST initialization, we need a Surface acting as producer to provide the image stream for the ST.

    • 1. Remember that Surface.java has a constructor that takes an ST as an argument? From this constructor the call chain is: new Surface(ST) -> Surface.nativeCreateFromSurfaceTexture -> android_view_Surface::nativeCreateFromSurfaceTexture.

    • 2. The key steps of nativeCreateFromSurfaceTexture are as follows:

      • 1. Call SurfaceTexture_getProducer to retrieve the C++-layer pointer to the mProducer object stored in ST.java.
      • 2. Create the Surface.cpp object from the IGraphicBufferProducer obtained above.
  • 3. Once we have the Surface responsible for providing the image stream, we can use its two APIs to draw images and feed the stream.

    • 1. The two APIs are lockCanvas and unlockCanvasAndPost, which I will not parse again here.
    • 2. From the earlier analysis of drawing on a Surface, we know that calling unlockCanvasAndPost triggers an IConsumerListener::onFrameAvailable callback. In the View drawing process, that callback ultimately triggers SF's image compositing. So who is the implementation class of the callback here? You guessed it: it is the Java-layer OnFrameAvailableListener that was set on the GLConsumer. Because we passed this IGraphicBufferProducer in when creating Surface.cpp, the call chain is: BQ::ProxyConsumerListener::onFrameAvailable -> ConsumerBase::onFrameAvailable -> FrameAvailableListener::onFrameAvailable -> JNISurfaceTextureContext::onFrameAvailable -> OnFrameAvailableListener.onFrameAvailable, ending in the Java layer.
    • 3. Of course, what the OnFrameAvailableListener callback does is written by ourselves.
      • 1. We know that the ultimate purpose of SurfaceTexture is to bind the image stream to the texture we defined at the beginning; SurfaceTexture just provides the updateTexImage method to refresh that texture.
      • 2. So do we simply call updateTexImage inside OnFrameAvailableListener? Not necessarily. Surface drawing can happen on any thread, so the OnFrameAvailableListener callback can also be triggered on any thread, but the texture update must happen in the GL environment in which the texture was created, and on Android GL environments correspond one-to-one with threads. So we can only call updateTexImage directly in the callback if the Surface drawing thread is the same thread as the GL environment's.
      • 3. Wherever we call updateTexImage, let's see how it works inside. The call chain is: SurfaceTexture.updateTexImage -> SurfaceTexture::nativeUpdateTexImage -> GLConsumer::updateTexImage. The important steps of GLConsumer::updateTexImage are:
        • 1. Call acquireBufferLocked to obtain the latest image memory at the head of the BQ.
        • 2. Call glBindTexture to bind the obtained image memory to the texture.
        • 3. Call updateAndReleaseLocked to reset the previously acquired image memory and put it back at the tail of the BQ.
      • 4. At this point the SurfaceTexture has turned the image stream we drew on the Surface into a texture, which can be left to TV or GLSurfaceView for further rendering and processing.
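Here is the minimal end-to-end sketch promised above: create a texture, wrap it in an ST, and feed it through a Surface. The method name createProducerTexture is my own, and the method must run on a thread that owns a GL context (for example inside a GLSurfaceView.Renderer callback), for the threading reasons discussed in point 2 above.

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

/** Hypothetical helper; call on the GL thread that owns the current GL context. */
SurfaceTexture createProducerTexture() {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]); // external texture for ST

    SurfaceTexture st = new SurfaceTexture(tex[0]); // creates the BQ + GLConsumer internally
    st.setOnFrameAvailableListener(t -> {
        // Fired on an arbitrary thread: just set a flag here and call
        // t.updateTexImage() later on this GL thread.
    });
    return st;
}

// Producer side, on any thread:
// Surface producer = new Surface(st);
// Canvas c = producer.lockCanvas(null); /* draw */ producer.unlockCanvasAndPost(c);
```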

4. Creation and use of TextureView

TV can project a content stream directly into a View and can be used for things like video preview. Unlike SV, it does not create a separate window in WMS; it lives in the View Hierarchy as a normal View, so it can be moved, rotated, scaled, and animated like any other View. It is worth noting that TV must live in a hardware-accelerated window. The content stream it displays can come from the app process or a remote process. In this section we examine it from the source code. Because TV requires hardware acceleration, it is ultimately drawn by the RenderThread; in chapter 2 we discussed the roles of ThreadedRenderer.java, RenderProxy.cpp, RenderThread.cpp, and so on, so this section will use the render thread without further explanation.

As before, the rest of the analysis in this section follows Figure 9. (A minimal TV usage sketch follows the list below.)

  • 1. Since TV can be viewed as a normal View, we can start directly from TV's draw method. There are these key steps in draw:
    • 1. Call getHardwareLayer to get the hardware-accelerated layer:
      • 1. ThreadedRenderer.createTextureLayer -> ThreadedRenderer.nCreateTextureLayer -> RenderProxy::createTextureLayer -> new DeferredLayerUpdater.cpp: this first call chain creates a DeferredLayerUpdater.cpp object, which is used later to update ST's texture.
      • 2. After creating the DeferredLayerUpdater.cpp, ThreadedRenderer.createTextureLayer calls HardwareLayer.adoptTextureLayer to wrap ThreadedRenderer.java and DeferredLayerUpdater.cpp into a Java-layer HardwareLayer for later use.
      • 3. After the HardwareLayer is created, an ST is created if one does not already exist. The creation process was described in the previous section and will not be repeated here. Similarly, nCreateNativeWindow is called to create an ANativeWindow in the C++ layer (a Surface.java in the Java layer). The main purpose of this Surface is to let TV provide the lockCanvas and unlockCanvasAndPost APIs.
      • 4. Besides initializing the ST, HardwareLayer.setSurfaceTexture is called here to hand the ST to the DeferredLayerUpdater, because it is the DeferredLayerUpdater that updates the ST's texture later.
      • 5. applyUpdate is also called, but on this first creation pass there is nothing to update yet.
      • 6. Then ST.setOnFrameAvailableListener(mUpdateListener) is called to monitor the image stream; mUpdateListener is implemented inside TV, and we will come back to that implementation below.
      • 7. The ST initialization is now finished, so mListener.onSurfaceTextureAvailable can be called to notify external code through the callback.
    • 2. Call DisplayListCanvas.drawHardwareLayer to draw a separate layer above the Surface of the current View Hierarchy (simply put, it marks out the drawing area). Note: the layer here is a different concept from the Layer that SV owns in SF, which we discussed earlier.
  • 2. Now that the TV is created, we can feed it an image stream, using lockCanvas and unlockCanvasAndPost. Note: ST and TV can take image streams from more than these two APIs; the Surface can also be converted into an EGLSurface and fed with GL, and, even more commonly, the stream comes from the camera. I will not expand on those here and leave them to the reader to explore.
    • 1. First, I will not repeat the source flow and callback flow of lockCanvas and unlockCanvasAndPost; they were covered in the analysis of ST and Surface.
    • 2. Let's look directly at the implementation of OnFrameAvailableListener in TV:
      • 1. It calls updateLayer to set mUpdateLayer to true, indicating that the TV needs to be refreshed the next time the VSync signal arrives. Note that the TV is in the View Hierarchy, so it is refreshed along with the other Views.
      • 2. It calls invalidate to trigger the ViewRootImpl redraw process.
    • 3. Up to 16 ms later the VSync signal arrives, and after a series of methods the call returns to TV.draw. This time, since the ST already exists, the main code path is the TV.applyUpdate method:
      • 1. It first calls HardwareLayer.nPrepare -> RenderProxy::pushLayerUpdate to push the previously created DeferredLayerUpdater into RenderProxy.cpp.
      • 2. It calls HardwareLayer.nUpdateSurfaceTexture to set mUpdateTexImage in the DeferredLayerUpdater to true. Why not just update the texture here? Because we are still on the UI thread, and the texture update must happen on the render thread.
    • 4. Recall the source analysis of the Android drawing mechanism in chapter 2: when the View Hierarchy's draw operation completes, a series of calls enters the render thread. Here we go again: RenderProxy::syncAndDrawFrame -> DrawFrameTask::drawFrame -> DrawFrameTask::syncFrameState runs in the render thread. It triggers the apply method of the DeferredLayerUpdater.cpp we stored in RenderProxy, which has these key steps:
      • 1. GLConsumer::attachToContext binds the current ST to the texture.
      • 2. GLConsumer::updateTexImage, as described earlier, updates the ST's image stream into the texture.
      • 3. LayerRenderer::updateTextureLayer.
  • 3. By now, the TV source code from creation to image stream conversion has been parsed; what remains is the real rendering with GL, as described in the chapter on the Android drawing mechanism.
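Finally, the minimal TV usage sketch referenced above. The listener wiring is the public API; drawing one frame through a Surface built from the TV's own ST is my own simplification (one could equally use TV.lockCanvas).

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.SurfaceTexture;
import android.view.Surface;
import android.view.TextureView;

/** Hypothetical wiring; the TV must live in a hardware-accelerated window. */
void attach(TextureView textureView) {
    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
            Surface producer = new Surface(st);    // producer for the TV's internal BQ
            Canvas canvas = producer.lockCanvas(null);
            canvas.drawColor(Color.GREEN);
            producer.unlockCanvasAndPost(canvas);  // triggers onFrameAvailable -> invalidate
            producer.release();
        }
        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
    });
}
```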

4. Summary

This article nearly killed me!! Two thirds of the New Year holiday went into it. I feel it is truly the most substantial article I have ever written, and I hope it helps you. And seeing how hard I have worked, I hope you will take a moment to follow my public account: interesting things in the world. The QR code is at the bottom of the article. Thank you!!

Series articles

  • 1. Writing a Douyin App from scratch: getting started
  • 4. Writing a Douyin App from scratch: logging, event tracking, and a preliminary back-end architecture
  • 5. Writing a Douyin App from scratch: app architecture updates and network layer customization
  • 6. Writing a Douyin App from scratch: getting started with audio and video
  • 7. Writing a Douyin App from scratch: a minimalist video player based on FFmpeg
  • 8. Writing a Douyin App from scratch: building a cross-platform video editing SDK project

