Overview

Note: This article is based on the Android 10 source code; for brevity, some source code is omitted. If anything in the article is wrong, please point it out so we can improve together. Leave a "like" if you find it helpful!

  • The blog link

Having covered Android-SurfaceFlinger startup and workflow and how Android-Choreographer works, we now know how the Vsync signal drives SurfaceFlinger to composite Layer data and drives Choreographer to start the App's drawing process. Another article, Android-View drawing process, picked up from Choreographer.postCallback: after the Vsync signal arrives, ViewRootImpl.performTraversals is called to run the View's measure, layout, and draw passes. That leaves the question of how the data drawn in View.draw flows into the SurfaceFlinger process once the View has finished drawing. This involves the workflow of the Surface and the logic of the BufferQueue managing graphic buffers. Several important BufferQueue methods will be analyzed later; this does not affect the Surface analysis here.

I will split the Surface-related content into two articles. This one mainly looks at the Surface creation process and the logic of hardware and software drawing; the next one covers Android-Surface's double buffering and SurfaceView.

Surface creation

WMS.addWindow

From the Android-Window mechanism we know that WindowManager.addView leads to ViewRootImpl.setView, which then calls WMS.addWindow via Binder. That method creates a WindowState object and calls its attach method, which ultimately calls Session.windowAddedLocked. Surface creation can be traced from here:

// Session.java
void windowAddedLocked(String packageName) {
    if (mSurfaceSession == null) {
        mSurfaceSession = new SurfaceSession();
    }
}

// SurfaceSession.java
private long mNativeClient; // SurfaceComposerClient*

/** Create a new connection with the surface flinger. */
public SurfaceSession() {
    mNativeClient = nativeCreate();
}

// frameworks/base/core/jni/android_view_SurfaceSession.cpp
static jlong nativeCreate(JNIEnv* env, jclass clazz) {
    SurfaceComposerClient* client = new SurfaceComposerClient();
    client->incStrong((void*)nativeCreate);
    return reinterpret_cast<jlong>(client);
}

In the SurfaceSession constructor, nativeCreate returns a SurfaceComposerClient pointer; this object represents a connection to SurfaceFlinger.

The following function is called when it is first used:

void SurfaceComposerClient::onFirstRef() {
    sp<ISurfaceComposerClient> conn = (rootProducer != nullptr) ?
            sf->createScopedConnection(rootProducer) : sf->createConnection();
    if (conn != 0) {
        mClient = conn;
    }
    // ...
}

sp<ISurfaceComposerClient> SurfaceFlinger::createConnection() {
    return initClient(new Client(this)); // initClient just calls initCheck
}

That is, a Client object is created that implements the ISurfaceComposerClient interface.

To summarize the WMS-side operations above: WMS creates a WindowState object representing a Window on the client side, then calls WindowState.attach, which creates a SurfaceSession object. SurfaceSession represents a connection to SurfaceFlinger: it creates a SurfaceComposerClient object in the native layer, which in turn causes SurfaceFlinger to create a Client object implementing the ISurfaceComposerClient interface.

Create a Surface

A ViewRootImpl object corresponds to a Surface object, which has the following code in its source code:

// ViewRootImpl.java
public final Surface mSurface = new Surface();

At this point the Surface object is still empty; its no-argument constructor does nothing. According to the analysis in the Android-Choreographer workflow, ViewRootImpl.setView registers with Choreographer, and after the Vsync signal arrives, ViewRootImpl.performTraversals is executed:

// ViewRootImpl.java
private void performTraversals() {
    relayoutWindow(params, viewVisibility, insetsPending);
    // measure, layout, draw
}

private int relayoutWindow(...) throws RemoteException {
    // Note that the last parameter, mSurface, is the Surface object created earlier.
    // This calls WMS.relayoutWindow via Binder.
    mWindowSession.relayout(mWindow, ..., mSurface);
    // ...
}

// WMS
public int relayoutWindow(Session session, ..., Surface outSurface) {
    result = createSurfaceControl(outSurface, result, win, winAnimator);
    // ...
}

private int createSurfaceControl(Surface outSurface, int result, WindowState win, WindowStateAnimator winAnimator) {
    WindowSurfaceController surfaceController = winAnimator.createSurfaceLocked(win.mAttrs.type, win.mOwnerUid);
    if (surfaceController != null) {
        surfaceController.getSurface(outSurface);
    } else {
        outSurface.release();
    }
    return result;
}

The createSurfaceLocked method above creates a WindowSurfaceController object, and the WindowSurfaceController constructor in turn creates a SurfaceControl object (mSurfaceControl; the instantiation code is not pasted here). As the name suggests, SurfaceControl is used to manage the Surface. Let's look at its constructor:

// SurfaceControl.java
private SurfaceControl(...) {
    // Returns the native SurfaceControl pointer
    mNativeObject = nativeCreate(session, name, w, h, format, flags,
            parent != null ? parent.mNativeObject : 0, windowType, ownerUid);
}

// frameworks/base/core/jni/android_view_SurfaceControl.cpp
static jlong nativeCreate(...) {
    // client is the SurfaceComposerClient object created above
    sp<SurfaceComposerClient> client(android_view_SurfaceSession_getClient(env, sessionObj));
    sp<SurfaceControl> surface;
    client->createSurfaceChecked(String8(name.c_str()), w, h, format, &surface, flags, parent, windowType, ownerUid);
    return reinterpret_cast<jlong>(surface.get());
}

status_t SurfaceComposerClient::createSurfaceChecked(..., sp<SurfaceControl>* outSurface, ...) {
    // ...
    sp<IGraphicBufferProducer> gbp;
    err = mClient->createSurface(name, w, h, format, flags, parentHandle, windowType, ownerUid, &handle, &gbp);
    if (err == NO_ERROR) {
        *outSurface = new SurfaceControl(this, handle, gbp, true /* owned */);
    }
    return err;
}

status_t Client::createSurface(...) {
    // ...
    flinger->createLayer(name, client, w, h, format, flags, windowType, ownerUid, handle, gbp, parent);
}

In SurfaceFlinger, the entity corresponding to a Surface is a Layer object, and the createLayer method creates different kinds of Layers depending on the flags. You can see that while the SurfaceControl object is being created, SurfaceFlinger creates the Layer object corresponding to the Surface.

Next, look at the WindowSurfaceController.getSurface method:

void getSurface(Surface outSurface) {
    outSurface.copyFrom(mSurfaceControl);
}

// Surface.java
public void copyFrom(SurfaceControl other) {
    // The native SurfaceControl pointer returned above
    long surfaceControlPtr = other.mNativeObject;
    long newNativeObject = nativeGetFromSurfaceControl(surfaceControlPtr);

    synchronized (mLock) {
        if (mNativeObject != 0) {
            nativeRelease(mNativeObject);
        }
        setNativeObjectLocked(newNativeObject);
    }
}

private void setNativeObjectLocked(long ptr) {
    if (mNativeObject != ptr) {
        mNativeObject = ptr;
        if (mHwuiContext != null) {
            mHwuiContext.updateSurface();
        }
    }
}

// frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeGetFromSurfaceControl(JNIEnv* env, jclass clazz, jlong surfaceControlNativeObj) {
    // ctrl is the SurfaceControl object created earlier
    sp<SurfaceControl> ctrl(reinterpret_cast<SurfaceControl *>(surfaceControlNativeObj));
    sp<Surface> surface(ctrl->getSurface());
    if (surface != NULL) {
        surface->incStrong(&sRefBaseOwner);
    }
    return reinterpret_cast<jlong>(surface.get());
}

sp<Surface> SurfaceControl::getSurface() const
{
    Mutex::Autolock _l(mLock);
    if (mSurfaceData == 0) {
        return generateSurfaceLocked();
    }
    return mSurfaceData;
}

sp<Surface> SurfaceControl::generateSurfaceLocked() const
{
    // mGraphicBufferProducer is the gbp object created above
    mSurfaceData = new Surface(mGraphicBufferProducer, false);
    return mSurfaceData;
}

The nativeGetFromSurfaceControl method above creates a Surface in the native layer and returns its pointer, which is assigned to the mNativeObject property of the Java-layer Surface object.

IGraphicBufferProducer

Notice the gbp object of type IGraphicBufferProducer above. This is a very important object; let's see how it is created:

status_t SurfaceFlinger::createLayer(const String8& name, const sp<Client>& client, ..., sp<IGraphicBufferProducer>* gbp, ...) {
    switch (flags & ISurfaceComposerClient::eFXSurfaceMask) {
        case ISurfaceComposerClient::eFXSurfaceNormal:
            // Take BufferLayer as an example
            result = createBufferLayer(client, uniqueName, w, h, flags, format, handle, gbp, &layer);
            break;
        // ...
    }
}

status_t SurfaceFlinger::createBufferLayer(const sp<Client>& client,
        const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
        sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer)
{
    sp<BufferLayer> layer = new BufferLayer(this, client, name, w, h, flags);
    status_t err = layer->setBuffers(w, h, format, flags);
    if (err == NO_ERROR) {
        *handle = layer->getHandle();
        *gbp = layer->getProducer(); // Here is the gbp
        *outLayer = layer;
    }
    return err;
}

sp<IGraphicBufferProducer> BufferLayer::getProducer() const {
    return mProducer;
}

void BufferLayer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlinger to consume
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer, true);
    mProducer = new MonitoredProducer(producer, mFlinger, this);
    mConsumer = new BufferLayerConsumer(consumer, mFlinger->getRenderEngine(), mTextureName, this);
    // ...
}

mProducer is an instance of MonitoredProducer, a decorator class that delegates the actual work to the wrapped producer:

void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
        sp<IGraphicBufferConsumer>* outConsumer, bool consumerIsSurfaceFlinger) {
    sp<BufferQueueCore> core(new BufferQueueCore());
    sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core, consumerIsSurfaceFlinger));
    sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
    *outProducer = producer;
    *outConsumer = consumer;
}

You can see that producer is a BufferQueueProducer object and consumer is a BufferQueueConsumer object.
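The relationship between MonitoredProducer and the wrapped BufferQueueProducer is a textbook decorator. As a rough illustration only (simplified Java with made-up names, not AOSP code, which is native):

// Illustrative sketch of the decorator relationship; the real interfaces are native.
interface GraphicBufferProducerSketch {
    int dequeueBuffer();        // hand out a free buffer slot
    void queueBuffer(int slot); // accept a filled buffer slot
}

// Forwards every call to the real producer, adding monitoring hooks around it.
class MonitoredProducerSketch implements GraphicBufferProducerSketch {
    private final GraphicBufferProducerSketch wrapped; // the real BufferQueueProducer

    MonitoredProducerSketch(GraphicBufferProducerSketch wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public int dequeueBuffer() {
        // ... monitoring/bookkeeping would go here, then delegate
        return wrapped.dequeueBuffer();
    }

    @Override
    public void queueBuffer(int slot) {
        wrapped.queueBuffer(slot);
    }
}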

summary

In the Java layer, the ViewRootImpl instance holds a Surface object. The mNativeObject property in the Surface object points to the Surface object created in the native layer. The native-layer Surface corresponds to a Layer object in SurfaceFlinger, and the Surface holds the Layer's BufferQueueProducer pointer. In the later drawing process, the Surface requests graphic buffers through this producer, and the content drawn on the Surface is stored in those buffers. Finally, SurfaceFlinger fetches the buffer data through the BufferQueueConsumer, composites and renders it, and sends it to the display.

Hardware acceleration & software drawing

Overview

Rasterization: the process of breaking Button, TextView, and other components down into individual pixels for display. It is a time-consuming operation, and the GPU can speed it up.

The UI of the Android system can be divided into two steps from drawing to displaying on the screen:

  1. Android App process: draws the UI into a graphic buffer (GraphicBuffer) and notifies SurfaceFlinger to composite it.
  2. SurfaceFlinger process: composites the GraphicBuffer data and hands it to the screen buffer for display. This step is carried out by hardware (OpenGL and Hardware Composer).

Therefore, hardware acceleration generally refers to using the GPU to accelerate rendering into the GraphicBuffer in the App process. As a piece of hardware, the GPU cannot be used directly from user space; GPU manufacturers implement drivers against the OpenGL specification that expose an API for calling its functions.

Software rendering

  • When the App updates part of the UI, the CPU traverses the View tree to compute the dirty region that needs redrawing, and then draws every View in the hierarchy that intersects the dirty region, so software drawing may also draw Views that do not need to be redrawn.
  • The drawing work of software rendering happens on the main thread, which may cause jank.
  • Software rendering writes the content into a Bitmap; during subsequent rendering, the Bitmap's pixel content is filled into the Surface's buffer.
  • Software drawing uses the Skia library.

Hardware rendering

  • When the App updates part of the UI, the CPU still computes the dirty region, but instead of immediately executing the draw commands, it records each drawXXX call as a DrawOp in a DisplayList, which is then handed over to a separate Render thread that uses the GPU for hardware-accelerated rendering.
  • Only the dirty Views that need updating have their operations recorded or updated; Views that do not need updating can reuse the instructions previously recorded in the DisplayList.
  • Hardware acceleration runs on a separate Render thread, taking load off the main thread and improving responsiveness.
  • Hardware rendering is done on the GPU through OpenGL, a cross-platform graphics API that provides a standard software interface for 2D/3D graphics processing hardware. Reportedly, in newer versions of Android, Google is gradually letting Skia take over OpenGL calls, unifying rendering behind a single indirection.
  • Hardware acceleration has several drawbacks: compatibility (some drawing operations cannot be accelerated), memory consumption, power consumption (the GPU draws power), etc.
  • Android 3.0 (API 11) introduced hardware acceleration support, and Android 4.0 (API 14) enables it by default.

For simple usage of OpenGL ES, see the Android-OpenGL-ES notes and the official documentation.

Configuring Hardware Acceleration

  • Application: add android:hardwareAccelerated="true|false" to the application tag in the manifest file
  • Activity: add android:hardwareAccelerated="true|false" to the activity tag in the manifest file
  • Window: getWindow().setFlags(WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED, WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED)
  • View: setLayerType(View.LAYER_TYPE_HARDWARE /* or View.LAYER_TYPE_SOFTWARE */, mPaint) (see the sketch below)
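A minimal sketch of the window-level and view-level switches in code (the manifest attributes are set in XML; the class name here is illustrative):

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.WindowManager;

public class DemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Window level: request hardware acceleration for this window
        // (set it before the window's view hierarchy is attached).
        getWindow().setFlags(
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);

        View content = new View(this);
        // View level: force software rendering for this view, e.g. when it
        // uses drawing operations the hardware path does not support.
        content.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        setContentView(content);
    }
}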

Determine whether hardware acceleration is supported:

// If hardware acceleration is not enabled for Application or Activity, return false
view.isHardwareAccelerated()

// If setLayerType is not set, it is affected by Application and Activity
// If setLayerType is set, the return value is affected by the setLayerType argument
canvas.isHardwareAccelerated()

During the drawing process, the ViewRootImpl.enableHardwareAcceleration method determines whether hardware acceleration needs to be enabled:

private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
    mAttachInfo.mHardwareAccelerated = false;
    mAttachInfo.mHardwareAccelerationRequested = false;

    // Don't enable hardware acceleration when the application is in compatibility mode
    if (mTranslator != null) return;

    // Try to enable hardware acceleration if requested
    final boolean hardwareAccelerated =
            (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

    if (hardwareAccelerated) {
        // Create the hardware-accelerated renderer
    }
}

Software rendering

From the Android-View drawing process we know that software drawing starts from the ViewRootImpl.drawSoftware method.

VRImpl.drawSoftware

private boolean drawSoftware(Surface surface, AttachInfo attachInfo, int xoff, int yoff,
        boolean scalingRequired, Rect dirty, Rect surfaceInsets) {

    // Draw with software renderer.
    final Canvas canvas;
    canvas = mSurface.lockCanvas(dirty);
    canvas.setDensity(mDensity);

    try {
        dirty.setEmpty();
        mView.draw(canvas);
    } finally {
        surface.unlockCanvasAndPost(canvas);
    }
    return true;
}

The above software drawing can be divided into three steps:

  1. Use the Surface.lockCanvas method to request a graphic buffer from SurfaceFlinger
  2. Call the View.draw method to write the drawing data into the buffer
  3. Use the Surface.unlockCanvasAndPost method to queue the buffer filled with data back into the BufferQueue and notify SurfaceFlinger to composite (a usage-level sketch of this pattern follows)
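From application code, the same three-step pattern can be seen through a SurfaceView's SurfaceHolder, which delegates to the Surface methods analyzed below. A minimal illustrative sketch:

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.SurfaceHolder;

// Illustrative only: draws one frame using the lock/draw/post pattern.
void drawFrame(SurfaceHolder holder) {
    Canvas canvas = null;
    try {
        // Step 1: dequeue a free GraphicBuffer, wrapped in a Canvas
        canvas = holder.lockCanvas();
        if (canvas == null) return; // surface not ready yet
        // Step 2: draw into the buffer-backed Bitmap
        canvas.drawColor(Color.BLACK);
        canvas.drawLine(0, 0, canvas.getWidth(), canvas.getHeight(), new Paint());
    } finally {
        if (canvas != null) {
            // Step 3: queue the buffer back and trigger composition
            holder.unlockCanvasAndPost(canvas);
        }
    }
}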

Surface.lockCanvas

public Canvas lockCanvas(Rect inOutDirty) throws Surface.OutOfResourcesException, IllegalArgumentException {
    synchronized (mLock) {
        if (mLockedObject != 0) {
            // refuse to re-lock the Surface
            throw new IllegalArgumentException("Surface was already locked");
        }
        mLockedObject = nativeLockCanvas(mNativeObject, mCanvas, inOutDirty);
        return mCanvas;
    }
}

Here the native method nativeLockCanvas is called to obtain the mLockedObject pointer. As mentioned above, mNativeObject points to the Surface object created in the native layer.

// frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeLockCanvas(JNIEnv* env, jclass clazz, jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));

    // Create a native Rect object for the dirty region
    Rect dirtyRect(Rect::EMPTY_RECT);
    Rect* dirtyRectPtr = NULL;

    if (dirtyRectObj) {
        // Initialize dirtyRect and dirtyRectPtr from the Java-layer dirtyRectObj
    }

    ANativeWindow_Buffer outBuffer;
    // Call the lock method to fetch a free graphic buffer and assign it to outBuffer
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    // Create SkImageInfo based on outBuffer
    SkImageInfo info = SkImageInfo::Make(outBuffer.width, outBuffer.height, convertPixelFormat(outBuffer.format),
                                         outBuffer.format == PIXEL_FORMAT_RGBX_8888 ? kOpaque_SkAlphaType : kPremul_SkAlphaType,
                                         GraphicsJNI::defaultColorSpace());
    // Set up the bitmap
    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    bitmap.setInfo(info, bpr);
    if (outBuffer.width > 0 && outBuffer.height > 0) {
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }
    // Set the native bitmap on the native Canvas
    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    nativeCanvas->setBitmap(bitmap);
    if (dirtyRectPtr) {
        nativeCanvas->clipRect(dirtyRect.left, dirtyRect.top, dirtyRect.right, dirtyRect.bottom, SkClipOp::kIntersect);
    }
    // Return the locked Surface's address
    sp<Surface> lockedSurface(surface);
    lockedSurface->incStrong(&sRefBaseOwner);
    return (jlong) lockedSurface.get();
}

The nativeLockCanvas method above creates a Rect object for the dirty area and then calls surface->lock:

status_t Surface::lock(ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds) {
    ANativeWindowBuffer* out;
    status_t err = dequeueBuffer(&out, &fenceFd); // Assign a GraphicBuffer to out
    sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));
    // Lock the GraphicBuffer
    status_t res = backBuffer->lockAsync(GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN, newDirtyRegion.bounds(), &vaddr, fenceFd);

    if (res != 0) {
        err = INVALID_OPERATION;
    } else {
        mLockedBuffer = backBuffer;
        outBuffer->width  = backBuffer->width;
        outBuffer->height = backBuffer->height;
        outBuffer->stride = backBuffer->stride;
        outBuffer->format = backBuffer->format;
        outBuffer->bits   = vaddr;
    }
    return err;
}

int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
    // ...
    status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence, reqWidth, reqHeight,
        reqFormat, reqUsage, &mBufferAge, enableFrameTimestamps ? &frameTimestamps : nullptr);
    // ...
    *buffer = gbuf.get();
    // ...
}

mGraphicBufferProducer is the Layer's BufferQueueProducer; at drawing time, a GraphicBuffer is dequeued from the BufferQueue through it.

Summary: the Surface.lockCanvas method uses the BufferQueueProducer to fetch a GraphicBuffer from the BufferQueue (used to back the Bitmap in the Canvas), locks the Surface, and returns the native Surface's address to the mLockedObject property of the Java-layer Surface.

View.draw

See the Android-View drawing principles. Taking drawLines as an example:

// BaseCanvas
public void drawLines(@Size(multiple = 4) @NonNull float[] pts, int offset, int count, @NonNull Paint paint) {
    nDrawLines(mNativeCanvasWrapper, pts, offset, count, paint.getNativeInstance());
}

// {"nDrawLines", "(J[FIIJ)V", (void*) CanvasJNI::drawLines}
// nDrawLines corresponds to SkiaCanvas::drawLines
void SkiaCanvas::drawLines(const float* points, int count, const SkPaint& paint) {
    this->drawPoints(points, count, paint, SkCanvas::kLines_PointMode);
}

void SkiaCanvas::drawPoints(const float* points, int count, const SkPaint& paint, SkCanvas::PointMode mode) {
    // ...
    // mCanvas is the SkCanvas type pointer
    mCanvas->drawPoints(mode, count, pts.get(), paint);
}

void SkCanvas::drawPoints(PointMode mode, size_t count, const SkPoint pts[], const SkPaint& paint) {
    this->onDrawPoints(mode, count, pts, paint);
}

void SkCanvas::onDrawPoints(PointMode mode, size_t count, const SkPoint pts[],
                            const SkPaint& paint) {
    // ...
    while (iter.next()) {
        iter.fDevice->drawPoints(mode, count, pts, looper.paint());
    }
}

In newer versions of Android, Google has gradually let Skia take over OpenGL calls to unify rendering behind one indirection, so both software drawing and hardware acceleration use the native-layer SkiaCanvas object. The call then goes to fDevice->drawPoints to actually draw (or, in the hardware case, to build the draw ops).

// SkBitmapDevice should be the device used by software drawing... roughly, anyway
void SkBitmapDevice::drawPoints(SkCanvas::PointMode mode, size_t count, const SkPoint pts[], const SkPaint& paint) {
    // Draw into the Bitmap
    BDDraw(this).drawPoints(mode, count, pts, paint, nullptr);
}

Surface.unlockCanvasAndPost

public void unlockCanvasAndPost(Canvas canvas) {
    synchronized (mLock) {
        checkNotReleasedLocked();
        if (mHwuiContext != null) { // Hardware drawing goes here
            mHwuiContext.unlockAndPost(canvas);
        } else { // Software drawing
            unlockSwCanvasAndPost(canvas);
        }
    }
}

private void unlockSwCanvasAndPost(Canvas canvas) {
    // ...
    try {
        nativeUnlockCanvasAndPost(mLockedObject, canvas);
    } finally {
        nativeRelease(mLockedObject);
        mLockedObject = 0;
    }
}

Here the native nativeUnlockCanvasAndPost method is called with the address of the native Surface locked earlier:

static void nativeUnlockCanvasAndPost(JNIEnv* env, jclass clazz, jlong nativeObject, jobject canvasObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));

    // detach the canvas from the surface
    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    nativeCanvas->setBitmap(SkBitmap());

    // unlock surface
    status_t err = surface->unlockAndPost();
}

status_t Surface::unlockAndPost() {
    int fd = -1;
    status_t err = mLockedBuffer->unlockAsync(&fd);
    err = queueBuffer(mLockedBuffer.get(), fd);
    mPostedBuffer = mLockedBuffer;
    mLockedBuffer = 0;
    return err;
}

The mLockedBuffer->unlockAsync method unlocks the GraphicBuffer. Then look at the queueBuffer method:

int Surface::queueBuffer(android_native_buffer_t* buffer, int fenceFd) {
    // ...
    int i = getSlotFromBufferLocked(buffer);
    // ...
    IGraphicBufferProducer::QueueBufferOutput output;
    IGraphicBufferProducer::QueueBufferInput input(timestamp, isAutoTimestamp,
            static_cast<android_dataspace>(mDataSpace), crop, mScalingMode,
            mTransform ^ mStickyTransform, fence, mStickyTransform, mEnableFrameTimestamps);
    // ...
    status_t err = mGraphicBufferProducer->queueBuffer(i, input, &output);
    // ...
    return err;
}

mGraphicBufferProducer->queueBuffer sends the graphic buffer filled with drawing data back to the BufferQueue. After queueBuffer is called, SurfaceFlinger's signalLayerUpdate method is invoked:

void SurfaceFlinger::signalLayerUpdate() {
    mEventQueue->invalidate();
}

void MessageQueue::invalidate() {
    mEvents->requestNextVsync();
}

This brings us back to the Android-SurfaceFlinger startup and drawing process, which is used to request the next Vsync signal.

summary

Software drawing can be simply divided into the following three steps:

  1. Surface.lockCanvas: through the BufferQueueProducer's dequeueBuffer function, fetch a GraphicBuffer from the BufferQueue (it backs the Bitmap in the Canvas object), lock the Surface, and return the Surface's address to the mLockedObject property of the Java-layer Surface. The Surface's double-buffering logic is also involved in this method and will be explained later.
  2. Call the View.draw method to draw the content into the Canvas's Bitmap; in effect, the GraphicBuffer above is filled with the drawing data.
  3. Surface.unlockCanvasAndPost: unlock the Surface by calling surface->unlockAndPost, put the GraphicBuffer filled with data back into the BufferQueue through the queueBuffer function, and notify SurfaceFlinger to composite (by requesting a Vsync signal).

There are two important functions in BufferQueueProducer:

  • dequeueBuffer: the producer requests a free GraphicBuffer from the BufferQueue.
  • queueBuffer: the producer puts a GraphicBuffer filled with data back into the BufferQueue (a conceptual sketch of this handoff follows).
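To make the producer/consumer handoff concrete, here is a deliberately simplified, illustrative Java model of the buffer cycle. This is only a sketch: the real BufferQueueCore is native C++ and additionally manages buffer slots, buffer states, and fences.

import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the BufferQueue handoff (illustrative, not the AOSP implementation).
class ToyBufferQueue<B> {
    private final Queue<B> free = new ArrayDeque<>();   // buffers the producer may draw into
    private final Queue<B> queued = new ArrayDeque<>(); // filled buffers awaiting the consumer

    ToyBufferQueue(Iterable<B> buffers) {
        for (B b : buffers) free.add(b);
    }

    // Producer side (Surface.lockCanvas): take a free buffer to draw into.
    synchronized B dequeueBuffer() throws InterruptedException {
        while (free.isEmpty()) wait(); // block until the consumer releases one
        return free.remove();
    }

    // Producer side (Surface.unlockCanvasAndPost): hand over the filled buffer.
    synchronized void queueBuffer(B buffer) {
        queued.add(buffer);
        notifyAll(); // like onFrameAvailable: wake the consumer
    }

    // Consumer side (SurfaceFlinger): take a filled buffer for composition.
    synchronized B acquireBuffer() throws InterruptedException {
        while (queued.isEmpty()) wait();
        return queued.remove();
    }

    // Consumer side: return the buffer to the free list after composition.
    synchronized void releaseBuffer(B buffer) {
        free.add(buffer);
        notifyAll();
    }
}

In these terms, software drawing's three steps map to dequeueBuffer, filling the buffer, and queueBuffer, while SurfaceFlinger plays the acquire/release side.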

Several important BufferQueue methods will be covered in a future article. Where there is a producer there must be a consumer: in the SurfaceFlinger process, the BufferQueueConsumer takes the GraphicBuffer out of the BufferQueue for composition. If you are interested in the specific logic, you can read the source code yourself, referring to Android-SurfaceFlinger startup and workflow.

As for Canvas in software drawing, its drawing target is a Bitmap object, and the content drawn will be filled into the GraphicBuffer held by Surface.

Hardware rendering

Reference: www.jianshu.com/p/40f660e17…

From the Android-View drawing process, hardware drawing starts from the ThreadedRenderer.draw method.

ThreadedRenderer.draw

void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks, FrameDrawingCallback frameDrawingCallback) {
    final Choreographer choreographer = attachInfo.mViewRootImpl.mChoreographer;
    choreographer.mFrameInfo.markDrawStart();
    // Build the View's DrawOp tree
    updateRootDisplayList(view, callbacks);
    // Notify the RenderThread thread to draw
    int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
    // ...
}

Hardware rendering can be divided into two phases: the build phase and the render phase.

Construction phase

private RenderNode mRootNode;

private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
    // ...
    // Get a DisplayListCanvas from the RenderNode
    DisplayListCanvas canvas = mRootNode.start(mSurfaceWidth, mSurfaceHeight);
    try {
        // ...
        // Build and cache all DrawOps using the DisplayListCanvas
        canvas.drawRenderNode(view.updateDisplayListIfDirty());
        // ...
    } finally {
        // Store the DrawOps built from the View into the RenderNode to complete the build
        mRootNode.end(canvas);
    }
}

// View.java
public RenderNode updateDisplayListIfDirty() {
    final RenderNode renderNode = mRenderNode;
    if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0
            || !renderNode.isValid() || mRecreateDisplayList) {
        // ...
        final DisplayListCanvas canvas = renderNode.start(width, height);
        try {
            if (layerType == LAYER_TYPE_SOFTWARE) { // Software drawing is forced
                buildDrawingCache(true);
                Bitmap cache = getDrawingCache(true);
                if (cache != null) {
                    canvas.drawBitmap(cache, 0, 0, mLayerPaint);
                }
            } else {
                // If this View itself draws nothing, recurse into the children directly
                if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
                    dispatchDraw(canvas);
                } else {
                    // Call draw, which recurses into the children if this is a ViewGroup
                    draw(canvas);
                }
            }
        } finally {
            // Cache the built ops
            renderNode.end(canvas);
        }
    }
    return renderNode;
}

The Canvas created here is an instance of DisplayListCanvas. After View.draw is called, drawing goes through the DisplayListCanvas. Taking drawLines as an example:

// DisplayListCanvas
public final void drawLines(@Size(multiple = 4) @NonNull float[] pts, int offset, int count, @NonNull Paint paint) {
    nDrawLines(mNativeCanvasWrapper, pts, offset, count, paint.getNativeInstance());
}

// Under hardware acceleration, Canvas.drawXXX eventually reaches here
void SkGpuDevice::drawPoints(SkCanvas::PointMode mode, size_t count, const SkPoint pts[], const SkPaint& paint) {
    // ...
    // The fRenderTargetContext will build the DrawOp
    fRenderTargetContext->drawPath(this->clip(), std::move(grPaint), GrAA(paint.isAntiAlias()), this->ctm(), path, style);
}

The View.updateDisplayListIfDirty method above traverses all children of the View and builds a DrawOp tree through the DisplayListCanvas. When the recursive DrawOp construction finishes, renderNode.end is called:

public void end(DisplayListCanvas canvas) {
    long displayList = canvas.finishRecording();
    // Cache displayList in RenderNode of native layer
    nSetDisplayList(mNativeRenderNode, displayList);
    canvas.recycle();
}

The renderNode.end method caches the displayList in the native-layer RenderNode. After updateDisplayListIfDirty has traversed the child Views and returned a RenderNode with the displayList cached, ThreadedRenderer records that RenderNode into its own root RenderNode via DisplayListCanvas.drawRenderNode, and mRootNode.end likewise caches the resulting displayList in the native RenderNode.

Rendering phase

Requesting memory

Software drawing dequeues a GraphicBuffer via the Surface.lockCanvas method using the BufferQueueProducer. For how hardware acceleration requests memory, look at this code (performTraversals should be familiar by now):

private void performTraversals() {
    // ...
    if (mAttachInfo.mThreadedRenderer != null) {
        try {
            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(mSurface);
            if (hwInitialized && (host.mPrivateFlags & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                mSurface.allocateBuffers();
            }
        } catch (OutOfResourcesException e) {
            handleOutOfResourcesException(e);
            return;
        }
    }
    // ...
}

// Surface.java
public void allocateBuffers() {
    synchronized (mLock) {
        checkNotReleasedLocked();
        nativeAllocateBuffers(mNativeObject);
    }
}

// frameworks/base/core/jni/android_view_Surface.cpp
static void nativeAllocateBuffers(JNIEnv* /* env */, jclass /* clazz */, jlong nativeObject) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    surface->allocateBuffers();
}

void Surface::allocateBuffers() {
    // Ask the BufferQueueProducer to allocate buffers
    mGraphicBufferProducer->allocateBuffers(reqWidth, reqHeight, mReqFormat, mReqUsage);
}

The call path hardware acceleration uses to request memory is different, but it too goes through the Layer's BufferQueueProducer to obtain free buffers from the BufferQueue.

As you can see, hardware acceleration requests memory allocation from SurfaceFlinger earlier than software drawing does. Pre-allocating memory this way avoids re-requesting it during rendering, prevents the CPU's earlier build work from being wasted if allocation fails, and also simplifies the work of the render thread.

The render thread binds the Surface

Next, look at how the Render thread binds the target Surface (since multiple Surfaces can exist at the same time, the render context needs to be bound to one of them). As seen in the memory-request code above, ThreadedRenderer.initialize is called:

// ThreadedRenderer
boolean initialize(Surface surface) throws OutOfResourcesException {
    boolean status = !mInitialized;
    mInitialized = true;
    updateEnabledState(surface);
    nInitialize(mNativeProxy, surface);
    return status;
}

// frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz, jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}

void RenderProxy::initialize(const sp<Surface>& surface) {
    // Post a task to the Render thread that calls CanvasContext::setSurface
    mRenderThread.queue().post([ this, surf = surface ]() mutable { mContext->setSurface(std::move(surf)); });
}

When initialized above, the CanvasContext binds the currently rendered Surface to the Render thread using the setSurface method.

Rendering

Once the render thread is bound to the Surface, the Surface's memory is allocated, and the DrawOp tree is built, we can look at the rendering process, starting from the nSyncAndDrawFrame method above.

// frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz, jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    return proxy->syncAndDrawFrame();
}

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}

int DrawFrameTask::drawFrame() {
    postAndWait();
    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    // ...
    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw(); // CanvasContext
    } else {
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }
}

All the DrawOps are then drawn into the GraphicBuffer via OpenGL, and SurfaceFlinger is notified to composite them.

summary

Hardware acceleration can be viewed in two phases:

  1. Construction phase: each View is abstracted as a RenderNode node, and each of its draw operations (drawLine...) is abstracted as a DrawOp operation, which corresponds to an OpenGL draw command and holds the data needed for drawing. This phase recursively traverses all Views, converts their drawing operations into DrawOps via Canvas.drawXXX, and stores them in the DisplayList. Following the ViewTree model, this DisplayList is named "List", but it actually looks more like a tree.
  2. Draw phase: a separate Render thread, relying on the GPU, draws the DrawOp data above.

Hardware-accelerated memory requesting, like software drawing, uses the Layer's BufferQueueProducer to obtain a free GraphicBuffer from the BufferQueue to carry the rendered data, after which SurfaceFlinger is notified to composite. The difference is that hardware acceleration's algorithms may be more efficient than software rendering's, and the separate Render thread reduces the burden on the main thread.

conclusion

  • The mNativeObject property in the Surface object of the Java layer points to the Surface object created in the native layer.
  • The Surface corresponds to the Layer object in SurfaceFlinger and holds the Layer's BufferQueueProducer pointer. With this producer object, it can request a free GraphicBuffer from the BufferQueue at drawing time; the content drawn on the Surface is stored in that buffer.
  • SurfaceFlinger uses the BufferQueueConsumer to fetch the GraphicBuffer data from the BufferQueue, composites and renders it, and sends it to the display.

Software rendering

Software drawing may draw Views that do not need to be redrawn, and the drawing process runs on the main thread, which may cause jank. It writes the content into a Bitmap, essentially filling the graphic buffer requested by the Surface.

Software drawing can be divided into three steps:

  1. Surface.lockCanvas: dequeueBuffer takes a free buffer out of the BufferQueue.
  2. View.draw: draws the content.
  3. Surface.unlockCanvasAndPost: queueBuffer puts the buffer filled with data back into the BufferQueue, then notifies SurfaceFlinger to composite (requests a Vsync signal).

Hardware rendering

Hardware rendering records each drawing call as a DrawOp in a DisplayList and then hands it to a separate Render thread that uses the GPU for hardware-accelerated rendering. Only the dirty Views that need updating have their DrawOps recorded or updated; Views that do not need updating reuse the instructions recorded in the previous DisplayList.

Hardware drawing can be divided into two stages:

  1. Construction phase: abstract each View drawing operation (drawLine...) into a DrawOp operation and store it in the DisplayList.
  2. Drawing phase: first allocate the buffer (the same as software drawing), then bind the Surface to the Render thread, and finally render the DrawOp data with the GPU.

As with software drawing, hardware-accelerated memory requesting uses the Layer's BufferQueueProducer to obtain a free GraphicBuffer from the BufferQueue for the rendered data, after which SurfaceFlinger is notified to composite. The difference is that hardware acceleration's algorithms may be more efficient, and the separate Render thread reduces the burden on the main thread.

If there are problems in this source-code analysis, feel free to point them out; some of the logic I only partially understand myself, so corrections are welcome. Finally, here is a diagram summarizing the Android hardware and software drawing process: