Standing on the shoulders of giants, this article summarizes the principles of iOS graphics rendering, its optimization, and stutter (jank) monitoring, all in one go.

Graphics rendering principles

Graphics rendering mainly relies on the GPU's parallel computing power to render and display graphics on every pixel of the screen. The most common rendering approach is rasterization, the process of converting data into visible pixels. The GPU and its drivers implement graphics-processing models such as OpenGL and DirectX. In fact, OpenGL is not a concrete function API but a standard: the actual function APIs and the libraries that implement them are provided by third parties, usually the graphics card vendors.

The GPU rendering process is shown in the figure below:

The main stages are:

  • Vertex shader: converts 3D coordinate systems and sets the attribute values of each vertex.
  • Shape (primitive) assembly: assembles vertices into basic primitives.
  • Geometry shader: constructs new vertices to create additional shapes, for example another triangle.
  • Rasterization: maps the primitives to the corresponding pixels on the screen and generates fragments; a fragment contains all the data needed to shade a pixel.
  • Fragment shader: discards fragments that fall outside the view and computes the final pixel colors.
  • Tests and blending: determines each pixel's final value, for example whether it lies behind other pixels and should be discarded, and blends transparency.

Making graphics more realistic requires more vertices and color attributes, which increases the performance overhead; textures are therefore often used to represent detail, improving both production efficiency and execution performance.

A texture is a 2D image (1D and 3D textures also exist) that can be fed directly into the fifth stage of the graphics rendering pipeline, the fragment shader.

A GPU contains many processing cores for concurrent execution and uses multi-level caches (L1 and L2) internally. The architectural relationship between GPU and CPU takes one of two forms, separated or coupled, as shown in the figure below:

  • Separated structure

    The CPU and GPU each have their own memory system and are connected through the PCI-E bus. The drawback of this structure is that, compared with the bandwidth and latency between each processor and its own memory, PCI-E offers lower bandwidth and higher latency, so data transfer becomes the performance bottleneck. It is nevertheless the most widely used structure today, for example in PCs and smartphones.

  • Coupled structure

    The CPU and GPU share memory and cache. AMD’s APU is based on this structure and is currently used mainly in game consoles such as the PS4.

The graphical display structure of the screen is as follows:

The CPU submits graphics data to the GPU over the bus. After rendering, the GPU writes the result frame by frame into the frame buffer. The video controller, driven by the VSync signal, reads the frame buffer frame by frame and sends the data to the screen controller for final display. Because reading and writing a single frame buffer concurrently is inefficient, a double-buffering mechanism is used: the GPU pre-renders a frame into one buffer for the video controller to read, and once the next frame has been rendered into the second buffer, the GPU simply switches the video controller's pointer to that buffer, as shown in the picture below:

While double buffering improves efficiency, it introduces the screen-tearing problem: if the GPU submits a new frame and swaps the buffers while the video controller has only read half of the previous frame, the bottom half of the screen shows data from the new frame, producing the tearing shown in the figure below:

To solve this problem, GPUs usually provide a mechanism called VSync (vertical synchronization). When VSync is enabled, the GPU waits for a VSync signal from the display before rendering a new frame and updating the buffer. This eliminates tearing and improves smoothness, but it requires more computing resources and introduces some latency.

iOS devices always use double buffering with VSync enabled. On Android, Google did not introduce this mechanism until version 4.1; Android currently uses triple buffering plus VSync.

Stuttering (jank)

After a VSync signal arrives, the system graphics service notifies the App through CADisplayLink and similar mechanisms, and the App's main thread starts computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU, which transforms, composites, and renders it. The GPU submits the rendering result to the frame buffer and waits for the next VSync signal to display it on screen. Because of the VSync mechanism, if the CPU or GPU fails to finish its work within one VSync interval, that frame is dropped and shown at the next opportunity, while the screen keeps displaying the previous content. That is why the interface appears to stutter.

Image display

Graphics rendering technology stack

Overall graphics rendering technology stack: the App draws visual content with frameworks such as Core Graphics, Core Animation, and Core Image, which also depend on one another. All of these frameworks ultimately draw through OpenGL to reach the GPU, and the content is finally displayed on the screen. The structure is shown in the figure below:

Introduction to the framework:

  • UIKit

    UIKit itself does not have the ability to draw content on screen. It is mainly responsible for responding to user interaction events (UIView inherits from UIResponder), and event responses are generally passed layer by layer through the view tree.

  • Core Animation

    Core Animation is a compositing engine whose job is to combine the different pieces of visual content on the screen as quickly as possible. This visual content is broken into separate layers (CALayer), which are stored in a hierarchy called the layer tree. Essentially, CALayer is the basis for everything the user can see on screen.

  • Core Graphics

    Core Graphics is based on the Quartz advanced graphics engine and is primarily used to draw images at run time. Developers can use this framework to handle path-based drawing, transformations, color management, off-screen rendering, patterns, gradients and shadows, image data management, image creation and image masks, and PDF document creation, display, and parsing.

  • Core Image

    Core Image is the opposite of Core Graphics: Core Graphics is used to create images at run time, while Core Image is used to process images created before run time. The Core Image framework provides a series of ready-made image filters for efficiently processing existing images.

  • OpenGL(ES)

    OpenGL ES (OpenGL for Embedded Systems, or GLES) is a subset of OpenGL.

  • Metal

    Metal plays a role similar to OpenGL ES, but it is Apple's own low-level graphics API. Most developers do not use Metal directly, yet all use it indirectly: Core Animation, Core Image, SceneKit, SpriteKit, and other rendering frameworks are built on top of Metal. When debugging an OpenGL program on a real device, the console prints a log indicating that Metal is enabled. From this one can infer that Apple has implemented a mechanism that seamlessly bridges OpenGL commands to Metal, and Metal performs the actual hardware interaction.

UIView and CALayer relationship

Every view in UIKit has an associated CALayer inside it, known as the backing layer. Because of this one-to-one correspondence, just as views form a view tree, their corresponding layers form a layer tree.

The view’s job is to create and manage layers so that when a child view is added or removed from the hierarchy, its associated layer does the same in the layer tree, ensuring that the view tree and the layer tree are structurally consistent.
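
As a minimal sketch of this relationship: every UIView exposes its backing layer through the layer property, and a subclass can swap in a different CALayer subclass by overriding +layerClass. The CAShapeLayer-backed view below is just an illustrative example, not a pattern taken from this article.

#import <UIKit/UIKit.h>

// A UIView whose backing layer is a CAShapeLayer instead of a plain CALayer.
@interface ShapeView : UIView
@end

@implementation ShapeView

+ (Class)layerClass {
    return [CAShapeLayer class];   // UIKit creates the backing layer from this class
}

- (void)layoutSubviews {
    [super layoutSubviews];
    // self.layer is the backing layer; configuring it changes what appears on screen.
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
    shapeLayer.path = [UIBezierPath bezierPathWithOvalInRect:self.bounds].CGPath;
    shapeLayer.fillColor = [UIColor redColor].CGColor;
}

@end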

Apple adopted this structure so that the underlying CALayer can be shared between the iOS and Mac platforms, avoiding duplicated code while keeping responsibilities separated; after all, multi-touch interaction is fundamentally different from mouse-and-keyboard interaction.

CALayer

A CALayer is essentially the same as a texture: it is basically an image. Accordingly, a CALayer has a contents property that points to a cache called the backing store, which holds a bitmap. In iOS, the image stored in this cache is called the host image.

A bitmap, also known as a raster graphic (called a dot-matrix image in Taiwan), is an image represented by an array of pixels (a dot matrix); each pixel is assigned a specific position and color value. The term bitmap can also refer to a data structure that represents a dense set over a finite domain, where each element appears at least once and no other data is associated with it; such bitmaps are widely used in indexing, data compression, and so on.

The graphics rendering pipeline supports both drawing from vertices (which are processed in the pipeline to produce textures) and rendering directly with existing textures (images). Correspondingly, there are two ways to draw an interface in practice: one is custom drawing; the other is using a contents image.

Contents image means configuring the image through the contents property of the CALayer, typically by assigning a CGImage. Custom drawing means using Core Graphics to draw the host image directly; in practice this is usually done by subclassing UIView and implementing the -drawRect: method.
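
For example, a minimal sketch of the two approaches (`someView` and the image name "background" are assumptions; the first fragment is assumed to live inside some view-controller method, and the custom-drawing path is exactly the -drawRect: machinery described next):

// Contents image: assign an existing bitmap to the layer's contents property.
UIImage *image = [UIImage imageNamed:@"background"];
someView.layer.contents = (__bridge id)image.CGImage;
someView.layer.contentsGravity = kCAGravityResizeAspect;

// Custom drawing: subclass UIView and override -drawRect:; the drawing ends up
// in the layer's backing store (the host image), as explained below.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectInset(rect, 4, 4));
}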

Although -drawRect: is a UIView method, it is actually the underlying CALayer that performs the redraw and stores the resulting image. The following figure shows the basic flow of -drawRect: drawing a custom host image.

  • UIView has a CALayer property.

  • CALayer has a weak delegate property that conforms to the CALayerDelegate protocol; UIView acts as this delegate and implements the protocol methods.

  • When a redraw is required, CALayer first calls the delegate's -displayLayer: method; the delegate can then set the contents property directly.

    A redraw is required when, for example, the frame changes, the UIView/CALayer hierarchy is updated, or setNeedsLayout/setNeedsDisplay is called manually on the UIView/CALayer.

  • If the delegate does not implement -displayLayer:, CALayer tries to call -drawLayer:inContext:. Before calling it, CALayer creates an empty host image (its size determined by bounds and contentsScale) and a Core Graphics drawing context (CGContextRef) in preparation for drawing the host image; the context is passed in as the ctx parameter.

  • -drawLayer:inContext: calls -drawRect: internally.

    - (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)context {
        UIGraphicsPushContext(context);
    
        CGRect bounds;
        bounds = CGContextGetClipBoundingBox(context);
        [self drawRect:bounds];
    
        UIGraphicsPopContext();
    }

    The specific function call stack is as follows:

  • Finally, the host image generated by the Core Graphics drawing is stored in the backing store.

Core Animation Pipeline

Having covered the essence and workflow of CALayer, let us look at how the Core Animation pipeline works in detail, as shown in the following figure:

In iOS, the application itself is not responsible for rendering; a dedicated rendering process, the Render Server, is.

In iOS 5 and earlier this process was SpringBoard; since iOS 6 it is called BackBoard or backboardd.

Checking the system processes on a jailbroken device shows that this process does exist, as shown in the figure below:

The main processing process is as follows:

  • First, the App handles events (Handle Events), such as user taps. During this process the App may need to update the view tree and, correspondingly, the layer tree.

  • Secondly, the App uses the CPU to compute the display content: view creation, layout calculation, image decoding, text drawing, and so on. Once the content is computed, the App packages the layers and sends them to the Render Server on the next RunLoop iteration, which completes a Commit Transaction.

    A Commit Transaction can be broken down into the following steps:

    • Layout: mainly builds the views, including overriding layoutSubviews and calling addSubview: to add subviews;
    • Display: mainly draws the views; at this point only the host image's metadata is set. Overriding drawRect: allows a custom UIView appearance; under the hood, drawRect: draws the host image, which uses CPU and memory;
    • Prepare: an additional step that generally handles image decoding, format conversion, and similar operations;
    • Commit: packages the layers and sends them via IPC to the Render Server. This step is performed recursively, because layers and views form a tree structure.
  • The Render Server then performs OpenGL and Core Graphics related operations, such as preparing the rendering with OpenGL according to the layers' various properties (for an animated property, it computes the intermediate values of the animation);

  • Finally, the GPU renders the layers to the screen through the frame buffer, the video controller, and other related components.

To meet the 60 FPS screen refresh rate, each RunLoop iteration should take no more than 16.67 ms, and the steps above need to run in parallel.

Rendering and the RunLoop

The iOS display system is driven by the VSync signal, which is generated by the hardware clock and fires 60 times per second (depending on the device hardware; on an iPhone it is usually around 59.97). After receiving a VSync signal, the iOS graphics service notifies the App through IPC. After startup, the App's RunLoop registers a corresponding CFRunLoopSource to receive the clock-signal notification via mach_port, and the Source's callback then drives the animation and display of the entire App.

Note: in practice, the App does not register a VSync-related Source after startup, so it should be the Render Server process that registers the Source, listens for VSync signals to drive layer rendering, and then submits to the GPU.

Core Animation registers an Observer in the RunLoop that listens for BeforeWaiting and Exit events. This Observer's priority is 2,000,000, lower than other common observers. When a touch event arrives, the RunLoop wakes up and the App's code performs operations such as creating and adjusting the view hierarchy, setting a UIView's frame, changing a CALayer's transparency, or adding an animation to a view. These changes are captured by CALayer and submitted as intermediate state via CATransaction (the CATransaction documentation mentions this only briefly and incompletely). When all of this is done and the RunLoop is about to go to sleep (or exit), the Observer is notified, and in its callback Core Animation merges all the intermediate state and submits it to the GPU. If there is an animation, Core Animation triggers this process repeatedly via mechanisms such as CADisplayLink.
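
CATransaction can also be used explicitly to group layer changes so that they are merged and committed to the Render Server in a single pass. A minimal sketch (the layer and the particular property values are just examples):

// Batch several layer changes into one explicit transaction; everything between
// begin/commit is merged and submitted together when the RunLoop commits.
[CATransaction begin];
[CATransaction setDisableActions:YES];      // suppress implicit animations for these changes
layer.position = CGPointMake(100, 200);
layer.opacity = 0.5;
layer.cornerRadius = 8;
[CATransaction commit];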

Render performance optimization

To ensure rendering performance, the main goal is to keep the CPU and GPU from blocking the rendering pipeline described above and causing dropped frames. We therefore analyze, evaluate, and optimize the impact of the CPU and the GPU on the rendering process separately.

Causes and solutions of CPU resource consumption

Object creation

Object creation consumes CPU by allocating memory, setting properties, and sometimes even reading files (for example, creating a UIViewController that loads a XIB file). Therefore, prefer lighter objects over heavier ones; for instance, a CALayer is cheaper than a UIView when touch handling is not needed. If an object does not involve UI operations, try to create it on a background thread. For performance-sensitive view objects, prefer creating them in code rather than in storyboards. If objects are reusable, a reuse pool can be employed.
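
A small sketch of the "lighter object" advice, assuming a purely decorative separator line that never needs to respond to touches (the frame values and `self.view` are illustrative):

// A CALayer is cheaper than a UIView when no event handling is required.
CALayer *separator = [CALayer layer];
separator.frame = CGRectMake(0, 43.5, self.view.bounds.size.width, 0.5);
separator.backgroundColor = [UIColor lightGrayColor].CGColor;
[self.view.layer addSublayer:separator];   // instead of adding a UIView subview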

Object adjustment

Object adjustments, such as changing CALayer properties, adjusting the view hierarchy, or adding and removing views, are another common source of CPU consumption.

CALayer does not implement property accessor methods internally; instead it relies on the runtime's resolveInstanceMethod to add methods to objects on the fly, and it stores the corresponding property values in an internal Dictionary. It also notifies its delegate, creates animations, and so on, which makes property changes relatively expensive. UIView's display-related properties (such as frame/bounds/transform) are actually mapped from CALayer properties.

Object destruction

Although the cost of destroying a single object is small, it adds up. When a container class holds a large number of objects, the cost of destroying it becomes very noticeable, so it is common to move object release to a background thread. A trick for this is shown below:

// Capture the objects in a block, dispatch the block to a background queue,
// and send them a harmless message there to avoid compiler warnings.
NSArray *tmp = self.array;
self.array = nil;
dispatch_async(queue, ^{
    [tmp class];
});
Layout calculation

View layout calculation is the most common consumer of CPU resources; it ultimately comes down to adjusting the UIView frame/bounds/center properties. To avoid this CPU cost, compute the layout in advance and adjust the corresponding properties once when needed, instead of computing and adjusting these properties many times at high frequency.
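
A minimal sketch of the "compute once, assign once" idea; the label properties and the particular frame values are hypothetical, and the point is only that the computation happens off the main thread while the assignment happens in a single pass on it:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // Compute all frames up front (e.g. from model data or measured text).
    CGRect titleFrame  = CGRectMake(16, 12, 288, 24);
    CGRect detailFrame = CGRectMake(16, 40, 288, 18);
    dispatch_async(dispatch_get_main_queue(), ^{
        // Apply the precomputed layout once on the main thread.
        self.titleLabel.frame  = titleFrame;
        self.detailLabel.frame = detailFrame;
    });
});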

Autolayout

Autolayout is a technology advocated by Apple. In most cases it improves development efficiency, but it can cause serious performance problems for complex views (see pilky.me/36/ for details), so for views with high performance requirements, prefer laying them out in code.

Text calculation

If a page contains a large amount of text, computing text widths and heights takes a large share of resources and is unavoidable. You can do what UILabel does internally: compute the text size with [NSAttributedString boundingRectWithSize:options:context:] and draw the text with [NSAttributedString drawWithRect:options:context:], executing both on a background thread to avoid blocking the main thread. Alternatively, use CoreText's C-based cross-platform API to lay out and draw the text.
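
For example, a sketch of measuring an attributed string off the main thread and handing the result back (`content`, `maxWidth`, and `self.label` are assumed to exist in the surrounding code):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSAttributedString *text = [[NSAttributedString alloc]
        initWithString:content
            attributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:15] }];
    // Measure the text height for a fixed width without touching the main thread.
    CGRect bounds = [text boundingRectWithSize:CGSizeMake(maxWidth, CGFLOAT_MAX)
                                       options:NSStringDrawingUsesLineFragmentOrigin
                                       context:nil];
    CGFloat textHeight = ceil(CGRectGetHeight(bounds));
    dispatch_async(dispatch_get_main_queue(), ^{
        // Apply the measured size once, on the main thread.
        self.label.frame = CGRectMake(16, 12, maxWidth, textHeight);
        self.label.attributedText = text;
    });
});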

Core Text is intended for developers who must handle low-level font processing and text layout. If it is not strictly necessary, you should use TextKit (Text Programming Guide for iOS) or Cocoa Text (Cocoa Text Architecture Guide) to build your iOS or Mac app. Core Text is the underlying implementation of both text frameworks, so they share its speed and efficiency. In addition, those two frameworks provide rich text editing and page layout engines; if your app uses only Core Text, you will have to provide those other facilities yourself. (Core Text Programming Guide)

Text rendering

All text controls visible on screen, including UIWebView, are ultimately drawn into bitmaps with CoreText at the bottom layer. Common text controls such as UILabel and UITextView do their layout and drawing on the main thread, so when a large amount of text is displayed, CPU pressure becomes very high. The only workaround is to build a custom text control and draw the text asynchronously with TextKit or the lower-level CoreText.
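
A minimal sketch of asynchronous text drawing: render the attributed string into a bitmap on a background queue, then hand the bitmap to the layer on the main thread (`attributedText`, `textLayer`, and the fixed size are assumptions):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    CGSize size = CGSizeMake(300, 60);
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    // NSAttributedString drawing is safe off the main thread.
    [attributedText drawWithRect:CGRectMake(0, 0, size.width, size.height)
                         options:NSStringDrawingUsesLineFragmentOrigin
                         context:nil];
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        textLayer.contents = (__bridge id)rendered.CGImage;   // display the pre-rendered text
    });
});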

Image decoding

When an image is created with UIImage or CGImageSource methods, the image data is not decoded immediately. Only when the image is assigned to a UIImageView or to CALayer.contents, and just before the CALayer is submitted to the GPU, is the data in the CGImage decoded, and this decoding happens on the main thread.

Workaround: on a background thread, first draw the image into a CGBitmapContext, then create the image directly from that bitmap. Common image libraries already provide this capability.
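
A sketch of this force-decoding approach, assuming `image` is a freshly created UIImage and `imageView` is the target view:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Drawing the compressed image into a bitmap context forces decoding here,
    // instead of on the main thread when the layer is committed.
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef decoded = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *decodedImage = [UIImage imageWithCGImage:decoded];
    CGImageRelease(decoded);
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = decodedImage;   // already decoded, cheap to display
    });
});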

Image drawing

Drawing an image usually refers to drawing into a bitmap canvas with methods that start with CG, and then creating and displaying an image from that canvas. The most common place this happens is the [UIView drawRect:] method. Since Core Graphics methods are usually thread-safe, image drawing can easily be moved to a background thread, as in this example:

- (void)display {
    dispatch_async(backgroundQueue, ^{
        CGContextRef ctx = CGBitmapContextCreate(...);
        // draw in context ...
        CGImageRef img = CGBitmapContextCreateImage(ctx);
        CFRelease(ctx);
        dispatch_async(mainQueue, ^{
            layer.contents = (__bridge id)img;
        });
    });
}

Causes and solutions for GPU resource consumption

Compared with the CPU, the GPU's work is simpler: it receives submitted textures and vertex descriptions (triangles), applies transformations, blends them, renders, and outputs to the screen. Most of what you see consists mainly of textures (images) and shapes (vector graphics approximated with triangles).

Texture rendering

All bitmaps, including images, text, and rasterized content, are eventually submitted from memory to video memory and bound as GPU textures. Both submitting to video memory and the GPU's sampling and rendering of textures consume considerable GPU resources. When a large number of images is displayed in a short period of time (for example when a UITableView full of images is scrolled quickly), CPU usage stays low while GPU usage spikes, causing the interface to drop frames. The effective way to avoid this is to minimize how many images are displayed in a short time and, where possible, to composite many images into one for display.
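
A minimal sketch of compositing several images into one bitmap off the main thread, so the GPU only has to handle a single texture; `images`, `canvasSize`, and `imageView` are hypothetical inputs:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIGraphicsBeginImageContextWithOptions(canvasSize, YES, [UIScreen mainScreen].scale);
    CGFloat x = 0;
    for (UIImage *image in images) {
        // Lay the images out side by side on one canvas.
        [image drawInRect:CGRectMake(x, 0, image.size.width, image.size.height)];
        x += image.size.width;
    }
    UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = composited;   // one texture instead of many
    });
});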

View blending

When multiple views (and their layers) overlap, the GPU has to blend them. If the view hierarchy is complex, this blending consumes substantial GPU resources. To reduce GPU consumption, keep the number of views and layers as small as possible and set the opaque property on opaque views to avoid pointless alpha-channel compositing.
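
A small sketch of marking a fully opaque view so the compositor can skip alpha blending (the label and its frame are illustrative):

// Promise the compositor there is no transparency; an opaque background color is required.
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(16, 12, 288, 24)];
label.opaque = YES;
label.backgroundColor = [UIColor whiteColor];
label.text = @"Hello";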

Graphics generation

CALayer's border, rounded corners, shadow, and mask, as well as CAShapeLayer's vector graphics, usually trigger off-screen rendering, which normally happens on the GPU. When a list view contains many CALayers with rounded corners and is scrolled quickly, this consumes a lot of GPU resources and causes the interface to stutter. To avoid it, you can try enabling the CALayer.shouldRasterize property, which shifts the off-screen rendering to the CPU; better still, avoid rounded corners, shadows, and masks wherever possible.

GPU screen rendering comes in two kinds: on-screen rendering and off-screen rendering. On-screen rendering is the normal GPU rendering flow: the GPU puts rendered frames into the frame buffer and then displays them on screen. Off-screen rendering creates an extra off-screen buffer (for example to hold data that will be reused later); its result must still be submitted to the frame buffer to be displayed on screen.

Off-screen rendering requires creating a new buffer, and the rendering process involves multiple context switches between the on-screen and off-screen environments; after the off-screen rendering is finished, the result has to be switched back to the on-screen context, which is expensive.
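
A sketch of two common ways to sidestep off-screen rendering; `view`, `imageView`, and `image` are assumed to exist, and the corner radius is just an example:

// 1. Shadows: providing an explicit shadowPath lets Core Animation skip the
//    off-screen pass it would otherwise need to compute the shadow's shape.
view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOpacity = 0.3;
view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;

// 2. Rounded corners: instead of cornerRadius + masksToBounds, pre-render the
//    rounded image once and display the result directly.
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0);
[[UIBezierPath bezierPathWithRoundedRect:imageView.bounds cornerRadius:8] addClip];
[image drawInRect:imageView.bounds];
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();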

AsyncDisplayKit

AsyncDisplayKit (ASDK) is an open-source library from Facebook for keeping iOS interfaces smooth. Its basic principles are as follows:

Work that does not have to run on the main thread is moved off it and executed asynchronously: text width/height calculation, view layout calculation, text rendering, image decoding, graphics drawing, object creation, property adjustment, object destruction, and so on. Operations that must run on the main thread (UIKit and Core Animation related) are optimized as far as possible instead.

UIView and CALayer encapsulation

    The ASDisplayNode class (ASNode for short) is a wrapper built on top of UIView and CALayer. It encapsulates the common view properties (frame/bounds/alpha/transform/backgroundColor/superNode/subNodes, etc.) and establishes a correspondence between ASNode and CALayer; when CALayer properties change or animate, the ASNode is notified through the delegate in the same way UIView is. UIView and CALayer are not thread-safe and can only be created, accessed, and destroyed on the main thread, whereas ASNode is thread-safe and can be created and modified on background threads. ASNode also provides a layerBacked property, which removes the UIView middle layer when touch handling is not needed, and it offers many optimized subclass wrappers such as Button/Control/Cell/Image/ImageView/Text/TableView/CollectionView.

Layer precompositing

When multiple CALayers are involved, the GPU has to composite them. For layers that need neither animation nor position adjustment, this compositing is unnecessary GPU work, so ASDK implements a pre-compositing technique that merges multiple layers into a single image, effectively reducing GPU consumption.

Asynchronous concurrent operation

The tasks mentioned above that can run off the main thread are executed asynchronously and concurrently via GCD, taking advantage of the iPhone's multi-core processors.

RunLoop task distribution

ASDK emulates the Core Animation mechanism described above: among all the modifications and submissions to ASNodes, some tasks must be executed on the main thread. When such a task arises, ASNode wraps it in an ASAsyncTransaction(Group) and adds it to a global container. ASDK also registers an Observer in the RunLoop that watches the same events as CA but with a lower priority. When the RunLoop is about to sleep and CA has finished processing its events, ASDK executes all the tasks submitted during that loop. Through this mechanism, ASDK synchronizes asynchronous, concurrent work back to the main thread at the right moment and achieves good performance.

Stutter detection

Instruments tools

The main tools are as follows:

Time Profiler: detects CPU usage and tells us which methods in the program are consuming large amounts of CPU time. Heavy CPU usage is not necessarily a problem; you might expect animation code paths to be CPU-intensive, since animation tends to be one of the most demanding tasks on an iOS device. However, if you are having performance problems, looking at CPU time helps determine whether performance is CPU-bound and which functions are responsible.

Core Animation: monitors Core Animation performance and periodically reports the frames per second (FPS).

Detection based on RunLoop

There are two main options:

  • FPS monitoring

    The principle is to add a CADisplayLink object to the RunLoop, count the number of callbacks per second, and obtain the screen refresh rate FPS as count/time. A concrete implementation:

// Create a CADisplayLink and add it to the main RunLoop for NSRunLoopCommonModes.
_link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
[_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)tick:(CADisplayLink *)link {
    if (_lastTime == 0) {
        _lastTime = link.timestamp;
        return;
    }

    _count++;
    NSTimeInterval delta = link.timestamp - _lastTime;
    if (delta < 1) return;              // accumulate callbacks for one second
    _lastTime = link.timestamp;
    float fps = _count / delta;         // callbacks per second = refresh rate
    _count = 0;
    NSLog(@"current FPS: %d", (int)round(fps));
}

CADisplayLink is a timer synchronized with the screen refresh rate (although its actual implementation is more complex: unlike NSTimer, it actually operates a Source internally). If a long task runs between screen refreshes, a frame is skipped (similar to NSTimer) and the interface feels stuck; even a single dropped frame is noticeable to the user when scrolling a UITableView quickly. By comparing the RunLoop modes before and after adding the CADisplayLink, one can see that it is implemented by adding a Source1 to the RunLoop whose callback is IODispatchCalloutFromCFMessage.

UI drawing does not necessarily run at the full 60 FPS; an animation might run at 24 FPS, for example, so detecting stutter purely through FPS monitoring is problematic.

FPS is the number of frames per second, i.e. how many times the screen changes per second. Animated films, for instance, run at 24 FPS rather than a full 60; a cartoon at 24 frames is not as smooth as one at 60, but it is still coherent, so you cannot say it is stuttering just because it runs at 24 frames.

  • Main-thread stall monitoring

    A child thread monitors the main thread's RunLoop and determines whether the time spent between kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting exceeds a certain threshold. If a stall is detected, the function call stack at that moment is recorded. The code is as follows:

    // `activity`, `semaphore`, and `timeoutCount` are instance variables of MyClass.
    static void runLoopObserverCallBack(CFRunLoopObserverRef observer, CFRunLoopActivity activity, void *info) {
        MyClass *object = (__bridge MyClass *)info;
        // Record the current activity and wake the monitoring thread.
        object->activity = activity;
        dispatch_semaphore_t semaphore = object->semaphore;
        dispatch_semaphore_signal(semaphore);
    }

    - (void)registerObserver {
        CFRunLoopObserverContext context = {0, (__bridge void *)self, NULL, NULL, NULL};
        CFRunLoopObserverRef observer = CFRunLoopObserverCreate(kCFAllocatorDefault,
                                                                kCFRunLoopAllActivities,
                                                                YES, 0,
                                                                &runLoopObserverCallBack,
                                                                &context);
        CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);

        // Semaphore the observer callback signals on every activity change.
        semaphore = dispatch_semaphore_create(0);

        // Monitor on a background queue: if the main RunLoop stays in BeforeSources or
        // AfterWaiting for several consecutive 50 ms timeouts, treat it as a stall.
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            while (YES) {
                long st = dispatch_semaphore_wait(semaphore, dispatch_time(DISPATCH_TIME_NOW, 50 * NSEC_PER_MSEC));
                if (st != 0) {
                    if (activity == kCFRunLoopBeforeSources || activity == kCFRunLoopAfterWaiting) {
                        if (++timeoutCount < 5) continue;

                        // Use the third-party crash-collection library PLCrashReporter
                        // to capture the call stack at this moment.
                        PLCrashReporterConfig *config = [[PLCrashReporterConfig alloc]
                            initWithSignalHandlerType:PLCrashReporterSignalHandlerTypeBSD
                                symbolicationStrategy:PLCrashReporterSymbolicationStrategyAll];
                        PLCrashReporter *crashReporter = [[PLCrashReporter alloc] initWithConfiguration:config];
                        NSData *data = [crashReporter generateLiveReport];
                        PLCrashReport *reporter = [[PLCrashReport alloc] initWithData:data error:NULL];
                        NSString *report = [PLCrashReportTextFormatter stringValueForCrashReport:reporter
                                                                                  withTextFormat:PLCrashReportTextFormatiOS];
                        NSLog(@"Main thread appears to be stuck:\n%@", report);
                    }
                }
                timeoutCount = 0;
            }
        });
    }

    The reason for monitoring the interval between kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting is that the app's internal event handling (Source0), such as touch events and CFSocketRef, happens between the two, and Core Animation also submits all intermediate layer state to the GPU there; most scenes that cause stuttering fall within this interval.

    When the main thread's RunLoop is idle, it sits in the kCFRunLoopBeforeWaiting state, which lies between kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting, so the scheme above can misreport a stall. A child-thread ping scheme fixes this: create a child thread that pings the main thread via a semaphore, because any work the main thread does in response to the ping necessarily happens between kCFRunLoopBeforeSources and kCFRunLoopAfterWaiting. Before each check, set a flag to YES and dispatch a task to the main thread that sets the flag to NO; the child thread then sleeps for the timeout threshold and checks whether the flag was set to NO. If not, the main thread is considered stalled. ANREye uses this child-thread ping approach to detect stalls, as in the code below:

    @interface PingThread : NSThread
    ......
    @end

    @implementation PingThread

    - (void)main {
        [self pingMainThread];
    }

    - (void)pingMainThread {
        while (!self.cancelled) {
            @autoreleasepool {
                __block BOOL timeOut = YES;
                // Ping: ask the main thread to flip the flag and signal back.
                dispatch_async(dispatch_get_main_queue(), ^{
                    timeOut = NO;
                    dispatch_semaphore_signal(_semaphore);
                });
                // Sleep for the timeout threshold; if the flag is still YES afterwards,
                // the main thread did not respond in time and is considered stalled.
                [NSThread sleepForTimeInterval:lsl_time_out_interval];
                if (timeOut) {
                    NSArray *callSymbols = [StackBacktrace backtraceMainThread];
                    ......
                }
                dispatch_semaphore_wait(_semaphore, DISPATCH_TIME_FOREVER);
            }
        }
    }

    @end

Reference

iOS-Core-Animation-Advanced-Techniques

Computer Things (8) – Graphic image rendering principles

Principles of iOS graphics rendering

Tips for keeping the interface flowing on iOS

About the drawRect

In-depth understanding of the iOS Rendering Process

Exploration of iOS view and animation rendering mechanism

iOS Core Animation: Advanced Techniques

iOS off-screen rendering

Off-screen rendering for iOS

Off-screen rendering optimization details: example demonstration + performance test

iOS Core Animation advanced techniques

iOS performance optimization – the Instruments Time Profiler tool

CADisplayLink

backboardd

Quality control – stutter detection

iOS development – summary of App performance detection schemes (I)