The startup speed of an app is very important: it directly affects the user experience, and if startup is too slow it will lead to user loss. This article analyzes startup-speed optimization and provides some ideas for it.

1. Application startup classification

An application has three startup states, each of which affects the time required for the application to be displayed to the user: cold startup, warm startup, and hot startup. In cold startup, the application starts from scratch. In the other two states, the system needs to bring applications running in the background to the foreground.

1.1. Cold start

An app cold start can be divided into two stages.

The first stage

1. Load and start the APP

2. A blank startup window is displayed immediately after startup

3. Create the APP process

The second stage

1. Create app objects

2. Start the main thread

3. Create the main Activity

4. Load the layout

5. Lay out the screen

6. Draw the first frame

Once the application process finishes drawing the first frame, the system process replaces the currently displayed background window with the main Activity. At this point, the user can start using the application.

In a cold start, the developer can mainly intervene in the onCreate stage of the Application and the onCreate stage of the Activity.

1.2. Hot start

In a hot start, the system simply brings the Activity to the foreground. If all of your application's Activities are still in memory, the application can avoid repeating object initialization, layout inflation, and drawing.

1.3. Warm start

In a warm start, because the app process still exists, only the second stage of a cold start is performed:

1. Create app objects

2. Start the main thread

3. Create the main Activity

4. Load the layout

5. Lay out the screen

6. Draw the first frame

Common scenarios of warm startup:

1. The user exits the application and then restarts it. The process may still be running, but the application has to rebuild the Activity from scratch starting at its onCreate method

2. The system evicts the application from memory (for example, when memory is low) and the user then restarts it. The process and the Activity need to be restarted, but the saved instance state Bundle passed to onCreate helps restore the previous state

Next, let's look at how to obtain the startup time.

2. Obtain the startup time

2.1. Using the adb command

adb shell am start -W [packageName]/[packageName.xxActivity]

adb shell am start -W com.tg.test.gp/com.test.demo.MainActivity
Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=com.tg.test.gp/com.test.demo.MainActivity }
Status: ok
LaunchState: COLD
Activity: com.tg.test.gp/com.test.demo.MainActivity
ThisTime: 1344
TotalTime: 1344
WaitTime: 1346
Complete

ThisTime: the time taken to start the last Activity

TotalTime: the total time taken to start all Activities during startup

WaitTime: the total time AMS spends starting all Activities (including the time before the target application process is started)

2.2. Instrumenting the code

public class LaunchTimer {

    private static long sTime;

    public static void startRecord() {
        sTime = System.currentTimeMillis();
    }

    public static void endRecord() {
        endRecord("");
    }

    public static void endRecord(String msg) {
        long cost = System.currentTimeMillis() - sTime;
        LogUtils.i(msg + " cost " + cost);
    }
}

With this approach, call startRecord() in the Application's attachBaseContext() to record the start timestamp, and call endRecord() once the UI is fully displayed; the difference between the two timestamps is the startup time.
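For example, a minimal sketch of where these calls could be placed (this is illustrative, not the author's exact setup: onWindowFocusChanged is used here as a rough "UI is visible" signal; a ViewTreeObserver.OnPreDrawListener on the first content view would be a more precise alternative):

// In the Application class:
public class MyApplication extends Application {
    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);
        LaunchTimer.startRecord(); // earliest point in the app process we control
    }
}

// In the launcher Activity:
public class MainActivity extends AppCompatActivity {
    @Override
    public void onWindowFocusChanged(boolean hasFocus) {
        super.onWindowFocusChanged(hasFocus);
        if (hasFocus) {
            LaunchTimer.endRecord("first frame visible"); // approximate end of startup
        }
    }
}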

However, this approach is not elegant: if the startup logic is complex and spread across many methods, it is better to use AOP.
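For reference, a minimal sketch of what the AOP approach could look like with AspectJ (assuming an AspectJ plugin is already configured in the build; the pointcut expression and the com.example.myapp package are illustrative):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

import android.util.Log;

@Aspect
public class LaunchTimeAspect {

    // Measures every onCreate() in our own package; the pointcut expression is illustrative
    @Around("execution(* com.example.myapp..*.onCreate(..))")
    public Object measure(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        Object result = joinPoint.proceed();   // run the original method
        long cost = System.currentTimeMillis() - start;
        Log.i("LaunchTime", joinPoint.getSignature().toShortString() + " cost " + cost + "ms");
        return result;
    }
}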

3. Use startup optimization analysis tools

3.1. TraceView

TraceView is an Android performance analysis tool that graphically displays method call times, call stacks, and information about all threads. It is a very good tool for analyzing method cost and call chains.

It is used by instrumenting the code:

1. Where collection should start, call Debug.startMethodTracing("app_trace"); the parameter is a custom file name

2. Where collection should end, call Debug.stopMethodTracing()

The trace file is generated at /sdcard/Android/data/<package name>/files/<file name>.trace and can be opened with Android Studio.

Here’s an example:

Let’s take a look at how long testAdd takes

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        testAdd(3, 5);
    }

    private void testAdd(int a, int b) {
        Debug.startMethodTracing("app_trace");
        int c = a + b;
        Log.i("Restart", "c = a + b = " + c);
        Debug.stopMethodTracing();
    }
}

After running the program, find the generated app_trace.trace file under /sdcard/Android/data/<package name>/files/ and double-click it in Android Studio to parse it. After opening it, you will see the following:

There are four views: Call Chart, Flame Chart, Top Down, and Bottom Up. First, select a time region; in this example the program is very simple, so the region to analyze is small. After selecting it, you get the following picture:

In the graph obtained by Call Chart, you can see the entire Call stack of the program and see the method time.

For example, the testAdd method in the figure calls Debug.startMethodTracing, Log.i, and Debug.stopMethodTracing. The figure also shows that startMethodTracing takes longer than stopMethodTracing. In real optimization work, this is very useful for finding out which methods are time-consuming and optimizing them in a targeted way.

Third-party and system code generally cannot be optimized by us, so the Call Chart helpfully uses color to distinguish which code is our own: orange is system APIs, blue is usually third-party APIs, and green is our own code. For example, the testAdd method in the figure is code we wrote ourselves and can be tuned.

Flame Chart is a reverse diagram of Call Chart with similar functions. The graph is as follows:

Top Down shows which methods each method calls internally, as well as each method's time cost and its percentage. Compared with searching the Call Chart graph, Top Down is more concrete, giving specific timing data for each method.

In the figure, Total is the total time spent in the method, Self is the time spent in the method's own code excluding calls to other methods, and Children is the time spent in the other methods it calls.

Similarly, taking testAdd method as an example, the total time of testAdd method is 3840us, accounting for 97% of the program running time; the time of its own code is 342us, accounting for 8.65%; the total time of other methods called in testAdd method is 3498us, accounting for 88.52%

private void testAdd(int a, int b) {
        Debug.startMethodTracing("app_trace");// Counted in Children
        int c = a + b;// This line is counted in Self time, which is very short
        Log.i("Restart", "c = a + b = " + c);// Counted in Children
        Debug.stopMethodTracing();// Counted in Children
}

Bottom Up is the inverted view of Top Down: for each method, it shows which methods call it.

There is one more option worth noting in TraceView: in the upper right corner you can choose between Wall Clock Time and Thread Time. Wall Clock Time is the actual elapsed time of the method, while Thread Time is the CPU time it consumes. When the goal is to optimize CPU time, especially when I/O operations are involved, it is more reasonable to analyze with Thread Time.

In addition, be aware of TraceView's own runtime overhead when using it: the overhead is significant and may skew the optimization direction.

3.2. Systrace

Systrace combines data from the Android kernel to generate HTML reports

Systrace is located in the Android/sdk/platform-tools/systrace directory. You need to install Python before using it, because Systrace uses Python to generate the HTML report. The command is as follows:

python systrace.py -b 32768 -t 10 -a <package name> -o perference.html sched gfx view wm am app

For specific parameters, refer to: developer.android.google.cn/studio/initial…

After the command is executed, the following information is displayed

Open it with Chrome; other browsers may show a blank page. If Chrome itself shows a blank page, enter chrome://tracing in the address bar and use Load to open the file there.

When viewing the report, the A key moves left, the D key moves right, the W key zooms in, and the S key zooms out.
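By default the report mainly shows system and framework events. Since the command above passes -a <package name> and enables the app category, you can also add custom trace labels so your own code shows up as named slices; a minimal sketch (class and method names are illustrative):

import android.app.Application;
import android.os.Trace;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // The label appears as a named slice on this thread's track in the report
        Trace.beginSection("MyApplication#initSdks");
        try {
            // initThirdPartySdks(); // hypothetical work to measure
        } finally {
            Trace.endSection(); // must be called on the same thread as beginSection()
        }
    }
}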

4. Common optimization

4.1. Common optimization strategies for startup loading

The larger an application is, the more modules it involves and the more services or even processes it contains, for example initialization of the network module, initialization of underlying data, and so on. Some of this work needs to be prepared in advance, while work that is not necessary should not be done at application startup at all. Startup work can usually be organized along four dimensions:

1. Necessary and time-consuming: initialize at startup; consider initializing on a worker thread

2. Necessary and not time-consuming: no special handling

3. Unnecessary but time-consuming (e.g. data reporting, plugin initialization): handle on demand

4. Neither necessary nor time-consuming: remove it directly, and load it only when actually needed

Classify the content to be executed when the application is started as described above and implement the loading logic on demand. So what are the common optimized loading strategies?

Asynchronous loading: time-consuming loading is executed asynchronously on a child thread

Lazy loading: data that is not needed immediately is loaded later, on demand

Preloading: initialize even earlier by using a ContentProvider

Here are some common treatments for asynchronous and lazy loading, respectively

4.2. Asynchronous loading

Asynchronous loading, simply put, is using child threads to load asynchronously. In real scenarios, various third-party libraries are often initialized during startup. You can greatly speed up startup by putting initialization into child threads.

However, some business logic can only run normally after the initialization of the third party library. At this time, if you simply run the business logic in the child thread, it is likely to run the business logic before the initialization is completed, leading to exceptions.

For these more complex cases, either a CountDownLatch or the idea of a starter can be used.

Using CountDownLatch

public class MyApplication extends Application {

    // Latch that the main thread waits on
    private CountDownLatch mCountDownLatch = new CountDownLatch(1);

    // Number of CPU cores
    private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();
    // Number of core threads
    private static final int CORE_POOL_SIZE = Math.max(2, Math.min(CPU_COUNT - 1, 4));

    @Override
    public void onCreate() {
        super.onCreate();
        ExecutorService service = Executors.newFixedThreadPool(CORE_POOL_SIZE);
        service.submit(new Runnable() {
            @Override
            public void run() {
                // Initialize Weex; the Activity needs it when loading the layout,
                // so it must be ready before onCreate() returns
                initWeex();
                mCountDownLatch.countDown();
            }
        });
        service.submit(new Runnable() {
            @Override
            public void run() {
                // Initialize Bugly; it does not matter whether it finishes
                // before the interface is drawn
                initBugly();
            }
        });
        // Submit other library initializations, omitted...

        try {
            // onCreate() waits here until the Weex initialization is complete
            mCountDownLatch.await();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Using CountDownLatch is recommended when the initialization logic is not complex. However, if the initializations of several libraries depend on each other and the logic is complex, it is recommended to use a starter.

starter

The core ideas of a starter are as follows:

  • Make full use of multi-core CPUs and automatically order and execute tasks;
  • The code is task-oriented: the startup logic is abstracted into individual Tasks;
  • A directed acyclic graph is generated from the dependencies between all tasks;
  • Tasks are executed on multiple threads in order of thread priority

For details, see github.com/NoEndToLF/A…
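To make the idea concrete, here is a minimal sketch of the starter pattern (this is not the library linked above, just an illustration): each task declares the tasks it depends on and waits on their latches before running, which gives the same ordering a topological sort of the dependency graph would produce, while independent tasks run in parallel on the thread pool.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

abstract class StartupTask implements Runnable {

    private final List<StartupTask> dependencies = new ArrayList<>();
    private final CountDownLatch done = new CountDownLatch(1);

    StartupTask dependsOn(StartupTask task) {
        dependencies.add(task);
        return this;
    }

    @Override
    public void run() {
        try {
            for (StartupTask dependency : dependencies) {
                dependency.done.await();   // block until every dependency has finished
            }
            execute();                     // the actual initialization work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            done.countDown();              // unblock the tasks that depend on this one
        }
    }

    protected abstract void execute();
}

// Usage sketch: push depends on network; logging is independent
class StarterDemo {
    static void start() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        StartupTask network = new StartupTask() { protected void execute() { /* init network */ } };
        StartupTask push = new StartupTask() { protected void execute() { /* init push */ } }.dependsOn(network);
        StartupTask log = new StartupTask() { protected void execute() { /* init logging */ } };
        pool.submit(network);
        pool.submit(push);
        pool.submit(log);
    }
}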

4.3. Lazy loading

Some third-party library initializations have low priority and can be loaded on demand, or you can use IdleHandler to do batched initialization while the main thread is idle.

On-demand loading can be implemented on a case-by-case basis and will not be described here. This section describes how to use IdleHandler

    private MessageQueue.IdleHandler mIdleHandler = new MessageQueue.IdleHandler() {
        @Override
        public boolean queueIdle() {
            // Returning true keeps the IdleHandler, so it will be called again the next time
            // the main thread is idle; returning false removes it after this call
            return false;
        }
    };

Batch the initialization work when using IdleHandler. Why in batches? IdleHandler runs when the main thread is idle, but if it does too much work at once it can still cause jank, so it is best to split the initialization into batches.

public class DelayInitDispatcher {

    private Queue<Task> mDelayTasks = new LinkedList<>();

    private MessageQueue.IdleHandler mIdleHandler = new MessageQueue.IdleHandler() {
        @Override
        public boolean queueIdle() {
            // Execute one task per idle callback
            if (mDelayTasks.size() > 0) {
                Task task = mDelayTasks.poll();
                new DispatchRunnable(task).run();
            }
            // Return false (remove the IdleHandler) once the queue is empty
            return !mDelayTasks.isEmpty();
        }
    };

    // Add an initialization task
    public DelayInitDispatcher addTask(Task task) {
        mDelayTasks.add(task);
        return this;
    }

    // Add the IdleHandler to the main thread's message queue
    public void start() {
        Looper.myQueue().addIdleHandler(mIdleHandler);
    }
}
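A hypothetical usage sketch (the Task type is assumed to be the same simple task abstraction DelayInitDispatcher uses above, with a run-style method; initFeedbackSdk and preloadStickerCache are placeholder names):

// Typically called after the first frame is drawn, e.g. from onWindowFocusChanged()
new DelayInitDispatcher()
        .addTask(new Task() {
            @Override
            public void run() {
                initFeedbackSdk();      // hypothetical low-priority initialization
            }
        })
        .addTask(new Task() {
            @Override
            public void run() {
                preloadStickerCache();  // hypothetical low-priority initialization
            }
        })
        .start();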

4.4. Pre-loading

The earliest place most apps initialize is the Application's onCreate, but there is an even earlier hook: a ContentProvider's onCreate runs between the Application's attachBaseContext and onCreate methods, that is, earlier than the Application's onCreate. You can take advantage of this to initialize third-party libraries in advance.
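For reference, here is a minimal sketch of the underlying pattern that androidx-startup wraps: a ContentProvider whose only job is to run initialization in onCreate (the class and SDK names are illustrative, and the provider must also be declared in the manifest):

import android.content.ContentProvider;
import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.net.Uri;

public class InitProvider extends ContentProvider {

    @Override
    public boolean onCreate() {
        // Runs between Application.attachBaseContext() and Application.onCreate()
        Context context = getContext();
        // ThirdPartySdk.init(context); // hypothetical third-party initialization
        return true;
    }

    // The remaining ContentProvider methods are unused for initialization
    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
                        String[] selectionArgs, String sortOrder) { return null; }
    @Override
    public String getType(Uri uri) { return null; }
    @Override
    public Uri insert(Uri uri, ContentValues values) { return null; }
    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) { return 0; }
    @Override
    public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) { return 0; }
}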

Using androidx-startup

How to use it:

Step 1: Write a class that implements Initializer. Its generic type is the instance returned by create(); use Unit if nothing needs to be returned.

class TimberInitializer : Initializer<Unit> {

    // Perform the initialization here and return the initialized instance
    override fun create(context: Context) {
        if (BuildConfig.DEBUG) {
            Timber.plant(Timber.DebugTree())
            Timber.d("TimberInitializer is initialized.")
        }
    }

    // Declare the initializers this one depends on; return an empty list if there are none
    override fun dependencies(): List<Class<out Initializer<*>>> {
        return emptyList()
    }
}

Step 2: Declare the provider in AndroidManifest.xml and configure the initializer class as meta-data. tools:node="merge" is used because other modules may declare the same provider and the declarations need to be merged.

<provider
    android:name="androidx.startup.InitializationProvider"
    android:authorities="com.test.pokedex.androidx-startup"
    android:exported="false"
    tools:node="merge">
    <meta-data
        android:name="com.test.pokedex.initializer.TimberInitializer"
        android:value="androidx.startup" />
</provider>

4.5. Other optimizations

1. Set a default image or a custom theme on the Activity (e.g. android:theme="@style/Theme.AppStartLoad") so that a default interface is displayed first; this mitigates the brief black/white screen at startup.

5. Summary

1. What cold start, warm start, and hot start mainly do and how they differ

2. How to obtain the startup time, introducing two approaches: the adb command and code instrumentation

3. How to use tools to find time-consuming code in the program, introducing TraceView and Systrace

4. An introduction to common startup optimizations and their implementation (asynchronous loading, lazy loading, preloading, etc.); key ideas: asynchronous, delayed, lazy loading