1. Introduction

I have been feeling pretty lazy lately. I promised some friends an update but kept putting it off, so today I am making up for it. This chapter is a summary of the previous four chapters:

First-Tier Big-Company App Performance Optimization Series – Jank Locating (1)

First-Tier Big-Company App Performance Optimization Series – Asynchronous Optimization and Topological Sorting (2)

First-Tier Big-Company App Performance Optimization Series – Custom Launcher (3)

First-Tier Big-Company App Performance Optimization Series – A More Elegant Delay Scheme (4)

Together, these four chapters make up the big topic of startup optimization. Once you have worked through them, you are in good shape on startup optimization. Next, of course, we move on to the second big topic: a full set of five chapters on how to do memory optimization in a real project, which should give you a head start on memory optimization.

That topic covers more than just how to use LeakCanary and how it works: how do you actually optimize memory, and how do you avoid these problems in the first place? That is what those chapters are for.


2. Summary of jank locating

This summarizes the first of the four chapters. Let's take a look:

First-Tier Big-Company App Performance Optimization Series – Jank Locating (1)

The focus of that chapter is how to measure method execution time. To do a job well, you must first sharpen your tools: measuring where the time goes is the daily first step of any large-project optimization. Many readers have mentioned that when maintaining and upgrading an old project, jank is the biggest obstacle, so analyzing which methods cause the jank and how long they take is step one of the optimization.

1. The conventional way

Just add a line of code at the top and the bottom of each method and print the difference, for example:

public void func() {
    long startTime = System.currentTimeMillis();
    // ... the actual work ...
    Log.d("lybj", "func() took: " + (System.currentTimeMillis() - startTime) + "ms");
}

Simple it is, but remember we are doing a performance check: the checking code must not be this invasive. It is entirely possible that a stable system becomes unstable because of the performance-checking code you scattered through it, and that is not acceptable. So scrap it!

2. The AOP way

This way, we can keep the performance-checking code separate from the business logic:

import android.util.Log;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class PerformanceAop {

    @Around("call(* com.bj.performance.MyApplication.**(..))")
    public Object getTime(ProceedingJoinPoint joinPoint) {
        long startTime = System.currentTimeMillis();
        String methodName = joinPoint.getSignature().getName();
        Object result = null;
        try {
            // Run the original method and keep its return value.
            result = joinPoint.proceed();
        } catch (Throwable throwable) {
            throwable.printStackTrace();
        }
        Log.d("lybj", methodName + "() took: " + (System.currentTimeMillis() - startTime) + "ms");
        return result;
    }
}

Disadvantage: if the project is huge, with hundreds of methods, covering them all this way is impossible. What we really need is an intuitive tool that shows at a glance which methods take too long.

3. Wall time and CPU time

TraceView offers two time modes:

Wall time is how long the code takes to execute from start to finish; if the thread blocks, the waiting time is counted as well.

CPU time (thread time) is the time the CPU actually spends on the thread, which is never more than the wall time.

So if you find that the wall time is very long but the CPU time is very short, the conclusion is that you should move the work to an asynchronous thread: most of the elapsed time is the CPU sitting idle while your thread waits.
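If you want to capture such a trace straight from code, here is a minimal sketch (the trace name "startup" and the doHeavyInit() method are made up for illustration); the resulting .trace file can be opened in the Profiler / TraceView to compare wall time against thread time per method:

import android.os.Debug;

public class TraceDemo {

    public void traceColdStart() {
        // Writes startup.trace to the app-specific external files directory (exact location varies by API level).
        Debug.startMethodTracing("startup");
        doHeavyInit();                 // the code path we suspect of causing jank
        Debug.stopMethodTracing();     // stop recording; now open the file in the Profiler / TraceView
    }

    private void doHeavyInit() {
        // hypothetical time-consuming work
    }
}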


3. Summary of asynchronous optimization

Startup optimization begins with optimizing the third-party dependencies initialized in Application. Two chapters cover this:

First-Tier Big-Company App Performance Optimization Series – Asynchronous Optimization and Topological Sorting (2)

First-Tier Big-Company App Performance Optimization Series – Custom Launcher (3)

1. The conventional way

This can be done with a thread pool or by starting threads directly, simply to take some load off the main thread.

Disadvantage 1: if two asynchronous tasks depend on each other, it is hard to handle.

For example, we need to initialize JPush (Aurora push) and obtain the device ID. Both are time-consuming and need to run on child threads, but the device ID must be obtained before JPush is initialized, so there is a dependency between the two and it is not easy to handle. See chapter 2 for details.

Disadvantage 2: if the main thread must not continue until one of the asynchronous tasks has finished its own work, the conventional way is also awkward.

Disadvantage 3: if you have to handle both of the above at once, congratulations, your code's readability goes out the window! (See the sketch below.)
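To see why, here is a minimal sketch of the conventional approach applied to the example above (the class and the method names getDeviceId() and initJPush() are made up for illustration). Expressing "get the device ID first, then initialize push, and make the main thread wait for the device ID" means hand-wiring CountDownLatches, and readability degrades quickly as more tasks are added:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConventionalInit {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void initInApplication() throws InterruptedException {
        CountDownLatch deviceIdDone = new CountDownLatch(1);   // dependency lock
        CountDownLatch mainMustWait = new CountDownLatch(1);   // main-thread lock

        pool.execute(new Runnable() {
            @Override
            public void run() {
                getDeviceId();                 // must run first
                deviceIdDone.countDown();      // unblock the push task
                mainMustWait.countDown();      // the main thread only needs the device ID
            }
        });

        pool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    deviceIdDone.await();      // hand-wired dependency
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                initJPush();                   // then initialize push
            }
        });

        mainMustWait.await();                  // block the main thread until the required work is done
    }

    private void getDeviceId() { /* time-consuming */ }

    private void initJPush() { /* time-consuming */ }
}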

2. How the launcher works

Remember, we implemented a launcher together. The principle is very simple; see chapter 3 for the full version. Here is just a brief recap:

Task

First, each time-consuming operation is wrapped in a Task, which has four methods and one attribute:

Attribute 1: taskCountDownLatch. Its lock count equals the number of tasks this task depends on.
Method 1: dependentArr(). Returns the set of tasks this task depends on.
Method 2: startLock(). Blocks on the lock until every dependency has finished.
Method 3: unLock(). Called when one dependency completes; releases one count of the lock.
Method 4: needWait(). Whether the main thread must wait for this task to finish.
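A minimal sketch of what such a Task might look like, following the description above (names and details here are illustrative; the real implementation is in chapter 3):

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public abstract class Task {

    // Attribute 1: one lock count per task this task depends on.
    // Created lazily because dependentArr() is overridden by subclasses.
    private volatile CountDownLatch taskCountDownLatch;

    private CountDownLatch latch() {
        if (taskCountDownLatch == null) {
            synchronized (this) {
                if (taskCountDownLatch == null) {
                    List<Class<? extends Task>> deps = dependentArr();
                    taskCountDownLatch = new CountDownLatch(deps == null ? 0 : deps.size());
                }
            }
        }
        return taskCountDownLatch;
    }

    // Method 1: the tasks this task depends on (none by default).
    public List<Class<? extends Task>> dependentArr() {
        return Collections.emptyList();
    }

    // Method 2: block until every dependency has released its count.
    public void startLock() {
        try {
            latch().await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Method 3: one dependency has finished, release one count.
    public void unLock() {
        latch().countDown();
    }

    // Method 4: must the main thread wait for this task before continuing?
    public boolean needWait() {
        return false;
    }

    // The actual time-consuming work, implemented by each concrete task.
    public abstract void run();
}

A concrete task then overrides dependentArr() with its dependencies and puts its initialization code in run().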

Launcher (TaskManager)

The launcher is mainly responsible for dispatching tasks.

Attribute 1: mCountDownLatch. Unlike the lock inside a Task, this lock blocks the main thread and is acquired when the main thread calls the launcher. Its count equals the number of tasks the main thread must wait for (mainNeedWaitCount).

Method 1: add(). Adds a task to the collection and maintains a Map keyed by task class whose value is the list of tasks that depend on it. The point is that when a task finishes, we can look it up in this Map, get the set of tasks that depend on it, and release one count of the lock on each of them. add() also calls the task's needWait() method to see whether the main thread has to wait for it and, if so, increments the wait counter by 1.

Method 2: startTask(). Dispatches the tasks. First, the task collection is reordered via a topological sort of the directed acyclic graph. For example, if we pass in tasks A, B and C, and A depends on B, then B must be initialized first, so the sorted order is B, C, A. Next, the main-thread lock is set with a count equal to the number of tasks that must be waited for, which guarantees that the tasks that have to finish before we switch pages really have finished. Finally, each task is dispatched to the thread it requires.
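A condensed sketch of the launcher, assuming the Task class sketched above and the TaskRunnable executor described next (the topological sort is left as a placeholder; chapter 3 has the real one):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskManager {

    private final List<Task> tasks = new ArrayList<>();
    // key: a task class, value: the tasks that depend on it.
    private final Map<Class<? extends Task>, List<Task>> dependedMap = new HashMap<>();
    private final ExecutorService pool = Executors.newCachedThreadPool();

    private CountDownLatch mCountDownLatch;   // locks the main thread
    private int mainNeedWaitCount = 0;        // how many tasks the main thread must wait for

    // Method 1: register a task, record who depends on it, count main-thread waits.
    public TaskManager add(Task task) {
        tasks.add(task);
        for (Class<? extends Task> dep : task.dependentArr()) {
            List<Task> children = dependedMap.get(dep);
            if (children == null) {
                children = new ArrayList<>();
                dependedMap.put(dep, children);
            }
            children.add(task);
        }
        if (task.needWait()) {
            mainNeedWaitCount++;
        }
        return this;
    }

    // Method 2: sort, lock the main thread, dispatch, then wait.
    public void startTask() {
        List<Task> sorted = topologicalSort(tasks);          // e.g. B, C, A when A depends on B
        mCountDownLatch = new CountDownLatch(mainNeedWaitCount);
        for (Task task : sorted) {
            pool.execute(new TaskRunnable(task, this));       // dispatch to a worker thread
        }
        try {
            mCountDownLatch.await();                          // the main thread waits only for needWait() tasks
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Called by the executor when a task finishes: release the tasks that depend on it,
    // and release the main thread if this task was one it had to wait for.
    void unLockForChildren(Task finished) {
        List<Task> children = dependedMap.get(finished.getClass());
        if (children != null) {
            for (Task child : children) {
                child.unLock();
            }
        }
        if (finished.needWait()) {
            mCountDownLatch.countDown();
        }
    }

    private List<Task> topologicalSort(List<Task> input) {
        // Placeholder: the real launcher topologically sorts the directed acyclic graph here.
        return input;
    }
}

Usage is then roughly new TaskManager().add(new DeviceIdTask()).add(new PushTask()).startTask() from Application.onCreate(), where DeviceIdTask and PushTask are hypothetical concrete tasks.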

Executor (TaskRunnable)

The executor's only job is to run tasks; it implements Runnable.

Method 1: run(). First it blocks on the task's own lock; remember, that lock's count equals the number of tasks it depends on, which is exactly what guarantees the dependencies run first. Then it executes the task's run() method, the hook (empty in the base class) where each concrete task puts its time-consuming work. Finally it calls the launcher's unLockForChildren() method, which walks the Map stored earlier, finds the tasks that depend on the one that just finished, and releases one count of the lock on each of them.
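Putting it together, and assuming the two classes sketched above, the executor is just:

public class TaskRunnable implements Runnable {

    private final Task task;
    private final TaskManager manager;

    public TaskRunnable(Task task, TaskManager manager) {
        this.task = task;
        this.manager = manager;
    }

    @Override
    public void run() {
        task.startLock();                 // wait until every dependency has released us
        task.run();                       // the actual time-consuming work
        manager.unLockForChildren(task);  // release the tasks that depend on us (and the main thread if needed)
    }
}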

Isn't that easy? If it is still unclear, go study chapter 3 until you can recite it.


4. More elegant delayed loading

One chapter covers this:

First-Tier Big-Company App Performance Optimization Series – A More Elegant Delay Scheme (4)

1. The conventional way

Lazy loading can be achieved with Handler().sendMessageDelayed().
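For example, a minimal sketch (initThirdPartySdk() is a made-up placeholder; postDelayed() is just a convenience wrapper over sendMessageDelayed()):

import android.os.Handler;
import android.os.Looper;

public class DelayedInit {

    public void scheduleInit() {
        // Run the init 3 seconds later on the main thread; it will still compete
        // with whatever the main thread happens to be doing at that moment.
        new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
            @Override
            public void run() {
                initThirdPartySdk();
            }
        }, 3000);
    }

    private void initThirdPartySdk() {
        // hypothetical third-party SDK initialization
    }
}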

But this is not recommended in a real project, because it still competes for the CPU and performance suffers. Suppose a page delays the initialization of some third-party service for a while: if that page keeps running work of its own, such as a timer or a polling request, then when the delay expires our third-party initialization still has to fight the page's own work for the CPU. So you can't just do it that way.

2. IdleHandler

This one works: you can do the time-consuming processing in the idle time of the main thread. But if the added tasks must run in a particular order, you cannot use it naively, because there is no ordering guarantee: whichever added task gets picked up first simply runs.
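A minimal usage sketch (again, initThirdPartySdk() is a placeholder):

import android.os.Looper;
import android.os.MessageQueue;

public class IdleInit {

    // Call this from the main thread, e.g. in onCreate().
    public void scheduleOnIdle() {
        Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                initThirdPartySdk();   // runs when the main-thread message queue goes idle
                return false;          // false = remove this IdleHandler after it has run once
            }
        });
    }

    private void initThirdPartySdk() {
        // hypothetical third-party SDK initialization
    }
}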

3. The asynchronous launcher

See chapter 4 for details; this is just a brief recap.

IldeTaskManager

The principle is very simple: the add() method adds a task to a collection, an IdleHandler is wrapped around it, and on each idle callback it iterates the collection, takes out a task, and hands it to the TaskRunnable we built in chapter 3.
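A minimal sketch of the idea, reusing the Task class from chapter 3 (the real IldeTaskManager hands each task to TaskRunnable; here the task is run directly to keep the sketch short). Tasks execute one per idle slot of the main thread, in the order they were added, so order is preserved:

import android.os.Looper;
import android.os.MessageQueue;

import java.util.LinkedList;
import java.util.Queue;

public class IdleTaskManagerSketch {

    private final Queue<Task> queue = new LinkedList<>();

    private final MessageQueue.IdleHandler idleHandler = new MessageQueue.IdleHandler() {
        @Override
        public boolean queueIdle() {
            if (!queue.isEmpty()) {
                queue.poll().run();    // the real version wraps this in the chapter-3 TaskRunnable
            }
            return !queue.isEmpty();   // keep the IdleHandler alive until the queue is drained
        }
    };

    public IdleTaskManagerSketch add(Task task) {
        queue.add(task);
        return this;
    }

    // Must be called on the main thread so the tasks ride its idle slots.
    public void start() {
        Looper.myQueue().addIdleHandler(idleHandler);
    }
}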


5. The source code

Well, here is the part everyone has been waiting for: the source code.

Click to download, and remember to leave a star!


6. END

It is a sad story when the company wants you back at work right after a happy vacation.

After a month, I have finally finished updating the first big topic, startup optimization; that is basically all of it. Looking around online there are plenty of other tricks, such as preloading bytecode files, but honestly they do not feel necessary: making an app start 0.1 seconds faster is barely noticeable, and tricks like locking the CPU frequency cannot guarantee you will not burn extra power, and quite a few of them have proven either ineffective on some devices or merely flashy. Master the four chapters above and your startup optimization is good enough; even by big-company standards, optimizing to this degree is fine.

Over the next few weeks I will post the second big topic, which deals with memory optimization in real-world projects. Remember to come back and review!

Finally: likes and follows are welcome. Your support is what keeps me motivated to update the remaining 16-17 chapters!