I changed jobs for the first time since starting my career. It happened to coincide with the epidemic, which at least spared me the usual rush of interviews. (The summary is long; I suggest reviewing it on a computer. I have written a separate article about algorithms, Android-flutter surface ii — Algorithm of Android-flutter surface — resume and interview skills.)

Because these notes were compiled during the interviews, my first instinct was to search for excellent answers online. This is the first time I have published a summary without adding links to the original write-ups I drew from; I am sorry, and I will not do that again.

Rhythm of the interview

I had my first interview, with the first company, on March 26. During that period I did not send my resume to other companies; the main purpose was interview practice, since I had not interviewed in four years and felt very unsure of myself. After stumbling through two rounds I got an offer from that first company on Friday. Meanwhile, HR at my current company had learned I wanted to leave and indirectly urged me to submit my resignation in the OA system. That put me in a passive position, and on an impulse I submitted it, reserving two weeks for handover, which also left me two weeks to find a job. Growing anxious, I opened my resume on the Boss recruiting app and applied to five companies. A full day passed without any response, which made me even more anxious, until on the third day I received 4 interview invitations and my heart finally settled.

The interviews officially began on April 15. On April 16 I took leave and sat at home interviewing: five rounds across three companies in one day, each round lasting about an hour. Because I made it clear my time was tight, the HR staff at the interviewing companies followed up diligently. By April 20 I had essentially received three offers, one of which I was satisfied with. In the following days second-round interviews at the other two companies also started, but I stopped scheduling interviews with new companies and passed on them. Three more companies reached the final-offer stage, but I declined because their processes had taken too long.

I met with 7 companies and went through 27 rounds of interviews. I received 4 offers (including one for Flutter development) and declined to attend 3 HR-round interviews. The seven companies ranged from large BBA companies to medium-sized Series D companies.

Companies have standard interview procedures and will email in advance to schedule an hour-long interview.

And then there was the satisfying kind: a three-hour interview completed in one sitting.

The notes cover five modules: Java, Android, Network, Dart, and Flutter.

Java

The GC mechanism

Garbage collection involves two things: finding the garbage and reclaiming it. Generally speaking, there are two ways to find garbage:

  • Reference counting: when an object is referenced, its reference counter is incremented by one, and objects with a count of zero are collected. The problem is circular references: if object A references B and B references A, neither counter ever drops to zero, which is why Java does not use reference counting.
  • Reachability analysis: treat object references in Java as a graph; objects unreachable from the GC roots are cleared by the collector. GC roots typically include objects referenced from the Java virtual machine stacks, the native method stacks, static fields in the method area, and constants in the constant pool.

There are four ways to reclaim the garbage once it is found:

  • Mark-sweep algorithm: as the name suggests, two steps, mark and sweep. First mark the garbage objects that need to be collected, then reclaim them. The drawback of mark-sweep is that it leaves memory fragmentation after the garbage objects are removed.
  • Copying algorithm: copy the live objects to another memory area, compacting them as they are copied. The advantage is that it avoids memory fragmentation, but the disadvantage is obvious: it requires twice the memory.
  • Mark-compact algorithm: also two steps, mark then compact. It marks the garbage objects to be collected, and when they are removed, the surviving objects are compacted together, avoiding memory fragmentation.
  • Generational algorithm: divides objects into a young generation and an old generation. Why make this distinction? A running Java program produces a large number of objects with very different life cycles: some live very long, while others are used once and never again. Applying different collection strategies to objects with different lifecycles improves GC efficiency.

The young generation is divided into three regions: one Eden region and two Survivor regions. Newly created objects are placed in Eden; when Eden's memory reaches its threshold, a Minor GC is triggered. The surviving objects are copied into one Survivor region and their age is incremented by one. Eden is then empty again, and when the next Minor GC is triggered, the survivors from Eden and from the occupied Survivor region are copied into the other Survivor region using the copying algorithm described above, and their age is incremented again.

This process repeats until an object's age reaches a threshold and triggers promotion: the object is moved into the old generation. Objects in the old generation are long-lived objects that have survived multiple GCs. When old-generation memory reaches its threshold, a Major GC is triggered, using the mark-compact algorithm.
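The reachability idea above can be observed with a small sketch (the class name `ReachabilityDemo` is mine, not from the original): while a strong reference from the stack (a GC root) exists, a `WeakReference` to the same object still resolves; after the strong reference is dropped, collection is only a request, since the JVM specification does not promise when GC runs.

```java
import java.lang.ref.WeakReference;

public class ReachabilityDemo {
    // While a strong reference exists, the object is reachable from a GC root
    // (the stack), so a WeakReference still sees it.
    static boolean reachableWhileStronglyReferenced() {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        return weak.get() != null;
    }

    public static void main(String[] args) {
        System.out.println(reachableWhileStronglyReferenced()); // true

        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);
        strong = null;   // drop the only strong reference: the object is now unreachable
        System.gc();     // request (not force) a collection
        // Usually null on HotSpot after a requested GC, but the spec does not guarantee it.
        System.out.println(weak.get());
    }
}
```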

How JVM memory is partitioned, and which areas can throw OOM

JVM memory regions fall into two categories: thread-private regions and thread-shared regions. Thread-private: the program counter, the JVM stack, and the native method stack. Thread-shared: the heap, the method area, and the runtime constant pool.

  • Program counter. Each thread has a private program counter; at any moment a thread is executing only one method, known as the current method. The program counter holds the JVM instruction address of the current method.
  • JVM stack. When a thread is created, a virtual machine stack is created for it, storing stack frames, each corresponding to one method call. There are two operations on the JVM stack: pushing and popping frames. A stack frame holds local variables, the method return value, information about normal or abnormal method exit, and so on.
  • Native method stack. Similar to the JVM stack, except that it serves Native methods.
  • The heap. The heap is the core area of memory management, where object instances are stored. Almost all created object instances are allocated directly on the heap, so the heap is also the main area of garbage collection. Garbage collectors divide the heap further, most commonly into a young generation and an old generation.
  • The method area. The method area mainly stores the structural information of classes, such as static fields and method metadata.
  • The runtime constant pool. Located in the method area, it mainly holds various constants.

In fact, OOM can occur in every area except the program counter.

  • The heap. Most OOMs occur in the heap, and the most common cause is a memory leak.
  • JVM stack and native method stack. A recursive method with no termination condition ends in a StackOverflowError; and if expanding the stack fails, OOM can also occur.
  • The method area. OOM in the method area is less common nowadays, but it can still occur if too much class information is loaded into memory.
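The stack case above is easy to reproduce. A minimal sketch (the class and method names are mine): a recursive method with no termination condition reliably ends in `StackOverflowError`.

```java
public class StackDemo {
    static int depth = 0;

    // A recursive method with no termination condition keeps pushing stack
    // frames until the thread's stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    static boolean overflows() {
        try {
            recurse();
            return false;               // unreachable in practice
        } catch (StackOverflowError e) {
            return true;                // the JVM stack ran out of frames
        }
    }

    public static void main(String[] args) {
        System.out.println("overflowed after roughly " + (overflows() ? depth : 0) + " frames");
    }
}
```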

Java’s memory model

The Java memory model stipulates that all variables are stored in the main memory, and each thread has its own working memory. The working memory of the thread stores a copy of the main memory of the variables used in the thread. All operations on variables must be carried out in the working memory of the thread, instead of reading and writing the main memory directly. Different threads cannot directly access variables in each other’s working memory, and the transfer of variables between threads requires data synchronization between their own working memory and main memory.

Atomicity

Java provides two bytecode instructions, monitorenter and monitorexit, to guarantee atomicity. The synchronized keyword is the Java-level construct that compiles down to these two bytecodes.

Therefore, synchronized can be used in Java to ensure that operations within methods and code blocks are atomic.

Visibility

The Java memory model relies on main memory as a transfer medium by synchronizing the new value back to main memory after a variable is modified and flushing the value from main memory before the variable is read.

The volatile keyword in Java provides the ability to synchronize modified variables to main memory immediately after they are modified, and to flush variables from main memory each time they are used. Therefore, volatile can be used to ensure visibility of variables in multithreaded operations.

Besides volatile, the Java keywords synchronized and final also provide visibility; they are just implemented differently, and I won't expand on that here.

Ordering

In Java, synchronized and volatile can be used to ensure order between multiple threads. Implementation methods are different:

The volatile keyword disallows instruction reordering. The synchronized keyword ensures that only one thread can operate at a time.

So those are the keywords that address atomicity, visibility, and ordering in Java concurrent programming. As you may have noticed, synchronized seems all-purpose, satisfying all three properties at once, which is exactly why so many people overuse it.

Synchronized, however, hurts performance, and while the JVM provides many lock optimization techniques, overuse is still not recommended.
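The all-purpose nature of synchronized can be sketched with a counter shared by two threads (the `SyncCounter` class and the thread counts are my own illustration): the synchronized method gives each increment atomicity, and monitor exit flushes the value, giving visibility.

```java
public class SyncCounter {
    private int count = 0;

    // synchronized gives atomicity (one thread at a time), visibility
    // (values are flushed on monitor exit), and ordering within the block.
    synchronized void increment() {
        count++;
    }

    static int run() {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.count;   // always 20000; without synchronized it could be less
    }

    public static void main(String[] args) {
        System.out.println(run());  // 20000
    }
}
```

Removing the synchronized keyword makes `count++` a non-atomic read-modify-write, and the final count can fall below 20000.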

Class loading process

Class loading in Java is divided into three steps: load, link, and initialize.

  • Load. Loading is the process of reading bytecode data from various data sources into JVM memory and mapping it to JVM approved data structures, known as Class objects. The data sources can be Jar files, Class files, and so on. If the format of the data is not the structure of ClassFile, ClassFormatError is reported.
  • Link. Linking is a core part of class loading, which is divided into three steps: validation, preparation, and parsing.
  • Validation. Validation is an important step in securing the JVM. The JVM needs to verify whether the byte information complies with the specification to prevent malicious information and non-standard data from endangering the JVM operation security. If the validation fails, VerifyError is reported.
  • Preparation. This step allocates memory for static variables and sets their default values.
  • Parsing. This step replaces the symbolic reference with a direct reference.
  • Initialization. Initialization assigns values to static variables and executes logic in static code blocks.

Parental delegation model

Class loaders fall broadly into three categories: the bootstrap class loader, the extension class loader, and the application class loader.

  • The bootstrap class loader mainly loads the jar files under jre/lib.
  • The extension class loader mainly loads the jar files under jre/lib/ext.
  • The application class loader mainly loads the files on the classpath.

The so-called parent delegate model is that when a class is loaded, the parent class loader is used first, and the child class loader is used only when the parent class loader cannot be loaded. The goal is to avoid reloading classes.
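The delegation chain can be observed directly. A small sketch (assuming a standard JDK; `LoaderDemo` is a made-up name): classes from the core library report a `null` loader because the bootstrap loader is implemented natively, while our own class is owned by the application loader, and asking the application loader for a core class delegates upward.

```java
public class LoaderDemo {
    public static void main(String[] args) throws ClassNotFoundException {
        // String lives in the core library; it is loaded by the bootstrap
        // loader, which getClassLoader() reports as null.
        System.out.println(String.class.getClassLoader());          // null

        // Our own class sits on the classpath, so the application loader owns it.
        ClassLoader app = LoaderDemo.class.getClassLoader();
        System.out.println(app != null);                            // true

        // Delegation: asking the app loader for String goes up the parent
        // chain first, so we get the same bootstrap-loaded class back.
        Class<?> s = app.loadClass("java.lang.String");
        System.out.println(s == String.class);                      // true
    }
}
```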

The principle of HashMap

The inside of a HashMap can be seen as a composite of an array and linked lists. Each array slot is a bucket. The hash value determines which bucket a key-value pair is addressed to; pairs that land in the same bucket form a linked list. Note that when a list grows past a threshold (default 8), treeification is triggered and the list becomes a red-black tree.

Four methods capture the principles of HashMap: hash, put, get, and resize.

  • The hash method. It XORs the high 16 bits of the key's hashCode into the low bits. The reason: some keys' hashCodes differ mainly in the high bits, and bucket addressing ignores any bits above the capacity, so spreading the high bits down effectively reduces hash collisions.
  • The put method. The put method has the following steps:
  • Use the hash method to obtain the hash value and address the bucket.
  • If there is no collision, put the entry directly into the bucket.
  • If there is a collision, append the entry to the bucket's linked list.
  • When the list length exceeds the threshold, treeification turns the list into a red-black tree.
  • If the element count exceeds the resize threshold, call the resize method to expand capacity.
  • The get method. The get method has the following steps:
  • Use the hash method to obtain the hash value and address the bucket.
  • If the first node in the bucket matches the key, return its value.
  • If there was a collision, there are two cases: if the bucket is a tree, call getTreeNode to find the value; if it is a linked list, loop through it to find the matching key.
  • The resize method. Resize does two things:
  • Doubles the capacity of the array.
  • Recalculates each index and moves the existing nodes into the new array. This step disperses previously colliding nodes into new buckets.
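The hash-spreading and bucket-addressing steps described above can be sketched as follows (the `hash` body mirrors the Java 8 HashMap logic; `indexFor` is my own helper name):

```java
public class HashDemo {
    // The same spreading step HashMap uses (Java 8): XOR the high 16 bits
    // of the hashCode into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Bucket addressing: capacity is always a power of two, so
    // (n - 1) & hash is a cheap modulo that uses only the low bits.
    static int indexFor(int hash, int capacity) {
        return (capacity - 1) & hash;
    }

    public static void main(String[] args) {
        int h = hash("interview");
        System.out.println(indexFor(h, 16));  // always in [0, 15]
    }
}
```

Without the XOR step, two keys whose hashCodes differ only in the high bits would always collide under the `(n - 1) & hash` mask.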

The difference between “sleep” and “wait”

  • sleep is a static method of the Thread class, while wait is a method of the Object class
  • sleep does not release the lock, while wait does
  • sleep can be called anywhere, while wait can only be called inside synchronized methods or synchronized blocks
  • sleep requires a timeout argument; wait can take one or not, and without a timeout only notify or notifyAll can wake it up
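The third point is easy to verify: calling wait without holding the object's monitor throws `IllegalMonitorStateException`. A minimal sketch (names mine):

```java
public class WaitDemo {
    // Calling wait() without first entering synchronized (lock) is illegal:
    // the JVM throws IllegalMonitorStateException.
    static boolean waitOutsideSyncFails() {
        Object lock = new Object();
        try {
            lock.wait(10);
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;   // expected: we never held the monitor
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(waitOutsideSyncFails());  // true
    }
}
```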

The use of the join

The join method is used to guarantee ordering between threads; it is a method of the Thread class. For example, if thread A calls threadB.join(), thread A enters a waiting state and does not resume its subsequent work until thread B finishes executing.

join can take a time argument or none; the no-argument version actually calls join(0). Internally it is built on wait. The join method looks like this:

public final synchronized void join(long millis)
    throws InterruptedException {
        long base = System.currentTimeMillis();
        long now = 0;
        if (millis < 0) {
            throw new IllegalArgumentException("timeout value is negative");
        }
        if (millis == 0) {
            while (isAlive()) {
                wait(0);
            }
        } else {
            while (isAlive()) {
                long delay = millis - now;
                if (delay <= 0) {
                    break;
                }
                wait(delay);
                now = System.currentTimeMillis() - base;
            }
        }
    }
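A typical use of join can be sketched minimally (names mine): the caller blocks until the worker finishes, and join also establishes a happens-before edge, so the worker's write is guaranteed to be visible afterwards.

```java
public class JoinDemo {
    static int result = 0;

    static int compute() {
        Thread worker = new Thread(() -> {
            // Work that must finish before the caller continues.
            result = 42;
        });
        worker.start();
        try {
            worker.join();   // the calling thread waits until worker terminates
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result;       // join() establishes happens-before, so 42 is visible
    }

    public static void main(String[] args) {
        System.out.println(compute());  // 42
    }
}
```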

volatile

In general, the memory model cannot be discussed without mentioning volatile. As we all know, every instruction a program runs is executed by the CPU, and executing instructions inevitably involves reading and writing data. The problem is that the CPU executes much faster than main memory can be read and written, so reading and writing main memory directly would lower CPU efficiency. To solve this we have the concept of a cache: each CPU has one, reads data from main memory into it ahead of time, and flushes results back to main memory at an appropriate time after computing.

This works fine in a single thread, but causes cache consistency issues with multiple threads. A simple example: execute i = i + 1 in two threads, with i initially 0. We expect the result after both threads run to be 2, but consider this interleaving: both threads read i from main memory into their respective caches, so i is 0 in both. Thread 1 executes, gets i = 1, and flushes it to main memory; then thread 2 executes. Since i is still 0 in thread 2's cache, the value thread 2 flushes to main memory is also 1, not 2.

This is the cache consistency problem for shared variables. To solve it, cache coherence protocols were proposed: when a CPU writes a shared variable, it tells the other CPUs to mark their cached copies invalid; when another CPU then reads that variable from its cache and finds it invalid, it fetches the latest value from main memory.

In Java multithreaded development, there are three important concepts: atomicity, visibility, and orderliness.

  • Atomicity: one or more operations either all execute, or none of them do.
  • Visibility: a change one thread makes to a shared variable (a member variable or static variable of a class) is immediately visible to other threads.
  • Orderliness: the program executes in the order the code is written.

Declaring a variable volatile guarantees visibility and ordering. Visibility is what I described above and is essential in multithreaded development. As for ordering: for execution efficiency, instruction reordering sometimes occurs; in a single thread the reordered output is guaranteed to match what our code logic implies, but in multiple threads problems can arise, and volatile can partly forbid instruction reordering.

Volatile works by adding a lock prefix to the generated assembly code. The prefix acts as a memory barrier that serves three purposes:

  • Ensure that instructions before the barrier are not reordered after it, and instructions after it are not reordered before it.
  • Flush shared variables in the cache to main memory immediately after modification.
  • Caches in other cpus are invalid when a write operation is performed.
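The visibility guarantee can be sketched with a stop flag (a common textbook example, not from the original): without volatile the spinning reader may never observe the write; with it, the write is flushed, the reader's cached copy is invalidated, and the loop terminates.

```java
public class VolatileFlag {
    // Without volatile the reader thread might spin on a stale cached copy;
    // with it, the store is flushed and other caches are invalidated.
    private static volatile boolean stop = false;

    static boolean run() {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the writer's update becomes visible
            }
        });
        reader.start();
        try {
            Thread.sleep(50);
            stop = true;         // write becomes visible to the reader
            reader.join(2000);   // bounded wait as a safety net
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !reader.isAlive();    // true: the reader observed the flag and exited
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```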

The difference between volatile and synchronized

volatile

Variables it modifies are not cached in thread-local copies; reads and writes go directly to main memory.

In the Java memory model, there is main Memory, and each thread has its own memory (such as registers). For performance, a thread keeps a copy of the variable to be accessed in its own memory. In this case, the value of the same variable in memory in one thread may be inconsistent with the value in memory in another thread or in main memory at any given moment. Declaring a variable volatile means that it is subject to modification by other threads and therefore cannot be cached in thread memory.

Usage scenarios

You can use volatile variables instead of locks only in a limited number of cases. For volatile variables to provide ideal thread-safety, both conditions must be met:

1) Write operations to variables do not depend on the current value.

2) This variable is not included in invariants with other variables.

Volatile is best used when one thread writes and multiple threads read.

If multiple threads write concurrently, you still need to use locks or thread-safe containers or atomic variables instead.

synchronized

When it is used to decorate a method or a code block, it ensures that at most one thread is executing the code at a time.

  1. When two concurrent threads access the synchronized(this) block in the same object, only one thread can be executed at a time. Another thread must wait for the current thread to finish executing the code block before executing it.
  2. However, when one thread is executing a synchronized(this) block of an object, other threads can still access that object's non-synchronized methods and code.
  3. In particular, when one thread accesses a synchronized(this) block of an object, access to all other synchronized(this) blocks in the object is blocked by other threads.
  4. When a thread accesses a synchronized(this) block of an object, it acquires the object lock. As a result, access by other threads to all synchronized code parts of the object is temporarily blocked.

The differences

  1. Volatile is a variable modifier, while synchronized acts on a piece of code or method.
  2. Volatile simply synchronizes the value of a variable between thread memory and “main” memory; Synchronized synchronizes the values of all variables by locking and unlocking a monitor, and clearly synchronized consumes more resources than volatile.
  3. Volatile does not block threads; Synchronized can cause threads to block.
  4. Volatile guarantees visibility, but not atomicity; Synchronized guarantees atomicity and, indirectly, visibility because it synchronizes data in private and public memory.
  5. Volatile variables are not optimized by the compiler; Variables of the synchronized tag can be optimized by the compiler.

Thread safety involves atomicity and visibility, and Java synchronization mechanisms are built around these two aspects to ensure thread safety.

The main use of volatile is to let multiple threads perceive that an instance variable has been modified and obtain its latest value; that is, when multiple threads read a shared variable, they always get the newest write.

The keyword volatile makes the thread read the variable from shared memory each time, rather than from its private working memory, thus ensuring the visibility of the synchronized data. One caveat: if the new value of an instance variable depends on its current value (for example i++), volatile alone cannot guarantee thread safety.

The role of ThreadLocal

Instead of using various locking mechanisms to access variables, the idea of a ThreadLocal is to exchange space for time so that each thread can access its own copy of variables without interfering with each other. Reduce the complexity of passing some common variables between functions or components in the same thread.

The get function obtains the value of the ThreadLocal for the current thread; if the current thread has no value for that ThreadLocal yet, the initialValue function is called to produce the initial value. initialValue is protected, so typically we override it (or use ThreadLocal.withInitial) to supply an initial value. The set function sets this ThreadLocal's value for the current thread, and the remove function removes the bound value; in some cases remove must be called manually to prevent memory leaks.
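A minimal sketch of those three functions (`ThreadLocalDemo` is my name; `ThreadLocal.withInitial` is the Java 8 shortcut for overriding initialValue):

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy; withInitial supplies the initial value.
    private static final ThreadLocal<StringBuilder> LOCAL =
            ThreadLocal.withInitial(StringBuilder::new);

    static String runIn(String name) {
        final String[] out = new String[1];
        Thread t = new Thread(() -> {
            LOCAL.get().append(name);        // mutates this thread's copy only
            out[0] = LOCAL.get().toString();
            LOCAL.remove();                  // avoid leaks in pooled threads
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return out[0];
    }

    public static void main(String[] args) {
        System.out.println(runIn("A"));  // "A"
        System.out.println(runIn("B"));  // "B" - no cross-thread contamination
    }
}
```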

Producer and consumer patterns in Java

The producer-consumer pattern ensures that producers no longer produce objects when the buffer is full and consumers no longer consume objects when the buffer is empty. The implementation mechanism is to keep producers in a wait state when the buffer is full and consumers in a wait state when the buffer is empty. When a producer produces an object it wakes up a consumer, and when a consumer consumes an object it wakes up a producer.

Three implementations: Wait and notify, await and signal, and BlockingQueue.

  • Wait and notify
import java.util.LinkedList;

public class StorageWithWaitAndNotify {
    private final int MAX_SIZE = 10;
    private LinkedList<Object> list = new LinkedList<Object>();

    public void produce() {
        synchronized (list) {
            while (list.size() == MAX_SIZE) {
                System.out.println("Warehouse full: Production suspended");
                try {
                    list.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            list.add(new Object());
            System.out.println("A new product has been produced and the current inventory is:" + list.size());
            list.notifyAll();
        }
    }

    public void consume() {
        synchronized (list) {
            while (list.size() == 0) {
                System.out.println("Inventory at zero: Consumption paused");
                try {
                    list.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            list.remove();
            System.out.println("Consumed one product, current inventory is:" + list.size());
            list.notifyAll();
        }
    }
}

  • Await and signal
import java.util.LinkedList;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class StorageWithAwaitAndSignal {
    private final int                MAX_SIZE = 10;
    private       ReentrantLock      mLock    = new ReentrantLock();
    private       Condition          mEmpty   = mLock.newCondition();
    private       Condition          mFull    = mLock.newCondition();
    private       LinkedList<Object> mList    = new LinkedList<Object>();
    public void produce() {
        mLock.lock();
        while (mList.size() == MAX_SIZE) {
            System.out.println("Buffer is full. Suspend production.");
            try {
                mFull.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        mList.add(new Object());
        System.out.println("A new product has been produced and the current capacity is:" + mList.size());
        mEmpty.signalAll();
        mLock.unlock();
    }
    public void consume() {
        mLock.lock();
        while (mList.size() == 0) {
            System.out.println("Buffer is empty. Suspend consumption.");
            try {
                mEmpty.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        mList.remove();
        System.out.println("Consumed a product, the current capacity is:" + mList.size());
        mFull.signalAll();
        mLock.unlock();
    }
}

  • BlockingQueue
import java.util.concurrent.LinkedBlockingQueue;
public class StorageWithBlockingQueue {
    private final int                         MAX_SIZE = 10;
    private       LinkedBlockingQueue<Object> list     = new LinkedBlockingQueue<Object>(MAX_SIZE);
    public void produce() {
        if (list.size() == MAX_SIZE) {
            System.out.println("Buffer is full. Suspend production.");
        }
        try {
            list.put(new Object());
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("A product has been produced with a current capacity of:" + list.size());
    }
    public void consume() {
        if (list.size() == 0) {
            System.out.println("Buffer is empty. Suspend consumption.");
        }
        try {
            list.take();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Consumed a product, the current capacity is:" + list.size());
    }
}

Final, finally, Finalize difference

Final can modify classes, variables, and methods. A final class cannot be inherited; a final variable cannot be reassigned; a final method cannot be overridden.

Finally is a mechanism to ensure that important code will execute. Try-finally or try-catch-finally is usually used to close a file stream.

Finalize is a method in the Object class that is designed to ensure that an Object completes the collection of a specific resource before garbage collection. The Finalize mechanism is now deprecated and was marked deprecated in JDK 9.
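A small sketch of the finally guarantee (names mine): the finally block runs whether or not the try block throws, which is exactly why resource cleanup such as closing streams belongs there.

```java
public class FinallyDemo {
    // Builds a trace of which blocks ran: finally executes even though
    // the try block throws.
    static String trace() {
        StringBuilder sb = new StringBuilder();
        try {
            sb.append("try;");
            throw new RuntimeException("boom");
        } catch (RuntimeException e) {
            sb.append("catch;");
        } finally {
            sb.append("finally");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(trace());  // try;catch;finally
    }
}
```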

The singleton pattern in Java

There are several common implementations of the singleton pattern in Java: eager initialization ("hungry" style), double-checked lazy initialization, the static-inner-class singleton, and the enum singleton. Here we focus on double-checked locking and the static-inner-class implementation.

The double-checked lazy implementation:

public class SingleTon {
    // must be volatile; see below
    private static volatile SingleTon mInstance;

    private SingleTon() {
    }

    public static SingleTon getInstance() {
        if (mInstance == null) {
            synchronized (SingleTon.class) {
                if (mInstance == null) {
                    mInstance = new SingleTon();
                }
            }
        }
        return mInstance;
    }
}

The double-judged lazy singleton satisfies both lazy initialization and thread safety. Synchronized wraps code to realize thread safety and improves the efficiency of program execution through dual judgment. The important thing to note here is that singleton instances need to have volatile decorations, which can be problematic in multithreaded situations without them. The reason is that mInstance=new SingleTon() is not an atomic operation. It contains three operations:

  1. Allocate memory for the instance
  2. Call the SingleTon constructor to initialize member variables
  3. Point mInstance at the allocated memory (after this step mInstance is no longer null)

We know instruction reordering occurs in the JVM. The normal order is 1-2-3, but reordering can produce 1-3-2. Consider this situation: thread A executes steps 1 and 3 of the 1-3-2 order and is then suspended; thread B calls getInstance and reaches the outermost if check. Since the outermost check is not wrapped in synchronized, it executes immediately, and since thread A has already executed step 3, mInstance is no longer null, so thread B returns mInstance directly. But a complete initialization requires all three steps; thread A has only performed two of them, so thread B receives an object whose fields may not yet be initialized, which inevitably leads to errors.

The solution is to use volatile to modify mInstance. We know that volatile serves two purposes: to ensure visibility and to prevent instruction reordering. The key here is to prevent instruction reordering.

Singletons for static inner class implementations:

class SingletonWithInnerClass {
    private SingletonWithInnerClass() {
    }

    private static class SingletonHolder {
        private static SingletonWithInnerClass INSTANCE = new SingletonWithInnerClass();
    }

    public static SingletonWithInnerClass getInstance() {
        return SingletonHolder.INSTANCE;
    }
}

Lazy initialization is implemented because the loading of the external class does not cause the inner class to be loaded immediately, and the inner class is loaded only when getInstance is called. Since classes are only loaded once, and class loading is thread-safe, all our requirements are met. Singletons for static inner class implementations are also the most recommended approach.
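The enum implementation mentioned at the start of this section but not shown can be sketched like this (class name mine): the JVM guarantees exactly one instance, thread-safely, and an enum singleton also survives serialization and reflection attacks.

```java
public enum EnumSingleton {
    INSTANCE;   // the JVM guarantees exactly one instance, created thread-safely

    private int counter = 0;

    // Example of state carried by the singleton.
    public int next() {
        return ++counter;
    }

    public static void main(String[] args) {
        EnumSingleton a = EnumSingleton.INSTANCE;
        EnumSingleton b = EnumSingleton.INSTANCE;
        // Both references point at the same JVM-managed instance.
        System.out.println(a == b);        // true
        System.out.println(a.next());      // shared state: same counter via a or b
        System.out.println(b.next());
    }
}
```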

Java reference type differences, specific usage scenarios

In Java, there are four types of references: strong, soft, weak, and phantom.

  • Strong references: a strong reference is created with the new keyword; the garbage collector will not reclaim the object it points to even when memory runs out.
  • Soft references: implemented via SoftReference, with a shorter effective lifetime than strong references; the garbage collector reclaims softly referenced objects before memory is exhausted and an OOM would be thrown. A common use for soft references is a memory-sensitive cache that is reclaimed when memory runs short.
  • Weak references: implemented via WeakReference, with a shorter lifetime than soft references; a weakly referenced object is reclaimed as soon as the GC scans it. Weak references are also commonly used for memory-sensitive caches.
  • Phantom references: implemented via PhantomReference, with the shortest life cycle; they can be reclaimed at any time. If an object is referenced only by a phantom reference, we cannot access any of the object's properties or methods through that reference; its only purpose is to let us do something after finalization. A common use for phantom references is tracking objects being garbage collected: a notification is received (through the reference queue) before an object associated with a phantom reference is collected.
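A minimal runnable sketch of the reference API; the java.lang.ref types are real, while RefDemo and the byte[] payloads are illustrative stand-ins for cached data:

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

class RefDemo {
    // While the referent is still strongly reachable, get() returns it.
    static boolean reachableThroughWeak() {
        byte[] data = new byte[16];                         // strong reference
        WeakReference<byte[]> weak = new WeakReference<>(data);
        return weak.get() != null;                          // true: 'data' still pins it
    }

    // A soft reference behaves the same while memory is plentiful; it is only
    // cleared under memory pressure, which cannot be forced deterministically here.
    static boolean reachableThroughSoft() {
        byte[] data = new byte[16];
        SoftReference<byte[]> soft = new SoftReference<>(data);
        return soft.get() != null;
    }
}
```

Once the last strong reference (data) is dropped, a weak referent becomes eligible on the very next GC, while a soft referent survives until the JVM is short of memory.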

The difference between Exception and Error

Both Exception and Error derive from Throwable. In Java, only Throwable objects (and instances of its subclasses) can be thrown or caught; Throwable is the root type of Java's exception handling.

Exception and Error represent Java’s classification of different Exception situations. An Exception is an unexpected condition that can and should be caught during normal operation of a program and handled accordingly.

Error refers to the situation that is unlikely to occur under normal circumstances, and most of the errors will make the program in an abnormal and irreparable state. The common OutOfMemoryError is a subclass of Error, since it is not normal and therefore not convenient or necessary to catch.

Exception is also classified as checked Exception and unchecked Exception.

  • Checked Exceptions must be explicitly caught in code as part of the compiler’s checks.
  • Unchecked exceptions are runtime exceptions, such as null pointer exceptions and array index out of bounds, which are generally avoidable logic errors and are not checked by the compiler.
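A small sketch of the checked/unchecked distinction (class name and file path are illustrative; the path is assumed not to exist):

```java
import java.io.FileReader;
import java.io.IOException;

// The compiler forces us to handle the checked IOException, while the
// unchecked ArithmeticException compiles fine without any try/catch.
class ExceptionDemo {
    static String openMissingFile(String path) {
        try {
            new FileReader(path);                 // declared to throw IOException
            return "opened";
        } catch (IOException e) {                 // checked: this catch is mandatory
            return "checked: " + e.getClass().getSimpleName();
        }
    }

    static String divide(int a, int b) {
        try {
            return String.valueOf(a / b);         // may throw unchecked ArithmeticException
        } catch (ArithmeticException e) {         // optional: compiler never forced this
            return "unchecked: " + e.getClass().getSimpleName();
        }
    }
}
```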

A comparison of several common thread-related methods

  1. Thread.sleep(long millis): must be called by the current thread; it enters the TIMED_WAITING state but does not release any object lock. After millis elapse, the thread wakes automatically and enters the ready state. Purpose: gives other threads a chance to execute.
  2. Thread.yield(): must be called by the current thread. The current thread gives up its CPU time slice but does not release lock resources, changing from the running state to the ready state and letting the OS select a thread again. Purpose: lets threads of the same priority take turns, but turns are not guaranteed; in practice there is no guarantee that yield() yields, because the yielding thread may be selected again by the scheduler. Thread.yield() does not block. It is similar to sleep(), except that the caller cannot specify how long to pause.
  3. thread.join() / thread.join(long millis): the calling thread enters the WAITING/TIMED_WAITING state and does not release the locks it holds; it returns to the ready state after the target thread finishes executing or the millis time expires.
  4. thread.interrupt(): the current thread interrupts another thread by calling that thread's interrupt() method. If the target thread is blocked in the wait() family or the join() family of methods, it throws InterruptedException.
  5. Thread.interrupted(): called by the current thread to check whether it has been interrupted; it returns the interrupt status and resets the current thread's interrupt flag.
  6. thread.isInterrupted(): called on another thread to check whether that thread has been interrupted, without clearing the flag.
  7. obj.wait(): the current thread calls wait() on the object, releases the object's lock, and enters the object's wait queue. It is woken by notify()/notifyAll(), or wakes automatically after the timeout when wait(long timeout) is used.
  8. obj.notify(): wakes a single thread waiting on this object's monitor, chosen arbitrarily; notifyAll() wakes all threads waiting on this object's monitor.
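A small runnable sketch of items 3 and 4 above, join() and interrupt() (ThreadDemo is an illustrative class name, not from the interview):

```java
// join(): the calling thread waits for the worker to finish.
// interrupt(): interrupting a sleeping thread makes sleep() throw
// InterruptedException (also if the flag is already set when sleep() is called).
class ThreadDemo {
    static int runJoin() {
        final int[] result = {0};
        Thread worker = new Thread(() -> result[0] = 42);
        worker.start();
        try {
            worker.join();                        // WAITING until worker finishes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }

    static boolean interruptSleeper() {
        final boolean[] caught = {false};
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);             // TIMED_WAITING
            } catch (InterruptedException e) {    // interrupt() lands here
                caught[0] = true;
            }
        });
        sleeper.start();
        sleeper.interrupt();                      // safe even before sleep() is reached
        try {
            sleeper.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return caught[0];
    }
}
```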

Object.wait() / Object.notify() / Object.notifyAll()

Every Java object has a set of monitor methods, defined on java.lang.Object: wait(), wait(long timeout), notify(), and notifyAll(). Working together with the synchronized keyword, they implement the wait/notification pattern.

  1. To use an object's monitor methods, we must hold that object's lock:

```java
synchronized (obj) {
    // ... (1) work before waiting
    obj.wait();               // (2) or obj.wait(long millis)
    // ... (3) work after being woken
}
```

A thread acquires the obj lock and, after doing some work, finds that it needs to wait for some condition, so it calls obj.wait() and releases the obj lock. The difference between obj.wait() and obj.wait(long millis) is that obj.wait() waits indefinitely until obj.notify() or obj.notifyAll() wakes the thread, while obj.wait(long millis) is a timed wait after which the thread wakes and re-acquires the lock. obj.notify() wakes one arbitrary thread waiting on the object; that thread acquires the lock and its state goes from BLOCKED to RUNNABLE. obj.notifyAll() wakes all threads waiting on the object, and they compete for the lock: the thread that wins goes from BLOCKED to RUNNABLE while the others remain BLOCKED. The winning thread releases the lock when it completes section (3), and the remaining threads keep competing until all of them have run.

```java
synchronized (obj) {
    // ... (1) work
    obj.notify();             // (2) or obj.notifyAll()
}
```

A thread acquires the obj lock and, after working for some time, calls obj.notify() or obj.notifyAll(); when it releases the obj lock, the thread (or threads) waiting on obj is woken. The difference is that obj.notify() wakes one arbitrary thread waiting on obj (chosen by the JVM), while obj.notifyAll() wakes all threads waiting on obj.
  2. Always guard the wait with a loop:

```java
synchronized (obj) {
    while (!condition) {      // use while instead of if
        obj.wait();
    }
}
```

The while loop protects against the case where the WAITING thread is woken (by notify/notifyAll, or for some other reason) while the condition it waits for still does not hold, or held momentarily but was invalidated by another thread before this one ran; in that case the thread must call wait() again and stay suspended.
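The while-guarded wait/notify pattern can be exercised with a small runnable sketch: a one-slot mailbox (a hypothetical example class, not from the interview) where take() waits until put() deposits a value:

```java
// One-slot mailbox built on the object monitor: wait() releases the lock
// while waiting, and every wakeup re-checks the condition in a while loop.
class Mailbox {
    private Integer slot;                  // null means empty

    synchronized void put(int v) throws InterruptedException {
        while (slot != null) {             // while, not if: re-check after wakeup
            wait();
        }
        slot = v;
        notifyAll();                       // wake threads waiting on this monitor
    }

    synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        int v = slot;
        slot = null;
        notifyAll();
        return v;
    }
}
```

Here take() blocks until a producer thread calls put(), and put() blocks while the slot is still full.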

Strong, soft, weak, and phantom references and their usage scenarios

Strong references:

1. A normally created object is never collected by GC as long as a strong reference to it exists; the JVM would rather throw an OOM than reclaim it

Object obj = new Object();

2. To break the association between a strong reference and its object, assign the reference null, and the GC will reclaim the object when appropriate

3. Vector's clear() method does exactly this, assigning null to release its elements

Soft references

1. Reclaimed before a memory overflow would occur: collected when memory runs short, left alone while memory is sufficient

2. Usage scenario: caching while memory is plentiful to improve speed; when memory runs short, the JVM reclaims the cache automatically

3. Can be used together with a ReferenceQueue: if the object referenced by the soft reference is reclaimed by the JVM, the soft reference itself is added to the ReferenceQueue associated with it

Weak references

1. Reclaimed on every GC, whether or not memory is sufficient

2. Usage scenarios: a. the keys of ThreadLocalMap, to help prevent memory leaks; b. monitoring whether objects are about to be reclaimed

3. Weak references can be used together with a ReferenceQueue: if the object referenced by a weak reference is reclaimed by the JVM, the weak reference itself is added to the ReferenceQueue associated with it

Phantom reference

1. Collected on every garbage collection; used to monitor whether an object has been removed from memory

2. A phantom reference must be associated with a reference queue. When the garbage collector is about to reclaim an object and finds that it has a phantom reference, it adds the phantom reference to the associated reference queue

3. The program can determine whether the referenced object is about to be garbage collected by checking whether the phantom reference has been added to the reference queue; if it has, the program can take any necessary action before the object's memory is reclaimed
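One deterministic property from the description above can be shown in plain Java: a PhantomReference must be constructed with a ReferenceQueue, and its get() always returns null, even while the referent is still strongly reachable (PhantomDemo is an illustrative class name):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

// A phantom reference never exposes its referent: get() is defined to
// return null, so the object can only be tracked via the queue, not used.
class PhantomDemo {
    static boolean getAlwaysNull() {
        Object referent = new Object();                       // still strongly reachable
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom = new PhantomReference<>(referent, queue);
        return phantom.get() == null;                         // true even before any GC
    }
}
```

The enqueue-on-collection behavior itself is GC-driven and cannot be demonstrated deterministically in a short snippet.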

Android

What are cold startup and hot startup, the difference, how to optimize, use scenarios, etc.

App cold start: when an application starts, the system creates a new process and assigns it to the application; this startup mode is called a cold start (the application process did not exist in the background). Because the system creates a new process for it, the Application class is created and initialized first, then the MainActivity class is created and initialized (including a series of measure, layout, and draw passes), and finally the UI is displayed.

App hot start: when the app has already been opened but was sent to the background with the Back or Home button, starting it again is called a hot start (the app process already exists in the background). A hot start resumes from the existing process, so it skips the Application step and goes straight to the MainActivity (including measure, layout, and draw); it only needs to create and initialize a MainActivity, not the Application.

Cold start process

When you tap the app's startup icon, Android forks a new process from the Zygote process and assigns it to the app. It then, in order: creates and initializes the Application class; creates the MainActivity class; loads properties such as windowBackground from the Theme style onto the MainActivity and configures some attributes at the top of the Activity hierarchy; inflates the layout; and, after the onCreate/onStart/onResume methods have run, performs the final measure/layout/draw of the contentView and displays it on screen. A brief life-cycle view of a cold start:

Application constructor -> attachBaseContext() -> onCreate() -> Activity constructor -> onCreate() -> configure the background and other window attributes -> onStart() -> onResume() -> measure/layout/draw and display

Cold start optimization is mainly visual optimization: solving the white-screen problem to improve the user experience. Based on the cold start process above, the optimizations that can be made are as follows:

1. Reduce the workload of the onCreate() method

2. Do not let Application participate in business operations

3. …

Cold start process:

1. Tapping the app icon on the launcher sends a startActivity request to the system_server process via Binder IPC;

2. After receiving the request, the system_server process sends a request to Zygote to create a process;

3. Zygote forks a new child process, that is, the App process;

4. The App process sends an attachApplication request to the system_server process through Binder IPC;

5. After a series of preparations, the system_server process sends a scheduleLaunchActivity request to the App process through Binder IPC;

6. The App process's binder thread posts a LAUNCH_ACTIVITY message to the main thread through the Handler;

7. After receiving the message, the main thread creates the target Activity through the launch mechanism and calls the activity.onCreate() method;

8. At this point the App is officially launched and enters the Activity life cycle; after the onCreate/onStart/onResume methods and UI rendering, the app's main screen becomes visible.

Activity Startup process

1. Activity1 calls startActivity, which actually calls the execStartActivity method of the Instrumentation class; Instrumentation is used by the system to monitor the Activity and shadows it throughout its life cycle. (1-4)

2. A cross-process call binds to ActivityManagerService (AMS), which manages the Activity stack and notifies Activity1 to pause; Activity1 then notifies AMS when its pause completes. (5-29)

3. The Process.start() method is called in startProcessLocked in ActivityManagerService. Through the socket connection to Zygote, Zygote performs the fork natively to create the process for Activity2, and a cross-process call is then made into that new process. (30-36)

4. ApplicationThread is a binder object running in the binder thread pool; it contains class H, which inherits from Handler. The main thread initiates bindApplication; AMS does some configuration and then tells the main thread to bind the ApplicationThread, which delivers Activity2's information to the main thread via the H object. The message sent is EXECUTE_TRANSACTION, and its body is a ClientTransaction, here a LaunchActivityItem. Once the main thread has Activity2's information, it calls the newActivity method of the Instrumentation class, which creates the Activity2 instance through a ClassLoader. (37-40)

5. Tell Activity2 to performCreate. (41 – Finally)

Note: everything now goes through EXECUTE_TRANSACTION. A TransactionExecutor executes each ClientTransaction, and a ClientTransaction contains various ClientTransactionItems, such as PauseActivityItem, LaunchActivityItem, StopActivityItem, ResumeActivityItem, DestroyActivityItem, and so on. The execute methods of these items call the corresponding handleXxx methods, such as handlePauseActivity and handleLaunchActivity, which notify the corresponding Activity to perform its life-cycle step.

Activity Lifecycle

1. Start the Activity: the system calls onCreate, onStart, and onResume to start the Activity.

2. The current Activity is overwritten by another Activity or the screen is locked: The system calls the onPause method to pause the Activity.

3. The current Activity returns to the foreground or unlock screen after being overwritten: the system calls onResume to start the Activity again.

4. The current Activity navigates to a new Activity, or returns to the Home screen via the Home button: the system calls onPause and then onStop, and the Activity enters the stopped state.

5. The user steps back to the Activity: the system calls the onRestart method, then the onStart method, and finally the onResume method to start the Activity again.

6. The current Activity was covered or invisible in the background (as in steps 2 and 4), the system ran out of memory and killed it, and the user then returns to it: onCreate, onStart, and onResume are called again and the Activity re-enters the running state.

7. The user exits the current Activity: the system calls onPause, onStop, and onDestroy to end the current Activity.

Activity Four startup modes

Standard mode: the default when nothing is set in the Manifest; standard mode creates a new Activity instance on the stack every time.

SingleTop: Compared with Standard, top of stack reuse can effectively reduce the resource consumption of repeated activity creation. However, it depends on the specific situation and cannot be generalized.

SingleTask: in-stack reuse; at most one instance of the activity exists in the task stack. If an instance already exists in the stack and the activity is started again, the system pops the activities above it off the stack and reuses the existing instance.

SingleInstance: a system-wide singleton; only one instance exists in the entire system, and it lives alone in its own task stack.

With singleTop, singleTask, and singleInstance, if an Activity instance already exists and is reused, the system does not call onCreate and onStart again; instead it calls onNewIntent(Intent intent).


LaunchMode examples

Standard: the default mode; anything not configured in the Manifest

SingleTop: login page, WXPayEntryActivity, WXEntryActivity, pages opened from the push notification bar

SingleTask: the logical entry of a module: main page (Fragment container Activity), WebView page, scan page; in e-commerce: shopping screen, order confirmation screen, payment screen

SingleInstance: system applications such as the Launcher, the lock screen, and the incoming-call screen

Activity life cycle when switching between portrait and landscape, and the View life cycle

  • Without configChanges configured: the life cycle runs again on every switch between portrait and landscape
  • With configChanges configured: it must be set to android:configChanges="orientation|screenSize"; then the life-cycle methods are not re-run, and only the onConfigurationChanged method is called back. Note that if configChanges is not configured, or if it does not include both values, the life-cycle methods run again and onConfigurationChanged is not called back.
  • In addition, onSaveInstanceState() and onRestoreInstanceState() are called when the life cycle restarts because a resource-related system configuration changed or resources ran short. For example, when the screen rotates, the current Activity is destroyed: onSaveInstanceState is called before onStop to save data, and onRestoreInstanceState is called after onStart when the Activity is recreated. The Bundle is passed to both onCreate (which does not necessarily receive data) and onRestoreInstanceState (which always does). When the user or the programmer destroys the Activity deliberately, for example via finish() in code or the Back key, these callbacks are not invoked; they run only to save interface state when the system may need to recreate it.

Five kinds of process

Highest: foreground process

The foreground process is the most important process in the Android system and is the process with which the user is interacting.

Second highest: visible process

Visible process refers to the part of the program interface that is visible to the user but does not interact with the user in the foreground.

Third highest: service process

A process that contains a started service is a service process. A service has no user interface and does not interact directly with users, but can run in the background for a long time and provide important functions that users care about.

Fourth highest: background processes

A process is a background process if it contains no services already started and no activities visible to the user.

Fifth highest: empty process

An empty process is one that does not contain any active components. It is the first to be killed when system resources are tight.

Differences between startService and bindService, life cycle, and usage scenarios

How startService differs from bindService

StartService: onCreate -> onStartCommand -> onDestroy. If startService is called several times, onCreate is not executed again, but onStartCommand is executed each time. Once started with startService, the Service keeps running until stopService is called.

BindService: onCreate -> onBind -> onUnbind -> onDestroy. Its life cycle follows the caller's: when the caller no longer needs the Service it must unbind, and once all bindings are removed the system destroys the Service. When the connection is established, the caller's onServiceConnected callback is invoked with the object returned from onBind, through which the caller communicates with the Service.

Usage scenarios

If you want to start a background service for a long period of time, use startService

If it is only for short use, use bindService.

If you want to start a background service that runs for a long period and also need to interact with it, you can use both, or use startService + Broadcast/EventBus.

If both startService and bindService are used, note the following when ending the service:

  • To terminate a Service, both unbindService and stopService must be called.

IntentService, by the way, differs from a plain Service in that it encapsulates a worker thread, which means the code inside an IntentService runs on a child thread.


What are the advantages of IntentService in Android

IntentService is a Service that handles asynchronous requests and is started with context.startService(Intent). To use it, simply subclass IntentService and override the onHandleIntent method, which receives an Intent object; the service stops itself when appropriate (usually when the work is finished). All requests are processed on a single worker thread, one at a time, in turn (without blocking the main thread).

It is a message-based service. Each time the service is started, instead of processing the work immediately, it first creates the corresponding Looper and Handler and adds a Message carrying the caller's Intent to the MessageQueue. When the Looper dequeues the Message, the Intent object is handed to your handler via onHandleIntent((Intent) msg.obj). After processing, the service stops itself, meaning the lifetime of the service matches the task being handled. This makes the class well suited to download tasks: when the download finishes, the service exits on its own.

How many ways are there to communicate between processes

AIDL, broadcast, file, socket, pipe


Broadcast the difference between static and dynamic registration

  1. Dynamically registered broadcasts are not resident: the receiver follows the registering component's life cycle; note that the broadcast receiver must be unregistered before the Activity is destroyed. Statically registered broadcasts are resident: even when the application is closed, the system launches it automatically when a matching broadcast arrives.
  2. When the broadcast is ordered: the highest priority is received first (static or dynamic). For broadcast receivers of the same priority, dynamic takes precedence over static
  3. The same priority of the same type of broadcast receiver, static: scan first before scan later, dynamic: register first before register later.
  4. When the broadcast is a normal (default) broadcast: priority is ignored and dynamic broadcast receivers take precedence over static broadcast receivers.

Use of Android performance optimization tools (it is suggested to read this question together with the Android performance optimization questions below)

Common performance optimization tools for Android include Android Profiler, LeakCanary, and BlockCanary that come with Android Studio

The Android Profiler can detect performance problems in three areas: CPU, MEMORY, and NETWORK.

LeakCanary is a third-party memory-leak detection library. Once integrated into our project, LeakCanary automatically detects memory leaks while the application runs and reports them to us.

BlockCanary is likewise a third-party library, for detecting UI jank. After integration, BlockCanary automatically detects UI stalls while the application runs and reports them to us.

Class loaders in Android

  • PathClassLoader can only load APKs that are already installed on the system
  • DexClassLoader can load jar/apk/dex files, including uninstalled APKs from the SD card

What are the categories of animation in Android, and what are their characteristics and differences

Android Animation can be roughly divided into three categories: frame Animation, Tween Animation and Property Animation.

  • Frame animation: configure a set of images via XML and play them in sequence. Rarely used.
  • Tween animation: roughly four kinds of operations: rotation, alpha, scale, and translation. Rarely used.
  • Property animation: the most commonly used type of animation, and more powerful than tween animation. There are roughly two ways to use property animations: ViewPropertyAnimator and ObjectAnimator. The former works well for general-purpose animations such as rotation, translation, scale, and alpha; simply call View.animate() to get a ViewPropertyAnimator and animate from there. The latter suits animating our own custom controls: first add the corresponding getXXX() and setXXX() getter and setter methods for the property in the custom View, calling invalidate() inside the setter after changing the property to refresh the View's drawing; then call ObjectAnimator.ofXXX() for the property type to obtain an ObjectAnimator, and call its start() method to begin the animation.

The difference between tween animation and attribute animation:

  • A tween animation is the parent container repeatedly redrawing the view, so it merely looks as though the view has moved; the view itself has not actually changed.
  • A property animation really changes the view by continuously changing the values of the properties inside it.

TimeInterpolator (Interpolator)

Effect: Calculates the percentage change in the current property value based on the percentage time elapsed

Existing interpolators in the system:

  • LinearInterpolator: constant-speed animation.
  • AccelerateDecelerateInterpolator (accelerate-decelerate interpolator): the animation is slow at both ends and fast in the middle.
  • DecelerateInterpolator: the animation gets slower and slower.

TypeEvaluator:

Effect: Calculates the value of the changed attribute based on the percentage of the current attribute change.

Existing estimators of the system:

  • IntEvaluator: for integer properties
  • FloatEvaluator: for floating-point properties
  • ArgbEvaluator: for color properties
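The estimator's job (for FloatEvaluator, value = start + fraction * (end - start)) can be sketched in plain Java. MiniEvaluator and MiniFloatEvaluator below are hypothetical stand-ins for the framework's TypeEvaluator and FloatEvaluator, not the Android classes themselves:

```java
// The interpolator turns elapsed time into a fraction; the evaluator turns
// that fraction into a concrete property value between start and end.
interface MiniEvaluator<T> {
    T evaluate(float fraction, T start, T end);
}

class MiniFloatEvaluator implements MiniEvaluator<Float> {
    @Override
    public Float evaluate(float fraction, Float start, Float end) {
        return start + fraction * (end - start);   // linear mapping of the fraction
    }
}
```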

Handler mechanism

When it comes to Handler, several classes are closely related to it: Message, MessageQueue, and Looper.

  • Message. There are two member variables of interest in Message: target and callback.
  • target is the Handler object that sent the message.
  • callback is the Runnable task passed in when handler.post(runnable) is called. The essence of post is to create a Message and assign the runnable we passed in to that Message's callback member variable.
  • MessageQueue. The MessageQueue is, obviously, the queue that holds the messages. The method of interest in MessageQueue is next(), which returns the next message to be processed.
  • Looper. The Looper message poller is the core that connects the Handler to the message queue. To create a Handler in a thread, we must first create a Looper via Looper.prepare() and then call Looper.loop() to start polling. Let's focus on these two methods.
  • prepare(). This method does two things: first it calls sThreadLocal.get() to fetch the current thread's Looper; if that is not null, a RuntimeException is thrown, meaning one thread cannot create two Loopers. If it is null, it moves to the second step: create a Looper and bind it to the current thread via sThreadLocal.set(looper). Note that the message queue is actually created in the Looper constructor.
  • loop(). This method starts the polling of the whole event mechanism. In essence it starts an endless loop that fetches messages via MessageQueue's next() method. When it gets a message, it calls msg.target.dispatchMessage() to process it. As mentioned when discussing Message, msg.target is the handler that sent the message, so this line essentially calls handler.dispatchMessage().
  • Handler. With all this groundwork laid, here comes the most important part. The Handler analysis focuses on two parts: sending messages and processing messages.
  • Sending messages. Besides sendMessage there are other ways to send a message, such as sendMessageDelayed, post, and postDelayed, but they all essentially call sendMessageAtTime, which in turn calls enqueueMessage. The enqueueMessage method does two things: it binds the message to the current handler via msg.target = this, and then enqueues it via queue.enqueueMessage.
  • Processing messages. The core of message processing is the dispatchMessage() method. Its logic is simple: first check whether msg.callback is null; if not, run that runnable. If it is null, call our handleMessage method.
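Since the real Handler machinery is Android-only, the flow above can be mimicked with a hypothetical plain-JVM mock (MiniMessage, MiniLooper, and MiniHandler are invented names): post wraps a Runnable into a message, send binds msg.target, and loop() blocks on the queue and calls dispatchMessage:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal stand-in for Message: target + callback, as described above.
class MiniMessage {
    MiniHandler target;        // the handler that sent this message
    Runnable callback;         // set by post(runnable)
}

class MiniLooper {
    final BlockingQueue<MiniMessage> queue = new LinkedBlockingQueue<>();
    private static final MiniMessage QUIT = new MiniMessage();

    void loop() throws InterruptedException {
        while (true) {
            MiniMessage msg = queue.take();       // like MessageQueue.next(): blocks when empty
            if (msg == QUIT) return;
            msg.target.dispatchMessage(msg);      // dispatch back to the sending handler
        }
    }

    void quit() {
        queue.offer(QUIT);
    }
}

class MiniHandler {
    final MiniLooper looper;

    MiniHandler(MiniLooper looper) {
        this.looper = looper;
    }

    void post(Runnable r) {                       // post wraps the runnable in a message
        MiniMessage m = new MiniMessage();
        m.callback = r;
        send(m);
    }

    void send(MiniMessage m) {
        m.target = this;                          // enqueueMessage: bind message to handler
        looper.queue.offer(m);
    }

    void dispatchMessage(MiniMessage m) {
        if (m.callback != null) {
            m.callback.run();                     // callback first...
        } else {
            handleMessage(m);                     // ...otherwise our override
        }
    }

    void handleMessage(MiniMessage m) {
    }
}
```

Running loop() on one thread while another thread posts reproduces the essential Handler pattern: work created anywhere is executed on the looper's thread.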

Handler

www.yuque.com/docs/share/…

What happens if a child thread updates the UI, why don't we let child threads update the UI, and why is there no error if a child thread updates the UI in onCreate

ActivityThread.handleResumeActivity() initializes the ViewRootImpl; after that, every UI update goes through requestLayout(), which runs checkThread() and throws if the calling thread is not the one that created the view hierarchy. A child thread updating the UI in onCreate often does not crash because at that point the ViewRootImpl has not been created yet, so there is no thread check.

```java
// Excerpt from ActivityThread.handleResumeActivity() (trimmed):
if (r.window == null && !a.mFinished && willBeVisible) {
    r.window = r.activity.getWindow();
    View decor = r.window.getDecorView();
    decor.setVisibility(View.INVISIBLE);
    ViewManager wm = a.getWindowManager();
    WindowManager.LayoutParams l = r.window.getAttributes();
    a.mDecor = decor;
    l.type = WindowManager.LayoutParams.TYPE_BASE_APPLICATION;
    l.softInputMode |= forwardBit;
    if (r.mPreserveWindow) {
        a.mWindowAdded = true;
        r.mPreserveWindow = false;
        // Normally the ViewRoot sets up callbacks with the Activity
        // in addView->ViewRootImpl#setView. If we are instead reusing
        // the decor view we have to notify the view root that the
        // callbacks may have changed.
        ViewRootImpl impl = decor.getViewRootImpl();
        if (impl != null) {
            impl.notifyChildRebuilt();
        }
    }
    if (a.mVisibleFromClient) {
        if (!a.mWindowAdded) {
            a.mWindowAdded = true;
            wm.addView(decor, l);   // addView creates the ViewRootImpl
        } else {
            // ...
```

Android Performance Optimization

In my opinion, performance optimization in Android can be divided into the following aspects: memory optimization, layout optimization, network optimization, installation package optimization.

  • Memory optimization: The next question is.
  • Layout optimization: the essence of layout optimization is to reduce the depth of the View hierarchy. Common layout optimization schemes are as follows
  • Prefer a RelativeLayout over nested LinearLayouts; a flatter RelativeLayout reduces the View hierarchy
  • Extract commonly used layout components and reuse them with the <include> tag
  • Load rarely used layouts on demand with the <ViewStub> tag
  • Use the <merge> tag to reduce the nesting level of a layout
  • Network optimization: Common network optimization schemes are as follows
  • Minimize network requests and merge as many as you can
  • Avoid DNS resolution. Searching by domain name may take hundreds of milliseconds and there may be the risk of DNS hijacking. You can add dynamic IP address update based on service requirements, or switch to domain name access when IP address access fails.
  • Large amounts of data are loaded in pagination mode
  • Network data transmission uses GZIP compression
  • Add network data to the cache to avoid frequent network requests
  • When uploading images, compress them as necessary
  • Installation package optimization: The core of installation package optimization is to reduce the volume of APK. Common solutions are as follows
  • Obfuscation can reduce APK volume to some extent, but the actual effect is minimal
  • Reduce unnecessary resource files in the application, such as pictures, and try to compress pictures without affecting the effect of the APP, which has a certain effect
  • When shipping .so libraries, keep the armeabi-v7a version first and delete the other ABI versions. The reason is that, as of 2018, the v7 libraries satisfy most of the market; phones from eight or nine years ago might not be covered, but we no longer need to support such old devices. In practice the effect on APK size is significant: if you use many .so libraries, say 10 MB in total per ABI, then keeping only the v7 version and deleting the armeabi and v8 libraries cuts the total size by about 20 MB.

Android Memory Optimization

In my opinion, memory optimization for Android has two parts: avoiding memory leaks and expanding available memory.

The essence of a memory leak is that a longer-lived object holds a reference to a shorter-lived object, preventing the shorter-lived object from being collected.

Common memory leaks
  • Memory leaks caused by the singleton pattern. The most common example is creating a singleton that takes a Context and passing in an Activity Context. Because of its static nature, the singleton lives from class loading until the application exits, so even after the Activity finishes, the singleton still holds a reference to it, causing a leak. The fix is simple: pass the Application Context instead of an Activity Context.
  • Memory leaks caused by static variables. Static variables live in the method area, with a lifetime from class loading until the program ends — a very long lifetime. The most common example is creating a static variable inside an Activity that requires the Activity's `this` reference. Even after the Activity calls `finish()`, the static variable lives for roughly the whole application life cycle and keeps holding the Activity reference, so the Activity leaks.
  • Memory leaks caused by non-static inner classes, which implicitly hold a reference to their outer class. The most common example is using a Handler or Thread in an Activity: one created as a non-static inner class holds a reference to the Activity while a delayed operation is pending, so finishing the Activity before the delay fires leaks it. There are two solutions: use a static inner class that holds the Activity through a weak reference, or call `handler.removeCallbacksAndMessages(null)` in the Activity's `onDestroy()` to cancel the pending messages.
  • Memory leaks caused by resources that are not closed in time. Common examples: data streams that are not closed promptly, Bitmaps that are not recycled promptly, and so on.
  • Third-party libraries that are not unregistered in time. Many libraries provide register/unregister pairs, the most common being EventBus: as we all know, you register in onCreate and unregister in onDestroy. If you forget to unregister, EventBus — a singleton — holds the Activity reference forever, causing a leak. RxJava is similar: after delayed work with the Timer operator, call `disposable.dispose()` in `onDestroy()` to cancel the subscription.
  • Memory leaks caused by property animations. A common example is exiting an Activity while a property animation is still running: the animated View keeps a reference to the Activity. The fix is to call the animation's `cancel()` method in `onDestroy()`.
  • Memory leaks caused by WebView. WebView is special in that it can leak even after its `destroy()` method is called. The most reliable workaround is to host the WebView Activity in a separate process and kill that process when the Activity ends. As I recall, Alibaba's DingTalk runs its WebView in a separate process, presumably for the same reason.
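As an illustration of the "static inner class + weak reference" fix described above, here is a minimal plain-Java sketch. `Screen` and `SafeTask` are hypothetical stand-ins for an Activity and a static inner Handler/Runnable; no Android classes are involved:

```java
import java.lang.ref.WeakReference;

// Stand-in for an Activity
class Screen {
    String title = "home";
}

// Stand-in for a static inner class Handler/Runnable: it holds the outer
// object only through a WeakReference, so it does not keep it alive.
class SafeTask {
    private final WeakReference<Screen> ref;

    SafeTask(Screen screen) {
        this.ref = new WeakReference<>(screen);
    }

    String run() {
        Screen s = ref.get();              // may be null once the Screen is collected
        return (s == null) ? "skipped" : "updated " + s.title;
    }
}
```

If the `Screen` has already been garbage-collected when the delayed task finally runs, `ref.get()` returns null and the task simply skips its work instead of leaking the object.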

Expand the memory

Why expand memory? In real development we often have to use many third-party commercial SDKs, and their quality varies. SDKs from big vendors may leak little, but smaller SDKs are not always reliable. Since we cannot change their code, the practical fallback is to give the app more memory.

There are two common ways to expand memory: one is to add the `android:largeHeap="true"` attribute under the `<application>` tag in the manifest file; the other is to start multiple processes within one application to increase its total memory. The second method is actually quite common: for example, a push SDK I once used ran its Service in a separate process.

Android memory optimization, in short, comes down to increasing supply and cutting consumption: increasing supply means expanding memory, cutting consumption means avoiding memory leaks.

Binder mechanism

In Linux, processes are independent of each other in order to avoid interference with other processes. There is also user space and kernel space within a process. The isolation here is divided into two parts, inter-process isolation and intra-process isolation.

If there is isolation between processes, there is interaction. Interprocess communication is IPC, and user space and kernel space communication is system call.

In order to ensure the independence and security of Linux, processes cannot directly access each other. Android is based on Linux, so it also needs to solve the problem of interprocess communication.

There are many ways for Linux processes to communicate, such as pipes, sockets, and so on. So why does Android use Binder for interprocess communication instead of an existing Linux mechanism?

There are two main considerations in the existing methods: performance and safety

  • Performance. Performance requirements on mobile devices are stricter. Traditional Linux IPC such as pipes and sockets copies data twice, while Binder needs only one copy, so Binder outperforms traditional IPC mechanisms.
  • Security. Traditional Linux process communication does not involve authentication between the communicating parties, which can cause some security issues. The Binder mechanism incorporates authentication to improve security.

Binder is based on the CS architecture and has four main components.

  • The Client. Client process.
  • Server. Server process.
  • ServiceManager. Provides the ability to register, query, and return proxy service objects.
  • Binder. Mainly responsible for establishing Binder connections between processes, data interaction between processes and other low-level operations.

The main flow of Binder mechanism is as follows:

  • The server uses Binder drivers to register our services with the ServiceManager.
  • Clients use Binder drivers to query services registered with the ServiceManager.
  • The ServiceManager returns a proxy object for the service to the client via the Binder driver.
  • Having received the proxy object, the client can communicate with the server process through it.
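The four roles can be illustrated with a plain-Java analogy (all names here are hypothetical; a real Binder call crosses process boundaries through the Binder driver, which a single-process sketch cannot show):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for an AIDL interface
interface IRemoteService {
    int add(int a, int b);
}

// The "server": the process that actually implements the service
class ServerService implements IRemoteService {
    public int add(int a, int b) {
        return a + b;
    }
}

// Stand-in for the ServiceManager: registers services and hands out proxies
class ServiceManagerStub {
    private static final Map<String, IRemoteService> services = new HashMap<>();

    static void addService(String name, IRemoteService service) {
        services.put(name, service);
    }

    static IRemoteService getService(String name) {
        IRemoteService target = services.get(name);
        // The client never sees the real object, only a proxy that forwards
        // calls — in real Binder, the forwarding goes through the driver.
        return (a, b) -> target.add(a, b);
    }
}
```

The client only ever holds the proxy returned by `getService()`, mirroring how a Binder client talks to a remote service through a `BinderProxy` rather than a direct object reference.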

The principle of LruCache

The core principle of LruCache is the effective use of LinkedHashMap, which has a LinkedHashMap member variable inside. There are four methods worth focusing on: constructors, GET, PUT, and trimToSize.

  • Constructor: LruCache's constructor does two things: it sets maxSize and creates a LinkedHashMap. Note that LruCache sets the LinkedHashMap's accessOrder to true. This flag controls iteration order: true means iterate in access order, false means iterate in insertion order. The default is false, since insertion order is the usual need, but LruCache needs access order, so it sets accessOrder to true explicitly.
  • Get method: essentially calls LinkedHashMap's get; since accessOrder is true, each call to get moves the accessed element to the tail of the LinkedHashMap.
  • The put method: essentially calls the LinkedHashMap put method. Due to the nature of LinkedHashMap, each call to the Put method also puts the new element at the end of the LinkedHashMap. After the addition, the trimToSize method is called to ensure that the added memory does not exceed maxSize.
  • TrimToSize method: inside trimToSize, a while(true) loop keeps deleting elements from the head of the LinkedHashMap until the held size drops below maxSize, then breaks out of the loop.

In fact, we can summarize here, why is this algorithm called least recently used algorithm? The principle is simple: each of our put or get is treated as a visit, and due to the nature of LinkedHashMap, the elements accessed are placed at the end of each visit. When our memory reaches the threshold, the trimToSize method is triggered to remove the element at the head of the LinkedHashMap until the current memory is less than maxSize. The reason for removing the front element is obvious: the most recently accessed elements are placed at the tail, and the elements at the front must be the least recently used elements, so they should be removed first when memory is low.
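The four methods above can be condensed into a minimal plain-Java sketch. For simplicity this counts entries instead of bytes, and `MiniLruCache` is an illustrative name, not Android's actual LruCache:

```java
import java.util.LinkedHashMap;

// Minimal LRU cache built on LinkedHashMap, mirroring LruCache's core idea
class MiniLruCache<K, V> {
    private final int maxSize;
    // accessOrder = true: iteration goes from least- to most-recently accessed
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(0, 0.75f, true);

    MiniLruCache(int maxSize) {
        this.maxSize = maxSize;
    }

    V get(K key) {
        return map.get(key);        // get() moves the entry to the tail
    }

    void put(K key, V value) {
        map.put(key, value);        // put() also places the entry at the tail
        trimToSize();
    }

    private void trimToSize() {
        while (map.size() > maxSize) {
            // Evict from the head: the least recently used entry
            K eldest = map.keySet().iterator().next();
            map.remove(eldest);
        }
    }
}
```

With maxSize = 2, putting a and b, touching a, then putting c evicts b — exactly the "head is least recently used" behavior described above.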

DiskLruCache principle

Design an asynchronous loading frame for images

To design a picture loading framework, we must use the idea of three level cache of picture loading. The three-level cache is divided into memory cache, local cache and network cache.

Memory cache: The Bitmap is cached in memory, which is fast but has small memory capacity. Local cache: Caches images to files, which is slower but larger. Network cache: Get pictures from the network. The speed is affected by the network.

If we were designing an image loading framework, the process would look something like this:

  • After getting the image URL, first look for the Bitmap in the memory cache; if found, load it directly.
  • If it is not found in memory, search the local cache; if found there, load it directly.
  • If it is not found locally either, download the image from the network; after downloading, load it and write it into both the memory cache and the local cache.

Here are some basic concepts. If it is a specific code implementation, there are probably several aspects of the file:

  • First we need to determine our memory cache, which is usually LruCache.
  • Determine the local cache, usually DiskLruCache. Note that the cached file is usually named with the MD5 hash of the URL, so the file name does not expose the image's URL directly.
  • Once the memory and local caches are determined, create a class, say MemeryAndDiskCache (the name does not matter), that contains the LruCache and DiskLruCache mentioned above. In this class, define two methods, getBitmap and putBitmap, for retrieval and caching. The internal logic is simple: getBitmap checks memory first, then the local cache; putBitmap writes to memory first, then to the local cache.
  • With the cache policy class in place, create an ImageLoader class with two essential methods: one displays images, `displayImage(url, imageView)`; the other fetches them from the network, `downloadImage(url, imageView)`. The display method should first call `imageView.setTag(url)` to bind the URL to the ImageView; this avoids the image-mismatch bug caused by view reuse when loading network images in a list. It then queries MemeryAndDiskCache: on a hit, load directly; on a miss, call the network method. There are many ways to fetch an image from the network; I generally use OkHttp + Retrofit. When the image arrives, compare `imageView.getTag()` with the URL and load the image only if they match; this prevents mismatches during asynchronous loading in lists. Finally, hand the downloaded image to MemeryAndDiskCache for caching.
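A toy version of the cache side of this design might look like the following, with Strings standing in for Bitmaps and an in-memory map standing in for DiskLruCache (all names are illustrative, not a real framework API). The `md5()` helper shows the usual file-naming scheme:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Two-level cache sketch: memory keyed by URL, "disk" keyed by md5(url)
class TwoLevelCache {
    private final Map<String, String> memory = new HashMap<>();
    private final Map<String, String> disk = new HashMap<>();

    // Hash the URL so the cache file name does not expose the URL itself
    static String md5(String url) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(url.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    String getBitmap(String url) {
        String bmp = memory.get(url);               // memory first
        if (bmp == null) {
            bmp = disk.get(md5(url));               // then disk
            if (bmp != null) memory.put(url, bmp);  // promote back to memory
        }
        return bmp;
    }

    void putBitmap(String url, String bmp) {
        memory.put(url, bmp);                       // memory first
        disk.put(md5(url), bmp);                    // then disk
    }
}
```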

Event distribution mechanism in Android

When our finger touches the screen, the event actually goes through a process like Activity -> ViewGroup -> View to the View that finally responds to our touch event.

When it comes to event distribution, three methods are essential: dispatchTouchEvent(), onInterceptTouchEvent(), and onTouchEvent(). Next, let's outline the event distribution mechanism along the Activity -> ViewGroup -> View path.

When our finger touches the screen, an Action_Down event is triggered, and the Activity on the current page responds first, going to the Activity’s dispatchTouchEvent() method. The logic behind this method is simply this:

  • Call `getWindow().superDispatchTouchEvent(event)`.
  • If the previous step returns true, return true; otherwise return the Activity's own `onTouchEvent()`. The logic is easy to understand: if `getWindow().superDispatchTouchEvent()` returns true, the event has already been handled and the Activity's own onTouchEvent is not needed; otherwise the event was not handled and the Activity must handle it itself by calling its own onTouchEvent.

The getWindow() method, as we all know, returns an object of type Window. In Android, PhoneWindow is the only implementation class of Window, so this essentially calls PhoneWindow's superDispatchTouchEvent().

PhoneWindow's superDispatchTouchEvent() in turn calls `mDecor.superDispatchTouchEvent(event)`. mDecor is the DecorView, a subclass of FrameLayout, and DecorView's superDispatchTouchEvent() calls `super.dispatchTouchEvent()`. The point should be obvious by now: DecorView is a FrameLayout, and FrameLayout is a ViewGroup, so this essentially calls the ViewGroup's dispatchTouchEvent().

Now that our event has been passed from the Activity to the ViewGroup, let’s examine the event handling methods in the ViewGroup.

The logic in dispatchTouchEvent() in ViewGroup looks like this:

  • It calls `onInterceptTouchEvent()` to check whether the current ViewGroup intercepts the event. By default a ViewGroup does not intercept.
  • If it intercepts, it returns its own `onTouchEvent()`.
  • If it does not intercept, the result depends on `child.dispatchTouchEvent()`: if that returns true, return true; otherwise return the ViewGroup's own `onTouchEvent()`. This is how unhandled events propagate back up.

In general, a ViewGroup's onInterceptTouchEvent() returns false, that is, it does not intercept. An important concept here is the event sequence: a Down event, a series of Move events, and finally an Up event form one complete sequence, corresponding to a finger pressing down, moving, and lifting. If a ViewGroup intercepts the Down event, the rest of the sequence is handed to the ViewGroup's onTouchEvent. If a ViewGroup intercepts an event other than Down, it sends an Action_Cancel to the child View that had been handling the events, notifying it that the ViewGroup has taken over the rest of the sequence so the child can restore its previous state.

Here is a common example: a RecyclerView contains many Buttons. We press a Button, slide some distance, and release; the RecyclerView scrolls along, and the Button's click event does not fire. When we press, the Button receives the Action_Down, and normally the rest of the sequence would be handled by the Button. But once we slide a little, the RecyclerView recognizes a scroll, intercepts the event sequence, and handles it in its own onTouchEvent() method, which appears on screen as list scrolling. Since the Button is still in the pressed state, the RecyclerView sends it an Action_Cancel during interception so it can restore its normal state.

Event distribution eventually reaches the View's dispatchTouchEvent(). A View has no onInterceptTouchEvent(), which is easy to understand: a View is not a ViewGroup and contains no child views, so there is no concept of interception. Ignoring a few details, a View's dispatchTouchEvent() simply returns its own onTouchEvent(). If onTouchEvent() returns true, the event is consumed; otherwise the unhandled event propagates back up until some View handles it, or until it terminates at the Activity's onTouchEvent().
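The flow above can be condensed into a toy model. These are illustrative classes, not the real framework code (which also tracks touch targets and full event sequences):

```java
import java.util.ArrayList;
import java.util.List;

// A leaf view: either consumes the event in onTouchEvent() or not
class SimpleView {
    final boolean handles;

    SimpleView(boolean handles) {
        this.handles = handles;
    }

    boolean dispatchTouchEvent() {
        return onTouchEvent();          // a View has nothing to intercept
    }

    boolean onTouchEvent() {
        return handles;
    }
}

// A container: may intercept, otherwise offers the event to its children first
class SimpleViewGroup extends SimpleView {
    final List<SimpleView> children = new ArrayList<>();
    final boolean intercepts;

    SimpleViewGroup(boolean handles, boolean intercepts) {
        super(handles);
        this.intercepts = intercepts;
    }

    boolean onInterceptTouchEvent() {
        return intercepts;              // false by default in the framework
    }

    @Override
    boolean dispatchTouchEvent() {
        if (onInterceptTouchEvent()) return onTouchEvent();
        for (SimpleView child : children) {
            if (child.dispatchTouchEvent()) return true;  // a child consumed it
        }
        return onTouchEvent();          // nobody consumed it: bubble back up
    }
}
```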

People often ask about the difference between onTouch and onTouchEvent. Both are reached from the View's dispatchTouchEvent() logic:

  • If the touchListener is non-null, the View is enabled, and onTouch returns true — all three conditions met — then dispatchTouchEvent returns true without calling `onTouchEvent()`.
  • If any one of those conditions fails, `onTouchEvent()` is called. So onTouch runs before onTouchEvent.

View

Drawing process

View drawing starts from ViewRootImpl's performTraversals() method, which calls `mView.measure()`, `mView.layout()`, and `mView.draw()` in sequence.

The drawing process of View is divided into three steps: measurement, layout and drawing, which correspond to three methods: Measure, layout and draw.

  • Measurement phase. The measure method will be called by the parent View. After some optimization and preparation in the measure method, the onMeasure method will be called for actual self-measurement. The onMeasure method does different things in a View and ViewGroup:
  • The View. The onMeasure method in the View calculates its size and stores it via setMeasureDimension.
  • The ViewGroup. The onMeasure method in a ViewGroup calls every child View's measure method so the children measure and store their own sizes, then computes its own size from the children's sizes and positions and saves it.
  • Layout stage. The Layout method is called by the parent View. The Layout method saves the size and position passed in by the parent View and calls onLayout for the actual internal layout. OnLayout also does different things in View and ViewGroup:
  • The View. Since the View has no child views, the onLayout of the View does nothing.
  • The ViewGroup. The onLayout method in the ViewGroup calls the Layout methods of all child Views, passing them dimensions and positions, and letting them complete their own internal layout.
  • Drawing phase. The draw method does some scheduling, and then calls the onDraw method to draw the View itself. The scheduling flow of draw method is roughly like this:
  • Draw the background, corresponding to the `drawBackground(Canvas)` method.
  • Draw the main body, corresponding to the `onDraw(Canvas)` method.
  • Draw the child Views, corresponding to the `dispatchDraw(Canvas)` method.
  • Draw scroll decorations and the foreground, corresponding to `onDrawForeground(Canvas)`.
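The measurement step above can be made concrete with a small sketch of how a View reconciles its desired size with the parent's constraint. This mirrors the logic of View.resolveSize; the class name and duplicated constants are illustrative:

```java
// How a View's desired size is reconciled with the parent's MeasureSpec
class SizeResolver {
    static final int MODE_SHIFT = 30;
    static final int MODE_MASK = 0x3 << MODE_SHIFT;
    static final int EXACTLY = 1 << MODE_SHIFT;
    static final int AT_MOST = 2 << MODE_SHIFT;

    static int resolveSize(int desired, int measureSpec) {
        int mode = measureSpec & MODE_MASK;
        int size = measureSpec & ~MODE_MASK;
        if (mode == EXACTLY) return size;                 // parent fixed the size
        if (mode == AT_MOST) return Math.min(desired, size); // cap at parent's limit
        return desired;                                    // UNSPECIFIED: take what we want
    }
}
```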

MeasureSpec

MeasureSpec is a View's measurement rule. The widthMeasureSpec and heightMeasureSpec are usually passed down by the parent control to measure the child control. Each value packs two pieces of information: SpecMode and SpecSize. How can one int hold both? An int is 4 bytes, i.e., 32 bits: the top 2 bits hold the SpecMode and the remaining 30 bits hold the SpecSize.

There are three modes: UNSPECIFIED, EXACTLY, and AT_MOST.

  • EXACTLY: precise mode, used when the width or height is a fixed value (xx dp) or MATCH_PARENT.
  • AT_MOST: used when the width or height is set to wrap_content.
  • UNSPECIFIED: the parent imposes no constraint on the current View; the child View may take any size. Generally used inside the system, e.g. ScrollView, ListView.

How do we get two pieces of information out of an int? Don’t worry, there is a MeasureSpec class inside the View. This class already encapsulates various methods for us:

```java
// Pack size and mode into one int
int measureSpec = MeasureSpec.makeMeasureSpec(size, mode);
// Unpack them again
int size = MeasureSpec.getSize(measureSpec);
int mode = MeasureSpec.getMode(measureSpec);
```

Specific implementation details, you can view the source code
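As a sketch of what that source does, the packing and unpacking can be reproduced in plain Java with the same shifts the framework uses (the class name here is illustrative):

```java
// Bit layout: top 2 bits = mode, low 30 bits = size
class MeasureSpecSketch {
    static final int MODE_SHIFT = 30;
    static final int MODE_MASK = 0x3 << MODE_SHIFT;
    static final int UNSPECIFIED = 0 << MODE_SHIFT;
    static final int EXACTLY = 1 << MODE_SHIFT;
    static final int AT_MOST = 2 << MODE_SHIFT;

    // Pack a 30-bit size and a 2-bit mode into one int
    static int makeMeasureSpec(int size, int mode) {
        return (size & ~MODE_MASK) | (mode & MODE_MASK);
    }

    static int getMode(int measureSpec) {
        return measureSpec & MODE_MASK;
    }

    static int getSize(int measureSpec) {
        return measureSpec & ~MODE_MASK;
    }
}
```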

DecorView’s measureSpec calculation logic

We might wonder: every child control's measureSpec is generated from the parent's own measureSpec combined with the child View's LayoutParams, so how does the DecorView, the top-level parent of the view tree, get its own measureSpec? Let's look at the source code (trimmed):

```java
// ViewRootImpl#performTraversals (simplified)
private void performTraversals() {
    // Compute the DecorView's width and height measureSpecs
    int childWidthMeasureSpec = getRootMeasureSpec(mWidth, lp.width);
    int childHeightMeasureSpec = getRootMeasureSpec(mHeight, lp.height);
    // Ask host how big it wants to be
    performMeasure(childWidthMeasureSpec, childHeightMeasureSpec);
}
```

```java
// ViewRootImpl
private static int getRootMeasureSpec(int windowSize, int rootDimension) {
    int measureSpec;
    switch (rootDimension) {
        case ViewGroup.LayoutParams.MATCH_PARENT:
            // Window can't resize. Force root view to be windowSize.
            measureSpec = MeasureSpec.makeMeasureSpec(windowSize, MeasureSpec.EXACTLY);
            break;
        case ViewGroup.LayoutParams.WRAP_CONTENT:
            // Window can resize. Set max size for root view.
            measureSpec = MeasureSpec.makeMeasureSpec(windowSize, MeasureSpec.AT_MOST);
            break;
        default:
            // Window wants to be an exact size. Force root view to be that size.
            measureSpec = MeasureSpec.makeMeasureSpec(rootDimension, MeasureSpec.EXACTLY);
            break;
    }
    return measureSpec;
}
```

windowSize is the width and height of the window, so the DecorView's measureSpec is generated from the window's dimensions and its own LayoutParams.

Ideas and solutions of event conflict resolution

There are two general approaches: external interception, where the parent overrides onInterceptTouchEvent() and decides which events to take for itself; and internal interception, where the child controls the parent's interception via requestDisallowInterceptTouchEvent().

Why are Android UI operations not thread-safe

If a non-UI thread could refresh the interface while the UI thread (or another non-UI thread) is also refreshing it, the interface refreshes would not be synchronized, which is why UI operations are not thread-safe.

Summary of causes and solutions of ANR

The full name of ANR is Application Not Responding. The Android system requires certain events to be handled within a fixed time range; if no effective response arrives within the allotted time, or the response takes too long, an ANR occurs. Generally a dialog pops up informing the user that XXX is not responding, and the user can choose to keep waiting or force close.

First of all, the occurrence of ANR is conditionally limited, which can be divided into the following three points:

ANR is raised only for the main thread, that is, the UI thread;

Some input event or specific operation must be in flight, such as a touch input event, or the life-cycle functions of a BroadcastReceiver or Service;

The response timeout mentioned above varies with the context:

a. The main thread does not finish handling an input event within 5 seconds;

b. A BroadcastReceiver's onReceive() function does not finish within 10 seconds;

c. The main thread does not finish a foreground Service's life-cycle functions within 20 seconds (background Service: 200 s).

So what is the root cause of ANR? The simple summary is as follows:

1. The main thread performs time-consuming operations, such as database operations or network programming, and I/O operations

2. Other processes (that is, other programs) occupy the CPU so that the current process cannot obtain the CPU time slice. For example, frequent read/write operations of other processes may cause this problem.

In terms of subdivision, the reasons for ANR are as follows:

Time-consuming network access

Lots of data reads and writes

Database operations

Hardware operations (e.g. camera)

Call join(), sleep(), wait(), or wait for a thread lock

The number of Service Binder threads reaches the upper limit

A WatchDog ANR occurs in system_server

The Service is busy, causing a timeout with no response

Another thread holds a lock, making the main thread wait too long

Another thread terminates or crashes, leaving the main thread waiting

So how do you avoid ANR or what is the solution to ANR?

1. Avoid time-consuming operations on the main thread. Create a child thread to complete all time-consuming operations, and then update the UI on the main thread.

2. When the BroadcastReceiver wants to perform time-consuming operations, it should start a service and hand over time-consuming operations to the Service.

3. Avoid launching an Activity in an Intent Receiver, which creates a new screen and steals focus from the program the current user is running. If your application needs to show something to the user in response to an Intent broadcast, you should use the Notification Manager to do so.
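Point 1 can be sketched generically with plain executors (a hypothetical helper; on Android, the hand-off back to the UI would go through a Handler or runOnUiThread rather than a blocking get):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run heavy work off the "main" thread and hand the result back
class BackgroundWork {
    static String runOffMainThread(Callable<String> heavyTask) {
        ExecutorService background = Executors.newSingleThreadExecutor();
        try {
            // Heavy work executes on the background thread, well under ANR limits
            return background.submit(heavyTask).get(5, TimeUnit.SECONDS);
        } catch (Exception e) {
            return "error: " + e.getMessage();
        } finally {
            background.shutdown();
        }
    }
}
```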


Android source code common design patterns and their own common design patterns in development

Blog.csdn.net/zxc123e/art…

How does Android interact with JS

In Android, the interaction between Android and JS is divided into two aspects: Android calls the method in JS, JS calls the method in Android.

  • Android calls JS. There are two ways:
  • `webView.loadUrl("javascript:jsMethodName()")`. The advantage is simplicity; the drawback is that there is no return value. If you need the JS method's return value, JS has to call an Android method to deliver it back.
  • `webView.evaluateJavascript("javascript:jsMethodName()", valueCallback)`. Compared with loadUrl, the advantage is that the ValueCallback gives you the JS method's return value. The drawback is that this method only exists since Android 4.4, so compatibility is worse. However, in 2018 most apps on the market require a minimum of 4.4 anyway, so I think this is no longer a real problem.
  • JS calls Android. There are three ways:
  • `webView.addJavascriptInterface()`. This is the official solution for JS calling Android methods. Note that the exposed Android methods must carry the @JavascriptInterface annotation to avoid a security vulnerability. The downside is that before Android 4.2 this mechanism had a security hole, though it has been fixed since. Again, compatibility is not an issue in 2018.
  • Override WebViewClient's shouldOverrideUrlLoading() method to intercept and parse the URL, and call the Android method if it matches the scheme both sides agreed on. The advantage is avoiding the pre-4.2 vulnerability; the obvious drawback is that you cannot directly get a return value to JS — you can only deliver it by having Android call a JS method.
  • Override WebChromeClient's `onJsPrompt()` method. As in the previous approach, parse the intercepted URL first and call the Android method if it matches the agreement. To return a value, call `result.confirm("Android method return value")`, which delivers the Android return value back to JS. This approach has no known vulnerability, no compatibility restriction, and makes it easy to return values. Note that WebChromeClient also has onJsAlert and onJsConfirm methods; why not use those? Because onJsAlert has no return value, onJsConfirm can return only true or false, and front-end code rarely calls prompt in practice, so onJsPrompt is the best hook.

SparseArray principle

SparseArray, generally speaking, is a data structure Android provides to replace HashMap — precisely, a HashMap whose key is of type int and whose value is an Object. Note that SparseArray implements only the Cloneable interface, so it cannot be declared as a Map.

Internally, SparseArray consists of two arrays: mKeys of type int[], which holds all the keys, and mValues of type Object[], which holds all the values. SparseArray is most commonly compared with HashMap: because it is just two arrays, it uses less memory than a HashMap. As we all know, add, delete, update, and lookup all need to find the key-value pair first, and SparseArray addresses keys by binary search, which is obviously slower than HashMap's near-constant-time lookup. Binary search requires a sorted array, and indeed SparseArray keeps mKeys sorted by key.

Taken together, SparseArray uses less memory than HashMap but is less efficient; it is a typical trade of time for space and suits smaller collections. From a source-code perspective, the methods worth noting are SparseArray's remove(), put(), and gc().

  • remove(). SparseArray's `remove()` does not delete the entry and compress the array immediately. Instead it sets the value to be deleted to a static DELETED sentinel (essentially just an Object) and sets SparseArray's mGarbage flag to true, so that `gc()` can be called at an appropriate time to compress the array and avoid wasted space. This also improves efficiency: if a key equal to the removed one is added later, the new value simply overwrites the sentinel.
  • gc(). SparseArray's `gc()` has absolutely nothing to do with the JVM's GC. Inside, it is just a for loop that compresses the array by shifting non-deleted key-value pairs forward over the deleted ones. It then sets mGarbage back to false to avoid wasting memory.
  • put(). If binary search finds the key in the mKeys array, put simply overwrites the value. If not, binary search yields the insertion index closest to the key to be added. If the value at that index is the DELETED sentinel, the new value overwrites it directly, avoiding any movement of array elements and thus improving efficiency. Otherwise, if mGarbage is true, `gc()` is called first to compress the array; then the proper index is found again, the key-value pairs after it are shifted back, and the new pair is inserted — a process that may trigger array growth.
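A stripped-down sketch of the two-array, binary-search design (without the DELETED/gc() machinery, and with an illustrative class name):

```java
import java.util.Arrays;

// Two parallel arrays: sorted int keys, and their values at matching indices
class MiniSparseArray {
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;

    void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {                 // key already present: overwrite
            values[i] = value;
            return;
        }
        i = ~i;                       // insertion point that keeps keys sorted
        if (size == keys.length) {    // grow both arrays together
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : null;   // O(log n) lookup
    }
}
```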

How to avoid OOM loading images

We know that the memory size of a Bitmap is computed as: width in pixels × height in pixels × bytes per pixel. So there are two ways to avoid OOM: scale down the width and height, or reduce the memory used per pixel.

  • Scale down the width and height. We know Bitmaps are created with BitmapFactory's factory methods: decodeFile(), decodeStream(), decodeByteArray(), decodeResource(). Each of these takes an Options parameter, an inner class of BitmapFactory that stores Bitmap information. Options has a property, inSampleSize: raising it shrinks the decoded width and height, reducing the Bitmap's memory footprint. Note that inSampleSize should be a power of 2; values less than 1 are forced to 1.
  • Reduce the memory per pixel. Options has another property, inPreferredConfig, which defaults to ARGB_8888 and determines the bytes each pixel occupies. We can change it to RGB_565 or ARGB_4444 to halve the memory.
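The inSampleSize computation usually follows the pattern shown in Android's documentation. Here is a standalone sketch; in real code, the raw width and height would come from decoding with inJustDecodeBounds = true:

```java
// Find the largest power-of-two inSampleSize that still covers the target size
class SampleSize {
    static int calculateInSampleSize(int rawWidth, int rawHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawHeight > reqHeight || rawWidth > reqWidth) {
            int halfHeight = rawHeight / 2;
            int halfWidth = rawWidth / 2;
            // Keep doubling while the downsampled image is still at least as
            // large as the requested dimensions
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```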

Loading large images

To load a large high-definition image, such as Along the River During the Qingming Festival, the screen cannot display it all at once, and given memory constraints it is impossible to load the whole image into memory at one time. In this case it needs to be loaded region by region. Android has a class responsible for partial loading: BitmapRegionDecoder. Usage is simple: create an instance with `BitmapRegionDecoder.newInstance()`, then call `decodeRegion(Rect rect, BitmapFactory.Options options)`. The first argument, rect, is the region to display; the second is the Options inner class of BitmapFactory.

Bitmap calculation and memory optimization

Total bytes = image width × image height × (device dpi ÷ resource-directory dpi)² × bytes per pixel (can be elaborated)

Bitmap memory optimization

Image memory is generally divided into run-time memory and storage-time local overhead (reflected in package size), but we will focus only on run-time memory optimization.

In the previous section we saw that an 800 × 600 image parsed into memory without any processing took up 17.28 MB. Imagine that overhead in a list of images; the memory footprint would be enormous. Following the Bitmap memory formula above, memory can be reduced in the following ways:

  1. Use a low-color parsing mode, such as RGB565, to reduce the byte size of a single pixel
  2. Resource files are placed properly, and high resolution pictures can be placed in high resolution directories
  3. Reduce the size of the picture

The first method reduces the memory overhead by about half. Android uses ARGB8888 configuration to handle colors by default, which takes up 4 bytes. RGB565 will only take up 2 bytes, at the cost of displaying relatively few colors, which is suitable for scenarios that require less color richness.

The second method is related to the specific resolution of the image. It is suggested that the high-resolution image should be placed in a reasonable resource directory during development. Notice that the default resource directory on Android is 160dpi. In theory, the higher the resolution of the resource directory the image is placed in, the less memory it will consume, but the lower resolution image will be stretched and the display will be distorted. On the other hand, higher resolution images also mean more local storage.

The third method, in theory, can reduce memory usage by a factor of ten depending on the applicable environment. It is based on the fact that the source image size is generally larger than the target image size, so it can be scaled to reduce the width and height of the display image, thus greatly reducing the memory footprint.
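The formula and the 17.28 MB figure above can be checked with a few lines of plain Java (decimal megabytes assumed; the default resource directory is taken as 160 dpi, the device as 480 dpi):

```java
/** Worked example of: width x height x (deviceDpi / dirDpi)^2 x bytesPerPixel. */
public class BitmapMath {
    static long bitmapBytes(int w, int h, int deviceDpi, int dirDpi, int bytesPerPixel) {
        double scale = (double) deviceDpi / dirDpi;     // density scaling factor
        return Math.round(w * scale) * Math.round(h * scale) * bytesPerPixel;
    }

    public static void main(String[] args) {
        // 800x600 ARGB_8888 image from the default 160dpi directory on a 480dpi device:
        // 2400 x 1800 x 4 bytes = 17,280,000 bytes, i.e. the 17.28 MB quoted above.
        System.out.println(bitmapBytes(800, 600, 480, 160, 4));
        // Decoding the same image as RGB_565 (2 bytes per pixel) halves that.
        System.out.println(bitmapBytes(800, 600, 480, 160, 2));
    }
}
```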

AsyncTask summary

Android UI is not thread-safe; to perform UI operations from child threads you need Android's asynchronous message handling mechanism.

To make it easier to update UI elements from child threads, Android has provided the AsyncTask class since version 1.5.

In Android, threads are usually divided into two types: the Main Thread, and everything else, which can be called Worker Threads.

AsyncTask, literally an asynchronous task, is performed while our UI main thread keeps running. AsyncTask lets us execute a task in the background: we place time-consuming operations in it and can send execution results back to the UI thread at any time to update UI controls.

When we define a class that inherits AsyncTask, we need to specify three generic parameters for it:

AsyncTask<Params, Progress, Result>

Params: This generic specifies the type of parameters we pass to the asynchronous task execution

Progress: This generic specifies the type of the progress values the asynchronous task publishes back to the UI thread while it executes

Result: This generic specifies the type of Result returned to the UI thread after the asynchronous task completes execution

If a parameter is not needed, write it as Void

To execute an asynchronous task, follow the following four steps:

onPreExecute(): executed on the UI thread before the asynchronous task runs. Normally we do UI initialization here, such as showing a ProgressDialog

doInBackground(Params… params): executed immediately after onPreExecute(). This method does the asynchronous work: the framework takes a worker thread from its background thread pool to run it, so it executes on a worker thread. When it finishes, its return value is delivered to onPostExecute(). Time-consuming operations, such as fetching data from the network, belong here

onProgressUpdate(Progress… values): also executed on the UI thread. While an asynchronous task runs, we sometimes need to report progress back to the UI; for example, when downloading an image we want to display the download progress, and this method lets us update it. For it to be called, doInBackground must call publishProgress(Progress…) to pass the current progress along

onPostExecute(Result result): called on the UI thread when the asynchronous task completes; the result is passed in, and we can display it on UI controls

We can cancel the asynchronous task at any time by calling cancel(boolean).

When using AsyncTask to do asynchronous tasks, you must follow the following rules:

The AsyncTask class must be loaded on the UI thread; since Android JELLY_BEAN this is done automatically

AsyncTask objects must be instantiated in the UI Thread

The execute method must be called in the UI Thread

Do not manually call AsyncTask onPreExecute, doInBackground, publishProgress, onProgressUpdate, and onPostExecute methods, which are automatically called by Android

An AsyncTask task can be executed only once

Advantages of use:

  • Simple and fast
  • Process control

Disadvantages of use:

  • Becomes complicated when there are multiple asynchronous operations with accompanying UI changes.
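To make the call sequence above concrete, here is a plain-JVM analogue of AsyncTask's shape. This is a hypothetical MiniTask, not the Android class: the real AsyncTask returns immediately and marshals onPostExecute back to the UI thread via a Handler, whereas this sketch blocks on the Future so the flow is easy to follow and test.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;
import java.util.function.Function;

/** Toy analogue of AsyncTask's lifecycle: onPreExecute on the calling thread,
 *  the background work on a pool thread, onPostExecute with the result. */
public class MiniTask<P, R> {
    // Daemon threads so the JVM can exit without an explicit shutdown.
    private static final ExecutorService POOL = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    private final Function<P, R> doInBackground;

    public MiniTask(Function<P, R> doInBackground) {
        this.doInBackground = doInBackground;
    }

    public R executeAndWait(P param, Runnable onPreExecute, Consumer<R> onPostExecute) {
        onPreExecute.run();                              // 1. setup on the calling thread
        try {
            return POOL.submit(() -> {
                R result = doInBackground.apply(param);  // 2. work on a worker thread
                onPostExecute.accept(result);            // 3. result delivery
                return result;
            }).get();                                    // blocking here is only for the demo
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        MiniTask<Integer, String> task = new MiniTask<>(n -> "result=" + (n * 2));
        String r = task.executeAndWait(21,
                () -> System.out.println("pre"),
                s -> System.out.println("post: " + s));
        System.out.println(r);
    }
}
```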

Efficiency of linear and relative layouts, efficiency of constrained and relative layouts

RelativeLayout measures all child views twice: once horizontally and once vertically. Why? The child views of a RelativeLayout are arranged based on their dependencies on each other, and those dependencies may not match the order of the views in the layout file. To determine each child view's position, all child views must first be sorted. And because a child A may depend on B horizontally while B depends on A vertically, RelativeLayout has to run one sorted measure pass in each direction. mSortedHorizontalChildren and mSortedVerticalChildren are the arrays of child views sorted for the horizontal and vertical passes respectively.

LinearLayout's measure is much simpler than RelativeLayout's: it only needs to check whether its orientation is horizontal or vertical and then measure along it. If the weight attribute is not used, LinearLayout performs a single measure pass in the current direction. If weight is used, LinearLayout first measures the children without weight, then measures the weighted children a second time to distribute the remaining space. The weight attribute therefore has a performance cost and its own pitfalls.

conclusion

(1) RelativeLayout calls each child view's onMeasure() twice, while LinearLayout calls it twice only when weight is used; without weight it measures each child once.

(2) When a child View's height differs from that of its RelativeLayout parent, efficiency problems arise, especially when the child View is complex. If possible, use padding instead of margin.

(3) Where it does not increase the depth of the hierarchy, use LinearLayout or FrameLayout instead of RelativeLayout.

(4) To improve rendering performance:

RelativeLayout measures each of its children twice, and LinearLayout does too when the weight attribute is used. If such layouts sit near the top of the View tree with several layers nested beneath them, the views at the bottom end up being measured many times over, significantly reducing application performance. Therefore, try to place RelativeLayout and LinearLayout near the bottom of the View tree and minimize nesting.

RecyclerView vs. ListView: Cache mechanism

1. Different levels:

RecyclerView has two more cache levels than ListView. It supports caching multiple off-screen ItemViews, lets developers customize the cache-handling logic, and allows all RecyclerViews to share the same RecycledViewPool (cache pool).

To be specific:

ListView (two-level cache):

RecyclerView (four-level cache):

ListView and RecyclerView cache mechanisms are basically the same:

1). mActiveViews and mAttachedScrap serve similar purposes: quickly reusing the ItemViews of list items still visible on screen, without re-creating the view or calling bindView;

2). mScrapViews is similar to mCachedViews + mRecyclerPool: both cache list items that have left the screen so they can be reused by items about to enter it;

3). RecyclerView's advantages: A. mCachedViews lets the ItemView of an off-screen list item be reused when it scrolls back on screen without calling bindView; B. a RecycledViewPool can be shared by multiple RecyclerViews, which pays off in scenarios such as a ViewPager hosting several list pages. Objectively speaking, RecyclerView strengthens and improves ListView's cache mechanism in these specific scenarios.

2. Different caches:

1). RecyclerView caches RecyclerView.ViewHolder, which can be abstractly understood as: View + ViewHolder (avoids calling findViewById every time a view is created) + flag (identifies state);

2). ListView caches Views.

network

What’s the difference between HTTP and HTTPS? How does HTTPS work?


HTTP is the hypertext transfer protocol, while HTTPS can be understood simply as the secure version of HTTP: it adds SSL on top of HTTP to encrypt data and ensure its security. HTTPS provides two functions: establishing a secure channel for information transmission to guarantee data safety, and verifying the authenticity of the website.

The differences between HTTP and HTTPS are as follows:

  • HTTPS requires a CA to apply for a certificate, which is rarely free and therefore costs a certain amount of money
  • HTTP is plaintext transmission with low security. HTTPS uses SSL to encrypt data based on HTTP, ensuring high security
  • The default port used for HTTP is 80. The default port used by HTTPS is 443

HTTPS workflow

When it comes to HTTPS, encryption algorithms fall into two categories: symmetric encryption and asymmetric encryption.

  • Symmetric encryption: encryption and decryption use the same key. The advantage is speed; the disadvantage is lower security. Common symmetric algorithms include DES and AES.
  • Asymmetric encryption: uses a key pair, consisting of a public key and a private key. Generally, the private key is kept secret and the public key is given to the other party. Its advantage is higher security than symmetric encryption; its disadvantage is lower data-transfer efficiency. Information encrypted with the public key can be decrypted only with the corresponding private key. A common asymmetric algorithm is RSA.

In real-world use, symmetric and asymmetric encryption are generally combined: asymmetric encryption transfers the symmetric key, and the symmetric key then encrypts and decrypts the data. The combination ensures security while keeping data transfer efficient.

The HTTPS process is as follows:

  1. The client (usually a browser) first sends the server a request for encrypted communication, containing:
  • The protocol versions it supports, such as TLS 1.0
  • A client-generated random number, random1, later used to generate the "session key"
  • The encryption methods it supports, such as RSA public-key encryption
  • The compression methods it supports
  2. The server receives the request and responds with:
  • The protocol version to use, such as TLS 1.0. If the browser does not match a version the server supports, the server shuts down the encrypted communication
  • A server-generated random number, random2, later used to generate the "session key"
  • The encryption method confirmed for use, such as RSA public-key encryption
  • The server's certificate
  3. The client verifies the certificate after receiving it:
  • First it checks the certificate's validity
  • After verification passes, the client generates another random number, the pre-master secret, encrypts it with the public key in the certificate, and sends it to the server
  4. The server decrypts the received content with its private key to obtain the pre-master secret. From random1, random2 and the pre-master secret, it derives a symmetric key through an agreed algorithm; the client derives the same symmetric key from the same three values with the same algorithm.
  5. The symmetric key generated in the previous step is used in all subsequent interactions to encrypt and decrypt the transmitted content.
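The core of the handshake — asymmetric encryption protecting a symmetric session key, then fast symmetric encryption for the payload — can be sketched with the JDK's javax.crypto. This is a toy, not TLS: no certificates, no random1/random2 key derivation, and default cipher modes.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

/** Minimal hybrid-encryption sketch: RSA wraps the AES key, AES carries the data. */
public class HybridDemo {
    public static String roundTrip(String msg) {
        try {
            // Server side: RSA key pair; in TLS the public key ships inside the certificate.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair server = kpg.generateKeyPair();

            // Client side: symmetric session key (stand-in for the pre-master secret).
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey session = kg.generateKey();

            // Client -> server: session key encrypted with the server's public key.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.ENCRYPT_MODE, server.getPublic());
            byte[] wrapped = rsa.doFinal(session.getEncoded());

            // Server: recover the session key with its private key.
            rsa.init(Cipher.DECRYPT_MODE, server.getPrivate());
            SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrapped), "AES");

            // Both sides now use fast symmetric AES for the actual payload.
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, session);
            byte[] ciphertext = aes.doFinal(msg.getBytes(StandardCharsets.UTF_8));
            aes.init(Cipher.DECRYPT_MODE, recovered);
            return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello over TLS"));
    }
}
```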

HTTP header fields and their meanings

  • Accept: Content-Types the browser (or other HTTP client) can receive, e.g. Accept: text/plain
  • Accept-Charset: character sets the browser recognizes, e.g. Accept-Charset: utf-8
  • Accept-Encoding: encodings the browser can handle. Note that encoding here is different from character set; it usually refers to gzip, deflate, etc. E.g. Accept-Encoding: gzip, deflate
  • Accept-Language: languages the browser accepts, in effect which language region the user is in; simplified Chinese, for example, is Accept-Language: zh-CN
  • Authorization: the server can require authentication for some resources; to access them you supply the username and password in this header, as the base64 encoding of the string "username:password"
  • Cache-Control: found in both requests and responses; tells the caching system (on the server or in the browser) how to handle caching. This header is important, especially when you want to use caching to improve performance
  • Connection: tells the server what kind of connection the user agent (usually the browser) wants. The values are keep-alive and close; HTTP/1.1 defaults to keep-alive. keep-alive means the connection between the browser and the server stays open rather than being closed immediately, while close closes it right after the response. Note that HTTP is stateless either way; do not take keep-alive as an improvement on HTTP's statelessness.
  • Cookie: the browser sends its cookies along with a request (the server attaches cookies to the browser in its responses). E.g. Cookie: user=admin
  • Content-Length: the length of the request body in bytes. The request body is the content after the HTTP headers and the two CR-LF pairs, commonly form data submitted via POST. Content-Length does not include the length of the request line or headers.
  • Content-MD5: the base64-encoded MD5 checksum of the request body. E.g. Content-MD5: Q2hlY2sgSW50ZWdyaXR5IQ==
  • Content-Type: the MIME type of the request body, usually used only in POST and PUT requests. E.g. Content-Type: application/x-www-form-urlencoded
  • Date: the GMT time the request was sent. E.g. Date: Tue, 15 Nov 1994 08:12:31 GMT
  • From: the email address of the user sending the request. E.g. From: [email protected]
  • Host: the server's domain name or IP address, including the port number if it is not a standard port. E.g. Host: www.some.com:182
  • Proxy-Authorization: authentication information for connecting through a proxy, similar to the Authorization header. E.g. Proxy-Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
  • User-Agent: information about the user's browser. E.g. User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Firefox/12.0
  • Warning: records warning information.

Dart

1. What does Dart's ".." operator mean?

Dart's ".." is the cascade operator, used for convenient configuration. The difference between ".." and "." is that a call made with ".." returns the receiver (equivalent to this), while "." returns the value the method returns.

Summary of variable declaration

  1. var: if it has no initial value, it can hold any type
  2. dynamic: dynamically typed, can hold any type; the type is not checked at compile time
  3. Object: can also hold any type, but the type is checked at compile time

The only difference with var: once var has an initial value, its type is locked in.

Final and const summary

In common

  • The declared type can be omitted
  • No more assignment after initialization
  • Cannot be used together with var

Differences (what to watch out for)

  • For class-level constants, use static const.
  • const can be initialized from the value of another const constant
  • You can change the value of a non-final, non-const variable, even if it once held a const value
  • The immutability conferred by const is transitive
  • The same const constant is never created twice in memory
  • const must be a compile-time constant

The operator

The suffix operation

  • ?. conditional member access: like ., except that the object on the left may be null. foo?.bar returns null if foo is null, otherwise it returns the bar member

Truncating division operator

  • dividend ÷ divisor = quotient … remainder. a ~/ b = c gives the quotient c; it is the equivalent of Java's integer /
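For readers coming from Java, the behavior Dart's ~/ mirrors is exactly Java's integer division; a tiny Java comparison (the Dart equivalents are noted in comments):

```java
/** Dart's a ~/ b truncates like Java's int division; Dart's plain / on ints yields a double. */
public class TruncDiv {
    public static void main(String[] args) {
        System.out.println(7 / 2);    // 3   (Dart: 7 ~/ 2 == 3)
        System.out.println(7 % 2);    // 1   (remainder, same in both languages)
        System.out.println(7.0 / 2);  // 3.5 (Dart: 7 / 2 == 3.5)
    }
}
```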

Type determination operator

  • as, is, is!: determine or cast object types at run time

Conditional expression

  • condition ? expr1 : expr2

Flow control statement

if else

for, forEach, for-in


2. Dart scope

Dart does not have keywords such as “public” or “private”. It is public by default, and private variables start with an underscore _. The “_” limit is not at the class access level, but at the library access level.

3. Is Dart a single-threaded model? How does it work?

Dart is a single-threaded model that runs the process shown below.

Simply put, Dart runs as a message loop in a single thread, consisting of two task queues, a microtask queue and an event queue.

When a Flutter application starts, the message loop starts. Tasks in the microtask queue are executed first, one by one, in first-in, first-out order. Only when the microtask queue is empty does a task from the event queue run, and after each event task completes, the microtask queue is drained again before the next event.

4. How does Dart achieve multitasking?

Dart is single-threaded and has no shared-memory multithreading, so how does it run tasks in parallel? Dart's model has much in common with front-end JavaScript: Flutter's concurrency relies on Dart's isolates together with its asynchronous, event-driven mechanisms.


To put it simply, in Dart an Isolate object is a reference to an isolate execution environment. In general, we use the current isolate to control interactions with other isolates. To create a new isolate, call Isolate.spawn to obtain a new Isolate object. Two isolates send messages to each other through SendPort, and each has a corresponding ReceivePort to receive and process messages. Note that ReceivePort and SendPort come in pairs on each isolate: only the ReceivePort in the same isolate can receive and process messages sent to its paired SendPort.


Future, Async and await

Future is an asynchronous programming solution based on the observer pattern, with three states: pending, fulfilled and rejected. Callback functions are specified with the then method, which also accepts an optional named parameter, onError. When a Future fails, the exception it throws is caught by the onError callback. then itself returns another Future, so we can chain calls and use the Future's catchError to catch exceptions.

The async and await keywords were added in Dart 1.9. With them we can write asynchronous code more succinctly without calling the Future-related APIs directly.

When the async keyword is used as a suffix for method declarations, it has the following meanings

  • The decorated method returns a Future object
  • The method executes synchronously up to the first await keyword, then the rest of the method is suspended;
  • As soon as the Future referenced by an await completes, the code after that await executes immediately.

Async is not executed in parallel, it follows the Dart event loop rules and is simply a syntactic sugar to simplify the use of the Future API.

What is a Stream in Dart asynchronous programming?

In Dart, Stream, like Future, is a tool for handling asynchronous programming. The difference is that a Stream can receive multiple asynchronous results, while a Future has only one.

Streams can be created and controlled with Stream.fromFuture or a StreamController. One more note: a regular stream can have only one subscriber; if you want multiple subscribers, use asBroadcastStream().

What are the two subscription modes for Stream? How are they called

There are two subscription modes for Stream: single and broadcast. Single subscription means only one subscriber, whereas broadcast can have multiple subscribers. This is similar to the Message Service processing pattern. Single-subscription is similar to point-to-point in that data is held until the subscriber appears, and then handed over to the subscriber after the subscriber appears. Broadcasting, on the other hand, is similar to a publish-subscription model in that it can have multiple subscribers at the same time, and data will be transmitted to all subscribers when it is available, regardless of whether there are existing subscribers or not.

Stream is in single-subscription mode by default, so listen and most other methods on the same Stream can be called only once; a second call reports an error. However, streams can be chained through the transform() method, which returns another Stream. asBroadcastStream() converts a single-subscription stream into a broadcast stream, and the isBroadcast property tells you which mode a Stream is in.

Mixins mechanism

Mixins are a feature added in Dart 2.1; earlier versions used abstract classes instead. Dart introduced the mixin keyword to support reusing a class's code in multiple class hierarchies, a restricted form of multiple inheritance. A class defined with mixin cannot have a constructor, which avoids constructor conflicts when it is mixed into multiple classes.

Mixins operate on classes; a mixin is neither inheritance nor an interface but a new feature that lets one class mix in several others. Using mixins requires meeting certain conditions.


JIT and AOT

With an advanced toolchain and compiler, Dart is one of the few languages that supports both JIT (Just In Time) and AOT (Ahead of Time) compilation. So what exactly are JIT and AOT? Code usually has to be compiled before it can run, and JIT and AOT are the two most common modes. JIT compiles dynamically at run time and is used during the development cycle to deliver and execute code quickly; development and test iteration is fast, but running speed and execution performance suffer from compiling just in time. AOT compiles ahead of time into directly executable binary code; it runs fast and performs well, but because everything must be compiled before execution, development and test iteration is slower.

Memory allocation and garbage collection

The Dart VM has a simple memory allocation strategy: objects are created by bumping a pointer across the heap, so memory growth is always linear and there is no need to search for available memory. In Dart, concurrency is achieved through isolates, independent workers similar to threads but without shared memory; this mechanism allows Dart to allocate quickly without locks. Dart uses a generational algorithm for garbage collection. When a collection is triggered, Dart copies the "live" objects in the current half-space into the spare space and then frees all memory in the current space. A collection touches only the small number of live objects and ignores the large number of unreferenced "dead" ones, which suits the Flutter pattern of destroying and rebuilding large numbers of widgets.

Flutter

A brief introduction to the Flutter framework and its advantages and disadvantages?

Flutter is an open-source cross-platform UI framework developed by Google that lets developers quickly build high-quality native user interfaces on Android, iOS and the Web. Flutter is also the default development kit for Google's new Fuchsia operating system, and it is being used by more and more developers and organizations worldwide; it is completely free and open source. Flutter uses a modern, reactive framework whose central idea is building the application's UI out of widgets. When a widget's state changes, the widget rebuilds its description, and Flutter diffs it against the previous description to determine the minimal changes needed to take the underlying render tree from the current state to the next.

advantages

  • Hot Reload, which can be saved and reloaded in Android Studio with a single CTRL + S. The emulator can see the effect immediately, which is much better than the lengthy compilation process of the native version.
  • The idea that everything is a Widget. In Flutter, everything in the app is a widget, enabling appealing and flexible interface design through a composable widget collection, rich animation libraries, and a layered, extensible architecture.
  • High quality user experience across platforms with a portable GPU-accelerated rendering engine and high performance native code runtime. To put it simply: The end result is that applications built with Flutter will run almost as efficiently as native applications.

disadvantages

  • Hot updates are not supported;
  • Third-party libraries are limited; you often have to build your own wheels;
  • It is written in Dart, which adds learning cost; compared with JS or Java, Dart has few other uses once learned.

Introduce the conceptual architecture of Flutter

This is summarized in the diagram below.

As the figure shows, Flutter can be divided from bottom to top into Embedder, Engine and Framework. The Embedder is the operating-system adaptation layer; it implements platform-specific features such as render surface setup, thread setup, and platform plugins. The Engine layer is responsible for graphics drawing, text layout, and providing the Dart runtime; it has an independent virtual machine, which is what lets Flutter applications run on different platforms. The Framework layer is a basic visual library written in Dart, including animation, graphics drawing and gesture recognition; it is the layer developers use most frequently.

The FrameWork and Engine layers of Flutter, and what they do

Flutter's Framework layer is an SDK written in Dart. It implements libraries containing the Material and Cupertino UI styles, the common Widgets beneath them, and then animation, painting, rendering, gesture libraries and so on, with the lowest-level bindings exposed through the dart:ui library. When we write an app with Flutter, we import these libraries directly to use components and other functionality.

Flutter's Engine layer centers on Skia, a 2D drawing engine library whose predecessor was a vector drawing application. Skia is the drawing engine used by both Chrome and Android. It offers a very friendly API and efficient performance in graphics transformation, text rendering and bitmap rendering. Because Skia is cross-platform, it can be embedded into Flutter's iOS SDK without relying on Apple's closed-source Core Graphics / Core Animation. Android ships with Skia, so Flutter's Android SDK is much smaller than its iOS SDK.


This section describes the concepts of Widget, State, and Context

  • Widgets: Almost everything in Flutter is a Widget. Think of a Widget as a visual component (or one that interacts with visual aspects of your application) that you are using when you need to build anything directly or indirectly related to the layout.
  • Widget tree: Widgets are organized in a tree structure. Widgets that contain other widgets are called parent widgets (or Widget containers). Widgets that are included in the parent widget are called child widgets.
  • Context: Simply a reference to the location of a Widget in the tree structure of all widgets that have been created. In short, you treat the context as part of the widget tree to which the widgets corresponding to the context are added. A context is subordinate to a widget, is linked together like widgets, and forms a context tree.
  • State: Defines the behavior of the StatefulWidget instance, which contains the behavior and layout for “interacting/intervening” Widget information. Any changes applied to State force the Widget to be rebuilt.

StatelessWidget and StatefulWidget are two state component classes

  • StatelessWidget: once created, it does not track any changes and will not change until the next build. It depends on nothing beyond its own configuration, which the parent provides when it is built. Text, Row, Column, Container and so on are typical StatelessWidgets. Its life cycle is simple: initialize, then render via build().
  • StatefulWidget: the data such a widget holds may change during its life cycle; this data is called State, and widgets with dynamic internal data (check boxes, buttons, etc.) are StatefulWidgets. The State is associated with a context and the association is permanent: the State object never changes its context, and even if it moves around the tree it stays associated with it. When a State is associated with a context, the State is considered mounted. A StatefulWidget consists of two parts and must create its associated State object in createState().

The life cycle of StatefulWidget

Flutter widgets are divided into StatelessWidgets and StatefulWidgets; the former are stateless, the latter stateful, and in practice StatefulWidgets are used more often. The life cycle of a StatefulWidget is shown below.


  • initState(): called when the Widget initializes its State. The Context is not yet available in this method; to use the Context here, one workaround is Future.delayed().
  • didChangeDependencies(): called after initState(), and again whenever the State object's dependencies change.
  • deactivate(): called when the State is temporarily removed from the view tree, for example when switching pages; similar to onPause in Android.
  • dispose(): called when the Widget is destroyed.
  • didUpdateWidget(): called when the widget's configuration changes.


Relationships between Widgets, RenderObjects, and Elements

First, look at what each of these objects means and does.

  • Widget: only stores the information (configuration) needed for rendering.
  • RenderObject: manages layout, drawing, and so on.
  • Element: the actual node in the big control tree.

The Widget is inflated into an Element, and the Element manages the underlying render tree. Widgets do not manage state and rendering directly; a StatefulWidget does so through its State object. Flutter builds a tree of Elements which, unlike widgets, is mutable. In ordinary interface development we do not manipulate Elements directly; the framework layer manages them internally. For example, a UI view tree may contain multiple Text widgets (the same widget used several times), but each occurrence is backed by its own Element in the element tree. An Element holds both the Widget and the RenderObject instance. Remember: the Widget is just configuration, and the RenderObject is responsible for layout, painting, and so on.

When a Widget is created for the first time, an Element is created and inserted into the tree. If the Widget changes later, it is compared to the old Widget and the Element is updated accordingly. Importantly, Element will not be rebuilt, only updated.
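The framework decides "update in place" versus "replace" with a check equivalent to Widget.canUpdate: an Element keeps its identity when the new widget has the same runtimeType and key as the old one. A simplified sketch of that check:

```dart
import 'package:flutter/widgets.dart';

// Simplified sketch of the framework's Widget.canUpdate check:
// an Element is updated in place only when the new widget's
// runtimeType and key match the old widget's.
bool canUpdate(Widget oldWidget, Widget newWidget) {
  return oldWidget.runtimeType == newWidget.runtimeType &&
      oldWidget.key == newWidget.key;
}
```

If the check fails, the old Element is unmounted and a new one is inflated from the new widget.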

What is state management and what state management frameworks do you know?

State in Flutter is the same concept as state in front-end React. The core idea of the React framework is componentization: applications are built from components, and a component's most important concept is its state, the UI data model on which the component's rendering is based.

Flutter state can be divided into global state and local state. Common state-management approaches include ScopedModel, BLoC, Redux/Fish Redux, and Provider; their detailed usage and differences are worth studying on your own.
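As a minimal illustration of the shared idea behind these frameworks, here is a ChangeNotifier model of the kind typically used with Provider (the CounterModel name is invented for the example):

```dart
import 'package:flutter/foundation.dart';

// A hypothetical state model: the UI subscribes to it and rebuilds on change.
class CounterModel extends ChangeNotifier {
  int _count = 0;
  int get count => _count;

  void increment() {
    _count++;
    notifyListeners(); // tells listening widgets to rebuild
  }
}
```

With Provider, a widget would expose this model via ChangeNotifierProvider and read it with context.watch&lt;CounterModel&gt;(), so the state lives outside the widget that displays it.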

The drawing process of Flutter

Flutter only cares about providing view data to the GPU. The VSync signal is synchronized to the UI thread, which uses Dart to construct an abstract view structure; this data is then fed to the GPU via OpenGL or Vulkan.

The thread management model of Flutter

By default, the Flutter Engine layer creates an Isolate, and Dart code runs on this main Isolate. New Isolates can be created with Isolate.spawnUri and Isolate.spawn if necessary; newly created Isolates are managed by Flutter.
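Since Isolates share no memory and communicate only by message passing over ports, offloading work looks roughly like this sketch (the function names are invented for the example):

```dart
import 'dart:isolate';

// Sketch: run a computation on a new Isolate and receive the result
// as a message, since isolates communicate only via ports.
Future<int> heavySum(int n) async {
  final receivePort = ReceivePort();
  await Isolate.spawn(_sum, [receivePort.sendPort, n]);
  return await receivePort.first as int;
}

void _sum(List<Object> args) {
  final sendPort = args[0] as SendPort;
  final n = args[1] as int;
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += i;
  }
  sendPort.send(total); // post the result back to the spawning isolate
}
```

In Flutter code, the compute() helper wraps this same pattern for one-off background computations.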

In fact, the Flutter Engine itself does not create or manage threads; that is the responsibility of the Embedder, the middle-tier code that ports the Engine to a particular platform. The architecture of the Flutter Engine layer is shown in the diagram below.

In Flutter's architecture, the Embedder provides four Task Runners: the Platform Task Runner, the UI Task Runner, the GPU Task Runner, and the IO Task Runner, each responsible for different tasks. The Flutter Engine does not care which thread a Task Runner runs on, but it requires that thread to remain stable throughout the Runner's life cycle.

How does Flutter communicate with native Android and iOS?

Flutter interacts with native code through Platform Channels, which come in three types:

  • BasicMessageChannel: used to pass strings and semi-structured information.
  • MethodChannel: used for method invocation.
  • EventChannel: used for communication over event streams.

Note also that Platform Channels are not thread-safe. For more details, see Deep Understanding of Flutter Platform Channel.
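On the Dart side, a MethodChannel call looks like this sketch (the channel and method names are invented for the example; the native side must register a handler for the same channel name, via MethodChannel on Android or FlutterMethodChannel on iOS):

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

// Hypothetical channel name; it must match the one registered natively.
const _channel = MethodChannel('samples.example/battery');

Future<int?> getBatteryLevel() async {
  try {
    // Invokes a method on the platform side and awaits its reply.
    return await _channel.invokeMethod<int>('getBatteryLevel');
  } on PlatformException catch (e) {
    debugPrint('native call failed: ${e.message}');
    return null;
  }
}
```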

Briefly describe Flutter's hot reload

Flutter's hot reload is incremental code synchronization based on the JIT compilation mode. Because JIT compiles dynamically, changed Dart code can be compiled into intermediate code that the Dart VM interprets at run time, so incremental synchronization is achieved by dynamically updating that intermediate code.

The hot-reload process can be divided into five steps: scanning for project changes, incremental compilation, pushing updates, code merging, and widget rebuilding. Flutter does not restart the app after receiving code changes; it only triggers a redraw of the widget tree. This keeps the app in its previous state and greatly reduces the time between changing code and seeing the result.

On the other hand, some changes are not supported by hot reload, because it involves saving and restoring state, which raises state-compatibility and state-initialization issues: widget state that is incompatible before and after the change, changes to global variables and static properties, changes to the main method, changes to the initState method, changes to enumerations and generics, and so on.

As you can see, hot reload improves the efficiency of debugging the UI and is ideal for writing interface styles that require repeatedly reviewing changes. However, because of its state-preservation mechanism, hot reload has some unsupported boundary cases.


How does Flutter work?

Unlike most other frameworks for building mobile applications, Flutter is a complete solution that rewrites both the underlying rendering logic and the upper development language. This ensures that view rendering is not only highly consistent on Android and iOS (high fidelity) but also comparable in code-execution efficiency and rendering performance to a native app experience (high performance). This is the essential difference between Flutter and other cross-platform solutions:

Frameworks such as React Native merely invoke system components through JavaScript virtual-machine extensions, leaving the actual rendering to the Android and iOS systems.

Flutter completes the component-rendering loop itself. So how does Flutter render its components? This starts with the basic principles of image display. In a computer system, displaying an image requires the cooperation of the CPU, the GPU, and the display: the CPU computes the image data, the GPU renders it, and the display shows the final image. The CPU hands the computed content to the GPU; the GPU finishes rendering and puts the result into the frame buffer; then, paced by the VSync signal at 60 times per second, the video controller reads frame data from the frame buffer and submits it to the display. The operating system follows this mechanism for rendering images, and Flutter, as a cross-platform development framework, adopts the same underlying scheme. Here is a more detailed diagram explaining how Flutter draws.

Flutter's concern is to compute and compose view data as quickly as possible between two hardware VSync signals and then hand it to the GPU for rendering via Skia: the UI thread uses Dart to construct the view structure data, the GPU thread layers it and hands it to the Skia engine to be processed into GPU data, and that data is finally submitted to the GPU for rendering through OpenGL.

Architecture

Componentization

Componentization, also called modularization, divides a large software system (App) into multiple independent components or modules, based on separation of concerns, for the purpose of reuse. Each independent component is an independent system that can be maintained, upgraded, or even replaced outright, and it may depend on other independent components. As long as the functions a component provides do not change, the other components and the overall software system are unaffected.


As you can see, the central idea of componentization is splitting out independent functions, and its constraints on granularity are relatively loose. A standalone component can be a package, a page, a UI control, or even a module encapsulating some functionality. Since component granularity can be large or small, how do we package and reuse components well? What code should go into a component? There are some basic principles: simplicity, abstraction, stability, and self-completeness. Let's look at what each of these means.

The principle of simplicity means that each component provides only one function. Divide and conquer is the central idea of componentization: each component has its own set of responsibilities and clear boundaries, focusing on doing one thing well so that the component can thrive. A counterexample is the Common or Util component, which is often the result of poorly placed, ill-defined code in development: "Gee, this code doesn't seem to fit anywhere; let's put it in Common (or Util)." Over time, such components become a garbage dump that no one maintains. So the next time you run into code you don't know where to put, rethink the design and responsibilities of your components.


The principle of abstraction states that the functional abstraction a component provides should be as stable and reusable as possible. To achieve this, we need good skills in functional abstraction and generalization: do the abstraction and interface design well when packaging a component, and absorb the factors that may change inside the component so they are not exposed to its callers.


The stability principle means that stable components should not depend on unstable components. For example, component 1 depends on component 5. If component 1 is stable, but component 5 changes frequently, then component 1 becomes unstable and needs to be adapted frequently. If component 5 does have code that is indispensable to component 1, we can consider separating that code into a new component X, or simply copying the dependent code into component 1.



Self-completeness means that a component should be as self-contained as possible, reducing its dependence on other underlying components in order to achieve code reuse. For example, if component 1 relies on only one method in a large component 5, it is better to strip component 1 of its dependence on component 5 and copy that method directly into component 1. This makes component 1 better able to handle subsequent external changes.


Now that we understand the basic principles of componentization, let's look at the concrete implementation steps: stripping out basic functionality, abstracting business modules, and minimizing service capabilities.

First, strip the application's non-business essential functions, such as network requests, component middleware, third-party library wrappers, and UI components, into separate base libraries, and manage them with Pub. For a third-party library, considering the cost of subsequent maintenance and adaptation, it is best to wrap it in another layer, so that the project does not depend directly on external code and the library is easy to update or replace.

With the basic functionality encapsulated into clearly defined components, we can then split individual modules along business dimensions, such as the home page, detail page, and search page. The split can be coarse at first and refined later: as long as the broadly distinct business components are separated, subsequent iterations and local fine-tuning will eventually componentize the whole business project.

Once the business components and base components are separated and encapsulated, the componentized architecture of the application is basically in place. Finally, apply the four principles described above to correct each component's downward dependencies and minimize its exposed capabilities.

Platformization

As you can see from the definition of a component, a component is a loose, generalized concept whose size depends on the dimensions of functionality we encapsulate, and the relationships between components are maintained only by dependencies. If the dependencies between components are complex, functional coupling will occur to some extent.

In the component schematic shown below, components 2 and 3 are directly referenced by multiple business components and underlying functional components at the same time, and there are even circular dependencies between components 2 and 5 and between components 3 and 4. Once the internal implementation or external interface of these components changes, the whole App falls into an unstable state: pulling one hair moves the whole body.


Platformization is an upgrade of componentization: on top of componentization, the functions components provide are classified, unified, and layered, and the concept of dependency governance is added. To classify these functional units more uniformly, we can use a four-quadrant analysis along two dimensions, business and UI, to decide which category each component belongs to.


As you can see, after decomposition along the business and UI dimensions, these components fall into four categories:

  • independent business modules with UI attributes;
  • basic business functions without UI attributes;
  • UI controls without business attributes;
  • basic functions without business attributes.

By definition, these four categories of components imply hierarchical dependencies. For example, the home page in the business module relies on the account function in the basic business module; likewise, a carousel card in the UI control module relies on the storage-management function in the basic function module. Dividing them from top to bottom in dependency order gives us a complete App.


It can be seen that the biggest difference between platformization and componentization is the addition of layers. Each layer's functions are built only on the functions of the same layer and the layers below, which lets each component stay independent yet retain a certain flexibility, with each function in its own place and no crossing of boundaries.


While componentization is more concerned with the independence of components, platformization is more concerned with the rationality of the relationships between components; one-way dependency is the principle to keep in mind when designing a platformized architecture.



The so-called one-way dependency principle means that component dependencies should flow from top to bottom following the layers of the application architecture, with no circular dependencies between lower and upper modules. This minimizes complex coupling and reduces the difficulty of componentization. If every component depends on others only one-way and the relationships between components are clear, decoupling the code becomes much easier. Platformization emphasizes the ordering of dependencies: besides forbidding lower-level components from depending on upper-level ones, dependencies across layers and between components in the same layer should also be strictly controlled, because such dependencies often lead to architectural confusion.

What if the underlying component really needs to call the code of the upper component?

In this case, we can add an intermediate layer, such as an Event Bus, a Provider, or a Router, and synchronize information by forwarding through that layer. For example, if the network engine at layer 4 needs to redirect to the unified error page at layer 1 for a specific error code, it can jump via a named route provided by the Router without knowing the error page's implementation. Likewise, when the account component at layer 2 must refresh the home page and the profile page at layer 1 on login and logout, it can fire an account-switch event on the Event Bus to tell them to update their interfaces, without holding the page instances.
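The Event Bus approach can be sketched in a few lines of Dart (all names here are invented for illustration): the sender fires an event, and pages listen without holding references to each other.

```dart
import 'dart:async';

// Minimal event bus: a broadcast stream decouples sender from listeners.
class EventBus {
  final _controller = StreamController<Object>.broadcast();

  void fire(Object event) => _controller.add(event);

  Stream<T> on<T>() => _controller.stream.where((e) => e is T).cast<T>();
}

// Hypothetical event fired by the account component on login/logout.
class AccountChangedEvent {
  final bool loggedIn;
  AccountChangedEvent(this.loggedIn);
}

// A page at layer 1 subscribes without knowing the account component:
//   eventBus.on<AccountChangedEvent>().listen((e) => refreshUi(e.loggedIn));
// The account component at layer 2 just calls:
//   eventBus.fire(AccountChangedEvent(true));
```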


conclusion

Don’t say you can’t

Give yourself plenty of time to change jobs; otherwise the process becomes passive, as many companies have such long processes that a week can pass between interview rounds.



Android/Flutter Interview (Part 2) — Algorithms

Android/Flutter Interview — Resume and Interview Skills

There are many more things I would like to share with you about this round of interviews, such as the resume (very important), project questions, answering skills, and so on.