Java interview questions

The GC mechanism

Garbage collection involves two things: finding the garbage and reclaiming it. Generally speaking, there are two ways to find garbage:

  • Reference counting method:

When an object is referenced, its reference counter is incremented by one; when a reference is removed, the counter is decremented. Garbage collection cleans up objects with a reference count of zero. However, this method has a problem. Suppose there are two objects A and B: A references B, and B references A, so neither counter ever reaches zero. Because of this circular-reference problem, Java does not use reference counting.

  • Reachability analysis:

We treat the object references in Java as a graph, and objects that are unreachable from the GC roots are cleared by the garbage collector. GC roots typically include references in Java virtual machine stacks, references in native method stacks, static references in the method area, and constants in the constant pool. There are four common garbage collection algorithms:

  • Mark-sweep algorithm:

As the name suggests, there are two steps: mark and sweep. First mark the garbage objects that need to be collected, then reclaim them. The disadvantage of the mark-sweep algorithm is that it leaves memory fragmentation after garbage objects are removed.

  • Copying algorithm:

The copying algorithm copies the surviving objects to another memory area and then clears the original area in one go. Its advantage is that it avoids memory fragmentation, but the disadvantage is also obvious: it requires twice the memory.

  • Mark-compact algorithm:

The mark-compact algorithm also has two steps: mark first, then compact. It marks the garbage objects to be collected, and when they are removed, the surviving objects are compacted together, avoiding memory fragmentation.

  • Generational algorithm:

The generational algorithm divides objects into the young generation and the old generation. Why make this distinction? Mainly because a running Java program produces a large number of objects with very different life cycles: some live very long, while others are used once and never again. Applying different collection strategies to objects with different life cycles improves the efficiency of GC.

The young generation is divided into three regions: one Eden region and two Survivor regions. Newly created objects are placed in Eden, and when Eden’s memory reaches its threshold, a Minor GC is triggered. The surviving objects are copied to one Survivor region, and their age is incremented by one. The Eden region is then empty, and when a Minor GC is triggered again, the surviving objects from Eden and from the previously used Survivor region are copied to the other Survivor region using the copying algorithm mentioned earlier, and their age is incremented again.

This process repeats until an object’s age reaches a threshold, which triggers promotion: the object is moved from the young generation into the old generation. Objects in the old generation are long-lived objects that have survived multiple GCs. A Major GC is triggered when old-generation memory reaches a threshold, using the mark-compact algorithm.
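A small, illustrative sketch of why reachability analysis handles the cycles that reference counting cannot (the class and field names here are made up, and whether the JVM honors the System.gc() hint is not guaranteed):

```java
import java.lang.ref.WeakReference;

public class CyclicReferenceDemo {
    static class Node {
        Node other; // lets two nodes reference each other
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.other = b;   // A references B
        b.other = a;   // B references A: a reference cycle
        WeakReference<Node> ref = new WeakReference<>(a);

        a = null;
        b = null;      // the cycle is now unreachable from any GC root
        System.gc();   // only a hint; collection is not guaranteed

        // Under reachability analysis the whole cycle is collectible, so
        // ref.get() is typically null here; pure reference counting could
        // never reclaim it because both counters stay above zero.
        System.out.println("cycle collected: " + (ref.get() == null));
    }
}
```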

JVM memory areas, and which of them can throw OOM

JVM memory regions can be divided into two categories: thread-private regions and thread-shared regions. Thread-private regions: the program counter, the JVM stack, and the native method stack. Thread-shared regions: the heap, the method area, and the runtime constant pool.

  • Program counter. Each thread has a private program counter, and only one method is executing at any time per thread, known as the current method. The program counter holds the JVM instruction address for the current method.
  • JVM virtual machine stack. When a thread is created, a virtual machine stack is created within the thread. It stores stack frames, each corresponding to a method call. There are two operations on the JVM stack: pushing and popping frames. A stack frame holds local variables, the method return value, information for normal or abnormal method exit, and so on.
  • Native method stack. It is similar to the JVM stack, except that it serves Native methods.
  • The heap. The heap is the core area of memory management where object instances are stored. Almost all created object instances are allocated directly to the heap. Therefore, the heap is also the main area of garbage collection. The garbage collector will have a more detailed division of the heap, most commonly dividing the heap into new generation and old generation.
  • Method area. The method area mainly stores the structural information of classes, such as static fields and method data.
  • Run-time constant pool. The runtime constant pool is located in the method area and mainly holds various constant information.

In fact, every region except the program counter can throw an OOM.

  • The heap. Most OOMs occur in the heap, and the most common cause is a memory leak.
  • JVM virtual machine stack and native method stack. A recursive method with no termination condition ends in a StackOverflowError. And if expanding the stack fails, an OOM can occur as well.
  • Method area. The method area is less likely to hit OOM nowadays, but it can still happen if too much class information is loaded into memory.
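The stack case above is easy to reproduce (class and method names here are hypothetical): unbounded recursion adds a frame per call until the thread's stack is exhausted, and the resulting StackOverflowError can be caught like any other Throwable.

```java
public class StackDepthDemo {
    // Each call adds a stack frame; with no termination condition the
    // JVM stack is eventually exhausted and StackOverflowError is thrown.
    static int recurse(int depth) {
        return recurse(depth + 1);
    }

    public static void main(String[] args) {
        try {
            recurse(0);
        } catch (StackOverflowError e) {
            System.out.println("caught StackOverflowError"); // prints this line
        }
    }
}
```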

Class loading process

Class loading in Java is divided into three steps: load, link, and initialize.

  • Load. Loading reads bytecode from various data sources (jar files, class files, and so on) into JVM memory and maps it onto the JVM’s internal data structure, the Class object. If the data is not in the ClassFile format, a ClassFormatError is reported.
  • Link. Linking is the core part of class loading and is divided into three steps: verification, preparation, and resolution.
    • Verification. Verification is an important step in securing the JVM. The JVM verifies that the byte information complies with the specification, to prevent malicious or non-standard data from endangering its runtime safety. If verification fails, a VerifyError is reported.
    • Preparation. This step allocates memory for static variables and sets their default values.
    • Resolution. This step replaces symbolic references with direct references.
  • Initialization. Initialization assigns values to static variables and executes the logic in static blocks.
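The initialization phase can be observed from plain Java: a static initializer runs exactly once, on the class's first active use. A minimal sketch (class and field names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class InitOrderDemo {
    static class Config {
        static final List<String> LOG = new ArrayList<>();
        static int value;

        static { // runs once, during the initialization phase
            LOG.add("static block");
            value = 42;
        }
    }

    public static void main(String[] args) {
        // The class is initialized on first active use...
        System.out.println(Config.value);      // prints 42
        // ...and the static block does not run again on later uses.
        System.out.println(Config.LOG.size()); // prints 1
    }
}
```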

Parent delegation model

Class loaders can be broadly divided into three categories: the bootstrap class loader, the extension class loader, and the application class loader.

  • The bootstrap class loader mainly loads the jar files under jre/lib.
  • The extension class loader mainly loads the jar files under jre/lib/ext.
  • The application class loader mainly loads the files under the classpath.

The so-called parent delegation model means that when a class is to be loaded, the parent class loader is tried first, and the child class loader loads the class itself only when the parent cannot. The goal is to avoid classes being loaded repeatedly.

A general description of the class loading process: the current class loader first checks whether the class has already been loaded; if not, the request is delegated up to the parent class loader, which performs the same check. Only when no parent class loader can load the class does the current class loader perform the loading itself.
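The delegation chain can be inspected at runtime. A small sketch (note that on JDK 9+ the extension loader was replaced by the platform class loader, and the bootstrap loader is represented as null in the Java API):

```java
public class LoaderChainDemo {
    public static void main(String[] args) {
        // Core classes such as String are loaded by the bootstrap loader,
        // which the API represents as null.
        System.out.println(String.class.getClassLoader()); // prints null

        // Our own class is loaded by the application class loader, whose
        // parent chain leads back toward the bootstrap loader.
        ClassLoader app = LoaderChainDemo.class.getClassLoader();
        System.out.println(app != null);             // prints true
        System.out.println(app.getParent() != null); // extension/platform loader
    }
}
```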

Collection classes in Java

The principle of HashMap

The inside of a HashMap can be seen as a composite of an array and linked lists. Each array slot is a bucket. The hash value determines which bucket a key-value pair is addressed to, and key-value pairs that land in the same bucket form a linked list. Note that when the length of a list exceeds the threshold (default 8), treeification is triggered and the list becomes a red-black tree.

Four methods are key to understanding HashMap: hash, put, get, and resize.

  • The hash method. It XORs the high 16 bits of the key’s hashCode into the low 16 bits. The reason is that some keys’ hashCode values differ only in the high bits, and bucket addressing ignores the bits above the capacity, so mixing the high bits in effectively reduces hash collisions.

  • The put method. The put method has the following steps:

    • The hash method is used to obtain the hash value, which is used for bucket addressing.
    • If there is no collision, the entry is put directly into the bucket.
    • If a collision occurs, the entry is appended to the linked list in that bucket.
    • When the length of the list exceeds the threshold, treeification is triggered and the list is converted into a red-black tree.
    • If the number of entries exceeds the resize threshold (capacity × load factor), the resize method is called to expand the capacity.
  • The get method. The get method has the following steps:

    • The hash method is used to obtain the hash value, which is used for bucket addressing.
    • If the key in the addressed bucket is equal to the lookup key, return the corresponding value.
    • If there is a conflict, there are two cases: if the bucket holds a tree, call getTreeNode to get the value; if it holds a linked list, walk the list to find the corresponding value.
  • The resize method. Resize does two things:

    • Doubles the capacity of the original array.
    • Recalculates index values and puts the original nodes into the new array. This step disperses previously conflicting nodes into new buckets.
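The spreading step and bucket addressing described above can be sketched as follows. The hash method mirrors HashMap's internal spread; indexFor is a hypothetical helper making the power-of-two addressing explicit:

```java
public class HashSpreadDemo {
    // Spread step (same shape as HashMap's static hash method):
    // XOR the high 16 bits of hashCode into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Bucket addressing: with capacity a power of two,
    // index = hash & (capacity - 1), which only looks at the low bits --
    // hence the need to spread the high bits down.
    static int indexFor(int hash, int capacity) {
        return hash & (capacity - 1);
    }

    public static void main(String[] args) {
        int capacity = 16;
        // Two hash codes that differ only in the high bits...
        int h1 = 0x00010000, h2 = 0x00020000;
        // ...collide without spreading:
        System.out.println(indexFor(h1, capacity) == indexFor(h2, capacity)); // true
        // ...but land in different buckets after spreading:
        int s1 = h1 ^ (h1 >>> 16), s2 = h2 ^ (h2 >>> 16);
        System.out.println(indexFor(s1, capacity) == indexFor(s2, capacity)); // false
    }
}
```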

When do Java deadlocks occur, and how do I locate, fix, and write deadlocks

The difference between “sleep” and “wait”

  • The sleep method is a static method of the Thread class, while wait is a method of the Object class
  • sleep does not release the lock, while wait does
  • sleep can be used anywhere, while wait can only be used in synchronized methods or synchronized blocks
  • sleep must be given a time; wait may be given a time or not, and without one it can only be woken by notify or notifyAll
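A small sketch of the last two points (names hypothetical): wait must be called while holding the object's monitor, otherwise IllegalMonitorStateException is thrown at runtime, and a timed wait wakes up on its own even without a notify:

```java
public class WaitMonitorDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // Calling wait() outside a synchronized block fails at runtime:
        try {
            lock.wait(10);
        } catch (IllegalMonitorStateException e) {
            System.out.println("wait needs the monitor"); // prints this line
        }

        // Inside synchronized it is legal; wait(time) also releases the
        // lock while waiting, unlike Thread.sleep.
        synchronized (lock) {
            lock.wait(10); // returns after the timeout since nobody notifies
        }
        System.out.println("done");
    }
}
```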

The use of the join

The join method is used to ensure sequential scheduling between threads. It is a method of the Thread class. For example, if threadB.join() is executed in thread A, thread A enters a waiting state and does not resume executing its subsequent code until thread B completes.

The join method can take a time argument or none; the no-argument form actually calls join(0). Internally it uses the wait method. The implementation of join is as follows:

```java
public final synchronized void join(long millis) throws InterruptedException {
    long base = System.currentTimeMillis();
    long now = 0;
    if (millis < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }
    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    } else {
        while (isAlive()) {
            long delay = millis - now;
            if (delay <= 0) {
                break;
            }
            wait(delay);
            now = System.currentTimeMillis() - base;
        }
    }
}
```
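A minimal usage sketch (class name hypothetical): join also gives a happens-before guarantee, so after worker.join() the calling thread reliably sees the worker's writes.

```java
public class JoinDemo {
    static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> result = 42);
        worker.start();
        worker.join();               // main waits here until worker finishes
        System.out.println(result);  // prints 42: join orders the two threads
    }
}
```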

The difference between volatile and synchronized

Thread pools in Java

Thread communication

Concurrent collections in Java

Producer and consumer patterns in Java

The producer-consumer pattern ensures that producers no longer produce objects when the buffer is full and consumers no longer consume objects when the buffer is empty. The implementation mechanism is to keep producers in a wait state when the buffer is full and consumers in a wait state when the buffer is empty. When a producer produces an object it wakes up a consumer, and when a consumer consumes an object it wakes up a producer.

Three implementations: Wait and notify, await and signal, and BlockingQueue.

  • Wait and notify
```java
// wait and notify
import java.util.LinkedList;

public class StorageWithWaitAndNotify {
    private final int MAX_SIZE = 10;
    private LinkedList<Object> list = new LinkedList<Object>();

    public void produce() {
        synchronized (list) {
            while (list.size() == MAX_SIZE) {
                System.out.println("Storage is full: pause production");
                try {
                    list.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            list.add(new Object());
            System.out.println("Produced one, stock: " + list.size());
            list.notifyAll();
        }
    }

    public void consume() {
        synchronized (list) {
            while (list.size() == 0) {
                System.out.println("Storage is empty: pause consumption");
                try {
                    list.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            list.remove();
            System.out.println("Consumed one, stock: " + list.size());
            list.notifyAll();
        }
    }
}
```
  • Await and signal
```java
import java.util.LinkedList;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class StorageWithAwaitAndSignal {
    private final int MAX_SIZE = 10;
    private ReentrantLock mLock = new ReentrantLock();
    private Condition mEmpty = mLock.newCondition();
    private Condition mFull = mLock.newCondition();
    private LinkedList<Object> mList = new LinkedList<Object>();

    public void produce() {
        mLock.lock();
        try {
            while (mList.size() == MAX_SIZE) {
                System.out.println("Buffer full, pause production");
                try {
                    mFull.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            mList.add(new Object());
            System.out.println("Produced a new product, current size: " + mList.size());
            mEmpty.signalAll();
        } finally {
            mLock.unlock(); // always release the lock, even on an exception
        }
    }

    public void consume() {
        mLock.lock();
        try {
            while (mList.size() == 0) {
                System.out.println("Buffer empty, pause consumption");
                try {
                    mEmpty.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            mList.remove();
            System.out.println("Consumed a product, current size: " + mList.size());
            mFull.signalAll();
        } finally {
            mLock.unlock();
        }
    }
}
```
  • BlockingQueue
```java
import java.util.concurrent.LinkedBlockingQueue;

public class StorageWithBlockingQueue {
    private final int MAX_SIZE = 10;
    private LinkedBlockingQueue<Object> list = new LinkedBlockingQueue<Object>(MAX_SIZE);

    public void produce() {
        if (list.size() == MAX_SIZE) {
            System.out.println("Buffer is full, production will block");
        }
        try {
            list.put(new Object()); // blocks while the queue is full
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Produced one, stock: " + list.size());
    }

    public void consume() {
        if (list.size() == 0) {
            System.out.println("Buffer is empty, consumption will block");
        }
        try {
            list.take(); // blocks while the queue is empty
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Consumed one, stock: " + list.size());
    }
}
```

The difference between final, finally, and finalize

final can modify classes, variables, and methods. A final class cannot be inherited; a final variable cannot be reassigned; a final method cannot be overridden.

Finally is a mechanism to ensure that important code will execute. Try-finally or try-catch-finally is usually used to close a file stream.

finalize is a method of the Object class, designed to let an object complete the release of specific resources before garbage collection. The finalize mechanism is now discouraged and was marked deprecated in JDK 9.
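A small sketch of the finally guarantee (names hypothetical): the finally block runs even when the try throws and the catch returns, which is exactly why resource cleanup belongs there.

```java
import java.util.ArrayList;
import java.util.List;

public class FinallyDemo {
    static List<String> events = new ArrayList<>();

    static String read() {
        try {
            events.add("work");
            throw new RuntimeException("boom");
        } catch (RuntimeException e) {
            events.add("catch");
            return "handled";
        } finally {
            events.add("finally"); // runs whether or not an exception was thrown
        }
    }

    public static void main(String[] args) {
        System.out.println(read());  // prints "handled"
        System.out.println(events);  // prints [work, catch, finally]
    }
}
```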

The singleton pattern in Java

There are several common implementations of the singleton pattern in Java: the eager (hungry) singleton, the double-checked-locking lazy singleton, the static-inner-class singleton, and the enum singleton. Here we’ll focus on the double-checked-locking lazy singleton and the static-inner-class singleton.

The double-checked-locking lazy singleton:

```java
public class SingleTon {
    // must be volatile; the reason is explained below
    private static volatile SingleTon mInstance;

    private SingleTon() {
    }

    public static SingleTon getInstance() {
        if (mInstance == null) {
            synchronized (SingleTon.class) {
                if (mInstance == null) {
                    mInstance = new SingleTon();
                }
            }
        }
        return mInstance;
    }
}
```

The double-checked lazy singleton satisfies both lazy initialization and thread safety. synchronized makes it thread-safe, and the double check improves execution efficiency. The important thing to note is that the singleton instance must be declared volatile; without it, problems can arise in multithreaded situations. The reason is that mInstance = new SingleTon() is not an atomic operation. It consists of three steps:

  1. Allocate memory for the object
  2. Call the SingleTon constructor to initialize the member variables
  3. Point mInstance at the allocated memory (after this step mInstance is no longer null)

We know that instruction reordering can occur in the JVM. The normal order of execution is 1-2-3, but reordering can result in 1-3-2. Consider this situation: thread A has just performed step 3 of the 1-3-2 order and is suspended, and thread B calls getInstance and reaches the outermost if check. Since that check is not wrapped in synchronized, it executes, and since thread A has already executed step 3, mInstance is no longer null, so thread B returns mInstance directly. But a complete initialization must go through all three steps, and since thread A has skipped step 2, thread B gets a partially initialized object, which is bound to cause errors.

The solution is to use volatile to modify mInstance. We know that volatile serves two purposes: to ensure visibility and to prevent instruction reordering. The key here is to prevent instruction reordering.

Singletons for static inner class implementations:

```java
class SingletonWithInnerClass {

    private SingletonWithInnerClass() {
    }

    private static class SingletonHolder {
        private static final SingletonWithInnerClass INSTANCE = new SingletonWithInnerClass();
    }

    public static SingletonWithInnerClass getInstance() {
        return SingletonHolder.INSTANCE;
    }
}
```

Lazy initialization is achieved because loading the outer class does not cause the inner class to be loaded immediately; the inner class is loaded only when getInstance is called. Since a class is loaded only once and class loading is thread-safe, all our requirements are met. The static-inner-class singleton is also the most recommended approach.

Java reference type differences, specific usage scenarios

In Java, there are four reference types: strong references, soft references, weak references, and phantom references.

  • Strong references: A strong reference is the ordinary reference created with new. The garbage collector will not reclaim the object it points to, even when memory runs out.

  • Soft references: Soft references are implemented via SoftReference, whose lifetime is shorter than a strong reference’s. The garbage collector reclaims softly referenced objects before memory runs out and an OOM is thrown. A common use for soft references is memory-sensitive caches that can be reclaimed when memory is tight.

  • Weak references: Weak references are implemented via WeakReference, whose lifetime is shorter still: a weakly referenced object is reclaimed as soon as the GC scans it. Weak references are also commonly used for memory-sensitive caches.

  • Phantom references: Phantom references are implemented via PhantomReference and have the shortest life cycle; the referenced object can be reclaimed at any time. If an object is referenced only by a phantom reference, we cannot access any of its properties or methods through the reference. A common use for phantom references is to track garbage collection activity: a notification is received (via a reference queue) before an object associated with a phantom reference is collected.
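A minimal sketch showing how the four kinds are constructed (the java.lang.ref classes are real; the surrounding class is made up). Note that a phantom reference's get() always returns null by design:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceKindsDemo {
    public static void main(String[] args) {
        Object strong = new Object(); // strong: never collected while reachable
        SoftReference<Object> soft = new SoftReference<>(new Object()); // cleared before OOM
        WeakReference<Object> weak = new WeakReference<>(new Object()); // cleared at next GC
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue); // enqueued when collected

        System.out.println(strong != null);         // prints true
        System.out.println(soft.get() != null);     // true here: memory is not tight yet
        System.out.println(phantom.get() == null);  // always true: get() never returns the referent
    }
}
```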

The difference between Exception and Error

Both Exception and Error derive from Throwable. In Java, only Throwable objects can be thrown or caught; Throwable is the basic type of the exception-handling mechanism.

Exception and Error represent Java’s classification of different Exception situations. An Exception is an unexpected condition that can and should be caught during normal operation of a program and handled accordingly.

Error refers to the situation that is unlikely to occur under normal circumstances, and most of the errors will make the program in an abnormal and irreparable state. The common OutOfMemoryError is a subclass of Error, since it is not normal and therefore not convenient or necessary to catch.

Exception is also classified as checked Exception and unchecked Exception.

  • Checked exceptions must be explicitly caught or declared in code; this is part of the compiler’s checks.
  • Unchecked exceptions are runtime exceptions, such as null pointer and array-index-out-of-bounds exceptions, which are generally avoidable logic errors; the compiler does not force handling them.
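A sketch contrasting the two (the file path is a deliberately nonexistent example): the compiler forces us to handle IOException, but catching a RuntimeException subclass is optional.

```java
import java.io.FileReader;
import java.io.IOException;

public class ExceptionKindsDemo {
    public static void main(String[] args) {
        // Unchecked: the compiler does not force handling, but we can still catch it.
        try {
            int[] a = new int[1];
            int x = a[5]; // throws ArrayIndexOutOfBoundsException, a RuntimeException
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("unchecked caught");
        }

        // Checked: this would not compile without a catch or a throws clause.
        try (FileReader r = new FileReader("/no/such/file")) {
            r.read();
        } catch (IOException e) { // IOException is a checked exception
            System.out.println("checked caught");
        }
    }
}
```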

volatile

In general, the concept of the memory model cannot be discussed without mentioning volatile. As we all know, while a program runs, each instruction is executed by the CPU, and executing instructions inevitably involves reading and writing data. The problem is that the CPU executes much faster than main memory can be read and written, so reading and writing data directly from main memory would reduce CPU efficiency. To solve this, there is the concept of a cache: each CPU has a cache that reads data from main memory in advance, and after the CPU finishes its computation, the result is flushed back to main memory at an appropriate time.

This mode of operation is fine in a single thread, but can cause cache-consistency issues across threads. A simple example: execute i = i + 1 in two threads, assuming the initial value of i is 0. We expect the result after both threads run to be 2, but consider this situation: both threads read i from main memory into their own caches, where i is 0 in both. Thread 1 executes, gets i = 1, and flushes it to main memory; then thread 2 executes. Since i in thread 2’s cache is still 0, the value flushed to main memory after thread 2 executes is still 1.

This leads to the cache-consistency problem for shared variables. To solve it, a cache coherence protocol is used: when a CPU writes to a shared variable, it tells the other CPUs to mark their cached copy of that variable invalid. When another CPU then reads the shared variable from its cache and finds it invalid, it reads the latest value from main memory.

In Java multithreaded development, there are three important concepts: atomicity, visibility, and orderliness.

  • Atomicity: one or more operations either all execute without interruption, or none of them execute.
  • Visibility: changes made to a shared variable (a member variable or static variable of a class) in one thread are immediately visible to other threads.
  • Orderliness: the program executes in the order the code is written.

Declaring a variable volatile ensures visibility and ordering. Visibility I covered above; it is essential in multithreaded development. As for ordering: for execution efficiency, instruction reordering sometimes occurs. In a single thread, reordered instructions still produce output consistent with the code’s logic, but problems can occur across threads. volatile prevents instruction reordering around accesses to the variable.

volatile works by adding a lock-prefixed instruction in the generated assembly. The prefix acts as a memory barrier that serves three purposes:

  • Ensure that instructions after the barrier are not reordered before it, and instructions before it are not reordered after it.
  • Flush the shared variable from the cache to main memory immediately after modification.
  • Invalidate the corresponding cache lines in other CPUs when a write is performed.
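A common, minimal application of these guarantees is a volatile stop flag (names hypothetical). Without volatile, the worker could loop forever on a stale cached value of the flag:

```java
public class VolatileFlagDemo {
    // volatile guarantees the worker sees the main thread's write promptly.
    static volatile boolean stopped = false;
    static long iterations = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopped) {  // re-reads the flag on every iteration
                iterations++;
            }
        });
        worker.start();
        Thread.sleep(50);
        stopped = true;   // becomes visible to the worker
        worker.join();    // join's happens-before makes iterations visible here
        System.out.println("worker stopped after " + iterations + " iterations");
    }
}
```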

Internet related interview questions

The HTTP status code

What’s the difference between HTTP and HTTPS? How does HTTPS work?

HTTP is the Hypertext Transfer Protocol, while HTTPS can be simply understood as the secure version of HTTP. HTTPS encrypts data by adding SSL/TLS on top of HTTP to ensure data security. HTTPS provides two functions: establishing a secure channel for information transmission to keep data transfer safe, and verifying the authenticity of the website.

The differences between HTTP and HTTPS are as follows:

  • HTTPS requires applying to a CA for a certificate, which is rarely free, so it costs a certain amount of money
  • HTTP is plaintext transmission with low security; HTTPS encrypts data with SSL/TLS on top of HTTP, giving higher security
  • HTTP uses port 80 by default; HTTPS uses port 443

HTTPS workflow

When it comes to HTTPS, encryption algorithms fall into two categories: symmetric encryption and asymmetric encryption.

  • Symmetric encryption: encryption and decryption use the same key. The advantage is speed; the disadvantage is lower security. Common symmetric algorithms include DES and AES.

  • Asymmetric encryption: asymmetric encryption uses a key pair, divided into a public key and a private key. Generally, the private key is kept to oneself and the public key is disclosed to the other party. Its advantage is higher security than symmetric encryption; its disadvantage is lower data-transfer efficiency. Information encrypted with the public key can be decrypted only with the corresponding private key. A common asymmetric algorithm is RSA.

In real use, symmetric and asymmetric encryption are generally combined: asymmetric encryption is used to transfer the symmetric key, and the symmetric key is then used to encrypt and decrypt the data. The combination both ensures security and improves data-transfer efficiency.

The HTTPS process is as follows:

  1. The client (usually a browser) first issues a request to the server for encrypted communication
    • Supported protocol versions, such as TLS 1.0
    • A client-generated random number random1, which is later used to generate the “conversation key”
    • Supported encryption methods, such as RSA public key encryption
    • Supported compression methods
  2. The server receives the request and responds
    • Verify the version of the encrypted communication protocol used, such as TLS 1.0. If the browser does not match the version supported by the server, the server turns off encrypted communication
    • A server generates a random number random2 that is later used to generate the “conversation key”
    • Confirm the encryption method used, such as RSA public key encryption
    • Server certificate
  3. The client first verifies the certificate after receiving it
    • First verify the security of the certificate
    • After the authentication passes, the client will generate a random number pre-master secret, and then encrypt it with the public key in the certificate, and pass it to the server
  4. The server receives the content encrypted with the public key and decrypts it with its private key to obtain the random number pre-master secret. Then, from random1, random2, and the pre-master secret, a symmetric encryption key is derived through an agreed algorithm, and subsequent interactions use this symmetric key. The client uses random1, random2, the pre-master secret, and the same algorithm to derive the same symmetric key.
  5. The symmetric key generated in the previous step is then used in subsequent interactions to encrypt and decrypt the transmitted content.
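The key-exchange idea above can be sketched with the JDK's crypto API. This is a simplification, not real TLS: actual TLS derives keys from random1/random2 plus the pre-master secret and uses authenticated cipher modes, whereas here the default RSA and AES transformations stand in.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridCryptoDemo {
    public static void main(String[] args) throws Exception {
        // Server side: an RSA key pair (the public key would ship in the certificate).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair serverKeys = kpg.generateKeyPair();

        // Client side: generate the symmetric session key and encrypt it
        // with the server's public key (the pre-master-secret step).
        SecretKey session = KeyGenerator.getInstance("AES").generateKey();
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, serverKeys.getPublic());
        byte[] wrappedKey = rsa.doFinal(session.getEncoded());

        // Server side: only the private key can recover the session key.
        rsa.init(Cipher.DECRYPT_MODE, serverKeys.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");

        // Both sides now encrypt the actual traffic symmetrically.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] ciphertext = aes.doFinal("hello over https".getBytes(StandardCharsets.UTF_8));
        aes.init(Cipher.DECRYPT_MODE, recovered);
        byte[] plaintext = aes.doFinal(ciphertext);

        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
        System.out.println(Arrays.equals(session.getEncoded(), recovered.getEncoded()));
    }
}
```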

TCP three-way handshake process

Android interview questions

How many ways are there to communicate between processes

AIDL, broadcast, file, socket, pipe

The difference between static and dynamic registration of broadcasts

  1. A dynamically registered broadcast receiver is not resident, which means the receiver follows its component’s life cycle; take care to unregister the receiver before the Activity ends. A statically registered receiver is resident: even when the application is closed, the system can launch it when a matching broadcast is sent.
  2. For an ordered broadcast: the highest-priority receiver receives it first (whether static or dynamic). Among receivers with the same priority, dynamic takes precedence over static.
  3. Among receivers of the same type with the same priority: static receivers scanned first receive before those scanned later; dynamic receivers registered first receive before those registered later.
  4. For a normal (default) broadcast: priority is ignored and dynamic receivers take precedence over static receivers. Among receivers of the same type with the same priority, the same first-scanned / first-registered order applies.

Android performance optimization tool use (this problem is suggested to cooperate with the Performance optimization in Android)

Common performance optimization tools for Android include Android Profiler, LeakCanary, and BlockCanary that come with Android Studio

The Android Profiler can detect performance problems in three areas: CPU, MEMORY, and NETWORK.

LeakCanary is a third-party memory leak detection library, and once our project is integrated, LeakCanary will automatically detect memory leaks during application run and output them to us.

BlockCanary is also a third-party library, for detecting UI jank. After project integration, BlockCanary will automatically detect UI jank while the application runs and report it to us.

Class loaders in Android

  • PathClassLoader
  • DexClassLoader

The only difference is that DexClassLoader can specify an optimizedDirectory, while PathClassLoader cannot (it is null by default). The optimizedDirectory is where odex files are stored; if null is passed, the odex files are placed in a default system location. An odex file is the product of optimizing a dex file for a particular device: a dex file can run on any Android phone, but an odex file is optimized for the specific device, making it more efficient and faster. Android actually deprecated optimizedDirectory in 8.0, so PathClassLoader and DexClassLoader are identical from 8.0 onward.

What are the categories of animation in Android, and what are their characteristics and differences

Android Animation can be roughly divided into three categories: frame Animation, Tween Animation and Property Animation.

  • Frame animation: Configure a group of images through XML and play them dynamically. Rarely used.
  • Tween Animation: it can be roughly divided into four types of operations: rotation, transparency, scaling and displacement. Rarely used.
  • Property Animation: property animation is the most commonly used type and is more powerful than tween animation. There are roughly two ways to use property animations: ViewPropertyAnimator and ObjectAnimator. The former works well for general-purpose animations such as rotation, displacement, scaling, and transparency, and is easy to use: View.animate() returns a ViewPropertyAnimator, which you can then use to animate the view. The latter is suitable for animating custom controls. We should first add getXXX() and setXXX() methods for the corresponding property in the custom View, calling invalidate() in the setter after changing the property to refresh the view’s drawing. Then call ObjectAnimator.ofXXX() (matching the property’s type) to obtain an ObjectAnimator, and call its start() method to begin the animation.

The difference between tween animation and attribute animation:

  • In a tween animation, the parent container keeps redrawing the view so that it appears to move, but the view’s actual properties are unchanged and it is still in its original position.
  • A property animation really changes the view, by constantly changing the values of its properties.

Handler mechanism

When it comes to Handler, there are several classes that are closely related to it: Message, MessageQueue, and Looper.

  • The Message. There are two member variables of interest in Message: target and callback.

    • Target is the Handler object that sends the message
    • Callback is the Runnable task passed when handler.post(runnable) is called. The essence of posting is to create a Message and assign the runnable we passed to that message’s callback member variable.
  • MessageQueue. As the name suggests, the MessageQueue is the queue of messages. Of interest is its next() method, which returns the next message to be processed.

  • Looper. The Looper message poller is the core that connects the Handler to the MessageQueue. If you want to create a Handler in a thread, you must first create a Looper with Looper.prepare(), then call Looper.loop() to start polling. Let's focus on these two methods.

    • `prepare()`. This method does two things. First it calls `ThreadLocal.get()` to fetch the current thread's Looper; if that is not null, a RuntimeException is thrown, meaning one thread cannot create two Loopers. If it is null, it goes to the second step: create a Looper and bind it to the current thread via `ThreadLocal.set(looper)`. Note that the message queue is actually created inside the Looper constructor.
    • `loop()`. This method starts the polling of the whole event mechanism. Its essence is an endless loop that fetches messages via `MessageQueue.next()`. When a message is obtained, `msg.target.dispatchMessage()` is called to process it. As mentioned in the discussion of Message, `msg.target` is the Handler that sent the message, so this line essentially calls that Handler's `dispatchMessage()`.
  • Handler. With all this foreshadowing, here comes the most important part. Handler analysis focuses on two parts: sending messages and processing messages.

    • Send messages. Besides sendMessage, there are other ways to send messages, such as sendMessageDelayed, post, and postDelayed, but they all ultimately call sendMessageAtTime, which in turn calls enqueueMessage. The enqueueMessage method does two things: it binds the message to the current handler via `msg.target = this`, then enqueues the message via `queue.enqueueMessage()`.

    • Process messages. The core of message processing is the `dispatchMessage()` method. Its logic is simple: first check whether `msg.callback` is null. If it is not null, execute that runnable; if it is null, execute our `handleMessage()` method.
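
The flow above can be sketched in plain Java with no Android classes. This is a minimal, hypothetical simulation (MiniLooper, MiniHandler, and the simplified Message are invented names), using a BlockingQueue in place of the MessageQueue and omitting timed delivery and the ThreadLocal thread binding:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of the Handler/Looper/MessageQueue relationship.
// Names mirror the Android classes, but this is a plain-Java simulation.
public class MiniLooperDemo {
    static class Message {
        MiniHandler target;   // the handler that sent this message
        Runnable callback;    // set when post(runnable) is used
        int what;
    }

    static class MiniLooper {
        final BlockingQueue<Message> queue = new LinkedBlockingQueue<>();
        void loop() throws InterruptedException {
            while (true) {                       // endless polling loop
                Message msg = queue.take();      // plays the role of MessageQueue.next()
                if (msg.target == null) return;  // quit marker for the demo
                msg.target.dispatchMessage(msg); // msg.target.dispatchMessage()
            }
        }
    }

    static class MiniHandler {
        final MiniLooper looper;
        MiniHandler(MiniLooper looper) { this.looper = looper; }

        void post(Runnable r) {           // post wraps the runnable in a message
            Message m = new Message();
            m.callback = r;
            sendMessage(m);
        }
        void sendMessage(Message m) {
            m.target = this;              // enqueueMessage: bind msg to this handler
            looper.queue.add(m);
        }
        void dispatchMessage(Message m) {
            if (m.callback != null) m.callback.run(); // callback first...
            else handleMessage(m);                    // ...then handleMessage
        }
        void handleMessage(Message m) { System.out.println("handled what=" + m.what); }
    }

    public static void main(String[] args) throws InterruptedException {
        MiniLooper looper = new MiniLooper();
        MiniHandler handler = new MiniHandler(looper);
        handler.post(() -> System.out.println("posted runnable ran"));
        Message m = new Message();
        m.what = 42;
        handler.sendMessage(m);
        looper.queue.add(new Message()); // target == null stops the demo loop
        looper.loop();
    }
}
```

Running main prints `posted runnable ran` and then `handled what=42`, showing the callback-first branch of dispatchMessage.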

Android Performance Optimization

In my opinion, performance optimization in Android can be divided into the following aspects: memory optimization, layout optimization, network optimization, installation package optimization.

  • Memory optimization: covered in the next question.

  • Layout optimization: The essence of layout optimization is to reduce the hierarchy of the View. Common layout optimization schemes are as follows

    • Prefer a RelativeLayout over nested LinearLayouts; a single RelativeLayout can reduce the View hierarchy
    • Extract commonly used layout components and reuse them with the `<include>` tag
    • Use the `<ViewStub>` tag to lazily load layouts that are rarely shown
    • Use the `<merge>` tag to reduce the nesting level of the layout
  • Network optimization: Common network optimization schemes are as follows

    • Minimize network requests and merge as many as you can
    • Avoid DNS resolution. Searching by domain name may take hundreds of milliseconds and there may be the risk of DNS hijacking. You can add dynamic IP address update based on service requirements, or switch to domain name access when IP address access fails.
    • Large amounts of data are loaded in pagination mode
    • Network data transmission uses GZIP compression
    • Add network data to the cache to avoid frequent network requests
    • When uploading images, compress them as necessary
  • Installation package optimization: The core of installation package optimization is to reduce the volume of APK. Common solutions are as follows

    • Obfuscation can reduce APK volume to some extent, but the actual effect is minimal
    • Reduce unnecessary resource files in the application, such as pictures, and try to compress pictures without affecting the effect of the APP, which has a certain effect
    • When using SO libraries, keep only the armeabi-v7a variant and delete the other ABI variants. In 2018, v7a libraries meet most of the market's requirements; phones from eight or nine years ago may not be covered, but we no longer need to adapt to such old devices. In practice the effect on APK size is quite significant: if you use many SO libraries, say 10 MB per ABI, keeping only the v7a variant and deleting the armeabi and arm64-v8a libraries saves about 20 MB.

Android Memory Optimization

In my opinion, Memory optimization for Android is divided into two parts: avoiding memory leaks and expanding memory.

The essence of a memory leak is that a longer-lived object holds a reference to a shorter-lived object, preventing the shorter-lived object from being reclaimed.

Common memory leaks
  • Memory leaks caused by the singleton pattern. The most common example: a singleton takes a Context at creation and an Activity Context is passed in. Because of the singleton's static nature, its life cycle lasts from the loading of the singleton class until the end of the application, so even after the Activity has finished, the singleton still holds a reference to it, causing a memory leak. The solution is simple: don't pass an Activity Context; use the Application Context to avoid the leak.
  • Memory leaks caused by static variables. Static variables live in the method area, and their lifetime runs from class loading to the end of the program, which is very long. The most common example is creating, in an Activity, a static variable whose construction requires the Activity's this reference. In that case, even if the Activity calls finish(), it leaks: the static variable lives for almost the entire application life cycle and keeps holding a reference to the Activity.
  • Memory leaks caused by non-static inner classes. A non-static inner class holds a reference to its outer class, which can cause leaks. The most common example is using a Handler or Thread in an Activity: a Handler or Thread created as a non-static inner class holds a reference to the current Activity while executing a delayed operation, and if the Activity finishes before the delayed operation completes, it leaks. There are two solutions. The first is to use a static inner class that references the Activity through a weak reference. The second is to call `handler.removeCallbacksAndMessages(null)` in the Activity's onDestroy to cancel the delayed events.
  • Memory leaks caused by using resources that are not shut down in time. Common examples are: various data streams are not closed in a timely manner, Bitmap is not closed in a timely manner, etc.
  • Third-party libraries not unregistered in time. Several libraries provide register and unregister APIs; the most common is EventBus, which, as we all know, is registered in onCreate and unregistered in onDestroy. If it is never unregistered, EventBus, being a singleton, holds a reference to the Activity forever, causing a leak. RxJava is also common: after a delayed operation with the timer operator, call `disposable.dispose()` in onDestroy to cancel it.
  • Memory leaks caused by property animations. A common example is exiting an Activity while a property animation is running; the View object still holds a reference to the Activity, causing a leak. The solution is to cancel the property animation by calling its cancel() method in onDestroy.
  • Memory leaks caused by WebView. WebView is special in that it can leak even after its destroy method is called. In practice, the best way to avoid WebView leaks is to run the WebView's Activity in a separate process and kill that process when the Activity ends. I recall that Alibaba's DingTalk starts its WebView in a separate process, presumably for this reason as well.
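
The static-inner-class-plus-weak-reference fix mentioned above can be sketched in plain Java. FakeActivity and SafeHandler are hypothetical stand-ins; the point is that the static nested class reaches the Activity only through a WeakReference, so a pending delayed message no longer pins the Activity in memory:

```java
import java.lang.ref.WeakReference;

// Sketch of the "static inner class + weak reference" handler pattern.
// FakeActivity stands in for an Android Activity; the pattern is the point.
public class LeakFreeHandlerDemo {
    static class FakeActivity {
        String update(String msg) { return "UI updated: " + msg; }
    }

    // Static nested class: holds the activity only weakly, so the GC may
    // reclaim the activity even while a delayed message is still pending.
    static class SafeHandler {
        final WeakReference<FakeActivity> ref;
        SafeHandler(FakeActivity activity) { this.ref = new WeakReference<>(activity); }

        String handleMessage(String msg) {
            FakeActivity activity = ref.get();
            if (activity == null) return "activity gone, message dropped";
            return activity.update(msg);
        }
    }

    public static void main(String[] args) {
        FakeActivity activity = new FakeActivity();
        SafeHandler handler = new SafeHandler(activity);
        System.out.println(handler.handleMessage("hello")); // activity still alive

        handler.ref.clear(); // simulate the Activity being destroyed and collected
        System.out.println(handler.handleMessage("world")); // safely does nothing
    }
}
```

The demo clears the reference explicitly only to make the behavior deterministic; in a real app the GC clears it once the destroyed Activity has no strong references.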

Expand the memory

Why expand memory? In real development we sometimes have to use many third-party SDKs, and these vary in quality: large vendors' SDKs may leak little, but smaller SDKs are less reliable. Since we cannot change their code, the best way to cope is to expand memory.

There are two common ways to expand memory: one is to add the `android:largeHeap="true"` attribute to the application tag in the manifest; the other is to start multiple processes within one application to increase its total memory. The second method is actually quite common; for example, an SDK I once used ran its Service in a separate process.

In short, memory optimization in Android is about increasing supply and reducing waste: expanding memory on the one hand and avoiding memory leaks on the other.

Binder mechanism

In Linux, processes are independent of each other in order to avoid interference with other processes. There is also user space and kernel space within a process. The isolation here is divided into two parts, inter-process isolation and intra-process isolation.

If there is isolation between processes, there is interaction. Interprocess communication is IPC, and user space and kernel space communication is system call.

In order to ensure the independence and security of Linux, processes cannot directly access each other. Android is based on Linux, so it also needs to solve the problem of interprocess communication.

There are many ways for Linux processes to communicate, such as pipes, sockets, and so on. Why does Android use Binder for interprocess communication instead of these Linux mechanisms?

There are two main considerations: performance and security.

  • Performance. Performance requirements on mobile devices are stricter. Traditional Linux IPC such as pipes and sockets copies data twice, while Binder copies it only once, so Binder outperforms traditional interprocess communication.

  • Security. Traditional Linux process communication does not involve authentication between the communicating parties, which can cause some security issues. The Binder mechanism incorporates authentication to improve security.

Binder is based on a client-server (C/S) architecture and has four main components.

  • The Client. Client process.
  • Server. Server process.
  • ServiceManager. Provides the ability to register, query, and return proxy service objects.
  • Binder driver. Responsible for establishing the Binder connection between processes and for the low-level data interaction between them.

The main flow of Binder mechanism is as follows:

  • The server uses Binder drivers to register our services with the ServiceManager.
  • Clients use Binder drivers to query services registered with the ServiceManager.
  • The ServiceManager returns the server's proxy object to the client via the Binder driver.
  • After receiving the proxy object, the client can use it to communicate with the server process.

The principle of LruCache

The core principle of LruCache is the effective use of LinkedHashMap: it has a LinkedHashMap member variable inside. Four methods are worth focusing on: the constructor, get, put, and trimToSize.

  • Constructor: LruCache's constructor does two things: it sets maxSize and creates a LinkedHashMap. Note that LruCache sets the LinkedHashMap's accessOrder to true. accessOrder controls the iteration order of a LinkedHashMap: true means iterate in access order, false means iterate in insertion order. Insertion order is the usual need, so accessOrder defaults to false, but LruCache needs access order and therefore sets accessOrder to true explicitly.

  • get method: essentially calls LinkedHashMap's get; since accessOrder is true, each call to get moves the accessed element to the tail of the LinkedHashMap.

  • put method: essentially calls LinkedHashMap's put. Due to the nature of LinkedHashMap, each put also places the new element at the tail. After the addition, trimToSize is called to ensure the cached memory does not exceed maxSize.

  • trimToSize method: inside trimToSize a while(true) loop keeps removing elements from the head of the LinkedHashMap until the cached size drops below maxSize, then breaks out of the loop.

In fact, we can summarize here, why is this algorithm called least recently used algorithm? The principle is simple: each of our put or get is treated as a visit, and due to the nature of LinkedHashMap, the elements accessed are placed at the end of each visit. When our memory reaches the threshold, the trimToSize method is triggered to remove the element at the head of the LinkedHashMap until the current memory is less than maxSize. The reason for removing the front element is obvious: the most recently accessed elements are placed at the tail, and the elements at the front must be the least recently used elements, so they should be removed first when memory is low.
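
As a rough illustration of the mechanism (not Android's actual LruCache, which counts sizes itself and loops in trimToSize), here is a minimal LRU cache in plain Java built directly on LinkedHashMap's accessOrder flag and removeEldestEntry hook:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache on top of LinkedHashMap, illustrating the mechanism
// LruCache relies on: accessOrder=true moves each accessed entry to the
// tail, and eviction removes from the head (the least recently used end).
public class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public MiniLruCache(int maxSize) {
        super(16, 0.75f, true);   // accessOrder = true: iterate in access order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // plays the role of trimToSize here
    }

    public static void main(String[] args) {
        MiniLruCache<String, Integer> cache = new MiniLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");           // access "a": it moves to the tail
        cache.put("c", 3);        // evicts the head, which is now "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Because `a` was re-accessed before `c` was inserted, the head (least recently used) entry at eviction time is `b`, so the surviving keys are `[a, c]`.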

DiskLruCache principle

Design an asynchronous loading frame for images

To design a picture loading framework, we must use the idea of three level cache of picture loading. The three-level cache is divided into memory cache, local cache and network cache.

  • Memory cache: caches the Bitmap in memory; fast, but memory capacity is small.
  • Local cache: caches images to files; slower, but larger capacity.
  • Network cache: fetches the image from the network; speed depends on the network.

If we were designing an image loading framework, the process would look something like this:

  • After getting the image URL, first look for the Bitmap in the memory cache; if found, load it directly.
  • If not found in memory, look in the local cache; if found there, load it directly.
  • If not found in either, download the image from the network; after downloading, load it and put it into both the memory cache and the local cache.

Those are the basic concepts. A concrete code implementation involves roughly the following parts:

  • First we need to determine our memory cache, which is usually LruCache.
  • Determine the local cache, which is usually DiskLruCache. Note that the cached image's file name is usually the MD5 hash of its URL, so the file name does not directly expose the image URL.
  • Once the memory cache and local cache are determined, we create a new class, MemeryAndDiskCache (the name doesn't matter), containing the LruCache and DiskLruCache mentioned above. In it we define two methods, getBitmap and putBitmap, for retrieving and caching images. The internal logic is simple: getBitmap fetches the Bitmap with memory taking priority over disk; putBitmap caches to memory first and then to disk.
  • After the cache policy class is determined, we create an ImageLoader class that must contain two methods: one to display images, `displayImage(url, imageView)`, and one to fetch images from the network, `downloadImage(url, imageView)`. The first thing displayImage does is call `imageView.setTag(url)` to bind the url to the imageView; this avoids the image-mismatch bug caused by ImageView reuse when loading network images in a list. Then it queries MemeryAndDiskCache: if the image is cached, load it directly; if not, call the network method. There are many ways to fetch images from the network; I generally use OkHttp + Retrofit. When the image comes back, check `imageView.getTag()`: if it equals the url, load the image; if not, don't. This avoids mismatches during asynchronous loading in lists. The downloaded image is then cached via MemeryAndDiskCache.
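
The design above can be sketched in plain Java. This is only a shape sketch: HashMaps stand in for LruCache and DiskLruCache, a byte[] stands in for Bitmap, and hashKey shows the MD5-of-URL file naming; MemeryAndDiskCache, getBitmap, and putBitmap are the names used in the text.

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-level cache described above. HashMaps stand in for
// LruCache (memory) and DiskLruCache (disk); byte[] stands in for Bitmap.
public class MemeryAndDiskCache {
    private final Map<String, byte[]> memory = new HashMap<>(); // stand-in for LruCache
    private final Map<String, byte[]> disk = new HashMap<>();   // stand-in for DiskLruCache

    // MD5 of the URL: the usual disk-cache file name, so the name
    // does not expose the image URL directly.
    static String hashKey(String url) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(url.getBytes("UTF-8"));
        return String.format("%032x", new BigInteger(1, d)); // 32 hex chars
    }

    byte[] getBitmap(String url) throws Exception {
        String key = hashKey(url);
        byte[] bmp = memory.get(key);          // memory first
        if (bmp != null) return bmp;
        bmp = disk.get(key);                   // then disk
        if (bmp != null) memory.put(key, bmp); // promote a disk hit to memory
        return bmp;                            // null -> caller hits the network
    }

    void putBitmap(String url, byte[] bmp) throws Exception {
        String key = hashKey(url);
        memory.put(key, bmp);                  // memory first, then disk
        disk.put(key, bmp);
    }

    public static void main(String[] args) throws Exception {
        MemeryAndDiskCache cache = new MemeryAndDiskCache();
        System.out.println(cache.getBitmap("http://example.com/a.png") == null); // true: miss
        cache.putBitmap("http://example.com/a.png", new byte[]{1, 2, 3});
        System.out.println(cache.getBitmap("http://example.com/a.png").length);  // 3: hit
        System.out.println(hashKey("http://example.com/a.png").length());        // 32
    }
}
```

The URL is illustrative; in the real thing the null return from getBitmap is where ImageLoader would fall through to the network.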

Event distribution mechanism in Android

When our finger touches the screen, the event actually goes through a process like Activity -> ViewGroup -> View to the View that finally responds to our touch event.

When it comes to event distribution, the following methods are essential: dispatchTouchEvent(), onInterceptTouchEvent(), onTouchEvent. Next, follow the Activity -> ViewGroup -> View process to outline the event distribution mechanism.

When our finger touches the screen, an Action_Down event is triggered, and the Activity on the current page responds first, going to the Activity’s dispatchTouchEvent() method. The logic behind this method is simply this:

  • Call `getWindow().superDispatchTouchEvent()`.
  • If the previous step returns true, return true; otherwise return the Activity's own `onTouchEvent()`.

If getWindow().superDispatchTouchEvent() returns true, the current event has been handled and there is no need to call the Activity's own onTouchEvent. Otherwise the event was not handled, so the Activity handles it itself by calling its own onTouchEvent.

As we all know, the getWindow() method returns an object of type Window, and in Android PhoneWindow is the only implementation class of Window. So this essentially calls PhoneWindow's superDispatchTouchEvent().

PhoneWindow's superDispatchTouchEvent() actually calls `mDecor.superDispatchTouchEvent(event)`. mDecor is the DecorView, a subclass of FrameLayout, and DecorView's superDispatchTouchEvent() calls `super.dispatchTouchEvent()`. It should be obvious at this point: DecorView is a subclass of FrameLayout, which is a subclass of ViewGroup, so this essentially calls the ViewGroup's dispatchTouchEvent().

Now that our event has been passed from the Activity to the ViewGroup, let’s examine the event handling methods in the ViewGroup.

The logic in dispatchTouchEvent() in ViewGroup looks like this:

  • Check via `onInterceptTouchEvent()` whether the current ViewGroup intercepts the event. By default, a ViewGroup does not intercept.
  • If intercepted, return its own `onTouchEvent()`.
  • If not intercepted, rely on the return value of `child.dispatchTouchEvent()`: if it is true, return true; otherwise return its own `onTouchEvent()`. This is how unhandled events are passed upward.

In general, the onInterceptTouchEvent() of a ViewGroup returns false, that is, no interception. The important thing to note here is the event sequence: a Down event, some Move events, then an Up event. From Down to Up is one complete event sequence, corresponding to the finger pressing down, moving, and lifting. If the ViewGroup intercepts the Down event, the subsequent events of the sequence go to the ViewGroup's onTouchEvent. If the ViewGroup intercepts an event other than Down, an Action_Cancel event is sent to the child View that handled the Down event, notifying it that the rest of the sequence has been taken over by the ViewGroup so that it can restore its previous state.

Here is a common example: a RecyclerView contains many buttons. We press a button, slide some distance, and release; the RecyclerView scrolls along and the button's click event is not triggered. When we press, the button receives the Action_Down event, and normally the rest of the sequence would be handled by the button. But once we swipe a little, the RecyclerView detects the gesture as a scroll, intercepts the event sequence, and handles it in its own onTouchEvent(), which appears on screen as the list scrolling. Since the button is still in the pressed state, the RecyclerView sends an Action_Cancel during interception to tell the button to restore its previous state.

Event distribution eventually reaches the View's dispatchTouchEvent(). A View's dispatchTouchEvent() has no onInterceptTouchEvent(), which is easy to understand: a View is not a ViewGroup and contains no child Views, so there is nothing to intercept. Ignoring a few details, the View's dispatchTouchEvent() directly returns its own onTouchEvent(). If onTouchEvent() returns true, the event is consumed; otherwise the unhandled event is passed upward until some View handles it, or until it ends at the Activity's onTouchEvent().

People often ask here about the difference between onTouch and onTouchEvent. Both methods appear in the View's dispatchTouchEvent() logic:

  • If the touchListener is not null, the View is enabled, and onTouch returns true: when all three conditions hold, dispatchTouchEvent returns true and `onTouchEvent()` is not called.
  • If any one of these conditions fails, it falls through to `onTouchEvent()`. So onTouch runs before onTouchEvent.
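
The onTouch-versus-onTouchEvent ordering can be simulated in plain Java. FakeView and the string "events" are hypothetical simplifications; the real dispatchTouchEvent has more branches:

```java
// Plain-Java sketch of View.dispatchTouchEvent(): the listener's onTouch
// runs first, and onTouchEvent is skipped only when the listener is set,
// the view is enabled, and onTouch returns true. Names are simplified.
public class DispatchDemo {
    interface OnTouchListener { boolean onTouch(String event); }

    static class FakeView {
        OnTouchListener listener;
        boolean enabled = true;
        StringBuilder log = new StringBuilder();

        boolean dispatchTouchEvent(String event) {
            if (listener != null && enabled && listener.onTouch(event)) {
                return true;            // consumed by the listener: onTouchEvent not called
            }
            return onTouchEvent(event); // otherwise fall through to onTouchEvent
        }

        boolean onTouchEvent(String event) {
            log.append("onTouchEvent:").append(event).append(";");
            return true;
        }
    }

    public static void main(String[] args) {
        FakeView view = new FakeView();
        view.listener = e -> { view.log.append("onTouch:").append(e).append(";"); return true; };
        view.dispatchTouchEvent("DOWN"); // listener consumes it, onTouchEvent skipped
        view.listener = e -> false;      // listener no longer consumes
        view.dispatchTouchEvent("UP");   // falls through to onTouchEvent
        System.out.println(view.log);    // onTouch:DOWN;onTouchEvent:UP;
    }
}
```

The log shows both orderings: DOWN stops at the listener, UP reaches onTouchEvent because the listener returned false.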

View drawing process

The drawing of a View starts from the performTraversals() method of ViewRootImpl, which calls mView.measure(), mView.layout(), and mView.draw() in sequence.

The drawing process of View is divided into three steps: measurement, layout and drawing, which correspond to three methods: Measure, layout and draw.

  • Measurement phase. The measure method will be called by the parent View. After some optimization and preparation in the measure method, the onMeasure method will be called for actual self-measurement. The onMeasure method does different things in a View and ViewGroup:

    • View. The onMeasure method in a View calculates the View's own size and stores it via setMeasuredDimension.
    • ViewGroup. The onMeasure method in a ViewGroup calls the measure method of every child View so each measures and saves itself, then calculates its own size from the sizes and positions of its children and saves it.
  • Layout stage. The Layout method is called by the parent View. The Layout method saves the size and position passed in by the parent View and calls onLayout for the actual internal layout. OnLayout also does different things in View and ViewGroup:

    • View. Since a View has no child Views, its onLayout does nothing.
    • ViewGroup. The onLayout method in a ViewGroup calls the layout method of every child View, passing each its size and position and letting it complete its own internal layout.
  • Drawing phase. The draw method does some scheduling, and then calls the onDraw method to draw the View itself. The scheduling flow of draw method is roughly like this:

    • Draw the background, corresponding to the `drawBackground(Canvas)` method.
    • Draw the body, corresponding to the `onDraw(Canvas)` method.
    • Draw the child Views, corresponding to the `dispatchDraw(Canvas)` method.
    • Draw the scroll-related decorations and the foreground, corresponding to `onDrawForeground(Canvas)`.
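
The measurement step can be illustrated with a pure-Java sketch of how a View reconciles its desired size with the constraint handed down by its parent. The logic mirrors View.resolveSize; unlike the real MeasureSpec, which packs mode and size into a single int, this sketch keeps them as separate parameters for clarity:

```java
// Pure-Java sketch of the size negotiation a View performs in onMeasure.
// The constants mirror android.view.View.MeasureSpec's three modes.
public class MeasureSpecDemo {
    static final int UNSPECIFIED = 0; // parent imposes no constraint
    static final int EXACTLY = 1;     // parent fixed the size (match_parent / exact dp)
    static final int AT_MOST = 2;     // parent set an upper bound (wrap_content)

    static int resolveSize(int desired, int mode, int specSize) {
        switch (mode) {
            case EXACTLY:  return specSize;                    // must use the parent's size
            case AT_MOST:  return Math.min(desired, specSize); // capped by the parent
            default:       return desired;                     // UNSPECIFIED: size freely
        }
    }

    public static void main(String[] args) {
        // desired 300 forced to 500; 300 fits under 500; 700 capped at 500; 300 free
        System.out.println(resolveSize(300, EXACTLY, 500) + " "
                + resolveSize(300, AT_MOST, 500) + " "
                + resolveSize(700, AT_MOST, 500) + " "
                + resolveSize(300, UNSPECIFIED, 500)); // 500 300 500 300
    }
}
```

A custom View would call the real `View.resolveSize(desired, measureSpec)` inside onMeasure and pass the results to setMeasuredDimension.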

Android source code common design patterns and their own common design patterns in development

How does Android interact with JS

In Android, the interaction between Android and JS is divided into two aspects: Android calls the method in JS, JS calls the method in Android.

  • Android calling JS. There are two ways for Android to call JS:

    • `webView.loadUrl("javascript:jsMethodName()")`. The advantage of this method is that it is very simple; the disadvantage is that there is no return value. If you need the return value of the JS method, JS has to call an Android method to deliver it.
    • `webView.evaluateJavascript("javascript:jsMethodName()", valueCallback)`. Compared with loadUrl, the advantage is that the ValueCallback receives the return value of the JS method. The disadvantage is that the method is only available from Android 4.4, so compatibility is worse. However, in 2018 most apps on the market require at least 4.4, so I don't think this compatibility is a problem.
  • JS calling Android. There are three ways for JS to call Android:

    • `webView.addJavascriptInterface()`. This is the official solution for JS to call Android methods. Note that methods exposed to JS must be annotated with @JavascriptInterface to avoid security vulnerabilities. The downside is that before Android 4.2 this approach had a security hole, which has since been fixed. Again, compatibility is not an issue in 2018.
    • Override WebViewClient's shouldOverrideUrlLoading() method to intercept the URL, parse it, and call an Android method if it matches the convention agreed by both sides. The advantage is that it avoids the pre-4.2 security hole; the obvious disadvantage is that the call cannot directly return a value to JS, so the return value has to be delivered by Android calling a JS method.
    • Override WebChromeClient's onJsPrompt() method. As in the previous approach, parse the URL and, if it matches the agreed convention, call the Android method. If a return value is needed, use `result.confirm("return value of the Android method")` to pass the result back to JS. This approach has no security holes and no compatibility restrictions, and obtaining the return value is also easy. Note that WebChromeClient also has onJsAlert and onJsConfirm methods; why not use those two? Because onJsAlert has no return value and onJsConfirm returns only true or false, while in front-end development the prompt method is rarely called, so onJsPrompt is the one to use.

Principle of thermal repair

Activity startup process

SparseArray principle

SparseArray, generally speaking, is a data structure Android uses to replace HashMap; to be precise, it replaces a HashMap whose keys are of type Integer and whose values are of type Object. Note that SparseArray only implements the Cloneable interface, not Map, so it cannot be declared as a Map. Internally, SparseArray consists of two arrays: mKeys of type int[], which holds all the keys, and mValues of type Object[], which holds all the values. SparseArray is most commonly compared with HashMap. Because it consists of two plain arrays, SparseArray uses less memory than a HashMap. As we all know, add, delete, update, and lookup operations all need to locate the key-value pair first; SparseArray addresses entries by binary search, which is obviously slower than the near-constant-time lookup of a HashMap. Binary search requires a sorted array, and indeed SparseArray keeps mKeys sorted by key. Taken together, SparseArray uses less space than HashMap but is less efficient: a typical trade of time for space, suitable for smaller collections. From a source code perspective, the methods to watch are SparseArray's remove(), put(), and gc().

  • remove(). SparseArray's remove() does not simply delete and then compact the array. Instead, the value to be deleted is set to a static DELETED sentinel (essentially an Object), and SparseArray's mGarbage flag is set to true, so that gc() can be called at an appropriate time to compact the array and avoid wasting space. This also improves efficiency: if a key equal to the deleted key is added later, the new value simply overwrites the DELETED slot.
  • gc(). SparseArray's gc() method has absolutely nothing to do with the JVM's GC. Inside gc() is a for loop that compacts the array by moving the non-DELETED key-value pairs forward over the DELETED slots. Afterwards, mGarbage is set back to false to avoid wasting memory.
  • put(). The put method's logic: if binary search finds the key in the mKeys array, the value is overwritten. If it is not found, the insertion index closest to the key is obtained; if the value at that index is DELETED, the new value directly overwrites the DELETED slot, avoiding array element movement and thus improving efficiency. If it is not DELETED, mGarbage is checked; if true, gc() is called to compact the array, then the appropriate index is found again, the key-value pairs after it are moved back, and the new pair is inserted, a process that may trigger array growth.
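
The ideas above can be condensed into a minimal plain-Java sketch (MiniSparseArray is a hypothetical name; it has a fixed capacity and omits gc(), growth, and compaction):

```java
import java.util.Arrays;

// Minimal sketch of SparseArray's core ideas: parallel sorted key/value
// arrays, binary search for addressing, and a DELETED sentinel so that
// remove() can defer compaction. Greatly simplified.
public class MiniSparseArray {
    private static final Object DELETED = new Object();
    private final int[] keys = new int[8];
    private final Object[] values = new Object[8];
    private int size = 0;

    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {
            values[i] = value;                 // key exists (even if DELETED): overwrite
        } else {
            i = ~i;                            // insertion point from binarySearch
            if (i < size && values[i] == DELETED) {
                keys[i] = key;                 // reuse the DELETED slot in place
                values[i] = value;
                return;
            }
            System.arraycopy(keys, i, keys, i + 1, size - i);   // shift right
            System.arraycopy(values, i, values, i + 1, size - i);
            keys[i] = key;
            values[i] = value;
            size++;
        }
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return (i < 0 || values[i] == DELETED) ? null : values[i];
    }

    public void remove(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) values[i] = DELETED;       // mark, don't compact
    }

    public static void main(String[] args) {
        MiniSparseArray sa = new MiniSparseArray();
        sa.put(10, "a");
        sa.put(5, "b");
        sa.put(20, "c");
        System.out.println(sa.get(5));   // b
        sa.remove(5);
        System.out.println(sa.get(5));   // null (slot holds DELETED)
        sa.put(5, "b2");                 // overwrites the DELETED slot, no shifting
        System.out.println(sa.get(5));   // b2
    }
}
```

The re-put of key 5 shows the efficiency win described above: no elements move because the DELETED slot is simply overwritten.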

How to avoid OOM loading images

We know the formula for a Bitmap's size in memory: width in pixels * height in pixels * memory per pixel. Accordingly, there are two ways to avoid OOM: scale down the width and height, or reduce the memory used per pixel.

  • Scale down the width and height. We know that Bitmaps are created via BitmapFactory's factory methods: decodeFile(), decodeStream(), decodeByteArray(), decodeResource(). Each of these takes an Options parameter, an inner class of BitmapFactory that stores Bitmap information. Options has a property inSampleSize; setting it above 1 shrinks the image's width and height, thereby reducing the Bitmap's memory use. Note that inSampleSize should be a power of 2; if it is less than 1, the code forces it to 1.
  • Reduce the memory per pixel. Options has a property inPreferredConfig, which defaults to ARGB_8888 and determines the size of each pixel. We can change it to RGB_565 or ARGB_4444 to halve the memory.
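
The inSampleSize calculation for the first approach can be written as a pure function. This mirrors the commonly documented halving pattern (keeping inSampleSize a power of 2); the figures in main are illustrative:

```java
// Pure-Java sketch of the usual inSampleSize calculation: keep doubling
// the (power-of-two) sample size while the halved dimensions still cover
// the requested size. No BitmapFactory involved; just the arithmetic.
public class SampleSizeDemo {
    static int calculateInSampleSize(int rawWidth, int rawHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawHeight > reqHeight || rawWidth > reqWidth) {
            int halfHeight = rawHeight / 2;
            int halfWidth = rawWidth / 2;
            // keep doubling while both halved dimensions still cover the request
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // a 4000x3000 photo shown in a 1000x750 view decodes at 1/4 per axis,
        // so memory drops by inSampleSize^2 (ARGB_8888: ~48 MB down to ~3 MB)
        System.out.println("4000x3000 -> " + calculateInSampleSize(4000, 3000, 1000, 750));
        // an image already smaller than the target is not sampled at all
        System.out.println("800x600 -> " + calculateInSampleSize(800, 600, 1000, 750));
    }
}
```

The returned value would be assigned to `options.inSampleSize` before the real decode call.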

Loading huge images

A large high-definition image such as Riverside Scene at Qingming Festival cannot possibly be displayed on the screen all at once and, given memory constraints, cannot all be loaded into memory at once. This calls for regional loading, and Android has a class responsible for it: BitmapRegionDecoder. Usage is simple: create an instance via BitmapRegionDecoder.newInstance(), then call `decodeRegion(Rect rect, BitmapFactory.Options options)`. The first argument rect is the region to display; the second is BitmapFactory's inner Options class.

Android third-party library source code analysis

Because the source code analyses are too long, I post links to them here (on Juejin).

OkHttp

OkHttp source code analysis

Retrofit

Retrofit source analysis 1 Retrofit source analysis 2 Retrofit source analysis 3

RxJava

RxJava source code analysis

Glide

Glide source code analysis

EventBus

EventBus source code analysis

The general flow is as follows.

Register:

  • Gets the subscriber’s Class object
  • Use reflection to find a collection of event handlers in subscribers
  • Traverse the collection of event-handling methods, calling `subscribe(subscriber, subscriberMethod)` for each; inside the subscribe method:
    • Get the event type eventType handled by subscriberMethod
    • Bind subscriber and method subscriberMethod together to form a Subscription object
    • throughsubscriptionsByEventType.get(eventType)Get Subscription collection
      • If the Subscription collection is empty, a new collection is created. This step is intended to delay the initialization of the collection
      • Take the Subscription collection and iterate through it, adding new Subscription objects to the appropriate location by comparing the priority of event handling
    • throughtypesBySubscriber.get(subscriber)Gets a collection of event types
      • If the set of event types is empty, a new set is created. This step is intended to delay the initialization of the set
      • Take the set of event types and add the new event type to the set
    • Check whether the current event type is sticky
    • If the current event type is not sticky,The subscribe (subscriber, subscriberMethod)To this end
    • If it is sticky, it determines the inheritance property of an event in EventBus. Default is true
      • If the event inheritance is true, the Map type stickEvents are iterated over and the isAssignableFrom method is used to determine whether the current event is the parent of the iterated event, and if so, the event is sent
      • If event inheritance is false, passstickyEvents.get(eventType)Gets the event and sends it

Post:

  • postSticky
    • Add the event to the Map stickyEvents
    • Call the post method
  • post
    • Add the event to the event queue of the current thread
    • A while loop continually pulls events from the queue and sends each one by calling the postSingleEvent method
    • In postSingleEvent, the event-inheritance setting is checked (true by default)
      • If event inheritance is true, find all parent types of the current event and call postSingleEventForEventType for each type to send the event
      • If event inheritance is false, only events of the current event type are sent
        • In postSingleEventForEventType, get the collection of Subscriptions via subscriptionsByEventType.get(eventClass)
        • Iterate over the collection, calling postToSubscription to send the event
          • postToSubscription distinguishes four thread modes
            • POSTING: call invokeSubscriber(subscription, event) to handle the event, which is essentially method.invoke() reflection
            • MAIN: if already on the main thread, call invokeSubscriber directly; otherwise switch to the main thread via a Handler and then call invokeSubscriber
            • BACKGROUND: if not on the main thread, call invokeSubscriber directly; otherwise start a background thread in which invokeSubscriber is called
            • ASYNC: always start a thread in which invokeSubscriber is called
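The four thread modes above can be sketched in plain Java (a hypothetical Dispatcher, not the real EventBus code; the real implementation uses an Android Handler for MAIN, which is replaced here by an executor):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified sketch of postToSubscription's four thread modes.
class Dispatcher {
    enum ThreadMode { POSTING, MAIN, BACKGROUND, ASYNC }

    private final ExecutorService pool = Executors.newCachedThreadPool();

    // invokeSubscriber stands in for the reflective method.invoke() call.
    void postToSubscription(ThreadMode mode, boolean isMainThread, Runnable invokeSubscriber) {
        switch (mode) {
            case POSTING:
                invokeSubscriber.run();               // run on the posting thread
                break;
            case MAIN:
                if (isMainThread) invokeSubscriber.run();
                else pool.execute(invokeSubscriber);  // stand-in for Handler.post()
                break;
            case BACKGROUND:
                if (!isMainThread) invokeSubscriber.run();
                else pool.execute(invokeSubscriber);  // move off the main thread
                break;
            case ASYNC:
                pool.execute(invokeSubscriber);       // always a separate thread
                break;
        }
    }
}
```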

Unregister:

  • Remove all Subscriptions associated with the subscriber from subscriptionsByEventType
  • Remove all event types associated with the subscriber from typesBySubscriber
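The register/post/unregister bookkeeping described above can be sketched in plain Java (a hypothetical MiniBus, not the real EventBus; sticky events, priorities, and thread modes are omitted):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal EventBus-style sketch: reflection-based registration plus the
// two maps described above (subscriptionsByEventType, typesBySubscriber).
class MiniBus {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Subscribe {}

    // eventType -> list of (subscriber, method) pairs, i.e. Subscriptions
    private final Map<Class<?>, List<Object[]>> subscriptionsByEventType = new HashMap<>();
    // subscriber -> event types it listens to
    private final Map<Object, List<Class<?>>> typesBySubscriber = new HashMap<>();

    void register(Object subscriber) {
        for (Method m : subscriber.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(Subscribe.class) && m.getParameterCount() == 1) {
                m.setAccessible(true); // allow local/anonymous subscriber classes
                Class<?> eventType = m.getParameterTypes()[0];
                subscriptionsByEventType
                        .computeIfAbsent(eventType, k -> new ArrayList<>()) // lazy init
                        .add(new Object[]{subscriber, m});
                typesBySubscriber
                        .computeIfAbsent(subscriber, k -> new ArrayList<>())
                        .add(eventType);
            }
        }
    }

    void post(Object event) {
        List<Object[]> subs = subscriptionsByEventType.get(event.getClass());
        if (subs == null) return;
        for (Object[] s : subs) {
            try {
                ((Method) s[1]).invoke(s[0], event); // invokeSubscriber: reflection
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }
    }

    void unregister(Object subscriber) {
        List<Class<?>> types = typesBySubscriber.remove(subscriber);
        if (types == null) return;
        for (Class<?> t : types) {
            List<Object[]> subs = subscriptionsByEventType.get(t);
            if (subs != null) subs.removeIf(s -> s[0] == subscriber);
        }
    }
}
```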

Data structures and algorithms

Handwrite quicksort
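For reference, a minimal handwritten quicksort using the Lomuto partition scheme (class and method names are my own):

```java
// In-place quicksort: partition around a pivot, then recurse on both sides.
class QuickSort {
    static void sort(int[] a) {
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);   // elements smaller than the pivot
        sort(a, p + 1, hi);   // elements larger than (or equal to) the pivot
    }

    // Lomuto partition: the last element is the pivot.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) swap(a, i++, j);
        }
        swap(a, i, hi);
        return i;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```

Average time complexity is O(n log n), worst case O(n²) on already-sorted input with this pivot choice.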

Handwrite merge sort
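A minimal top-down merge sort sketch (names are my own):

```java
// Merge sort: recursively sort both halves, then merge them via a
// shared temporary buffer. Stable, O(n log n) time, O(n) extra space.
class MergeSort {
    static void sort(int[] a) {
        if (a.length < 2) return;
        sort(a, new int[a.length], 0, a.length - 1);
    }

    private static void sort(int[] a, int[] tmp, int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, tmp, lo, mid);        // sort left half
        sort(a, tmp, mid + 1, hi);    // sort right half
        merge(a, tmp, lo, mid, hi);   // merge the two sorted halves
    }

    private static void merge(int[] a, int[] tmp, int lo, int mid, int hi) {
        int i = lo, j = mid + 1, k = lo;
        while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, lo, a, lo, hi - lo + 1);
    }
}
```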

Handwrite a heap and heap sort
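A sketch of a max-heap-based heap sort (names are my own): build a max heap in the array, then repeatedly swap the root to the end and sift down.

```java
// Heap sort: O(n log n) time, O(1) extra space, not stable.
class HeapSort {
    static void sort(int[] a) {
        int n = a.length;
        // Build the max heap, sifting down from the last non-leaf node.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // Move the current maximum to the end, then restore the heap.
        for (int end = n - 1; end > 0; end--) {
            swap(a, 0, end);
            siftDown(a, 0, end);
        }
    }

    // Restore the max-heap property for the subtree rooted at i,
    // considering only the first n elements.
    private static void siftDown(int[] a, int i, int n) {
        while (true) {
            int left = 2 * i + 1, right = left + 1, largest = i;
            if (left < n && a[left] > a[largest]) largest = left;
            if (right < n && a[right] > a[largest]) largest = right;
            if (largest == i) return;
            swap(a, i, largest);
            i = largest;
        }
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```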

Talk about the differences between the sorting algorithms (time and space complexity)

What problems have you solved at work, and which projects gave you a sense of accomplishment? (This question will definitely come up, so be sure to prepare for it.)

In fact, this question depends on daily accumulation. For me, the most rewarding work was developing the KCommon project, which greatly improved my development efficiency.
