Class loading process

Class loading in Java is divided into three phases: loading, linking, and initialization.

  • Loading. Loading is the process of reading bytecode from various data sources into JVM memory and mapping it into the JVM's internal data structures, represented as Class objects. The data sources can be JAR files, class files, and so on. If the data does not conform to the ClassFile format, a ClassFormatError is thrown.
  • Linking. Linking is the core of class loading and is itself divided into three steps: verification, preparation, and resolution.
    • Verification. Verification is an important step in securing the JVM: the JVM checks that the bytecode conforms to the specification, preventing malicious or malformed data from endangering JVM operation. If verification fails, a VerifyError is thrown.
    • Preparation. This step allocates memory for static variables and assigns them their default values.
    • Resolution. This step replaces symbolic references with direct references.
  • Initialization. Initialization assigns the declared values to static variables and executes the logic in static initializer blocks.
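
The split between these phases can be observed from code. The sketch below (class names are made up for illustration) shows that merely loading a class with ClassLoader.loadClass does not run static initializers, while the first active use triggers the initialization phase:

```java
// Loading vs. initialization: loadClass() only loads (and links) the class;
// the static initializer <clinit> runs on first active use.
public class InitDemo {
    static final StringBuilder LOG = new StringBuilder();
    private static String result;

    static class Lazy {
        static { LOG.append("init;"); }   // runs during the initialize phase
        static int VALUE = 42;            // real value assigned during initialization
    }

    public static synchronized String run() {
        if (result != null) return result;            // initialization happens only once
        try {
            // Loading (and linking) alone does not trigger initialization.
            InitDemo.class.getClassLoader().loadClass("InitDemo$Lazy");
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
        String beforeUse = LOG.toString();            // still empty: no <clinit> yet
        int v = Lazy.VALUE;                           // first active use triggers <clinit>
        result = beforeUse + "|" + LOG + "|" + v;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Running this prints an empty log segment before the first use and "init;" after it, matching the load/link vs. initialize split above.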

Parent delegation model

Class loaders can be broadly divided into three categories: the bootstrap class loader, the extension class loader, and the application class loader.

  • The bootstrap class loader loads the JAR files in jre/lib.
  • The extension class loader mainly loads the JAR files under jre/lib/ext.
  • The application class loader mainly loads the classes on the classpath.

The so-called parent delegation model means that when a class is to be loaded, the request is first delegated to the parent class loader; the child class loader attempts to load the class only if the parent cannot. The goal is to avoid loading the same class more than once.
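
The delegation logic can be sketched in a few lines. This is a simplified stand-in for ClassLoader.loadClass() (the real method also handles locking, caching, and resolution, and throws ClassNotFoundException instead of returning null):

```java
// A simplified model of parent delegation: ask the parent first, and only
// try to load the class yourself when the parent cannot.
import java.util.Map;

public class DelegatingLoader {
    private final DelegatingLoader parent;          // null for the "bootstrap" loader
    private final Map<String, String> ownClasses;   // stand-in for findClass()

    public DelegatingLoader(DelegatingLoader parent, Map<String, String> ownClasses) {
        this.parent = parent;
        this.ownClasses = ownClasses;
    }

    // Returns the name of the loader that ends up loading the class, or null.
    public String loadClass(String name) {
        if (parent != null) {
            String fromParent = parent.loadClass(name);  // 1. delegate to the parent first
            if (fromParent != null) return fromParent;
        }
        return ownClasses.get(name);                     // 2. only then try ourselves
    }

    public static void main(String[] args) {
        DelegatingLoader bootstrap =
            new DelegatingLoader(null, Map.of("java.lang.String", "bootstrap"));
        DelegatingLoader app =
            new DelegatingLoader(bootstrap, Map.of("com.example.Foo", "app"));
        System.out.println(app.loadClass("java.lang.String")); // core classes: bootstrap
        System.out.println(app.loadClass("com.example.Foo"));  // own classes: app loader
    }
}
```

Because the parent always gets the first chance, core classes like java.lang.String can never be shadowed by an application-supplied class of the same name.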

Collection classes in Java

The principle of HashMap

The inside of a HashMap can be seen as a composite structure of an array and linked lists. The array is divided into buckets, and the hash value determines which bucket a key-value pair is addressed to. Key-value pairs that land in the same bucket form a linked list. Note that when the length of a list exceeds a threshold (8 by default), treeification is triggered and the list is converted into a red-black tree.

Four methods are key to understanding HashMap: hash, put, get, and resize.

  • The hash method. It XORs the high 16 bits of the key's hashCode into the low 16 bits. The reason is that some keys' hashCode values differ mainly in the high bits, while bucket addressing ignores the bits above the current capacity; spreading the high bits downward effectively reduces hash collisions.
  • The put method. The put method has the following steps:
    • Compute the hash value with the hash method and use it to locate the bucket.
    • If there is no collision, put it directly into the bucket.
    • If a collision occurs, the entry is appended to the linked list in that bucket.
    • When the length of the list exceeds the threshold, the tree is triggered and the list is converted into a red-black tree.
    • If the number of entries exceeds the threshold (capacity × load factor), the resize method is called to expand the capacity.


  • The get method. The get method has the following steps:
    • Compute the hash value with the hash method and use it to locate the bucket.
    • If the first node in the addressed bucket matches the key, return its value.
    • If there was a collision, there are two cases: if the bucket holds a tree, call getTreeNode to find the value; if it holds a linked list, traverse the list to find the matching key.


  • The resize method. Resize does two things:
    • Expand the original array to twice its size.
    • Recalculate each node's index and move it into the new array. This step disperses previously colliding nodes into different buckets.
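
The hash-spreading step above can be shown in isolation. The two helper methods below mirror the expressions used inside java.util.HashMap (`h ^ (h >>> 16)` and `(capacity - 1) & hash`); the sample hashCode values are chosen for illustration:

```java
// HashMap's hash spreading: fold the high 16 bits into the low 16 bits,
// because bucket addressing only looks at the low bits.
public class HashSpread {
    public static int spread(int h) {
        return h ^ (h >>> 16);               // XOR high bits into low bits
    }

    public static int bucketIndex(int hash, int capacity) {
        return (capacity - 1) & hash;        // capacity is always a power of two
    }

    public static void main(String[] args) {
        // Two hashCodes that differ only in their high bits...
        int a = 0x00010001, b = 0x00020001;
        int capacity = 16;
        // ...collide without spreading (both map to bucket 1):
        System.out.println(bucketIndex(a, capacity) == bucketIndex(b, capacity));
        // ...but land in different buckets after spreading:
        System.out.println(bucketIndex(spread(a), capacity) == bucketIndex(spread(b), capacity));
    }
}
```

Without spreading, the high-bit difference is masked off by `(capacity - 1)`; after spreading, it influences the bucket index.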


The difference between “sleep” and “wait”

  • The sleep method is a static method of the Thread class, while wait is a method of the Object class.
  • Sleep does not release the lock it holds, while wait does.
  • Sleep can be called anywhere, while wait can only be called inside a synchronized method or synchronized block.
  • Sleep requires a timeout to be passed, while a waiting thread is woken up with notify or notifyAll (or by an optional timeout).

The difference between volatile and synchronized

The difference between final, finally, and finalize

  • final can modify classes, variables, and methods. A final class cannot be inherited; a final variable cannot be reassigned; a final method cannot be overridden.
  • finally is a mechanism to ensure that important code always executes. try-finally or try-catch-finally is typically used, for example, to close a file stream.
  • finalize is a method of the Object class intended to let an object release specific resources before garbage collection. The finalize mechanism is unreliable and was marked deprecated in JDK 9.
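
The finally guarantee is easy to demonstrate: the finally block runs whether the try block returns normally or throws. The example below is a sketch with a simulated resource (in modern code, try-with-resources is usually preferred for closing streams):

```java
// finally runs on both the normal path and the exception path.
import java.util.ArrayList;
import java.util.List;

public class FinallyDemo {
    public static final List<String> log = new ArrayList<>();

    public static String readAndClose(boolean fail) {
        log.clear();
        try {
            log.add("open");                                 // ~ opening a stream
            if (fail) throw new IllegalStateException("read error");
            return "data";
        } catch (IllegalStateException e) {
            return "fallback";
        } finally {
            log.add("close");   // always runs: this is where streams are closed
        }
    }

    public static void main(String[] args) {
        System.out.println(readAndClose(false) + " " + log); // data [open, close]
        System.out.println(readAndClose(true) + " " + log);  // fallback [open, close]
    }
}
```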

Java reference types and their usage scenarios

In Java, there are four reference types: strong references, soft references, weak references, and phantom references.

  • Strong references: A strong reference is the ordinary reference created with new; the garbage collector will not reclaim the object it points to even when memory runs out.
  • Soft references: Soft references are implemented via SoftReference and have a shorter lifetime than strong references; the garbage collector reclaims softly referenced objects before memory is exhausted and an OOM would be thrown. A common use for soft references is memory-sensitive caches that are reclaimed when memory runs low.
  • Weak references: Weak references are implemented via WeakReference and have a shorter lifetime than soft references; a weakly referenced object is reclaimed as soon as the GC discovers it. Weak references are also commonly used for memory-sensitive caches.
  • Phantom references: Phantom references are implemented via PhantomReference, have the shortest life cycle, and can be reclaimed at any time. If an object is referenced only by a phantom reference, the reference cannot be used to access any of the object's fields or methods; it only ensures that some work can be done after finalization. The common use scenario is tracking garbage collection activity: a notification is enqueued when an object associated with a phantom reference is collected by the garbage collector.
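
The API shapes of the four strengths can be sketched side by side. GC timing is not deterministic, so this only demonstrates the guaranteed behaviors: soft and weak references return the object while it is still strongly reachable, and a phantom reference's get() always returns null:

```java
// The four reference strengths in java.lang.ref.
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static String demo() {
        Object strong = new Object();                       // strong reference
        SoftReference<Object> soft = new SoftReference<>(strong);
        WeakReference<Object> weak = new WeakReference<>(strong);
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        // The queue receives the notification when the referent is collected.
        PhantomReference<Object> phantom = new PhantomReference<>(strong, queue);

        StringBuilder sb = new StringBuilder();
        sb.append(soft.get() == strong);    // true: still strongly reachable
        sb.append(weak.get() == strong);    // true: still strongly reachable
        sb.append(phantom.get() == null);   // true: phantom get() ALWAYS returns null
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Once `strong` is cleared and a collection runs, the weak reference would be cleared before the soft one, and the phantom reference would be enqueued on `queue` — but that part depends on GC timing and is not asserted here.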

The difference between Exception and Error

  • Both Exception and Error derive from Throwable. In Java, only Throwable objects can be thrown or caught; Throwable is the base type of the exception handling hierarchy.
  • Exception and Error represent Java's classification of different abnormal situations. An Exception is an unexpected condition that can, and should, be caught during the normal operation of a program and handled accordingly.
  • Error refers to situations unlikely to occur under normal circumstances; most Errors leave the program in an abnormal, unrecoverable state. The common OutOfMemoryError is a subclass of Error; since it is not a normal condition, it is neither convenient nor necessary to catch it.
  • Exceptions are further classified as checked and unchecked.
    • Checked exceptions must be explicitly caught or declared in code, as part of the compiler's checks.
    • Unchecked exceptions are runtime exceptions, such as NullPointerException and ArrayIndexOutOfBoundsException; they are generally avoidable logic errors, and the compiler does not force you to handle them.
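
The checked/unchecked split shows up directly in what the compiler demands. In the sketch below (helper names are made up), the IOException must be declared or caught, while the ArrayIndexOutOfBoundsException needs no declaration:

```java
// Checked (IOException) vs unchecked (ArrayIndexOutOfBoundsException).
import java.io.IOException;

public class ExceptionDemo {
    static void checkedSource() throws IOException {        // must be declared
        throw new IOException("simulated I/O failure");
    }

    public static String handleChecked() {
        try {
            checkedSource();                                // compiler forces try/catch here
            return "no error";
        } catch (IOException e) {
            return "caught checked: " + e.getMessage();
        }
    }

    public static String handleUnchecked() {
        int[] a = new int[2];
        try {
            return "value " + a[5];                         // no declaration required
        } catch (ArrayIndexOutOfBoundsException e) {
            return "caught unchecked";
        }
    }

    public static void main(String[] args) {
        System.out.println(handleChecked());
        System.out.println(handleUnchecked());
    }
}
```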


——————— Internet-related interview questions ———————

What’s the difference between HTTP and HTTPS? How does HTTPS work?

HTTP is the HyperText Transfer Protocol, while HTTPS can be simply understood as HTTP made secure. HTTPS encrypts data by adding SSL/TLS on top of HTTP to ensure data security. HTTPS provides two functions: establishing a secure channel to ensure the security of data transmission, and verifying the authenticity of the website.

The differences between HTTP and HTTPS are as follows:

  • HTTPS requires applying to a CA for a certificate, which is rarely free, so it costs a certain amount of money.
  • HTTP transmits plaintext and has low security, while HTTPS encrypts the data with SSL/TLS on top of HTTP, giving high security.
  • The default port for HTTP is 80; the default port for HTTPS is 443.

HTTPS workflow

When it comes to HTTPS, encryption algorithms fall into two categories: symmetric encryption and asymmetric encryption.

  • Symmetric encryption: encryption and decryption use the same key. The advantage is speed; the disadvantage is that the key must somehow be shared securely with the other party. Common symmetric algorithms include DES and AES.
  • Asymmetric encryption: asymmetric encryption uses a key pair consisting of a public key and a private key. Generally, the private key is kept secret and the public key is given to the other party. Information encrypted with the public key can be decrypted only with the corresponding private key. The advantage is higher security than symmetric encryption; the disadvantage is lower data transmission efficiency. The common asymmetric algorithm is RSA.

In practice, symmetric and asymmetric encryption are generally used together: asymmetric encryption is used to exchange the symmetric key, and the symmetric key is then used to encrypt and decrypt the data. This combination ensures security while keeping data transmission efficient.
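
This hybrid scheme can be sketched with the JDK's own javax.crypto API: RSA transports a freshly generated AES key, and AES then encrypts the actual payload. Real TLS is far more involved, and production code should use an authenticated mode such as AES/GCM with a random IV; the defaults below are only for a compact illustration:

```java
// Hybrid encryption sketch: RSA carries the AES session key, AES carries the data.
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridCrypto {
    public static String roundTrip(String message) {
        try {
            // Receiver's asymmetric key pair (the public key would sit in a certificate).
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair pair = kpg.generateKeyPair();

            // Sender: generate a symmetric session key and encrypt the payload with it.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey sessionKey = kg.generateKey();
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, sessionKey);
            byte[] ciphertext = aes.doFinal(message.getBytes(StandardCharsets.UTF_8));

            // Sender: transport the session key encrypted with the receiver's public key.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.ENCRYPT_MODE, pair.getPublic());
            byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

            // Receiver: recover the session key with the private key, then decrypt.
            rsa.init(Cipher.DECRYPT_MODE, pair.getPrivate());
            SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");
            aes.init(Cipher.DECRYPT_MODE, recovered);
            return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello over a hybrid channel"));
    }
}
```

Only the small session key goes through the slow RSA operation; the bulk data goes through fast AES, which is exactly the trade-off described above.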

The HTTPS handshake proceeds as follows:

1. The client (usually a browser) sends an encrypted communication request to the server, containing:

  • the protocol versions it supports, such as TLS 1.0
  • a client-generated random number, random1, later used to derive the “session key”
  • the encryption methods it supports, such as RSA public key encryption
  • the compression methods it supports

2. The server receives the request and responds with:

  • the confirmed protocol version, such as TLS 1.0; if the client's versions do not match what the server supports, the server shuts down the encrypted connection
  • a server-generated random number, random2, later used to derive the “session key”
  • the confirmed encryption method, such as RSA public key encryption
  • the server's certificate

3. The client validates the certificate after receiving it:

  • it first verifies that the certificate is trustworthy
  • once verification passes, the client generates a random number called the pre-master secret, encrypts it with the public key in the certificate, and sends it to the server

4. The server decrypts the received data with its private key to obtain the pre-master secret, then derives the symmetric session key from random1, random2, and the pre-master secret through an agreed algorithm; this symmetric key is used for subsequent interaction. The client derives the same symmetric key from random1, random2, and the pre-master secret with the same algorithm.

5. In all subsequent interaction, the transmitted content is encrypted and decrypted with the symmetric key generated in the previous step.

TCP three-way handshake process

——————— Android interview questions ———————

How many ways are there to communicate between processes?

AIDL, broadcasts, files, sockets, and pipes.

The difference between statically and dynamically registered broadcasts

  • A dynamically registered broadcast receiver is not resident; it follows the life cycle of its component, so note that the receiver must be unregistered before the Activity ends. A statically registered receiver is resident: even when the application is closed, the system starts it automatically when a matching broadcast is sent.
  • For an ordered broadcast: the receiver with the highest priority (static or dynamic) receives it first. Among receivers with the same priority, dynamic takes precedence over static.
  • Among receivers of the same priority and the same type: static receivers are called in the order they were scanned, dynamic receivers in the order they were registered.
  • For a normal (unordered) broadcast: priority is ignored, and dynamic receivers take precedence over static ones. Among receivers of the same priority and the same type, the same scan/registration order applies.

Use of Android performance optimization tools (suggested to read together with the Android performance optimization question below)

  • Common Android performance optimization tools include the Android Profiler bundled with Android Studio, plus the third-party libraries LeakCanary and BlockCanary.
  • The Android Profiler can detect performance problems in three areas: CPU, memory, and network.
  • LeakCanary is a third-party memory leak detection library; once integrated into a project, it automatically detects memory leaks while the application runs and reports them to us.
  • BlockCanary is a third-party library for detecting UI jank; once integrated, it automatically detects UI jank while the application runs and reports it to us.

Class loaders in Android

  • PathClassLoader can only load APKs that have already been installed on the system.
  • DexClassLoader can load jar, apk, and dex files, including APKs on the SD card that have not been installed.

What are the categories of animation in Android, and what are their characteristics and differences

Android animation can be roughly divided into three categories: frame animation, tween (View) animation, and property animation.

  • Frame animation: configure a group of images via XML and play them in sequence. Rarely used.
  • Tween (View) animation: supports roughly four operations: rotation, alpha, scaling, and translation. Rarely used.
  • Property animation: the most commonly used kind, and more powerful than tween animation. There are roughly two ways to use it: ViewPropertyAnimator and ObjectAnimator. The former suits common animations such as rotation, translation, scaling, and alpha, and is the easiest to use: call view.animate() and chain the desired animation. The latter suits animating our custom controls; first we add the corresponding getXXX() and setXXX() getter and setter methods for the property to our custom View, noting that invalidate() should be called in the setter after the property changes, so that the View is redrawn. ObjectAnimator.ofInt()/ofFloat()/etc. returns an ObjectAnimator, and calling its start() method starts the animation.

The difference between tween animation and property animation:

  • A tween animation only makes the parent container redraw the view so that it appears to move; the view's actual properties (and touch area) stay where they were.
  • A property animation really changes the view by continuously changing the values of its properties.

Handler mechanism

When it comes to Handler, there are several classes that are closely related to it: Message, MessageQueue, and Looper.

  • Message. Two member variables of Message are of interest: target and callback.
    • target is the Handler that sent the message.
    • callback is the Runnable task passed in when handler.post(runnable) is called. The essence of post is to create a Message and assign the Runnable we passed in to the message's callback member variable.


  • MessageQueue. The MessageQueue is, as the name suggests, the queue of messages. Of interest is its next() method, which returns the next message to be processed.
  • Looper. The Looper message poller is the core that connects the Handler to the message queue. To create a Handler in a thread, the Looper must first be created with Looper.prepare(), and then Looper.loop() is called to start polling. Let's focus on these two methods.
    • prepare(). This method does two things: first it fetches the current thread's Looper via threadLocal.get(); if it is not null, it throws a RuntimeException, meaning no thread can create two Loopers. If it is null, the second step creates a Looper and binds it to the current thread via threadLocal.set(looper). Note that the message queue is actually created in the Looper constructor.
    • loop(). This method starts the polling of the entire event mechanism. In essence it is an endless loop that fetches messages via MessageQueue's next() method and, for each message, calls msg.target.dispatchMessage(). Since msg.target is the Handler that sent the message, this essentially calls the Handler's dispatchMessage().


  • Handler. With all this groundwork laid, here comes the most important part. Handler analysis focuses on two parts: sending messages and processing messages.
    • Sending messages. Besides sendMessage there are other ways to send, such as sendMessageDelayed, post, and postDelayed, but they all ultimately call sendMessageAtTime, which in turn calls enqueueMessage. The enqueueMessage method does two things: it binds the message to the current handler via msg.target = this, then enqueues it via queue.enqueueMessage().
    • Processing messages. The heart of message processing is the dispatchMessage() method. If msg.callback is not null (the message was posted with a Runnable), the Runnable is run; otherwise our handleMessage() is executed.
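
To make the flow concrete, here is a tiny single-threaded analogy of the Looper/MessageQueue/Handler trio built on a BlockingQueue. These are not the real android.os classes — just a model of the prepare → loop → dispatchMessage path described above, including the msg.target binding and the callback-vs-handleMessage branch:

```java
// Miniature model of Android's message loop (names are illustrative).
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniLooper {
    public static class Message {
        MiniHandler target;          // the handler that sent this message
        Runnable callback;           // set when the message was post()ed
        String what;
    }

    public static class MiniHandler {
        final BlockingQueue<Message> queue;
        public final List<String> handled = new ArrayList<>();

        public MiniHandler(BlockingQueue<Message> queue) { this.queue = queue; }

        public void sendMessage(String what) {   // ~ sendMessageAtTime -> enqueueMessage
            Message msg = new Message();
            msg.target = this;                   // msg.target = this, as in the real Handler
            msg.what = what;
            queue.add(msg);
        }

        public void post(Runnable r) {           // ~ handler.post(runnable)
            Message msg = new Message();
            msg.target = this;
            msg.callback = r;
            queue.add(msg);
        }

        public void dispatchMessage(Message msg) {
            if (msg.callback != null) {          // posted Runnable takes priority
                msg.callback.run();
            } else {
                handled.add(msg.what);           // ~ our overridden handleMessage()
            }
        }
    }

    // ~ Looper.loop(): pull messages until a quit marker, dispatch to msg.target.
    public static void loop(BlockingQueue<Message> queue) {
        while (true) {
            Message msg;
            try {
                msg = queue.take();              // ~ MessageQueue.next(), blocks when empty
            } catch (InterruptedException e) {
                return;
            }
            if ("quit".equals(msg.what)) return; // ~ Looper.quit()
            msg.target.dispatchMessage(msg);
        }
    }

    public static void main(String[] args) {
        BlockingQueue<Message> queue = new LinkedBlockingQueue<>();
        MiniHandler handler = new MiniHandler(queue);
        handler.sendMessage("hello");
        handler.post(() -> handler.handled.add("posted"));
        handler.sendMessage("quit");
        loop(queue);
        System.out.println(handler.handled);
    }
}
```

The real framework adds per-thread Looper storage via ThreadLocal, message timestamps for delayed delivery, and idle handlers, but the core dispatch shape is the same.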


Android Performance Optimization

In my opinion, performance optimization in Android can be divided into the following aspects: memory optimization, layout optimization, network optimization, and installation package optimization.

  • Memory optimization: see the next question.
  • Layout optimization: the essence of layout optimization is to reduce the depth of the View hierarchy. Common layout optimization schemes are as follows:
    • Prefer a single RelativeLayout over nested LinearLayouts to reduce the View hierarchy
    • Extract commonly used layout components with the <include> tag
    • Load rarely shown layouts lazily via the <ViewStub> tag
    • Use the <merge> tag to reduce the nesting level of the layout


  • Network optimization: Common network optimization schemes are as follows
    • Minimize network requests and merge them where possible
    • Avoid DNS resolution where possible: resolving a domain name may take hundreds of milliseconds, and there is a risk of DNS hijacking. Consider IP-direct access with dynamically updated IP lists based on service requirements, falling back to domain name access when IP access fails
    • Large amounts of data are loaded in pagination mode
    • Network data transmission uses GZIP compression
    • Add network data to the cache to avoid frequent network requests
    • When uploading images, compress them as necessary


  • Installation package optimization: the core is reducing APK size; common schemes are as follows:
    • Obfuscation reduces APK size somewhat, but the actual effect is minimal
    • Remove unnecessary resource files, such as images, and compress images as far as the app's visual quality allows; this has some effect
    • When using native .so libraries, keep only the armeabi-v7a version and delete the rest. Nowadays the armeabi-v7a libraries cover most devices on the market; only phones from many years ago are not covered, and we usually no longer need to adapt to them. In practice the effect on APK size is significant: if you ship many .so libraries, say 10 MB per ABI, keeping only armeabi-v7a and deleting the armeabi and arm64-v8a copies saves 20 MB.


Android Memory Optimization

In my opinion, memory optimization for Android is divided into two parts: avoiding memory leaks and expanding available memory. The essence of a memory leak is that a longer-lived object holds a reference to a shorter-lived object.

Common memory leaks

  • Leaks caused by the singleton pattern. The most common example: a singleton whose constructor takes a Context, and an Activity's Context is passed in. Because of the singleton's static nature, its life cycle lasts from class loading until the application ends, so even after the Activity finishes, the singleton still holds a reference to it, causing a memory leak. The solution is simple: do not pass an Activity Context; pass the Application Context instead.
  • Leaks caused by static variables. Static variables live in the method area, and their lifetime runs from class loading to the end of the program, which is very long. The most common example is creating a static variable in an Activity and passing the Activity's this reference into it. In this case, even after the Activity calls finish(), memory leaks: the static variable lives for roughly the whole application life cycle and keeps holding the Activity reference.
  • Leaks caused by non-static inner classes. A non-static inner class holds a reference to its outer class. The most common example is using Handlers and Threads in an Activity: a Handler or Thread created as a non-static inner class holds a reference to the current Activity while executing a delayed operation, so if the Activity finishes before the operation completes, it leaks. There are two solutions: the first is to use a static inner class that refers to the Activity through a weak reference; the second is to call handler.removeCallbacksAndMessages(null) in the Activity's onDestroy to cancel the delayed messages.
  • Leaks caused by resources that are not released in time. Common examples: various data streams not closed in time, Bitmaps not recycled in time, and so on.
  • Leaks caused by third-party libraries that are not unregistered in time. Many libraries provide register and unregister calls, the most common being EventBus: as we all know, we register in onCreate and unregister in onDestroy. If we do not unregister, EventBus, being a singleton, holds a reference to the Activity forever, causing a leak. The same applies to RxJava: after delayed work with the timer operator, call disposable.dispose() in onDestroy to cancel the operation.
  • Leaks caused by property animations. A common example is exiting an Activity while a property animation is still running; the animated View holds a reference to the Activity. The solution is to call the animation's cancel() method in onDestroy.
  • Leaks caused by WebView. WebView is special in that it can leak even after its destroy() method is called. In practice, the best way to avoid WebView leaks is to run the WebView Activity in a separate process and kill that process when the Activity ends. As I recall, Alibaba's DingTalk starts its WebView in a separate process, presumably for the same reason.

Expand the memory

Why expand memory? In real development we sometimes have to use many third-party commercial SDKs, which vary in quality: the large SDKs may leak little memory, but the quality of the smaller ones is less reliable. Since we cannot change their code, the best remaining option is to expand memory. There are two common ways: one is to add the largeHeap="true" attribute to the application element in the manifest; the other is to run components of the same application in multiple processes to increase the application's total available memory. The second method is actually quite common; for example, an SDK I once used ran its Service in a separate process. Memory optimization in Android generally comes down to "opening the source and throttling the flow": opening the source means expanding memory, and throttling means avoiding memory leaks.
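
The manifest side of both techniques looks roughly like this (package and service names here are hypothetical; the attribute names are from the Android manifest schema):

```xml
<!-- AndroidManifest.xml: request a larger heap for the application, and move
     one component into its own process, which gets its own heap. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <application
        android:largeHeap="true">
        <!-- hypothetical SDK service running in a separate process -->
        <service
            android:name=".SdkService"
            android:process=":sdk" />
    </application>
</manifest>
```

Note that largeHeap is a request, not a guarantee: the system decides the actual limit, and relying on it is no substitute for fixing leaks.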

Binder mechanism

In Linux, processes are isolated from one another to avoid mutual interference, and within a process, user space is separated from kernel space. So there are two kinds of isolation: isolation between processes, and isolation between user space and kernel space within a process. Where there is isolation there must also be interaction: communication between processes is IPC, and communication between user space and kernel space goes through system calls. To ensure independence and security, processes cannot access each other directly. Android is based on Linux, so it also has to solve interprocess communication. Linux already offers many IPC mechanisms, such as pipes and sockets; there are two main reasons Android uses Binder instead: performance and security.

  • Performance. Performance requirements on mobile devices are stricter. Traditional Linux IPC such as pipes and sockets copies data twice, while Binder copies it only once, so Binder outperforms traditional IPC.
  • Security. Traditional Linux IPC involves no identity verification between the communicating parties, which can cause security issues. The Binder mechanism incorporates identity checks, improving security.

Binder is based on a client-server architecture and has four main components.

  • Client. The client process.
  • Server. The server process.
  • ServiceManager. Provides the ability to register services, query them, and return proxy objects for them.
  • Binder driver. Mainly responsible for establishing Binder connections between processes and for the low-level data exchange between them.

The main flow of Binder mechanism is as follows:

  • The server uses Binder drivers to register our services with the ServiceManager.
  • Clients use Binder drivers to query services registered with the ServiceManager.
  • The ServiceManager returns a proxy object of the service to the client via the Binder driver.
  • After receiving the proxy object, the client can use it to communicate with the server process.

The principle of LruCache

The core principle of LruCache is the effective use of LinkedHashMap: it has a LinkedHashMap member variable inside. There are four methods worth focusing on: the constructor, get, put, and trimToSize.

  • Constructor: two things are done in the LruCache constructor, setting maxSize and creating a LinkedHashMap. Note that LruCache sets the LinkedHashMap's accessOrder to true, which controls the map's iteration order: true means iteration in access order, false means insertion order. The default is false, since insertion order is usually wanted, but LruCache needs access order, so it explicitly sets accessOrder to true.
  • The get method: essentially calls LinkedHashMap's get; since accessOrder is true, each call to get moves the accessed element to the tail of the LinkedHashMap.
  • The put method: essentially calls LinkedHashMap's put. Due to the nature of LinkedHashMap, each put also places the new element at the tail. After the addition, trimToSize is called to ensure the used memory does not exceed maxSize.
  • The trimToSize method: inside trimToSize, a while(true) loop repeatedly deletes the element at the head of the LinkedHashMap until the used size no longer exceeds maxSize, then breaks out of the loop.

We can now summarize why this is called the least-recently-used algorithm. The principle is simple: each put or get counts as an access, and due to the nature of LinkedHashMap, accessed elements move to the tail. When memory reaches the threshold, trimToSize removes elements from the head of the LinkedHashMap until the current size is below maxSize. Removing from the front is the right choice: the most recently accessed elements sit at the tail, so the elements at the head are necessarily the least recently used and should be evicted first when memory is tight.
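
The LinkedHashMap trick that LruCache relies on can be demonstrated in isolation. The sketch below uses accessOrder=true plus removeEldestEntry as a stand-in for LruCache's trimToSize loop (the real LruCache evicts by memory size, not entry count):

```java
// Access-ordered LinkedHashMap as a minimal LRU cache.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static <K, V> LinkedHashMap<K, V> newLru(int maxSize) {
        // accessOrder=true: iteration order is access order, not insertion order.
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;   // evict the head when over capacity
            }
        };
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Integer> cache = newLru(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");        // access moves "a" to the tail
        cache.put("d", 4);     // evicts "b", now the least recently used head
        System.out.println(new ArrayList<>(cache.keySet())); // [c, a, d]
    }
}
```

After the get("a"), "b" has become the head (least recently used), so it is the one evicted when "d" pushes the cache over capacity.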

Design an asynchronous loading frame for images

To design an image loading framework, we use the idea of a three-level cache: memory cache, local (disk) cache, and network. Memory cache: Bitmaps are cached in memory; fast, but capacity is small. Local cache: images are cached in files; slower, but larger. Network: the image is fetched from the network; speed depends on the network. If we were designing an image loading framework, the process would look something like this:

  • After getting the image URL, first look for the Bitmap in the memory cache; if found, load it directly.
  • If it is not found in memory, search the local cache; if found there, load it directly.
  • Otherwise, download the image from the network; after downloading, load it and also put it into the memory cache and the local cache.

Those are the basic concepts. A concrete code implementation covers roughly these aspects:

  • First we need to choose the memory cache, which is usually an LruCache.
  • Then the local cache, usually a DiskLruCache. Note that the cache file name is usually the MD5 hash of the URL, so the file name does not directly expose the image's URL.
  • Once the memory and local caches are determined, we create a new class, say MemoryAndDiskCache (the name does not matter), containing the LruCache and DiskLruCache mentioned above. In it we define two methods, getBitmap and putBitmap, for retrieval and caching. The internal logic is simple: getBitmap checks the memory cache first, then the local cache; putBitmap writes to the memory cache first, then to the local cache.
  • After the cache policy class is determined, we create an ImageLoader class. It must contain two methods: one that displays the image, displayImage(url, imageView), and one that fetches it from the network. In the display method, imageView.setTag(url) binds the URL to the ImageView, to avoid images appearing on the wrong row when ImageViews are reused in a list during asynchronous loading. The loader then queries MemoryAndDiskCache: if the image is cached, load it directly; if not, call the network method. There are many ways to fetch an image from the network; I usually use OkHttp + Retrofit. When the image arrives, first check whether imageView.getTag() still equals the image's URL: if it does, set the image; if not, skip it. This avoids misplaced images in asynchronously loaded lists. Finally, the downloaded image is put into MemoryAndDiskCache.

Event distribution mechanism in Android

When our finger touches the screen, the event travels along Activity -> ViewGroup -> View until it reaches the View that finally responds to the touch. Three methods are essential to event dispatch: dispatchTouchEvent(), onInterceptTouchEvent(), and onTouchEvent(). Next, let's outline the event dispatch mechanism along the Activity -> ViewGroup -> View path. When a finger touches the screen, an ACTION_DOWN event is triggered, and the Activity of the current page responds first, entering the Activity's dispatchTouchEvent() method. The logic of this method is roughly:

  • Call getWindow().superDispatchTouchEvent().
  • If the previous step returns true, return true directly; otherwise return the Activity's own onTouchEvent(). If getWindow().superDispatchTouchEvent() returns true, the current event has already been handled, so the Activity's onTouchEvent() is not needed. Otherwise no View handled the event, and the Activity must handle it itself by calling its own onTouchEvent().

As we all know, the getWindow() method returns an object of type Window, and in Android PhoneWindow is the only implementation class of Window. So this call essentially goes to PhoneWindow's superDispatchTouchEvent(), which in turn calls mDecor.superDispatchTouchEvent(event). mDecor is the DecorView, a subclass of FrameLayout, and DecorView's superDispatchTouchEvent() calls super.dispatchTouchEvent(). It should be obvious at this point that since DecorView is a subclass of FrameLayout, which is a subclass of ViewGroup, this essentially calls the ViewGroup's dispatchTouchEvent(). Now that our event has been passed from the Activity to the ViewGroup, let's examine the event-handling methods in the ViewGroup. The logic of dispatchTouchEvent() in ViewGroup looks like this:

  • Use onInterceptTouchEvent() to check whether the current ViewGroup intercepts the event. By default a ViewGroup does not intercept.
  • If it intercepts, return its own onTouchEvent();
  • If not, the return value of child.dispatchTouchEvent() decides: if it is true, return true; otherwise return the ViewGroup's own onTouchEvent(). This is where unhandled events are passed back up.
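The three steps above can be sketched with minimal stand-in classes. These are illustrative substitutes, not the Android framework types; the MotionEvent parameter and multi-child traversal are deliberately omitted.

```java
// Toy model of the dispatch logic: intercept check, child dispatch,
// then the ViewGroup's own onTouchEvent() as a fallback.
class SimpleView {
    boolean onTouchEvent() { return false; }          // unhandled by default
    boolean dispatchTouchEvent() { return onTouchEvent(); }
}

class SimpleViewGroup extends SimpleView {
    SimpleView child;
    SimpleViewGroup(SimpleView child) { this.child = child; }

    boolean onInterceptTouchEvent() { return false; } // default: don't intercept

    @Override
    boolean dispatchTouchEvent() {
        if (onInterceptTouchEvent()) {
            return onTouchEvent();        // intercepted: handle it ourselves
        }
        if (child != null && child.dispatchTouchEvent()) {
            return true;                  // a child consumed the event
        }
        return onTouchEvent();            // unhandled events bubble back up
    }
}
```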

In general, the onInterceptTouchEvent() of a ViewGroup returns false, that is, it does not intercept. The important thing to note here is the event sequence: a Down event, Move events, then an Up event. From Down to Up is one complete event sequence, corresponding to the series of events from the finger pressing down to lifting. If the ViewGroup intercepts the Down event, subsequent events in the sequence are handed to the ViewGroup's onTouchEvent(). If the ViewGroup intercepts an event other than Down, an ACTION_CANCEL event is sent to the child View that handled the Down event, notifying it that the rest of the sequence has been taken over by the ViewGroup, so the child View can restore its previous state.

Here is a common example: a RecyclerView contains many buttons. We press a button, slide some distance, and then release; the RecyclerView scrolls along, and the button's click event is not triggered. In this example, when we press the button, it receives the ACTION_DOWN event, and normally the rest of the sequence would be handled by the button. But once we slide a little, the RecyclerView detects that this is a scroll, intercepts the event sequence, and handles it in its own onTouchEvent(), which shows on screen as the list scrolling. Since the button is still in the pressed state, the RecyclerView sends it an ACTION_CANCEL during interception to tell it to restore its previous state.

Event distribution eventually reaches the View's dispatchTouchEvent(). There is no onInterceptTouchEvent() in a View's dispatchTouchEvent(), which is easy to understand: a View is not a ViewGroup and contains no child Views, so there is nothing to intercept. Ignoring a few details, a View's dispatchTouchEvent() directly returns its own onTouchEvent(). If onTouchEvent() returns true, the event is consumed; otherwise the unhandled event is passed up until some View handles it, or until it ends at the Activity's onTouchEvent().

People often ask about the difference between onTouch() and onTouchEvent(). Both methods appear in the View's dispatchTouchEvent() logic:

  • If the touch listener is not null, the View is enabled, and onTouch() returns true, then onTouchEvent() is not called.
  • If any of these conditions fails, onTouchEvent() is called. So onTouch() runs before onTouchEvent().
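The precedence rule above can be sketched as follows. TouchListener and TouchableView are hypothetical stand-ins for OnTouchListener and View, reduced to the one branch in question.

```java
// Models the branch of dispatchTouchEvent() that decides between
// the listener's onTouch() and the View's own onTouchEvent().
interface TouchListener { boolean onTouch(); }

class TouchableView {
    TouchListener listener;
    boolean enabled = true;
    boolean onTouchEventCalled = false;

    boolean onTouchEvent() { onTouchEventCalled = true; return true; }

    boolean dispatchTouchEvent() {
        // onTouch() wins only if the listener exists, the view is enabled,
        // and onTouch() returns true; otherwise fall through to onTouchEvent().
        if (listener != null && enabled && listener.onTouch()) {
            return true;
        }
        return onTouchEvent();
    }
}
```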

View drawing process

The starting point for rendering a View is the performTraversals() method of the ViewRootImpl class. In this method, mView.measure(), mView.layout(), and mView.draw() are called in turn, dividing the View drawing process into three steps: measurement, layout, and drawing, corresponding to the three methods measure(), layout(), and draw().

  • Measurement phase. The measure() method is called by the parent View. After some optimization and preparation, measure() calls onMeasure() for the actual self-measurement. onMeasure() does different things in a View and a ViewGroup:
    • View. The onMeasure() method in a View calculates the View's own size and stores it via setMeasuredDimension().
    • ViewGroup. The onMeasure() method in a ViewGroup calls the measure() method of every child View so each measures and saves its own size, then calculates its own size from the sizes and positions of the children and saves it.


  • Layout phase. The layout() method is called by the parent View. layout() saves the size and position passed in by the parent and calls onLayout() for the actual internal layout. onLayout() also does different things in a View and a ViewGroup:
    • View. Since a View has no child Views, its onLayout() does nothing.
    • ViewGroup. The onLayout() method in a ViewGroup calls the layout() method of every child View, passing each its size and position and letting it complete its own internal layout.


  • Drawing phase. The draw() method does some scheduling and then calls onDraw() to draw the View itself. The scheduling flow of draw() is roughly:
    • Draw the background. Corresponds to the drawBackground(Canvas) method.
    • Draw the body. Corresponds to the onDraw(Canvas) method.
    • Draw the child Views. Corresponds to the dispatchDraw(Canvas) method.
    • Draw the scroll decorations and the foreground. Corresponds to the onDrawForeground(Canvas) method.


How does Android interact with JS

In Android, the interaction between Android and JS has two directions: Android calling methods in JS, and JS calling methods in Android.

  • Android calling JS. There are two ways for Android to call JS:
    • webView.loadUrl("javascript:methodName()"). The advantage of this method is that it is very simple; the disadvantage is that there is no return value. If you need the return value of the JS method, JS has to call an Android method to deliver it.
    • webView.evaluateJavascript("javascript:methodName()", ValueCallback). The advantage over loadUrl() is that the ValueCallback receives the return value of the JS method. The disadvantage is that it is only available on Android 4.4 and above, so compatibility is poorer. However, in 2018 most apps on the market set a minimum version of 4.4, so I don't think this compatibility is a problem.


  • JS calling Android. There are three ways for JS to call Android:
    • webView.addJavascriptInterface(). This is the official solution for JS to call Android methods. Note that the Android methods exposed to JS must be annotated with @JavascriptInterface to avoid security vulnerabilities. The downside of this solution is that it had a security hole before Android 4.2, which has since been fixed. Again, compatibility is not an issue in 2018.
    • Override WebViewClient's shouldOverrideUrlLoading() method to intercept the URL, parse it, and call the Android method if the URL matches the format agreed by both sides. The advantage is that it avoids the pre-4.2 security hole; the obvious disadvantage is that the return value of the Android method cannot be obtained directly, only by having Android call a JS method back.
    • Override WebChromeClient's onJsPrompt() method. As in the previous approach, the URL is parsed and, if it matches the agreed format, the Android method is called. If a return value is needed, result.confirm("return value of the Android method") delivers it back to JS. The advantages of this method are that it has no security hole, no compatibility restriction, and makes it easy to obtain the return value of the Android method. Note that WebChromeClient also has onJsAlert() and onJsConfirm() methods; why not use those? Because onJsAlert() has no return value and onJsConfirm() can only return true or false, while in front-end development the prompt() method is rarely called, so onJsPrompt() is the natural choice.
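The URL-interception idea behind the second and third schemes can be sketched in plain Java. The scheme name jsbridge and the URL layout here are assumptions for illustration; in a real app this parsing would run inside shouldOverrideUrlLoading() or onJsPrompt() on the intercepted URL.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Parses bridge URLs of an assumed form like jsbridge://showToast?msg=hi
// and extracts the native method name and its parameters.
class JsBridgeRouter {
    /** Returns the native method name if the URL matches the agreed format, else null. */
    static String route(String url) {
        URI uri = URI.create(url);
        if (!"jsbridge".equals(uri.getScheme())) return null; // not ours: let WebView load it
        return uri.getHost();                                 // e.g. "showToast"
    }

    /** Parses query parameters like msg=hi&level=2 into a map (no URL-decoding, for brevity). */
    static Map<String, String> params(String url) {
        Map<String, String> out = new HashMap<>();
        String query = URI.create(url).getQuery();
        if (query == null) return out;
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            out.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return out;
    }
}
```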


Activity startup process

SparseArray principle

SparseArray, generally speaking, is a data structure used in Android to replace HashMap. To be precise, it replaces a HashMap whose keys are of type Integer and whose values are of type Object. Note that SparseArray only implements the Cloneable interface, so it cannot be declared as a Map. Internally, SparseArray consists of two arrays: mKeys of type int[], which holds all the keys, and mValues of type Object[], which holds all the values. SparseArray is most often compared with HashMap. Because it consists of just two arrays, SparseArray uses less memory than a HashMap. As we all know, add, delete, update, and lookup operations all need to find the corresponding key-value pair first; SparseArray addresses entries internally by binary search, which is obviously less efficient than the constant-time lookup of a HashMap. One thing to note about binary search is that it requires the array to be sorted, and indeed SparseArray keeps its entries sorted by key. Taken together, SparseArray uses less space than HashMap but is less efficient: a typical case of trading time for space, suitable for storing small amounts of data. From a source-code perspective, I think the methods worth noting are SparseArray's remove(), put(), and gc().

  • remove(). The remove() method of SparseArray does not simply delete the entry and then compact the array. Instead, it sets the value to the static sentinel DELETED, which is essentially an Object, and sets SparseArray's mGarbage field to true, so that the arrays can be compacted later by calling its own gc() method at an appropriate time, avoiding wasted space. If a key equal to the removed key is added later, the new value simply overwrites DELETED, which improves efficiency.
  • gc(). The gc() method in SparseArray has absolutely nothing to do with the JVM's garbage collection. Internally, gc() is a for loop that compacts the arrays by moving non-DELETED key-value pairs forward over the DELETED ones, and then sets mGarbage to false to avoid wasting memory.
  • put(). The logic of put() is: if binary search finds the key in the mKeys array, the value is simply overwritten. If it is not found, the insertion index closest to the key is retrieved. If the value at that index is DELETED, the new entry directly overwrites DELETED; in this case the movement of array elements is avoided, improving efficiency. If the value is not DELETED, mGarbage is checked; if it is true, gc() is called to compact the arrays. After that, the appropriate index is found, the key-value pairs after it are moved back, and the new key-value pair is inserted, which may trigger array growth.
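The mechanics above (binary search, a DELETED sentinel, lazy compaction) can be sketched with a toy re-implementation. This is an illustrative sketch, not the AOSP source; values are plain Objects and several optimizations are omitted.

```java
import java.util.Arrays;

// Minimal SparseArray-like structure demonstrating remove()/gc()/put().
class MiniSparseArray {
    private static final Object DELETED = new Object(); // sentinel for removed slots
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;
    private boolean garbage = false;

    Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return (i < 0 || values[i] == DELETED) ? null : values[i];
    }

    void remove(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0 && values[i] != DELETED) {
            values[i] = DELETED;   // mark, don't shift; compacted later by gc()
            garbage = true;
        }
    }

    private void gc() {            // compact: move live entries over DELETED ones
        int n = 0;
        for (int i = 0; i < size; i++) {
            if (values[i] != DELETED) {
                keys[n] = keys[i];
                values[n++] = values[i];
            }
        }
        size = n;
        garbage = false;
    }

    void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; }       // key exists (or reuses its DELETED slot)
        i = ~i;                                          // insertion point
        if (i < size && values[i] == DELETED) {          // reuse a deleted slot in place
            keys[i] = key;
            values[i] = value;
            return;
        }
        if (garbage && size >= keys.length) {
            gc();                                        // compact before growing
            i = ~Arrays.binarySearch(keys, 0, size, key);
        }
        if (size >= keys.length) {                       // grow the backing arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }
}
```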

How to avoid OOM loading images

We know that the memory size of a Bitmap is calculated as: pixel width * pixel height * bytes per pixel. Accordingly, there are two ways to avoid OOM: scale down the width and height, or reduce the number of bytes per pixel.
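As a quick sanity check of the formula, compare a 1920x1080 image at 4 bytes per pixel (ARGB_8888) with the same image at 2 bytes per pixel (RGB_565):

```java
// Computes Bitmap memory from the formula: width * height * bytesPerPixel.
class BitmapMemory {
    static long bytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel; // long avoids int overflow on huge images
    }
}
```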

  • Scale down the width and height. We know that Bitmaps are created with the factory methods of BitmapFactory: decodeFile(), decodeStream(), decodeByteArray(), and decodeResource(). Each of these methods takes a parameter of type Options, an inner class of BitmapFactory that stores Bitmap information. Options has a property inSampleSize; by setting it we can reduce the width and height of the image and thus the amount of memory the Bitmap uses. Note that inSampleSize needs to be a power of 2; if it is less than 1, the code forces it to 1.
  • Reduce the memory occupied per pixel. Options has a property inPreferredConfig, ARGB_8888 by default, which determines the size of each pixel. We can halve the memory by changing it to RGB_565 or ARGB_4444.
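A common way to choose inSampleSize can be sketched without the Android APIs. The helper below is an assumed implementation, not BitmapFactory itself: it keeps doubling the sample size while the decoded image would still cover the requested dimensions, staying on powers of two since decoders round inSampleSize down to the nearest power of two anyway.

```java
// Picks an inSampleSize for decoding rawWidth x rawHeight down toward
// reqWidth x reqHeight without undershooting the requested size.
class SampleSize {
    static int calculate(int rawWidth, int rawHeight, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawWidth > reqWidth || rawHeight > reqHeight) {
            int halfW = rawWidth / 2, halfH = rawHeight / 2;
            // keep doubling while the downsampled image still covers the target
            while ((halfW / inSampleSize) >= reqWidth && (halfH / inSampleSize) >= reqHeight) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

With inSampleSize = 4, a 4096x3072 source decodes to 1024x768, using 1/16 of the memory.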

Loading large images

A large high-definition image, such as Riverside Scene at Qingming Festival, cannot be shown on screen all at once, and given memory constraints it is also impossible to load the whole image into memory in one go. In this case the image needs to be loaded region by region, and Android provides a class responsible for regional loading: BitmapRegionDecoder. Usage is simple: create an object via BitmapRegionDecoder.newInstance(), then call decodeRegion(Rect rect, BitmapFactory.Options options). The first argument rect is the region to display; the second is Options, an inner class of BitmapFactory.