Preface

To be a good Android developer, you need a complete, well-organized body of knowledge. Here, let us grow together into what we want to be.

This article covers advanced Android memory optimization. The difficulty, frankly, reaches purgatory level, so if you are not yet very familiar with memory optimization I recommend first reading the memory optimization chapter of the previous article, Android performance optimization. This article analyzes the following major modules in detail:

  • 1) Android memory management mechanism
  • 2) The significance of memory optimization
  • 3) Avoid memory leaks
  • 4) Optimize memory space
  • 5) Design and implementation of picture management module

Once you know the basics, it’s time to explore Android memory optimization.

This article is very long, so I suggest you bookmark it and read it at your leisure.

Contents

  • I. Re-examining memory optimization
    • 1. Phone RAM
    • 2. Dimensions of memory optimization
    • 3. Memory problems
  • II. Common tool selection
    • 1. Memory Profiler
    • 2. Memory Analyzer (MAT)
    • 3. LeakCanary
  • III. Review of Android memory management mechanisms
    • 1. Java memory allocation
    • 2. Java memory reclamation algorithms
    • 3. Android memory management mechanism
    • 4. Summary
  • IV. Memory jitter
    • 1. Why does memory jitter cause OOM?
    • 2. Hands-on memory jitter troubleshooting
    • 3. Common cases of memory jitter
  • V. Systematic construction of memory optimization
    • 1. MAT review
    • 2. Establish a systematic image optimization/monitoring mechanism
    • 3. Establish an online application memory monitoring system
    • 4. Create a global thread monitoring component
    • 5. Set up GC monitoring components
    • 6. Create an online OOM monitoring component: Probe
    • 7. Implement an automated, standalone version of Profile Memory analysis
    • 8. Set up offline native memory leak monitoring
    • 9. Set up a memory fallback strategy
    • 10. Deeper memory optimization strategies
  • VI. Memory optimization evolution
    • 1. Automated test phase
    • 2. LeakCanary
    • 3. ResourceCanary: an improved version based on LeakCanary
  • VII. Memory optimization tools
    • 1. top
    • 2. dumpsys meminfo
    • 3. LeakInspector
    • 4. JHat
    • 5. ART GC Log
    • 6. Chrome DevTools
  • VIII. Summary of memory problems
    • 1. Inner classes are a dangerous coding style
    • 2. Problems with ordinary Handler inner classes
    • 3. Memory problems on the login screen
    • 4. Memory problems when using system services
    • 5. Isolate WebView leaks in a separate, expendable process
    • 6. Unregister components at the appropriate time
    • 7. Memory problems triggered by Handler/FrameLayout postDelayed
    • 8. Images placed in the wrong resource directory also cause memory problems
    • 9. Release image references when list items are recycled
    • 10. Use ViewStub as a placeholder
    • 11. Regularly clean up the App's outdated tracking data
    • 12. Handle memory leaks caused by anonymous inner Runnable classes
  • IX. Common memory optimization questions
    • 1. What was the process of your memory optimization project?
    • 2. What are your biggest takeaways from memory optimization?
    • 3. How do you detect all the unreasonable places?
  • X. Conclusion
    • 1. Optimization: the general direction
    • 2. Optimization: the details
    • 3. Summary of systematic memory optimization construction

VI. Memory optimization evolution

1. Automated test phase

An Hprof dump is triggered automatically when memory reaches a threshold, and the resulting Hprof file is archived and analyzed manually with MAT.
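As a minimal sketch of this phase, the threshold check can be expressed in plain Java. The 0.8 ratio and the helper names here are assumptions for illustration; on Android the actual dump would be taken with Debug.dumpHprofData() when the check fires.

```java
public class HeapThresholdMonitor {
    // Hypothetical threshold: dump once the heap passes 80% of its limit.
    static final double DUMP_THRESHOLD = 0.8;

    // Pure decision logic, parameterized so it is easy to test.
    static boolean shouldDump(long usedBytes, long maxBytes) {
        return usedBytes > maxBytes * DUMP_THRESHOLD;
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // On Android, a true result here would be the point to call Debug.dumpHprofData(path).
        System.out.println("dump now? " + shouldDump(used, rt.maxMemory()));
        System.out.println(shouldDump(900L, 1000L)); // prints true
        System.out.println(shouldDump(700L, 1000L)); // prints false
    }
}
```

In a real automated test run, this check would be polled periodically (or hooked to a low-memory callback) rather than executed once.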

2. LeakCanary

Detection and analysis are bundled together, which is inconvenient for batch automated testing and after-the-fact analysis.

3. ResourceCanary: an improved version based on LeakCanary

Matrix => ResourceCanary implementation principle

Main functions

Currently, its functionality has three main parts, as follows:

1. Separate the two parts of the process of detection and analysis

Automated testing is done by the test platform, analysis is done offline on the monitoring platform's servers, and finally the relevant developers are notified to fix the problem.

2. Trim Hprof files to reduce the overhead of transferring and storing them on the backend

All data not needed for the analysis can be clipped on the client side. Generally, the Hprof will shrink to about 1/10 of its original size.

3. Add duplicate-Bitmap object detection

This makes it convenient to reduce memory consumption by reducing the number of redundant Bitmaps.
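The idea behind duplicate-Bitmap detection can be sketched in plain Java: hash each bitmap's pixel buffer and group buffers whose hashes collide. The helper names here are illustrative, not Matrix's actual API.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DuplicateBitmapFinder {

    // Hash a pixel buffer; identical buffers yield identical digests.
    static String digest(byte[] pixels) {
        try {
            StringBuilder sb = new StringBuilder();
            for (byte b : MessageDigest.getInstance("MD5").digest(pixels)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 is always present on the JVM
        }
    }

    // Group buffer indices by digest; any group of size > 1 is a set of duplicates.
    static List<List<Integer>> findDuplicates(List<byte[]> buffers) {
        Map<String, List<Integer>> groups = new LinkedHashMap<>();
        for (int i = 0; i < buffers.size(); i++) {
            groups.computeIfAbsent(digest(buffers.get(i)), k -> new ArrayList<>()).add(i);
        }
        List<List<Integer>> dups = new ArrayList<>();
        for (List<Integer> g : groups.values()) {
            if (g.size() > 1) dups.add(g);
        }
        return dups;
    }

    public static void main(String[] args) {
        List<byte[]> buffers = Arrays.asList(
                new byte[]{1, 2, 3},   // bitmap 0
                new byte[]{9, 9, 9},   // bitmap 1
                new byte[]{1, 2, 3});  // bitmap 2, duplicate of 0
        System.out.println(findDuplicates(buffers)); // prints [[0, 2]]
    }
}
```

In the real tool the buffers come out of the trimmed Hprof's Bitmap instances; the grouping logic is the same.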

4. Summary

Throughout the development phase, more tools and components need to be built to systematically improve automation and ultimately raise the efficiency of problem discovery.

VII. Memory optimization tools

In addition to the common Memory Profiler, MAT, and LeakCanary, there are several other memory analysis tools worth introducing.

1. top

The top command is a common performance analysis tool on Linux. It displays the resource usage of each process in the system in real time, similar to the Task Manager on Windows. top provides real-time monitoring of the system's processor state: it lists the most CPU-intensive tasks on the system and can sort tasks by CPU usage, memory usage, or execution time.

Next, let’s type the following command to see how the top command is used:

quchao@quchaodeMacBook-Pro ~ % adb shell top --help
usage: top [-Hbq] [-k FIELD,] [-o FIELD,] [-s SORT] [-n NUMBER] [-d SECONDS] [-p PID,] [-u USER,]

Show process activity in real time.

-H	Show threads
-k	Fallback sort FIELDS (default -S,-%CPU,-ETIME,-PID)
-o	Show FIELDS (def PID,USER,PR,NI,VIRT,RES,SHR,S,%CPU,%MEM,TIME+,CMDLINE)
-O	Add FIELDS (replacing PR,NI,VIRT,RES,SHR,S from default)
-s	Sort by field number (1-X, default 9)
-b	Batch mode (no tty)
-d	Delay SECONDS between each cycle (default 3)
-n	Exit after NUMBER iterations
-p	Show these PIDs
-u	Show these USERs
-q	Quiet (no header lines)

Cursor LEFT/RIGHT to change sort, UP/DOWN move list, space to force
update, R to reverse sort, Q to exit.

Here, we make top print the process information just once and use that output to explain what each field means.

The overall statistics area

The first four lines are the overall statistics area for the current state of the system. Let's look at what each line means.

Line 1: Tasks – Tasks (processes)

The details are as follows:

There are 729 processes in total: 715 sleeping, 0 stopped, 8 zombie, and the rest running.

Line 2: Memory state

The specific information is as follows:

  • 1) 5847124K total: total physical memory (5.8GB)
  • 2) 5758016K used: memory in use (5.7GB)
  • 3) 89108K free: free memory (89MB)
  • 4) 112428K buffers: memory used for buffers (112MB)

Line 3: swap partition information

The details are as follows:

  • 1) 2621436K total: total swap space (2.6GB)
  • 2) 612572K used: swap space in use (612MB)
  • 3) 2008864K free: free swap space (2GB)
  • 4) 2657696K cached: swap cache total (2.6GB)

Line 4: CPU status information

The specific attributes are described as follows:

  • 1) 800% cpu: 8-core CPU (100% per core)
  • 2) 39% user: user-space processes occupy 39% of CPU
  • 3) 0% nice: processes with a negative nice value occupy 0%
  • 4) 42% sys: kernel space occupies 42% of CPU
  • 5) 712% idle: idle time other than I/O wait is 712%
  • 6) 0% iow: I/O wait time is 0%
  • 7) 0% irq: hard interrupt time is 0%
  • 8) 6% sirq: soft interrupt time is 6%

For memory monitoring, we need to keep an eye on the swap used value in the third line of top's output. If this value keeps changing, the kernel is constantly exchanging data between memory and the swap area, which means memory is genuinely insufficient.

Process (task) status monitoring

From the fifth line down, top shows the status of each process (task). The meaning of each column is as follows:

  • 1) PID: process ID
  • 2) USER: process owner
  • 3) PR: process priority
  • 4) NI: nice value; a negative value means high priority, a positive value means low priority
  • 5) VIRT: total virtual memory used by the process; VIRT = SWAP + RES
  • 6) RES: physical memory used by the process that has not been swapped out; RES = CODE + DATA
  • 7) SHR: shared memory size
  • 8) S: process state; D = uninterruptible sleep, R = running, S = sleeping, T = traced/stopped, Z = zombie
  • 9) %CPU: percentage of CPU time used since the last update
  • 10) %MEM: percentage of physical memory used by the process
  • 11) TIME+: total CPU time used by the process, in units of 1/100 second
  • 12) ARGS: process name (command name/command line)

As can be seen from the figure above, the awesome-wanandroid process is in the first row: its process name is json.chao.com.w+, its PID is 23104, its owner is u0_a714, its priority (PR) is 10, and its nice value (NI) is -10. The total virtual memory used by the process (VIRT) is 4.3GB, its resident physical memory (RES) is 138MB, and its shared memory (SHR) is 66MB. The process is sleeping (S), its CPU usage since the last update (%CPU) is 21.2%, the percentage of physical memory it uses (%MEM) is 2.4%, and its total CPU time (TIME+) is 1:47.58, i.e. 1 minute 47.58 seconds.
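As a sanity check on those numbers, %MEM should be roughly RES divided by total physical memory. A one-liner confirms that 138MB on a device with 5,847,124K of RAM is about 2.4% (both figures taken from the top output above):

```java
public class MemPercent {
    // %MEM as top computes it: resident memory over total physical memory.
    static double memPercent(long resKb, long totalKb) {
        return 100.0 * resKb / totalKb;
    }

    public static void main(String[] args) {
        long resKb = 138L * 1024;    // RES of the process, ~138MB expressed in KB
        long totalKb = 5_847_124L;   // total RAM reported in top's header
        System.out.printf("%.1f%%%n", memPercent(resKb, totalKb)); // prints 2.4%
    }
}
```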

2. dumpsys meminfo

Four memory metrics

Before explaining the dumpsys meminfo command, it's important to understand the four most important memory metrics in Android, as shown in the following table:

| Metric | Full name | Memory type | Meaning |
| ------ | --------- | ----------- | ------- |
| USS | Unique Set Size | Physical memory | Memory exclusively owned by the process |
| PSS | Proportional Set Size | Physical memory | PSS = USS + a proportional share of shared libraries |
| RSS | Resident Set Size | Physical memory | RSS = USS + the shared libraries counted in full |
| VSS | Virtual Set Size | Virtual memory | VSS = RSS + virtual memory not actually backed by physical memory |

VSS >= RSS >= PSS >= USS

RSS, like PSS, also includes the process's shared memory, but the trouble is that RSS does not divide the shared memory proportionally among the processes that use it, so the RSS of all processes added together can far exceed physical memory. VSS is a virtual address space figure whose upper limit is the process's accessible address space, not its actual memory usage. For example, a lot of mmap'd memory is also counted: memory mapped with mmap may correspond to a file on disk or even some exotic device, which has little to do with the memory the process actually uses.

The biggest difference between PSS and USS lies in "shared memory" (for example, if two apps mmap the same file, the memory used to map that file is shared). USS does not include memory shared between processes, while PSS does. This also means that, because shared memory is left out, the USS of all processes added together is less than the physical memory actually in use.
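A small worked example under assumed numbers makes the relationship concrete: suppose a process owns 1,000KB exclusively and shares a 300KB library with exactly one other process. The figures are invented for illustration.

```java
public class MemMetrics {
    // PSS: exclusive memory plus a proportional share of the shared pages.
    static long pss(long ussKb, long sharedKb, int sharers) {
        return ussKb + sharedKb / sharers;
    }

    // RSS: exclusive memory plus the shared pages counted in full.
    static long rss(long ussKb, long sharedKb) {
        return ussKb + sharedKb;
    }

    public static void main(String[] args) {
        long uss = 1000;   // KB owned exclusively by the process (assumed)
        long shared = 300; // KB of a library shared with one other process (assumed)

        System.out.println("PSS = " + pss(uss, shared, 2)); // prints PSS = 1150
        System.out.println("RSS = " + rss(uss, shared));    // prints RSS = 1300
        // Hence USS <= PSS <= RSS, consistent with the inequality above.
    }
}
```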

PSS was originally the recommended metric for measuring an App's physical memory footprint; USS was not added until Android 4.4. However, PSS has one big problem: "shared memory". Consider this case: if processes A and B both use a shared .so library, the memory used to initialize that library is split evenly between A and B. But if A starts after B, then at the moment A starts, B's PSS curve takes a fairly large step down even though B did nothing, which causes fatal trouble when analyzing an app's memory behavior from its graph.

USS does not have this problem, but the Dalvik VM's memory allocation involves GC delays and various GC strategies, which cause abnormal fluctuations in the curve. For example, asynchronous GC is an important feature of Android 4.0 and above, but when does the GC finish? When does the curve "dip"? It's impossible to predict. There is also the GC policy of when to grow the Dalvik VM's pre-requested memory (Dalvik starts with a nominal startup memory size, reserved for the Java runtime to avoid the stutter caused by requesting memory again at runtime); this pre-requested size is dynamic, which also causes USS to bounce up and down.

dumpsys meminfo command parsing

First, let's type adb shell dumpsys meminfo -h to see how this command is used:

quchao@quchaodeMacBook-Pro ~ % adb shell dumpsys meminfo -h
meminfo dump options: [-a] [-d] [-c] [-s] [--oom] [process]
-a: include all available information for each process.
-d: include dalvik details.
-c: dump in a compact machine-parseable representation.
-s: dump only summary of application memory usage.
-S: dump also SwapPss.
--oom: only show processes organized by oom adj.
--local: only collect details locally, don't call process.
--package: interpret process arg as package, dumping all
            processes that have loaded that package.
--checkin: dump data for a checkin
If [process] is specified it can be the name or
pid of a specific process to dump.

Next, we type adb shell dumpsys meminfo with no arguments:

quchao@quchaodeMacBook-Pro ~ % adb shell dumpsys meminfo
Applications Memory Usage (in Kilobytes):
Uptime: 257501238 Realtime: 257501238

Total PSS by process:
    308,049K: com.tencent.mm (pid 3760 / activities)
    225,081K: system (pid 2088)
    189,038K: com.android.systemui (pid 2297 / activities)
    188,877K: com.miui.home (pid 2672 / activities)
    176,665K: com.plan.kot32.tomatotime (pid 22744 / activities)
    175,231K: json.chao.com.wanandroid (pid 23104 / activities)
    126,918K: com.tencent.mobileqq (pid 23741)
    ...

Total PSS by OOM adjustment:
    432,013K: Native
         76,700K: surfaceflinger (pid 784)
         59,084K: [email protected] (pid 743)
         26,524K: transport (pid 23418)
         25,249K: logd (pid 597)
         11,413K: media.codec (pid 1303)
         10,648K: rild (pid 1304)
          9,283K: media.extractor (pid 1297)
         ...
         ...K: Persistent
        225,081K: system (pid 2088)
        189,038K: com.android.systemui (pid 2297 / activities)
         ...K: com.xiaomi.finddevice (pid 3134)
         39,098K: com.android.phone (pid 2656)
         25,583K: com.miui.daemon (pid 3078)
         ...
    219,795K: Foreground
        175,231K: json.chao.com.wanandroid (pid 23104 / activities)
         44,564K: com.miui.securitycenter.remote (pid 2986)
    246,529K: Visible
         71,002K: com.sohu.inputmethod.sogou.xiaomi (pid 4820)
         52,305K: com.miui.miwallpaper (pid 2579)
         40,982K: com.miui.powerkeeper (pid 3218)
         24,604K: com.miui.systemAdSolution (pid 7986)
         14,198K: com.xiaomi.metoknlp (pid 3506)
         13,820K: com.miui.voiceassist:core (pid 8722)
         13,222K: com.miui.analytics (pid 8037)
          7,046K: com.miui.hybrid:entrance (pid 7922)
          5,104K: com.miui.wmsvc (pid 7887)
          4,246K: com.android.smspush (pid 8126)
    213,027K: Perceptible
         89,780K: com.eg.android.AlipayGphone (pid 8238)
         49,033K: com.eg.android.AlipayGphone:push (pid 8204)
         23,181K: com.android.thememanager (pid 11057)
         13,253K: com.xiaomi.joyose (pid 5558)
         10,292K: com.android.updater (pid 3488)
          9,807K: com.lbe.security.miui (pid 23060)
          9,734K: com.google.android.webview:sandboxed_process0 (pid 11150)
          7,947K: com.xiaomi.location.fused (pid 3524)
    308,049K: Backup
        308,049K: com.tencent.mm (pid 3760 / activities)
     74,250K: A Services
         59,701K: com.tencent.mm:push (pid 7234)
          9,247K: com.android.settings:remote (pid 27053)
          5,302K: com.xiaomi.drivemode (pid 27009)
    199,638K: Home
        188,877K: com.miui.home (pid 2672 / activities)
         10,761K: com.miui.hybrid (pid 791)
         ...K: B Services
          2,236K: com.tencent.mobileqq:MSF (pid 7236)
           ...K: com.qualcomm.qti.autoregistration (pid 8786)
          4,086K: com.qualcomm.qti.callenhancement (pid 26958)
          3,809K: com.qualcomm.qti.StatsPollManager (pid 26993)
          3,703K: com.qualcomm.qti.smcinvokepkgmgr (pid 26976)
    692,588K: Cached
        176,665K: com.plan.kot32.tomatotime (pid 22744 / activities)
        126,918K: com.tencent.mobileqq (pid 23741)
         72,928K: com.tencent.mm:tools (pid 18598)
         68,208K: com.tencent.mm:sandbox (pid 27333)
         55,270K: com.tencent.mm:toolsmp (pid 18842)
         24,477K: com.android.mms (pid 27192)
         23,865K: com.xiaomi. ...
        ...

Total PSS by category:
    957,931K: Native
    284,006K: Dalvik
    199,750K: Unknown
    193,236K: .dex mmap
    191,521K: .art mmap
    110,581K: .oat mmap
    101,472K: .so mmap
     94,984K: EGL mtrack
     87,321K: Dalvik Other
     84,924K: Gfx dev
     77,300K: GL mtrack
     64,963K: .apk mmap
     17,112K: Other mmap
     12,935K: Ashmem
      3,364K: Stack
      2,343K: .ttf mmap
      1,375K: Other dev
      1,071K: .jar mmap
         20K: Cursor
          0K: Other mtrack

Total RAM: 5,847,124K (status normal)
 Free RAM: 3,711,324K (  692,588K cached pss + 2,428,616K cached kernel +   117,492K cached ion +   472,628K free)
 Used RAM: 2,864,761K (2,408,529K used pss +   456,232K kernel)
 Lost RAM: ...K
     ZRAM:    17,428K physical used for   625,388K in swap (2,621,436K total swap)
   Tuning: 256 (large 512), oom   322,560K, restore limit   107,520K (high-end-gfx)

Based on dumpsys meminfo's output, it can be summarized in the following table:

| Category | Sort key | Meaning |
| -------- | -------- | ------- |
| process | PSS | One process per row, in descending order of the process's PSS; generally used for a first-pass analysis |
| OOM adj | PSS | Shows the memory state and kill order of all Android processes currently running; the closer a process is to the bottom, the more likely it is to be killed. The order follows a complex set of rules covering foreground/background, service or app, visibility, aging, and so on |
| category | PSS | Total PSS broken down by type (Dalvik/Native/.art mmap/.dex mmap, etc.), listed in descending order |
| total | — | Total memory, free memory, available memory, other memory |

In addition, to view the memory information of a single App process, we can type the following command:

dumpsys meminfo <pid>                   // Output the memory info of the process with the given pid
dumpsys meminfo --package <packagename> // Output the memory info of the processes with the given package name

Here we type adb shell dumpsys meminfo 23104, where 23104 is the PID of the Awesome-WanAndroid App. The result looks like this:

quchao@quchaodeMacBook-Pro ~ % adb shell dumpsys meminfo 23104
Applications Memory Usage (in Kilobytes):
Uptime: 258375231 Realtime: 258375231

** MEMINFO in pid 23104 [json.chao.com.wanandroid] **
                Pss  Private  Private  SwapPss     Heap     Heap     Heap
                Total    Dirty    Clean    Dirty     Size    Alloc     Free
                ------   ------   ------   ------   ------   ------   ------
Native Heap    46674    46620        0      164    80384    60559    19824
Dalvik Heap     6949     6912       16       23    12064     6032     6032
Dalvik Other     7672     7672        0        0
       Stack      108      108        0        0
      Ashmem      134      132        0        0
     Gfx dev    16036    16036        0        0
   Other dev       12        0       12        0
   .so mmap     3360      228     1084       27
  .jar mmap        8        8        0        0
  .apk mmap    28279    11328    11584        0
  .ttf mmap      295        0       80        0
  .dex mmap     7780       20     4908        0
  .oat mmap      660        0       92        0
  .art mmap     8509     8028      104       69
 Other mmap      982        8      848        0
 EGL mtrack    29388    29388        0        0
  GL mtrack    14864    14864        0        0
    Unknown     2532     2500        8       20
      TOTAL   174545   143852    18736      303    92448    66591    25856

App Summary
                   Pss(KB)
                    ------
       Java Heap:    15044
     Native Heap:    46620
            Code:    29332
           Stack:      108
        Graphics:    60288
   Private Other:    11196
          System:    11957

           TOTAL:   174545       TOTAL SWAP PSS:      303

Objects
           Views:      171         ViewRootImpl:        1
     AppContexts:        3           Activities:        1
          Assets:       18        AssetManagers:        6
   Local Binders:       32        Proxy Binders:       27
   Parcel memory:       11         Parcel count:       45
Death Recipients:        1      OpenSSL Sockets:        0
        WebViews:        0

SQL
        MEMORY_USED:      371
 PAGECACHE_OVERFLOW:       72          MALLOC_SIZE:      117

DATABASES
    pgsz     dbsz   Lookaside(b)          cache  Dbname
        4       60            109      151/32/18  /data/user/0/json.chao.com.wanandroid/databases/bugly_db_
        4       20             19         0/15/1  /data/user/0/json.chao.com.wanandroid/databases/aws_wan_android.db

This command outputs a memory summary of the process, and we should focus on four points, which I’ll cover in the following sections.

1. Check the Heap Alloc of the Native Heap and the Heap Alloc of the Dalvik Heap

  • 1) Native Heap's Heap Alloc: the native layer's memory usage; if it keeps rising, there may be a leak.
  • 2) Dalvik Heap's Heap Alloc: the Java layer's memory usage.

2. Check the number of Views, Activities, and AppContexts

If Views, Activities, AppContexts continue to rise, it indicates the risk of memory leak.

3. Check SQL's MEMORY_USED and PAGECACHE_OVERFLOW

  • 1) MEMORY_USED: the memory used by the database.
  • 2) PAGECACHE_OVERFLOW: the cache used after overflowing the page cache; the smaller this value, the better.

4. View DATABASES

  • 1) pgsz: the database page size, 4KB in every case here.
  • 2) Lookaside(b): the number of lookaside slots used.
  • 3) cache: the numbers 151/32/18 in this column stand for page-cache hits / misses / page-cache entries respectively; the number of misses should not exceed the number of hits.
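To turn the cache column into something actionable, you can compute the hit ratio; with the 151 hits and 32 misses from the dump above, the ratio is about 0.83, which indicates healthy paging:

```java
public class PageCacheRatio {
    // Fraction of page lookups served from the cache.
    static double hitRatio(int hits, int misses) {
        return hits / (double) (hits + misses);
    }

    public static void main(String[] args) {
        // 151 hits / 32 misses, from the bugly_db_ row above.
        System.out.printf("%.2f%n", hitRatio(151, 32)); // prints 0.83
    }
}
```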

3. LeakInspector

LeakInspector is a one-stop memory leak solution used internally at Tencent. After long-term accumulation and refinement, it integrates memory leak detection, automatic repair of known Android system Bugs, automatic release of leaked Activity resources, automatic GC-chain analysis, and whitelist filtering, and it hooks deeply into the development workflow, forming a full-link system that automatically identifies owners and files defect tickets.

So how is LeakInspector different from LeakCanary?

There are four main differences between them, as follows:

First, detection capability and principle differ

1. Detection ability

Both support leak detection for Activities, Fragments, and other custom classes, but LeakInspector adds Bitmap detection capabilities, as shown below:

  • 1) Detect whether a View holds an image larger than the View's decoded size; if so, report the offending Activity and the corresponding View ID, and record the count and average memory footprint of such images.
  • 2) Detect whether an image exceeds the screen size of every phone; if it does, report a violation.

This part can be implemented with ARTHook; if this is unclear to you, take a closer look at the large-image-detection section.

2. Detection principle

LeakInspector uses WeakReference directly to detect whether an object has been released, while LeakCanary uses a ReferenceQueue. The effect is the same.
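The ReferenceQueue principle can be demonstrated in plain Java: after a GC, a WeakReference whose referent was collected gets enqueued; one that is still strongly held does not, and that is the suspected leak. This is only a desktop-JVM sketch of the idea, not LeakCanary's actual code.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.HashSet;
import java.util.Set;

public class LeakDetectDemo {
    static Object leaked; // static field: stays strongly reachable, simulating a leak

    // Returns {shortLivedCollected, leakedCollected}.
    static boolean[] detect() {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        leaked = new Object();
        Object shortLived = new Object();
        WeakReference<Object> leakRef = new WeakReference<>(leaked, queue);
        WeakReference<Object> freedRef = new WeakReference<>(shortLived, queue);

        shortLived = null; // drop the only strong reference
        Set<Reference<?>> collected = new HashSet<>();
        for (int i = 0; i < 20 && !collected.contains(freedRef); i++) {
            System.gc(); // request collections until the weak ref is enqueued
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            Reference<?> r;
            while ((r = queue.poll()) != null) {
                collected.add(r); // enqueued means the referent was collected
            }
        }
        return new boolean[]{collected.contains(freedRef), collected.contains(leakRef)};
    }

    public static void main(String[] args) {
        boolean[] result = detect();
        System.out.println("short-lived object collected: " + result[0]);
        System.out.println("leaked object suspected (not collected): " + !result[1]);
    }
}
```

On Android the same check is done after the Activity's onDestroy: if the reference has not cleared after a forced GC, the Activity is a leak suspect and a heap dump is taken.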

For Activities, we usually use Application's registerActivityLifecycleCallbacks to register for Activity lifecycle events and override the onActivityDestroyed method. However, versions below Android 4.0 do not provide this API. To avoid manually adding detection code to every Activity's onDestroy, we can replace the Instrumentation via reflection to intercept onDestroy and reduce the integration cost. The code looks like this:

Class<?> clazz = Class.forName("android.app.ActivityThread");
Method method = clazz.getDeclaredMethod("currentActivityThread");
method.setAccessible(true);
sCurrentActivityThread = method.invoke(null);
Field field = sCurrentActivityThread.getClass().getDeclaredField("mInstrumentation");
field.setAccessible(true);
field.set(sCurrentActivityThread, new MonitorInstrumentation());

Second, handling at the leak site differs

1. Dump collection

Both can collect dumps, but LeakInspector provides a callback where we can attach extra custom information, such as runtime logs, traces, dumpsys meminfo output, and more, to assist in analyzing and locating the problem.

2. Whitelist definition

The whitelist is designed for leaks caused by the system, and for cases where the business logic deliberately keeps a back door. If a whitelisted class is encountered during analysis, no further action is taken for that leak. The two tools configure it differently:

  • 1) LeakInspector's whitelist is stored on the server as an XML configuration.

    • Advantages: it can be bound to a product or even a specific app version, and we can easily modify the corresponding configuration.
    • Disadvantages: whitelisted classes do not distinguish between system versions.
  • 2) LeakCanary's whitelist is written directly into the AndroidExcludedRefs class in its source code.

    • Advantages: very detailed definitions, differentiated by system version.
    • Disadvantages: every change requires recompiling.
  • 3) LeakCanary defines far more system whitelist entries than LeakInspector, because it does not automatically fix system leaks.
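For illustration, a server-delivered whitelist configuration might look like the following. This schema is hypothetical; LeakInspector's real format is internal to Tencent, and the class names are examples only.

```xml
<!-- Hypothetical leak whitelist: classes listed here are ignored during analysis. -->
<whitelist product="wanandroid" appVersion="1.0.0">
    <!-- A known system-level leak we choose to ignore -->
    <class name="android.view.inputmethod.InputMethodManager" reason="system leak" />
    <!-- A deliberate business back door kept alive by design -->
    <class name="com.example.PushKeepAliveService" reason="kept alive by design" />
</whitelist>
```

Because the file lives on the server, shipping a new entry is a configuration change rather than an app release, which is exactly the advantage described above.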

3. Automatic repair of system leaks

For system-level leaks, LeakInspector automatically fixes the ones it knows about by calling the corresponding repair method via reflection in onDestroy. LeakCanary can identify system leaks, but it only provides an analysis of the problem, not a practical solution.

4. Recycle resources (Activity memory leak handling)

When a memory leak is detected, LeakInspector walks the Activity's View tree and releases data such as images, ensuring that the leak only leaks an empty Activity shell and minimizing its impact on memory. The code looks something like this:

if (view instanceof ImageView) {
    // Covers ImageView and ImageButton
    recycleImageView(app, (ImageView) view);
} else if (view instanceof TextView) {
    recycleTextView((TextView) view);
} else if (view instanceof ProgressBar) {
    recycleProgressBar((ProgressBar) view);
} else {
    if (view instanceof android.widget.ListView) {
        recycleListView((android.widget.ListView) view);
    } else if (view instanceof android.support.v7.widget.RecyclerView) {
        recycleRecyclerView((android.support.v7.widget.RecyclerView) view);
    } else if (view instanceof FrameLayout) {
        recycleFrameLayout((FrameLayout) view);
    } else if (view instanceof LinearLayout) {
        recycleLinearLayout((LinearLayout) view);
    }
    if (view instanceof ViewGroup) {
        recycleViewGroup(app, (ViewGroup) view);
    }
}

Here, take recycleTextView as an example, its way of recycling resources is as follows:

private static void recycleTextView(TextView tv) {
    Drawable[] ds = tv.getCompoundDrawables();
    for (Drawable d : ds) {
        if (d != null) {
            d.setCallback(null);
        }
    }
    tv.setCompoundDrawables(null, null, null, null);
    // Remove focus from the Editor$Blink Runnable to fix the memory leak.
    tv.setCursorVisible(false);
}

Third, follow-up processing differs

1. Analysis and presentation

After collecting a dump, LeakInspector uploads the dump file and calls the MAT command line to analyze it, obtaining the leak's GC chain. LeakCanary instead uses the open-source component HAHA to analyze and produce the GC chain, and the chain it produces contains the held class objects, so you generally don't need to open the Hprof in MAT to solve the problem. The GC chain LeakInspector obtains only has class names, and you still need to open the Hprof in MAT to locate the problem, which is less convenient.

2. Closed-loop follow-up

After analyzing the dump, LeakInspector files a defect ticket and assigns it to the owner of the offending class. If the same problem recurs, it updates the old ticket, with state-transition logic such as reopening. LeakCanary, on the other hand, only alerts via the notification bar, and a human has to record the problem and follow up.

Fourth, integration with automated testing differs

LeakInspector works seamlessly with automated testing: when a memory leak is found while automated scripts run, it collects the dump, sends it to the service for analysis, and finally files a defect ticket, all without human intervention. LeakCanary reports its results through the notification bar and needs a human to take the next step.

4. JHat

JHat is an Hprof analysis tool from Oracle; together with MAT it is regarded as a classic Java static memory analysis tool. Unlike MAT's single-user analysis, jhat serves its analysis over HTTP, so multiple people can use it. It ships with the JDK; you can type the jhat command on the command line to check whether it is available.

quchao@quchaodeMacBook-Pro ~ % jhat
ERROR: No arguments supplied
Usage:  jhat [-stack <bool>] [-refs <bool>] [-port <port>] [-baseline <file>] [-debug <int>] [-version] [-h|-help] <file>

    -J<flag>          Pass <flag> directly to the runtime system. For
		    example, -J-mx512m to use a maximum heap size of 512MB
    -stack false:     Turn off tracking object allocation call stack.
    -refs false:      Turn off tracking of references to objects
    -port <port>:     Set the port for the HTTP server.  Defaults to 7000
    -exclude <file>:  Specify a file that lists data members that should
		    be excluded from the reachableFrom query.
    -baseline <file>: Specify a baseline object dump.  Objects in
		    both heap dumps with the same ID and same class will
		    be marked as not being "new".
    -debug <int>:     Set debug level.
		        0:  No debug output
		        1:  Debug hprof file parsing
		        2:  Debug hprof file parsing, no server
    -version          Report version number
    -h|-help          Print this help and exit
    <file>            The file to read

For a dump file that contains multiple heap dumps,
you may specify which dump in the file
by appending "#<number>" to the file name, i.e. "foo.hprof#3".

If the preceding output is displayed, the jhat command exists. It's easy to use: just type jhat xxx.hprof on the command line, as shown below:

quchao@quchaodeMacBook-Pro ~ % jhat Documents/heapdump/new-33.hprof
Snapshot read, resolving...
Resolving 408200 objects...
Chasing references, expect 81 dots.................................................................................
Eliminating duplicate references.................................................................................
Snapshot resolved.
Started HTTP server on port 7000
Server is ready.

jhat works by parsing the Hprof file and then starting an HTTP server, which by default listens on port 7000 for web client connections, and keeps the parsed Hprof data in memory for the web client's continued queries.

After starting the server, open the entry address 127.0.0.1:7000 to view the All Classes interface, as shown in the picture below:

There are two more important features of jHat, which are shown below:

1. Statistical tables

Open 127.0.0.1:7000/histo/ and the statistical interface is as follows:

All classes are listed in descending order of total size, and we can also see the number of instances of each class.

2. OQL query

Open 127.0.0.1:7000/oql/ and enter java.lang.String to query the String instances. The result is as follows:
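Beyond a bare class name, the OQL page accepts SQL-like queries. A couple of examples of the kind of query jhat understands, based on its built-in OQL help page (open 127.0.0.1:7000/oqlhelp/ to check the exact syntax your JDK supports):

```
select s from java.lang.String s where s.count >= 100
select a.length from [I a
```

The first returns all String instances whose internal character count is at least 100; the second returns the lengths of all int arrays, which is handy for spotting oversized buffers.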

Compared with MAT, jhat is more flexible and meets a large team's needs for easy installation and collaboration; for small and medium-sized teams that already communicate efficiently, it is less necessary.

ART GC Log

GC Log is divided into Dalvik and ART GC Log. As for Dalvik GC Log, we have explained it in detail in the memory optimization of Android performance optimization in the previous article. Next, we will talk about ART GC Log.

ART’s GC log is very different from Dalvik’s. Besides the format, it is printed at different times: ART only prints a log for slow GCs (roughly, ones with a long pause or a long total duration). Let’s look at an ART GC log:

Explicit (full) concurrent mark sweep GC Freed 104710(7MB) AllocSpace Objects, 21(416KB) LOS Objects, 33% free, 25MB/38MB Paused 1.230ms total 67.216ms

Reading the fields from left to right:

  • Explicit: the cause of the GC
  • (full): the GC type
  • concurrent mark sweep: the collection method
  • Freed 104710(7MB) AllocSpace Objects: the number and size of ordinary objects freed
  • 21(416KB) LOS Objects: the number and size of large objects freed
  • 33% free, 25MB/38MB: the percentage of free heap, and (size of live objects)/(total heap size)
  • Paused 1.230ms: the pause time
  • total 67.216ms: the total GC duration

Causes of GC

There are nine causes of GC:

  • 1) Concurrent, Alloc, Explicit: basically the same as in Dalvik.
  • 2) NativeAlloc: When allocating Native memory, such as allocating objects to Bitmaps or RenderScript, it will cause Native memory stress and trigger GC.
  • 3) Background: Background GC is triggered to reserve more space for subsequent memory application.
  • 4) CollectorTransition: a collection caused by a heap transition, which happens when the runtime switches GC. A collector transition copies all objects from a free-list space to a bump-pointer space (or vice versa). Currently, collector transitions only occur on low-memory devices when the App changes process state from a perceptible paused state to a perceptible non-paused state (or vice versa).
  • 5) HomogeneousSpaceCompact: compaction from free-list space to compacted free-list space, which usually happens when the App has moved to a perceptible paused process state. Its main purpose is to reduce memory usage and defragment the heap.
  • 6) DisableMovingGc: not a real GC cause; when GetPrimitiveArrayCritical is called while a concurrent heap compaction is in progress, the collection is blocked. In general, it is strongly recommended not to use GetPrimitiveArrayCritical.
  • 7) HeapTrim: this is not a GC cause either, but note that collections are blocked until the heap trim finishes.

GC type

There are three GC types:

  • 1) Full: Similar to Dalvik’s Full GC.
  • 2) Partial: similar to Dalvik’s partial GC; it does not collect the Zygote heap.
  • 3) Sticky: another kind of partial GC, whose policy is to collect only objects that were newly allocated since the last garbage collection.

GC collection method

GC collection methods are as follows:

  • 1) Mark sweep: first record all allocated objects, then, starting from the GC Roots, mark every directly and indirectly reachable object. Comparing the recorded objects against the marked ones, whatever remains unmarked is garbage to be collected.
  • 2) Concurrent mark sweep: a mark-sweep collector that runs concurrently with the application threads.
  • 3) Mark compact: while marking live objects, compact them all toward one end of memory; the other end can then be reclaimed more efficiently.
  • 4) Semispace: during the scan, move every referenced object from one space to the other, then reclaim everything left in the old space wholesale.
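To make the mark and sweep phases concrete, here is a toy, purely illustrative mark-sweep over a fake "heap" of integer node ids (real collectors of course walk actual object graphs, not a Map):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MarkSweepDemo {

    // refs: node id -> ids it references; roots: the GC Roots.
    // Returns the ids a mark-sweep pass would reclaim.
    static Set<Integer> sweep(Map<Integer, List<Integer>> refs, List<Integer> roots) {
        // Mark phase: everything reachable from the roots survives.
        Set<Integer> marked = new HashSet<>();
        Deque<Integer> pending = new ArrayDeque<>(roots);
        while (!pending.isEmpty()) {
            int id = pending.pop();
            if (marked.add(id)) {
                pending.addAll(refs.getOrDefault(id, Collections.emptyList()));
            }
        }
        // Sweep phase: recorded-but-unmarked objects are garbage.
        Set<Integer> garbage = new HashSet<>(refs.keySet());
        garbage.removeAll(marked);
        return garbage;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> heap = new HashMap<>();
        heap.put(1, Arrays.asList(2));        // 1 is a root and keeps 2 alive
        heap.put(2, Collections.emptyList());
        heap.put(3, Arrays.asList(4));        // 3 and 4 only reference each other:
        heap.put(4, Arrays.asList(3));        // a cycle, but unreachable -> garbage
        System.out.println(sweep(heap, Arrays.asList(1)));
    }
}
```

Note that the unreachable cycle (3 ↔ 4) is reclaimed, which is exactly the advantage of tracing collection over pure reference counting.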

Through GC logs, we can learn how much each GC frees and its impact on jank, and we can also make an initial diagnosis of problems such as explicit GC calls, insufficient allocation headroom, and excessive use of weak references.

6, Chrome Devtool

For HTML5 pages, grabbing JavaScript memory requires remote debugging using Chrome Devtools. There are two methods:

  • 1) Pull the page URL out and open it directly in Chrome.
  • 2) Remote debugging with Android H5.

Pure H5

1. Install Chrome on the phone, turn on USB debugging, connect the phone to the computer via USB, and open a page in mobile Chrome, such as the Baidu page. Then enter chrome://inspect in the address bar of desktop Chrome, as shown below:

2. Finally, click the Inspect option directly under Chrome to bring up the Developer tools screen. As shown below:

Default Hybrid H5 debugging

Starting with Android 4.4, the system WebView is based on Chrome (Chromium), so you can use Chrome DevTools to debug a WebView remotely. The prerequisite is to turn on the debugging switch in the App code, as shown in the following code:

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT && BuildConfig.DEBUG) {
    WebView.setWebContentsDebuggingEnabled(true);
}

The debugging procedure after enabling it is the same as for a pure H5 page: open the H5 page in the App, then go to the chrome://inspect page in desktop Chrome to find the target page to debug.

Here is a summary of several common memory problems in JS:

  • 1) Closures.
  • 2) Event monitoring.
  • 3) Improper use of variable scope, global variable reference can not be released.
  • 4) DOM node leakage.

To learn more about how to use Chrome Developer Tools, please check out the Chrome Developer Tools Chinese Manual.

Summary of memory problems

During our memory optimization work, many memory problems can be reduced to a handful of patterns. To solve similar memory problems quickly in the future, I summarize them into the following points:

1. Inner classes are a dangerous way to code

This refers to this$0, a peculiar hidden member of a (non-static) inner class: every instance of the inner class holds a this$0 reference to its enclosing instance. Whenever the inner class needs to access members of the outer class, it goes through this$0, and through this$0 it can reach every member of the outer class.

The solution is to break the reference between the inner class and the outer class when the Activity closes, that is, when onDestroy is triggered.
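The hidden reference itself is easy to see in plain Java, no Android required. The sketch below (class names are made up) uses reflection to show that a non-static inner class carries a this$0 field while a static nested class does not; note the inner class must actually touch an outer member, since modern javac can omit an unused this$0:

```java
import java.lang.reflect.Field;

public class InnerRefDemo {
    String tag = "outer";

    // Non-static inner class; it reads an outer field, so the compiler
    // must generate the hidden this$0 back-reference.
    class Inner {
        String read() { return tag; }
    }

    // Static nested class: fully independent, no back-reference.
    static class Nested {
        String read() { return "static"; }
    }

    // True if the class has a compiler-generated this$0 / this$N field.
    static boolean hasOuterRef(Class<?> c) {
        for (Field f : c.getDeclaredFields()) {
            if (f.getName().startsWith("this$")) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasOuterRef(Inner.class));   // true
        System.out.println(hasOuterRef(Nested.class));  // false
    }
}
```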

2. Problems with ordinary Handler inner classes

This is also the indirect this$0 reference problem. The solution for a Handler generally boils down to the following three steps:

  • 1) Declare the inner class static to break the this$0 reference. From the Java compiler’s point of view, a static nested class and its outer class are independent of each other; the nested class can no longer access the outer class’s instance members directly.
  • 2) Use a WeakReference to hold the instance of the outer class.
  • 3) When the outer class (such as an Activity) is destroyed, call removeCallbacksAndMessages to remove callbacks and messages.

Note that when using the WeakReference, we must null-check the object returned by get().
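The shape of the three steps above can be sketched in plain Java without android.os.Handler; Screen stands in for the Activity and SafeTask for the static Handler subclass (all names hypothetical):

```java
import java.lang.ref.WeakReference;

public class WeakRefPatternDemo {

    // Stand-in for an Activity.
    static class Screen {
        String render() { return "rendered"; }
    }

    // Stand-in for a static Handler subclass: 'static' means no hidden
    // this$0, so it cannot pin an enclosing instance by accident.
    static class SafeTask {
        private final WeakReference<Screen> ref;

        SafeTask(Screen screen) { this.ref = new WeakReference<>(screen); }

        String run() {
            Screen s = ref.get();            // always null-check the target
            return (s == null) ? "skipped" : s.render();
        }

        void release() { ref.clear(); }      // e.g. called from onDestroy
    }

    public static void main(String[] args) {
        Screen screen = new Screen();
        SafeTask task = new SafeTask(screen);
        System.out.println(task.run());      // while alive: rendered
        task.release();                      // simulate the screen being destroyed
        System.out.println(task.run());      // afterwards: skipped, no leak, no crash
    }
}
```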

3. Memory problems on the login screen

If you do not call finish() on the splash Activity when jumping from the splash screen to the login screen, the splash screen will leak. Be careful not to create this kind of memory bug.

4. Memory problems when using system services

We usually use the getSystemService method to obtain system services, but when it is called on an Activity, the Activity is passed to the system service as the default Context, and under some uncertain circumstances certain system services may throw an exception internally and keep holding the Context that was passed in.

The solution is to obtain system services from the Application Context instead.

5. Dump WebView-type leaks into a “trash can” process

As we all know, WebView’s reference chains — network latency handling, engine session management, cookie management, engine kernel threads, HTML5 calls into the system audio and video playback components — cannot be broken in time, so the resulting memory problems are basically “unsolvable”.

The solution is to load the WebView in a separate process by setting the android:process attribute in the AndroidManifest, and then kill that process in the Activity’s onDestroy.

6. Unregister components when appropriate

During normal development, we often register components such as broadcast receivers, timers, and event buses when an Activity is created. We should then unregister those components at the appropriate time, such as in onPause or onDestroy.

7. Memory problems triggered by the Handler/FrameLayout postDelayed methods

When using Handler’s sendMessage method, we need to call removeCallbacksAndMessages in onDestroy to remove callbacks and messages; likewise, when using the postDelayed method of Handler or of a view such as FrameLayout, we need to call removeCallbacks so that the delayed Runnable is no longer held by the Handler inside the control’s implementation.

8. Images placed in the wrong resource directory also cause memory problems

During resource adaptation, we cannot place a copy of every image in every drawable/mipmap directory, because the APK also needs to stay slim, and many developers do not know which directory an image belongs in. If you put an image only in a low-density directory such as hdpi, it may cause memory problems, because the system scales it up when decoding on a high-density screen. It is better to ask the designer for a high-quality image and put it in a high-density directory such as xxhdpi; then on low-density screens the scale factor is below 1, and memory stays under control while image quality is preserved. You can also use Drawable.createFromStream instead of getResources().getDrawable to bypass Android’s default density adaptation entirely.
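The cost is easy to estimate: when Android density-scales a resource, the decoded bitmap grows by roughly deviceDpi/resourceDpi in each dimension, so memory grows with the square of that ratio (hdpi = 240dpi, xxhdpi = 480dpi). A back-of-the-envelope sketch, not an exact model of BitmapFactory:

```java
public class BitmapScaleDemo {

    // Rough decoded size in bytes for an ARGB_8888 bitmap whose resource
    // sits in a directory of resourceDpi, decoded on a deviceDpi screen.
    static long decodedBytes(int w, int h, int resourceDpi, int deviceDpi) {
        double scale = (double) deviceDpi / resourceDpi;
        long outW = Math.round(w * scale);
        long outH = Math.round(h * scale);
        return outW * outH * 4; // ARGB_8888: 4 bytes per pixel
    }

    public static void main(String[] args) {
        // The same 500x500 image on a xxhdpi (480 dpi) device:
        System.out.println(decodedBytes(500, 500, 240, 480)); // from hdpi: 4x the memory
        System.out.println(decodedBytes(500, 500, 480, 480)); // from xxhdpi: 1x
    }
}
```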

A process that has been sent to the background with the physical Back key cannot easily be killed if it satisfies the following two points:

  • 1) The process contains a Service started via startService, and the Service itself calls startForeground (on lower versions this needs to be invoked via reflection).
  • 2) The main Activity does not implement the onSaveInstanceState interface.

However, it is recommended that such a background process actively save its state after running for a period of time (e.g. 3 hours) and then restart itself, which can effectively reduce the memory load.

9. When a list item is recycled, remember to release its image references

We should release image references when an item becomes invisible. If you are using a ListView, the data is rebound every time an item is recycled and reused, so it is enough to release the image reference in the ImageView’s onDetachedFromWindow method. If you are using RecyclerView, an item that scrolls out of sight is first put into mCachedViews, and when it is reused from there, onBindViewHolder is not called again to rebind the data; only when it is recycled into the RecycledViewPool and later taken out for reuse will the data be rebound. So we should release image references when the item is recycled into the RecycledViewPool, which we can do by overriding the Adapter’s onViewRecycled method, as follows:

@Override
public void onViewRecycled(@NonNull VH holder) {
    super.onViewRecycled(holder);
    // Release the image references held by this holder here
}

10. Use ViewStub for placeholder

We should use ViewStub to lazily load resources that are not needed immediately. Many views that may never appear should likewise be loaded lazily, so that memory is only allocated for them when they are actually needed.

11. Regularly clean up the App’s obsolete tracking code

Product and operations teams keep adding new tracking points in every release for statistical purposes, so we need to periodically clean up obsolete tracking points to relieve memory and CPU pressure appropriately.

12. Handling memory leaks caused by anonymous inner class Runnables

We like to use anonymous inner class Runnables when working with child threads. However, if an Activity’s tasks in a thread pool do not finish in time, the Activity can easily leak when it is destroyed: an anonymous inner class Runnable holds a reference to its enclosing class, so as long as the Runnable has not finished executing, the Activity around it cannot be released. From the above analysis, as long as the Runnable no longer references the Activity when the Activity exits we are safe, so we can use reflection to clear the Runnable’s outer reference before it enters the thread pool, as shown in the following code:

Field f = job.getClass().getDeclaredField("this$0");
f.setAccessible(true);
f.set(job, null);

Here job is our Runnable object, and this$0 is the reference to the outer class mentioned above. Note that the outer class should be held through a WeakReference: before the task executes, call get() first, and if it returns null, the Activity has already been recycled and the task gives up execution.
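Putting the snippet above into runnable form (class names invented for the demo; the anonymous Runnable deliberately reads an outer field so that javac is guaranteed to generate this$0):

```java
import java.lang.reflect.Field;

public class ClearOuterDemo {
    String tag = "outer";

    Runnable makeJob() {
        // Anonymous inner class; reading 'tag' forces javac to keep this$0.
        return new Runnable() {
            @Override public void run() { System.out.println(tag); }
        };
    }

    static Field outerField(Runnable job) {
        try {
            Field f = job.getClass().getDeclaredField("this$0");
            f.setAccessible(true);
            return f;
        } catch (NoSuchFieldException e) {
            throw new RuntimeException(e);
        }
    }

    static Object outerOf(Runnable job) {
        try { return outerField(job).get(job); }
        catch (IllegalAccessException e) { throw new RuntimeException(e); }
    }

    // The trick from the snippet above: break the leak path before
    // handing the job to a thread pool.
    static void clearOuter(Runnable job) {
        try { outerField(job).set(job, null); }
        catch (IllegalAccessException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        ClearOuterDemo outer = new ClearOuterDemo();
        Runnable job = outer.makeJob();
        System.out.println(outerOf(job) == outer); // the leak path exists
        clearOuter(job);
        System.out.println(outerOf(job) == null);  // outer can now be collected
    }
}
```

Keep in mind that after clearing this$0, the Runnable must not touch outer members anymore, which is exactly why the WeakReference null-check above is needed.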

Frequently asked questions about memory optimization

1. What is the process of your memory optimization project?

1. Analyze the status quo and identify problems

We found that our APP probably had a serious memory problem. The first sign was that our online OOM rate was relatively high.

Second, we see a lot of memory jitter in our Android Studio Profiler tool.

This was our preliminary view of the status quo. After that, we confirmed the problems through a series of investigations and finally found the following major issues in our project: memory jitter, memory leaks, memory overflow, and very crude use of Bitmaps.

2. Targeted optimization

For example, solving memory jitter => using the Memory Profiler tool (which shows a jagged memory graph) => analyzing the specific code problem (concatenating log strings in frequently called methods). You can talk through solving memory leaks or memory overflow in the same way.

3. Efficiency improvement

In order not to increase the workload of business developers, we used some utility classes and a completely non-intrusive ARTHook-based large-image detection scheme. At the same time, we taught these techniques to everyone, so that the whole team could improve productivity together.

Once we were familiar with Profiler Memory and MAT, we wrote up solutions to a series of different problems and shared them with the team. In this way, every member of our team became more aware of memory optimization.

2. What are your biggest feelings about memory optimization?

1. Sharpening the axe does not delay cutting the firewood

At the beginning, we did not directly analyze the memory problems in the project’s code. Instead, we first studied some of Google’s official documents, such as how to use the Memory Profiler and MAT tools. Once we had mastered these tools, whenever we encountered a memory problem in the project, we could quickly troubleshoot and fix it.

2. Technical optimization must be combined with business code

In the beginning, we measured memory for the APP as a whole at runtime and then monitored the memory consumption of some key modules. However, we later found that this monitoring was not closely tied to our business code. For example, after combing through the project, we discovered it used several image libraries whose memory caches were, of course, not shared, which made the memory usage of the whole project very high. So technical optimization must be combined with our business code.

3, systematic improvement of solutions

During memory optimization, we did not only optimize the Android client: we also reported the data collected on the client to our server, which forwarded it to our APM backend. This makes it convenient for our bug trackers and crash trackers to solve a whole series of problems.

3. How to detect all unreasonable places?

For example, our initial scheme for detecting large images was to subclass ImageView and override its onDraw method. However, as we promoted it, we found that many developers did not accept it, because a lot of ImageViews had already been written and replacing them all was expensive. So we asked ourselves whether there was a solution that required no replacement, and we finally arrived at the ARTHook hooking scheme.

Ten, Summary

For the special topic of memory optimization, we should pay attention to two things: the overall direction of optimization and the details of optimization.

1. Optimize the general direction

As for the overall direction of optimization, we should give priority to the areas with quick results, mainly including the following three parts:

  • 1) Memory leak
  • 2) Memory jitter
  • 3), Bitmap

2. Optimize the details

For optimization details, we should pay attention to the use of some system attributes or memory callbacks, etc., which can be broken down into the following six parts:

  • 1) The LargeHeap attribute
  • 2) onTrimMemory/onLowMemory
  • 3) Use optimized collections, such as the SparseArray family
  • 4) Use SharedPreference sparingly
  • 5) Be careful with external libraries
  • 6) Reasonable business architecture design

3. Summary of memory optimization systematization construction

In this article, we implemented the following ten components/strategies in addition to the core architecture of memory monitoring closed-loop:

  • 1) Use different memory caches and allocation/reclamation strategies according to the device tier.
  • 2) For low-end devices, degrade functionality or the image loading format.
  • 3) A unified cache management component is implemented to solve the problem of cache abuse.
  • 4) The monitoring of large picture and repeated picture is realized.
  • 5) The foreground obtains the proportion of the current application memory to the maximum memory every certain time, and releases the application cache when the threshold is exceeded.
  • 6) Release memory when the UI is hidden to improve the system’s ability to cache application processes.
  • 7) Effectively realize Bitmap monitoring within the application globally.
  • 8) Realize the global thread monitoring.
  • 9) GC monitoring is implemented for heavy scenarios of memory usage.
  • 10) Realized the offline monitoring of native memory leakage.

Finally, when the application memory exceeds the threshold, a comprehensive fallback strategy we customized restarts the application process.

In general, it is very important and challenging to establish a comprehensive and systematic memory optimization and monitoring. In addition, the current memory optimization system of major companies is also in the process of evolution, the purpose of which is nothing more than: to achieve better function, deeper problem location, fast and accurate detection of online problems.

The way ahead is so long without ending, yet high and low I’ll search with my will unbending.

Reference links:

1, Top domestic team teaches you Android performance analysis and optimization, Chapter 4: memory optimization

2, Geek time Android development master class memory optimization

3. Wechat Android terminal memory optimization practice

4. Gmtc-android Memory leak automatic link analysis component Probe.key

5. Manage your app’s memory

6, Overview of memory management

7, Android memory optimization miscellaneous

8. Android performance optimization is embedded

9. Manage application memory

10, Android Mobile Performance In Action, Chapter 2: Memory

11, a daily Linux command (44) : top command

12, Android memory analysis command

Thank you for reading this article. I hope you will share it with your friends or technical groups; it means a lot to me.

I hope we can be friends on GitHub and Juejin, and share knowledge with each other.