This article focuses on iOS memory, covering it from three aspects: memory management in general operating systems, iOS system memory, and app-level memory management. The main contents are as follows:

iOS is based on BSD, so it is important to first understand the memory mechanisms of common desktop operating systems. On that basis, this article analyzes the iOS system level, including the overall memory mechanism of iOS and how the system uses memory at runtime. Finally, the granularity narrows to a single app, and the memory management strategy within one app is discussed.

Memory mechanism of the operating system

To understand and analyze iOS memory features at a fundamental level, we first need a proper understanding of the memory mechanisms common to general operating systems.

Von Neumann structure

The von Neumann architecture, proposed in 1945, was the first design to separate memory from computation, leading to the modern computer with memory at its core.

In the von Neumann architecture, memory plays a central role. It stores a program's instructions and data and supplies them to the CPU as needed while the program runs. Ideally, memory would combine fast read/write speed, large capacity, and low price, but no real memory has all three: the faster the read/write speed, the more expensive the memory and the smaller its capacity.

But the von Neumann architecture has an insurmountable problem known as the von Neumann bottleneck: at current technological levels, the rate at which the CPU can read from and write to memory is far lower than the CPU's processing speed. Simply put, the CPU is too fast and memory is not fast enough, so CPU performance goes to waste.

Since perfect memory does not exist, how do we try to break through the von Neumann bottleneck? The current answer is multilevel storage, which balances read/write speed, capacity, and price.

Hierarchical structure of memory

There are two main types of memory: volatile memory is faster but loses its data after a power failure; nonvolatile memory has more capacity, is cheaper, and does not lose data when power is cut. Random-access memory (RAM) is further divided into two categories: SRAM is faster, so it is used as cache, while DRAM is used as main memory. Read-only memory (ROM) really was read-only at first, but as it evolved it became both readable and writable, keeping its name nonetheless.

The figure above is an example of multilevel memory. What we usually call "memory" is actually the L4 main memory. The L1–L3 caches are faster than main memory and are integrated into the CPU chip. The L0 registers are components of the CPU itself; they are the fastest of all, and accessing them takes zero clock cycles.

To put it simply, hierarchical memory is really a caching idea. The bottom of the pyramid is larger and cheaper and mainly provides bulk storage; the cache layers at the top have fast read/write speeds and are responsible for holding the frequently used portion, optimizing overall read/write efficiency to a certain extent.

Why does caching improve efficiency? Logically it is quite simple: it follows from the principle of locality. Recently used contents of memory are likely to be used again soon (temporal locality), and nearby contents are also likely to be used (spatial locality). By keeping such content in the cache, we can save trips to main memory in many cases.
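To make the locality principle concrete, here is a minimal, self-contained Swift sketch (sizes and names are illustrative, not from the original article): both loops compute the same sum over a flat 2D array, but the row-major loop touches consecutive addresses and benefits from the cache, while the column-major loop strides through memory and hits the cache far less often.

```swift
let rows = 512, cols = 512
let matrix = [Int](repeating: 1, count: rows * cols)

// Row-major order: consecutive memory addresses (cache friendly).
var rowMajorSum = 0
for r in 0..<rows {
    for c in 0..<cols {
        rowMajorSum += matrix[r * cols + c]
    }
}

// Column-major order: each step jumps `cols` elements (cache unfriendly).
var colMajorSum = 0
for c in 0..<cols {
    for r in 0..<rows {
        colMajorSum += matrix[r * cols + c]
    }
}

// Same result either way; on large arrays the row-major loop is
// typically measurably faster purely because of spatial locality.
print(rowMajorSum == colMajorSum)  // true
```

The arithmetic is identical in both loops; only the access pattern differs, which is exactly what the cache layers of the pyramid reward.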

CPU addressing mode

So how does the CPU access memory? Memory can be thought of as an array: each element is one byte of space, and the array index is the so-called Physical Address. In the simplest and most direct scheme, the CPU accesses memory directly through the physical address, which is called physical addressing.

Physical addressing was later extended to support segmentation: a segment register was added to the CPU, and addresses took the form "segment base : segment offset", increasing the addressing range.

However, physical addressing with segmentation still has problems, the most serious of which is the lack of address-space protection. In short, because physical addresses are directly exposed, a process can access any physical address, and a user process can do whatever it wants, which is very dangerous.

Modern processors use virtual addressing: the CPU issues a Virtual Address, which must be translated into a Physical Address before memory can be accessed. The translation is done by the Memory Management Unit (MMU) in the CPU.

The detailed process is shown in the figure above. The lookup first goes to the TLB (Translation Lookaside Buffer), which sits inside the CPU and is the fastest to query. On a miss, the Page Table is consulted next; it lives in physical memory, so the lookup is slower. Finally, if the target page is not resident in physical memory, a page fault occurs and the page is fetched from disk. Of course, if the address cannot be resolved through the page table at all, something is wrong.

This translation process mirrors the memory hierarchy discussed earlier and embodies the same caching idea: the TLB is the fastest but also the smallest, followed by the page table, and the hard disk is the slowest.

Virtual memory

As mentioned earlier, direct physical addressing suffers from a serious lack of address-space protection. What is the solution? With virtual addressing, since every access goes through translation, additional permission checks can be added during translation to protect the address space. Thus, for each process, the operating system can provide a separate, private, contiguous address space, which is called virtual memory.

The greatest significance of virtual memory is protecting each process's address space, so that processes cannot interfere with one another beyond their authority. The operating system "deceives" each process with virtual memory, and a process can only manipulate the portion of virtual memory allocated to it. At the same time, the virtual memory visible to a process is a contiguous address space, which makes memory easier for programmers to manage.

For a process, the only visible part is the virtual memory allocated to it, which may actually map to physical memory or to any area of the hard disk. Since hard disks are slower to read and write than memory, the operating system uses physical memory first; when physical memory runs short, it swaps part of the memory data out to the hard disk for storage. This is the memory swap mechanism. With swapping, virtual memory effectively uses hard disk space to extend memory, compared with physical addressing.

In summary, virtual memory brings several benefits: it protects each process's address space, simplifies memory management, and extends memory with hard disk space.

Paging

Based on the preceding ideas, virtual memory is mapped to physical memory. To facilitate mapping and management, both virtual and physical memory are divided into units of the same size. The smallest unit of physical memory is called a Frame, while the smallest unit of virtual memory is called a Page.

Note that pages and frames are of the same size and have a function-like mapping. The translation process mentioned earlier with TLB and page tables is actually very similar to function mapping.

The most important effect of paging is that it enables discrete use of physical memory. Thanks to the mapping, the physical memory backing a range of virtual memory can be located anywhere, which makes physical memory easier for the operating system to manage and maximizes its utilization. At the same time, since the locality principle also applies to translation, paging algorithms can bring frame addresses that are likely to be used into the TLB or page table ahead of time, improving translation efficiency.
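As a rough sketch of the arithmetic involved (a toy model, not the real MMU; the page table here is just a hypothetical Swift dictionary), a virtual address splits into a virtual page number and a page offset. On arm64 iOS devices, pages are 16 KB, so the low 14 bits are the offset:

```swift
// Toy address translation: 16 KB pages => 14 offset bits.
let pageSize: UInt64 = 16 * 1024
let offsetBits: UInt64 = 14  // log2(16384)

// Hypothetical page table: virtual page number -> physical frame number.
let pageTable: [UInt64: UInt64] = [0x0: 0x7F, 0x1: 0x12]

func translate(_ virtualAddress: UInt64) -> UInt64? {
    let vpn = virtualAddress >> offsetBits       // virtual page number
    let offset = virtualAddress & (pageSize - 1) // offset within the page
    guard let frame = pageTable[vpn] else {
        return nil                               // not resident: a page fault
    }
    return (frame << offsetBits) | offset        // physical address
}
```

With this toy table, translate(0x0004) lands in frame 0x7F, translate(0x4008) in frame 0x12, and translate(0x8000) returns nil, modeling a fault on an unmapped page. The real TLB and page table walk perform exactly this lookup, only in hardware and across multiple levels.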

iOS memory mechanism

According to the official document Memory Usage Performance Guidelines (no longer updated), the iOS memory mechanism has the following features:

Using virtual memory

iOS, like most desktop operating systems, uses a virtual memory mechanism.

Memory is limited, but the available memory for a single application is large

For mobile devices, physical memory capacity is limited by objective constraints, and the iPhone's RAM is itself small: the latest iPhone XS Max has only 4 GB, while for horizontal comparison the Mi 9 reaches 8 GB and the Huawei P30 also has 8 GB. See the List of iPhones for how much memory each generation has.

But unlike other phones, iOS allocates a large virtual address space to each process. According to the official documentation, iOS provides up to 4 GB of addressable space for each 32-bit process, which is quite large.

There is no memory swapping

Virtual memory is far greater than physical memory, so what happens if physical memory runs out? As we mentioned earlier, other desktop operating systems (such as OS X) have memory swapping, which allows you to swap a portion of your physical memory to the hard disk if you need to. This is one of the advantages of using virtual memory.

However, iOS does not support memory swapping, and neither do most mobile devices. The bulk storage on mobile devices is usually flash memory, which is much slower to read and write than the hard disks used in computers, so even using swapping on a mobile device would not improve performance. Moreover, mobile devices are short on capacity and flash has a limited read/write lifetime; spending it on swapping would be a luxury.

It should be noted that a few articles online claim iOS has no virtual memory mechanism. What they actually mean is that iOS has no memory swap mechanism, because on Windows, "virtual memory" sometimes refers to the hard disk space provided for swapping.

Memory warning

When memory runs low, iOS issues a memory warning telling processes to clean up their memory. On iOS, one process corresponds to one app. A memory warning triggers the didReceiveMemoryWarning() method in your code, where the app should release unnecessary memory to free up space.

OOM crash

If your app still runs out of physical memory after a memory warning, an OOM (Out of Memory) crash occurs.

On Stack Overflow, someone collected statistics on the maximum memory a single app can use: iOS App Max Memory Budget. In the case of the iPhone XS Max, the total available memory is 3735 MB (slightly less than the hardware total, since the system itself consumes some), while the memory available to a single app is 2039 MB, about 55%. When an app's memory usage exceeds this threshold, an OOM crash occurs. As you can see, the physical memory available to a single app is actually quite large; when OOM crashes do happen, the vast majority are caused by the program itself.

Memory usage of the iOS system

After analyzing the characteristics of the iOS memory mechanism, we can realize that it is very important to control the memory used by the app. So specifically, what do we need to reduce? This is actually what is called the iOS Memory Footprint.

As mentioned above, memory pages can be classified. Generally speaking, they are divided into clean memory and dirty memory. iOS also has the concept of compressed memory.

Clean memory & dirty memory

On a normal desktop operating system, clean memory can be thought of as the part that can be paged out. Page out refers to swapping low-priority memory data to disk, but iOS has no swap mechanism, so the definition on iOS is looser: clean memory is memory that can be recreated. It consists of the following categories:

  • Binary executable file of app

  • The __DATA_CONST segment in frameworks

  • File mapped memory

  • Memory that has no data written to it

A memory-mapped file is loaded into memory when the app accesses it. If the file is read-only, that memory is clean memory. Note also that the __DATA_CONST of a linked framework is not always clean; it becomes dirty memory once the app actually uses that framework.

Memory that has not been written to is also clean memory. For example, in the code below, only the pages that have been written to are dirty memory.

int *array = malloc(20000 * sizeof(int));  /* allocated but untouched: clean */
array[0] = 32;        /* the page containing array[0] becomes dirty */
array[19999] = 64;    /* the page containing array[19999] becomes dirty */

All memory that is not clean memory is dirty memory. This memory cannot be recreated by the system, so dirty memory occupies physical memory until physical memory runs out, at which point the system starts cleaning up.

Compressed memory

When physical memory is insufficient, iOS compresses part of it, decompressing it again when it needs to be read or written, thereby saving memory. This is called compressed memory. Apple first used the technology in OS X and later brought it to iOS.

In fact, as virtual memory technology developed, many desktop operating systems adopted memory compression, such as the memory combining technology in Windows. It is essentially a trade of CPU time for memory space: compression and decompression take time and consume CPU. But as mentioned at the beginning of this article, CPU speed far outstrips memory, so in most scenarios physical memory space is more valuable than CPU time, which makes memory compression very useful.

According to the OS X Mavericks Core Technology Overview, compressed memory can shrink the target memory to less than half its original size, while compression and decompression take very little time. On OS X, compressed memory also works together with memory swapping to improve swap efficiency, since compressing is significantly faster than swapping. iOS, however, has no memory swapping, so that particular benefit does not apply.

Compressed memory is also dirty memory.

Memory Usage Composition

For apps, the primary memory we care about is dirty memory, which also includes compressed memory. In the case of clean memory, as a developer, you usually don’t have to care.

When memory usage grows too large, memory warnings and OOM crashes follow, so we should reduce memory usage as much as possible and be prepared for both. Reducing the memory footprint also improves launch speed as a side benefit: less memory to load means naturally faster startup.

The normal thinking is that when an app receives a memory warning, it should proactively clean up and free some low-priority memory, which is essentially correct. However, because of the particularities of compressed memory, reasoning about actual memory usage is a little more complex.

In the case above, when we receive a memory warning and try to free part of a Dictionary, the Dictionary may already have been compressed because it was unused. After decompressing it and releasing part of its contents, the Dictionary is left in an uncompressed state, which probably does not reduce physical memory usage, and may even increase it.

Therefore, it is recommended to use NSCache instead of NSDictionary for caching: NSCache is not only thread safe, but is also optimized for memory warnings in the presence of compressed memory, so the system can release its memory automatically.
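A minimal sketch of the recommended pattern (keys and values here are placeholders): because the system may evict NSCache entries on its own under memory pressure, a read must always tolerate a miss and recreate the value, which is exactly the "recreatable" property that makes it memory-warning friendly.

```swift
import Foundation

// NSCache instead of a dictionary: thread safe, and the system is free
// to evict entries under memory pressure, so treat the contents as
// recreatable rather than guaranteed.
let cache = NSCache<NSString, NSString>()
cache.countLimit = 100  // optional cap on the number of entries

cache.setObject("expensive result", forKey: "answer")

// Always read defensively: nil means evicted (or never cached).
if let value = cache.object(forKey: "answer") {
    print("cache hit: \(value)")
} else {
    print("cache miss: recompute the value")
}
```

The same pattern with NSDictionary would require manual locking and manual cleanup in didReceiveMemoryWarning; NSCache handles both for us.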

iOS app memory management

As mentioned earlier, memory management at the iOS system level is mostly done automatically by the operating system. On iOS, an app is a process, so the memory management developers usually talk about, such as MRC and ARC, is actually in-process memory management, or language-level memory management. The operating system has its own strategies for this layer, but as developers we often still need to work directly at the language level.

iOS app address space

As mentioned earlier, each process has its own virtual memory address space, known as the process address space. Now let’s simplify a little bit. The process address space for an iOS app looks something like this:

Each area stores different content. The code area, constant area, and static area are loaded automatically and released by the system when the process ends, so developers do not need to pay attention to them.

The stack generally stores local and temporary variables, which are allocated and released automatically by the compiler; each thread runs on its own stack. The heap is used for dynamic memory allocation and is allocated and freed by the programmer. In general the stack is faster because it is managed automatically by the system, but it is not as flexible to use as the heap.

For Swift, value types are stored on the stack and reference types on the heap. Struct, enum, and tuple are all value types; Int, Double, Array, Dictionary, and so on are actually implemented as structs, so they are value types too. Classes and closures are reference types, so whenever we encounter classes and closures in Swift, we should be careful to consider their references.
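The difference can be observed directly; this is a small illustrative sketch (the type names are made up):

```swift
// Assigning a struct copies the value; assigning a class instance
// copies only the reference to one shared heap object.
struct PointValue { var x: Int }
final class PointRef { var x: Int; init(x: Int) { self.x = x } }

let v1 = PointValue(x: 1)
var v2 = v1          // copy: v2 is an independent value
v2.x = 99
print(v1.x)          // 1 — v1 is untouched

let r1 = PointRef(x: 1)
let r2 = r1          // reference copy: both names point at one object
r2.x = 99
print(r1.x)          // 99 — shared state, shared ownership under ARC
```

Shared ownership is precisely what reference counting has to track, and what the following section is about.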

Reference counting

The heap must be managed by the programmer, and how to track, record, and reclaim it is a problem worth serious thought. iOS uses Reference Counting, which records the number of references to each resource; when the count drops to zero, the space is released and reclaimed.

In iOS, Manual Reference Counting (MRC) manages reference counts by hand, controlling object lifetimes with methods such as retain and release. Because MRC is too cumbersome to maintain, ARC (Automatic Reference Counting) was introduced at WWDC 2011: through static analysis, the compiler automatically inserts the reference-counting logic, avoiding tedious manual management.

Reference counting is one form of garbage collection; others include tracing collectors such as mark-and-sweep. By comparison, a reference count records only how many references an object has, which is local information lacking a global view. As a result it can suffer from circular references, which require extra attention at the code level.
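The circular-reference weakness is easy to reproduce even without closures. In this small sketch (names are illustrative), two objects hold strong references to each other, so neither count ever reaches zero and neither deinit runs, even after the last external reference is dropped:

```swift
var deinitCount = 0

final class Node {
    var partner: Node?
    deinit { deinitCount += 1 }
}

var a: Node? = Node()
var b: Node? = Node()
a?.partner = b
b?.partner = a       // the cycle: a -> b -> a

a = nil
b = nil              // both objects are now unreachable, but...
print(deinitCount)   // 0 — reference counting alone cannot reclaim the cycle
```

A tracing collector would notice that neither object is reachable from any root and reclaim both; reference counting, having only local counts, cannot.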

So why does iOS use reference counting? First, with reference counting, objects are reclaimed immediately at the end of their life cycle, without waiting for a global traversal. Second, when memory is tight, tracing GC algorithms have larger pauses and lower efficiency. Since the iPhone's overall memory is small, reference counting is the more reasonable choice.
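The "reclaimed immediately" point can be seen in a tiny sketch (illustrative names): deinit runs synchronously at the exact moment the last strong reference goes away, with no collector pause in between:

```swift
var log: [String] = []

final class Resource {
    deinit { log.append("deinit") }
}

var r: Resource? = Resource()
log.append("before release")
r = nil                         // count drops to 0: deinit runs right here
log.append("after release")

print(log)  // ["before release", "deinit", "after release"]
```

This determinism is why patterns like closing a file or invalidating a timer in deinit are workable under ARC, whereas a tracing GC gives no such timing guarantee.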

Circular references

A memory leak is a failure to free memory that can no longer be used, wasting memory and possibly crashing the application. The circular references that can occur under ARC are one cause, and the most common one on iOS. You are probably familiar with when they arise, typically in Swift when using closures:

class viewController: UIViewController {
    var a = 10
    var b = 20
    var someClosure: (() -> Int)?

    func anotherFunction(closure: @escaping () -> Int) {
        DispatchQueue.main.asyncAfter(deadline: .now() + .seconds(5)) {
            print(closure())
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        someClosure = {
            return self.a + self.b
        }
        anotherFunction(closure: someClosure!)
    }
}

In this code, the viewController holds someClosure, and someClosure holds the viewController because it needs self.a + self.b, resulting in a circular reference. Note that closures, like classes, are reference types: when you assign a closure to a property of a class, you are actually assigning a reference to that closure.

The solution is simple: use the closure capture list provided by Swift to change one strong reference in the cycle into a weak one. In fact, Swift requires that self never be omitted when a closure uses a member of self, precisely to alert you to the circular references that may occur in this situation.

someClosure = { [weak self] in
    guard let self = self else { return 0 }
    return self.a + self.b
}

Weak and unowned

The weak keyword breaks a circular reference by replacing a strong reference with a weak one. There is another keyword, unowned, which breaks cycles by replacing a strong reference with an unowned one. So what is the difference? A weakly referenced object can be nil, while an unowned reference assumes its object is never nil; accessing it after the object has been released causes a runtime error.

In the example above using weak, an extra unwrapping step with guard let is required. With unowned, the unwrapping step can be skipped:

someClosure = { [unowned self] in
    return self.a + self.b
}

Under the hood, weak adds an extra layer, in effect wrapping an unowned reference in an Optional container. This is clearer but carries some performance cost, so unowned is faster.

But there is a chance of a crash if an unowned object becomes nil. In the example above, anotherFunction delays calling someClosure by 5 seconds; if we pop the viewController within those 5 seconds, then by the time unowned self is accessed, self has already been released, and the app crashes:

Fatal error: Attempted to read an unowned reference but the object was already deallocated

As a simple analogy, using a weakly referenced object is like using an optional type that must be unwrapped; using an unowned reference is as if it had already been force-unwrapped: no unwrapping is needed, but if the object is nil, it crashes.
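The analogy can be checked in a few lines of plain Swift (the class name is illustrative): a weak reference is zeroed automatically when its target is deallocated, which is exactly why it must be an Optional and why it needs unwrapping before use.

```swift
final class Owner {}

var strongRef: Owner? = Owner()
weak var weakRef = strongRef     // does not keep the object alive

assert(weakRef != nil)           // target still alive here
strongRef = nil                  // last strong reference gone: deallocated
print(weakRef == nil)            // true — the weak reference was zeroed
// An unowned reference in the same situation would NOT be zeroed;
// touching it now would trap at runtime instead.
```

This zeroing is the safety that weak buys at the cost of the Optional layer described above.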

Under what circumstances can unowned be used? According to the Automatic Reference Counting documentation, an unowned reference is used when the other instance has the same or a longer lifetime.

Unlike a weak reference, however, an unowned reference is used when the other instance has the same lifetime or a longer lifetime.

One case: if two objects hold each other and one side may be nil while the other may not, you can use unowned. For example, in this example from the official documentation, every CreditCard must have an owner, that is, a CreditCard always corresponds to a Customer, so unowned is used here:

class Customer {
    let name: String
    var card: CreditCard?
    init(name: String) {
        self.name = name
    }
    deinit { print("\(name) is being deinitialized") }
}

class CreditCard {
    let number: UInt64
    unowned let customer: Customer
    init(number: UInt64, customer: Customer) {
        self.number = number
        self.customer = customer
    }
    deinit { print("Card #\(number) is being deinitialized") }
}

For closures, on the other hand, a capture can be defined as unowned when the closure and the instance it captures always refer to each other and are destroyed at the same time. In short, if a captured reference never becomes nil, use unowned instead of weak.

If the captured reference will never become nil, it should always be captured as an unowned reference, rather than a weak reference.

For example, in the closure in the example below, asHTML is declared lazy, so self must already be initialized before the closure can be used; and since the closure never leaves the HTMLElement instance, once self is destroyed the closure no longer exists either. In this case, unowned can be used:

class HTMLElement {

    let name: String
    let text: String?

    lazy var asHTML: () -> String = {
        [unowned self] in
        if let text = self.text {
            return "<\(self.name)>\(text)</\(self.name)>"
        } else {
            return "<\(self.name) />"
        }
    }

    init(name: String, text: String? = nil) {
        self.name = name
        self.text = text
    }

}

Generally speaking, the key point is that weak is safer than unowned and avoids unexpected crashes, which is good for a project. So most of the time, just as we prefer if let and guard let over force unwrapping, we simply use weak.

Cases that do not cause circular references

Because closures often cause circular references, and adding weak plus guard let never produces an error, many of us reach for weak whenever we meet a closure, which is actually too crude.

For example, if a closure like the following is used in the viewController, no circular reference occurs, because self does not hold the DispatchQueue; self is merely retained until the closure has executed:

DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    self.execute()
}

A more typical example is when using static functions:

class APIClass {
    // a static helper wrapping a network request
    static func getData(params: String, completion: @escaping (String) -> Void) {
        request(method: .get, parameters: params) { (response) in
            completion(response)
        }
    }
}

class viewController: UIViewController {

    var params = "something"
    var value = ""

    override func viewDidLoad() {
        super.viewDidLoad()
        APIClass.getData(params: self.params) { (value) in
            self.value = value
        }
    }
}

There is no circular reference here, since self does not hold APIClass (a type rather than an instance), so there is no memory leak.

OOM crash

Jetsam mechanism

iOS is a BSD-derived system whose kernel, XNU, is built around Mach. Its mechanism for handling memory warnings and OOM crashes is called Jetsam, also known as memorystatus. Jetsam constantly monitors overall memory usage; when memory is low, it kills processes based on priority and memory footprint, recording each kill as a JetsamEvent.

As we can see from Apple's open-source kernel code apple/darwin-xnu, Jetsam maintains a priority queue, the details of which can be found in the bsd/kern/kern_memorystatus.c file:

static const char *
memorystatus_priority_band_name(int32_t priority)
{
	switch (priority) {
	case JETSAM_PRIORITY_FOREGROUND:
		return "FOREGROUND";
	case JETSAM_PRIORITY_AUDIO_AND_ACCESSORY:
		return "AUDIO_AND_ACCESSORY";
	case JETSAM_PRIORITY_CONDUCTOR:
		return "CONDUCTOR";
	case JETSAM_PRIORITY_HOME:
		return "HOME";
	case JETSAM_PRIORITY_EXECUTIVE:
		return "EXECUTIVE";
	case JETSAM_PRIORITY_IMPORTANT:
		return "IMPORTANT";
	case JETSAM_PRIORITY_CRITICAL:
		return "CRITICAL";
	}

	return ("?");
}

How are memory warnings monitored and Jetsam events handled? First, the kernel starts a thread with the highest kernel priority (95, MAXPRI_KERNEL, the highest priority the kernel can assign to a thread):

result = kernel_thread_start_priority(memorystatus_thread, NULL,
    95 /* MAXPRI_KERNEL */, &jetsam_threads[i].thread);

This thread maintains two lists, a prioritized list of processes and a list of memory pages consumed by each process. At the same time, it listens for notifications from the kernel pageOut thread about overall memory usage and forwards a memory warning to each process in case of a memory emergency, which triggers the didReceiveMemoryWarning method.

memorystatus_kill_on_vm_page_shortage can run in synchronous or asynchronous mode. In synchronous mode, processes are killed immediately: lower-priority processes are killed first, and among processes with the same priority, those using more memory are killed first. In asynchronous mode, the current process is only flagged, and a dedicated memory-management thread performs the kill.

How to detect OOM

OOM crashes fall into two categories: Foreground OOM and Background OOM, abbreviated FOOM and BOOM. FOOM means the app consumed too much memory while in the foreground and was killed by the system, which manifests directly as a crash.

Facebook's open-source FBAllocationTracker hooks methods such as malloc/free to record allocation information for all instances at runtime, making it possible to spot instances with abnormal memory behavior, somewhat like a lighter-weight Allocations instrument running inside the app. However, the library can only monitor Objective-C objects, so it is quite limited, and because it cannot capture object stack information, pinpointing the specific cause of an OOM is harder.

Tencent's open-source OOMDetector records allocation information for live objects through malloc_logger, the lower-level interface beneath malloc/free, and captures stack information with the system's backtrace_symbols. The data is then stored and analyzed with structures such as a splay tree. For details, see this article: iOS WeChat Memory Monitoring.

Common causes of OOM

Memory leaks

One of the most common causes is memory leaks.

UIWebView defects

UIWebView consumes a lot of memory whether it is opening a web page or executing a simple piece of JS, and CSS animations caused many problems in older versions, so it is better to use WKWebView.

Large images and large views

Zooming or drawing large high-resolution images, playing GIFs, and rendering oversized views (such as extremely long text views) all consume large amounts of memory. In mild cases this causes stuttering; in severe cases an OOM can occur during decoding and rendering.

Memory analysis

There are a number of ways to analyze and detect memory usage and memory leaks.

  • Xcode Memory Gauge: in the Xcode Debug Navigator, you can get a rough view of memory usage.
  • Instruments - Allocations: view virtual memory usage, heap information, object information, call stack information, and VM Regions information. You can use this tool to analyze memory and optimize against it.
  • Instruments - Leaks: used to detect memory leaks.
  • MLeaksFinder: detects memory leaks without intruding on the code, by checking whether a UIViewController's views are also destroyed after the controller itself is destroyed.
  • Instruments - VM Tracker: shows the usage of different kinds of memory, such as the size of dirty memory, to help analyze excessive memory use, leaks, and other causes.
  • Instruments - Virtual Memory Trace: analyzes memory paging in detail; see WWDC 2016 - System Trace in Depth.
  • Memory Resource Exceptions: starting with Xcode 10, the debugger can catch the EXC_RESOURCE RESOURCE_TYPE_MEMORY exception thrown on excessive memory usage, breaking at the point where it was raised.
  • Xcode Memory Debugger: directly view the reference relationships between all objects in Xcode, which is very convenient for finding circular references. This information can also be exported as a memgraph file.
  • memgraph + command-line tools: with the memgraph file exported in the previous step, several commands help analyze memory. vmmap prints process information, VM Regions information, and so on; combined with grep it can show a specified VM Region. leaks traces objects in the heap to inspect memory leaks, reference relationships, stack information, and so on. heap prints everything in the heap, making it easier to track objects that use a lot of memory. malloc_history shows the allocation history of an object found via heap, making problems easy to locate. In summary: malloc_history ==> Creation; leaks ==> Reference; heap & vmmap ==> Size.


References

  1. What is memory – Eleven_YW
  2. Heart of the Machine – the von Neumann architecture
  3. A few things about virtual memory – SylvanasSun
  4. Stack Overflow – Why don’t most Android devices have swap area as typical OS does?
  5. Stack Overflow – What is resident and dirty memory of iOS?
  6. How powerful is the memory compression technology in OS X Mavericks? – rlei’s answer – Zhihu
  7. WWDC 2018: An in-depth look at iOS memory
  8. WWDC 2018 – iOS Memory Deep Dive
  9. How does reference counting maintain all object references in garbage collection? – RednaxelaFX’s answer – Zhihu
  10. Garbage collection algorithms: reference counting – good speed
  11. All About Memory Leaks in iOS
  12. Unowned or Weak? Lifetime and Performance
  13. How Swift Implements Unowned and Weak References
  14. “The Swift Programming Language” in Chinese – SwiftGG
  15. iOS Memory Abort (Jetsam) mechanism explored
  16. OOM exploration: XNU memory status management
  17. iOS WeChat memory monitoring