This article records some practical memory-leak troubleshooting in Python code, with a brief introduction to tools such as objgraph and gc, illustrated with problems actually encountered.
Where did the memory go
Watch the memory of the Python process on the server: if it keeps growing at runtime, there is a potential leak. Tools that search for leaked Python objects, such as objgraph, usually need the process to be in a relatively stable state. If the process is busy and creating objects frequently, then even when objgraph reports large numbers of objects it is hard to tell whether they are really leaking. Once the process is no longer handling new requests, it is easy to spot a problem by looking at its memory footprint and then searching for Python objects with objgraph. Sometimes, however, the Python objects that are still reachable do not account for that much memory. Where did the rest of it go?
Guppy can calculate the memory used by Python objects in the current process, which you can compare with the usage reported by top. It is common to find that the size reported by Guppy is much smaller than the memory the process currently occupies; tools like gdb-heap return similar results. So what exactly is occupying that memory? Most likely it is memory held by Python's own allocation mechanism, see Cack-memory-leaks-python. How can this be confirmed?
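The same comparison can be made with the standard library alone, which is handy on hosts where Guppy is not installed. This is a rough sketch (the function names are my own, not from any library): it sums the shallow sizes of GC-tracked objects and compares the total against the process's resident set size from `resource`.

```python
import gc
import resource
import sys

def gc_objects_size_bytes():
    # Shallow sizes only: containers are counted but not everything
    # they reference, so this is a lower bound on "Python object" memory.
    return sum(sys.getsizeof(obj) for obj in gc.get_objects())

def peak_rss_bytes():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS);
    # this assumes Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

# The gap between the two numbers is memory held by the interpreter
# itself: allocator arenas, objects the GC does not track, fragmentation.
```

The gap is normal and expected; it only becomes suspicious when it keeps widening while the set of reachable Python objects stays flat.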
Analyze process memory directly
Here is a script that dumps the writable memory regions of a process:
```shell
#!/bin/bash
# Usage: ./dump_memory.sh <pid>
grep rw-p /proc/$1/maps \
  | sed -n 's/^\([0-9a-f]*\)-\([0-9a-f]*\) .*$/\1 \2/p' \
  | while read start stop; do
      gdb --batch --pid $1 \
          -ex "dump memory $1-$start-$stop.dump 0x$start 0x$stop"
    done
```
Then run the following command over the dumped files,
```shell
strings -n 10 ...
```
to find strings above a certain length in each file, and infer from that data which content is occupying the memory. In an actual memory leak, the relevant data should show up in the file corresponding to the heap. When there is no leak, the contents of the heap are hard to interpret; the reason is probably Python's memory management. When the problem is obvious, the underlying cause may be that the process allocates too many objects at peak, so it keeps requesting memory from the system and its footprint grows, yet when the process is idle no leaked object can be seen directly with objgraph.
Analyze with objgraph
Back to objgraph. Since the root of the problem is in Python code, it should still be traceable with objgraph. As mentioned above, analysis with objgraph is relatively easy when the process is idle, but for this kind of problem you need to step in while the process is busy.
Objgraph's objgraph.show_most_common_types reports the types with the most instances. If a custom type tops the list, the problem is easy to track down. But what about native types such as dict and list? Instances of native types are numerous, and normal data is jumbled together with the leaked data, so you cannot directly trace a reference chain with methods like objgraph.find_backref_chain. A simple but effective approach is to export the data to a file and analyze it there. Once the leaked data is identified, it is usually easy to locate in the code even without a reference chain.
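The information show_most_common_types prints can also be approximated with the standard library, which is useful when objgraph cannot be installed on the affected host. This is a sketch of the idea, not objgraph's actual implementation:

```python
import gc
from collections import Counter

def most_common_types(limit=10):
    # Count GC-tracked objects by type name; roughly the information
    # that objgraph.show_most_common_types() prints.
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(limit)
```

Comparing two snapshots taken a few minutes apart shows which type counts are still climbing while the process is busy.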
This section takes dict data as an example.
- Get all current dict objects
```python
dict_list = objgraph.by_type('dict')
```
- Write the resulting list to a file in order, one dict per line, each line prefixed with its index
```python
def save_to_file(name, items):
    # One object per line, prefixed with its index, so a suspicious
    # entry can later be looked up by line number.
    with open(name, 'w') as outputs:
        for idx, item in enumerate(items):
            try:
                outputs.write(str(idx) + " ### ")
                outputs.write(str(item))
                outputs.write('\n')
            except Exception as e:
                print(e)
        outputs.flush()
```
Analyze the script's output file with grep, sed and awk: count the number of dicts of each length and the number of dicts with the same structure (the same keys), then work out which dicts are causing the problem.
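The grep/sed/awk pass can also be done in Python before the data ever hits a file. A sketch (the function name is my own) that groups dicts by length and by key set, since many dicts sharing one key "shape" usually come from a single allocation site:

```python
from collections import Counter

def summarize_dicts(dicts, top=10):
    # Count dicts by length, and by their sorted key tuple, so the
    # dominant "shapes" stand out even among thousands of instances.
    by_len = Counter(len(d) for d in dicts)
    by_keys = Counter(tuple(sorted(map(str, d))) for d in dicts)
    return by_len.most_common(top), by_keys.most_common(top)
```

Feeding it the result of `objgraph.by_type('dict')` gives the same counts the shell pipeline would, without leaving the interpreter.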
Using the line numbers of the leaking dicts, fetch their reference chains with objgraph, or analyze them directly in the code.
```python
objgraph.find_backref_chain(dict_list[idx], objgraph.is_proper_module)
```
When a process's memory grows and is not returned to the system, there is usually either a real leak, or more memory is being requested than released. The former is relatively easier to handle, but the common thread is finding the object that causes the problem. If a tool like objgraph can locate the object directly, great. If not, consider the dumb option: export the data and analyze it. The memory must be occupied by something, and the data itself will point at the problem. Finally, memory problems are usually caused by global singletons, global caches, and long-lived objects. Once that part of the code is handled, memory problems are rare.