Abstract: Our service recently ran into a memory leak, and the ops engineers called urgently for a fix. Besides solving the problem itself, I took the chance to systematically write down common approaches to troubleshooting memory leaks.

This article, "Python Memory Leak Detection Tips", was written by Lutianfei.

First of all, let's be clear about the symptoms of the problem:

  1. A new version of the service went live on the 13th, but from the 23rd onward its memory kept climbing, and after the instance was restarted it climbed even faster.

  2. The service is deployed on two kinds of chips, A and B. Apart from model inference, the pre-processing and post-processing code is almost entirely shared. Chip B raised memory leak alarms, while chip A showed no abnormality.

Idea 1: Study the differences in source code and library dependencies between the old and new versions

Given the two observations above, the first thing that comes to mind is a problem introduced by the release on the 13th, which could come from two places:

  1. Our own (self-developed) code

  2. The code of library dependencies

From the above two perspectives:

  • On the one hand, I compared the source code of the two versions using the Git history and the BeyondCompare tool, focusing on the parts of the code handled separately for chips A and B, and found nothing abnormal.

  • On the other hand, comparing the packages installed in the two images with the pip list command showed that only the version of the pytz time zone library had changed.

After some investigation and analysis, we judged it unlikely that this package was causing the memory leak, so we set it aside for the time being.

At this point, trying to find the memory leak by digging through the source code changes between the old and new versions was starting to look like a stretch.

Idea 2: Monitor the memory change difference between the old and new versions

Commonly used Python memory inspection tools include pympler, objgraph and tracemalloc.

First, we used the objgraph tool to observe the top 50 object types in the old and new services.

The common objgraph commands are as follows:

objgraph.show_most_common_types(limit=50)   # counts of the most common object types
objgraph.show_growth(limit=30)              # growth (increment) since the previous call

To observe the change curves more easily, I made a simple wrapper so that the data is exported directly to a CSV file for observation.

import os
import time

import objgraph
import pandas as pd

# Dump the current top-50 type counts, stamped with the request time,
# and append them to a CSV file for later plotting.
stats = objgraph.most_common_types(limit=50)
stats_path = "./types_stats.csv"
tmp_dict = dict(stats)
req_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
tmp_dict['req_time'] = req_time
df = pd.DataFrame.from_dict(tmp_dict, orient='index').T

if os.path.exists(stats_path):
    # Append without repeating the header row.
    df.to_csv(stats_path, mode='a', header=False, index=False)
else:
    df.to_csv(stats_path, index=False)
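
To actually draw the change curves from this CSV, something like the following works. This is only a sketch: it assumes pandas and matplotlib are available, and the plotted type names are illustrative (pick whichever columns exist in your own dump).

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("./types_stats.csv", parse_dates=["req_time"])
# Pick a few object types to plot over time; adjust to the columns you have.
cols = [c for c in ("dict", "tuple", "traceback") if c in df.columns]
df.plot(x="req_time", y=cols, figsize=(12, 6))
plt.ylabel("object count")
plt.savefig("./types_trend.png")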

As shown in the picture below, I ran a batch of images against both the old and the new version for an hour, and everything was rock steady, with no fluctuation in the counts of any type.

At this point I remembered that before testing or going live, I usually take a batch of images with abnormal formats to do boundary-case verification.

Although the test engineers had surely verified these abnormal cases before release, I tried them anyway as a long shot.

The previously quiet data broke down: as you can see in the red box below, dict, function, method, tuple, traceback and other important types all started to creep up.

At the same time, the container's memory also kept increasing with no sign of converging.

At this point I still could not confirm whether this was the online problem, but at least one bug had been located: the check_image_type method should appear only once in the exception stack.

In reality, however, check_image_type showed up over and over again in the printed stack, and the number of repetitions grew with the number of test calls.

So I went back and re-examined this piece of exception handling code.

Exception declarations are as follows:

The exception throwing code is as follows:
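
In essence the pattern was as follows. This is a minimal sketch reconstructed from the analysis below, not the actual service code: the class name and the validation logic are illustrative, while IMAGE_FORMAT_EXCEPTION and check_image_type are the names that actually appear in the logs.

class ImageFormatException(Exception):
    """Raised when a request carries an image in an unsupported format."""

# The exception instance is created once, at import time, as a global.
IMAGE_FORMAT_EXCEPTION = ImageFormatException("unsupported image format")


def check_image_type(image_bytes):
    # Illustrative check; the real validation logic is more involved.
    if not image_bytes.startswith(b"\xff\xd8"):
        # The shared global instance is raised instead of a fresh one.
        raise IMAGE_FORMAT_EXCEPTION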

The problem

After some thought, I roughly figured out the root of the problem:

Each exception instance was defined as a module-level global variable and raised directly. When it is raised, the traceback attached to this global instance references the stack frames (and their local variables), and because the global instance itself is never released, none of this can be garbage collected.

So as more and more requests with incorrectly formatted images came in, the information held on the exception stack kept growing; and because the exception also holds the requested image data, memory grew by megabytes at a time.

However, this part of the code had been online for a long time. If it really was the cause, why hadn't the problem appeared earlier, and why was chip A unaffected? With these two questions in mind, we ran two checks:

First, we confirmed that the problem could also be reproduced on the previous version and on chip A.

Second, we checked the online call records and found that a new customer had recently been onboarded and was calling the service at one site (deployed mostly on chip B) with a large number of images with exactly this kind of problem. We pulled some online examples and observed the same phenomenon in the logs.

That basically answered both questions. After this bug was fixed, the memory no longer grew without bound.
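
For reference, one straightforward fix is simply to raise a freshly constructed exception on every call, so that nothing outlives the request. A minimal sketch, continuing the illustrative names from the earlier snippet:

def check_image_type(image_bytes):
    if not image_bytes.startswith(b"\xff\xd8"):
        # A fresh instance per raise: when the request finishes, the instance,
        # its __traceback__ and the frames they reference can all be collected.
        raise ImageFormatException("unsupported image format")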

Taking it a step further

By rights, this would be a good time to call it a day. But I asked myself a question: what if this log line had never been printed in the first place, or the developer had been lazy and not printed the exception stack at all?

With that question in mind, I continued investigating the objgraph and pympler tools.

We had already established that the memory leak appears when abnormal images are submitted, so let's focus on what is different at that moment:

Using the following commands, we can see which object types are added to memory, and how much memory is added, each time an exception occurs.

  1. Using objgraph:

     objgraph.show_growth(limit=20)

  2. Using the pympler tracker:

     from pympler import tracker

     tr = tracker.SummaryTracker()
     tr.print_diff()

With the following code, we can dump where these newly added objects are referenced from, for further analysis.

import logging
import os

import objgraph

logger = logging.getLogger(__name__)

# The .dot graphs are written into this directory.
os.makedirs("./dots", exist_ok=True)

gth = objgraph.growth(limit=20)
for gt in gth:
    logger.info("growth type:%s, count:%s, growth:%s" % (gt[0], gt[1], gt[2]))
    # Skip the very common types (large count or large growth) that would
    # clutter the graphs; see the note on filtering below.
    if gt[2] > 100 or gt[1] > 300:
        continue
    objgraph.show_backrefs(objgraph.by_type(gt[0])[0], max_depth=10, too_many=5,
                           filename="./dots/%s_backrefs.dot" % gt[0])
    objgraph.show_refs(objgraph.by_type(gt[0])[0], max_depth=10, too_many=5,
                       filename="./dots/%s_refs.dot" % gt[0])
    objgraph.show_chain(
        objgraph.find_backref_chain(objgraph.by_type(gt[0])[0], objgraph.is_proper_module),
        filename="./dots/%s_chain.dot" % gt[0]
    )

The .dot files generated above can then be converted into the images shown below using graphviz's dot tool:

dot -Tpng xxx.dot -o xxx.png
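
If many .dot files were produced, they can be converted in one go. A small helper along these lines works, assuming the graphviz dot binary is on the PATH (the paths mirror the ./dots directory used above):

import glob
import subprocess

for dot_file in glob.glob("./dots/*.dot"):
    png_file = dot_file[:-len(".dot")] + ".png"
    subprocess.run(["dot", "-Tpng", dot_file, "-o", png_file], check=True)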

Because basic types such as dict, list, frame, tuple and method are extremely numerous, the graphs are hard to read, so these types are filtered out first.

The reference chain for the newly added ImageReqWrapper objects:

The reference chain for the newly added traceback objects:

With the earlier findings as prior knowledge, it is natural to focus on the traceback objects and the IMAGE_FORMAT_EXCEPTION they are attached to.

The key question is why these variables are not released once the service call completes; in particular, why none of the traceback objects can be collected after IMAGE_FORMAT_EXCEPTION is raised. With a few small experiments, the root of the problem can be located quickly.
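
One such small experiment, using only the standard library, makes the mechanism visible: on CPython, raising an exception instance that already carries a __traceback__ chains the new frames onto it, so a global instance accumulates frames (and their local variables) across calls. The names below are illustrative:

class CachedError(Exception):
    pass

GLOBAL_EXC = CachedError("bad image")   # a module-level instance, like IMAGE_FORMAT_EXCEPTION


def handler(payload):
    local_copy = payload                # stands in for the decoded request image
    raise GLOBAL_EXC


def tb_depth(exc):
    depth, tb = 0, exc.__traceback__
    while tb is not None:
        depth, tb = depth + 1, tb.tb_next
    return depth


for i in range(3):
    try:
        handler(b"x" * 1_000_000)
    except CachedError:
        pass
    # The traceback chain attached to the global instance keeps growing, and
    # every retained frame still references its locals (the 1 MB payload).
    print("raise #%d: traceback depth = %d" % (i + 1, tb_depth(GLOBAL_EXC)))

Formatting GLOBAL_EXC.__traceback__ with the standard traceback module likewise shows handler appearing more and more times, which matches the repeated check_image_type entries observed in the logs.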

Incidentally, there is a fairly clear explanation on Zhihu of how caching exception instances in Python 3 leads to memory leaks: zhuanlan.zhihu.com/p/38600861

At this point we can conclude: because the thrown exception instance itself can never be collected, the exception stack, the request body and the other variables it references cannot be collected either; and since the request body contains the image data, every such request leaks memory at the MB level.

In addition, Python 3 ships with a built-in memory analysis tool, tracemalloc, which can relate memory usage to individual lines of code. It is not always precise, but it can provide useful clues.

import gc
import logging
import tracemalloc

logger = logging.getLogger(__name__)

# At service start-up: keep up to 25 frames per allocation and take a baseline snapshot.
tracemalloc.start(25)
snapshot = tracemalloc.take_snapshot()


def log_top_memory_diffs():
    # Called periodically; the wrapper function name is illustrative.
    global snapshot
    gc.collect()
    snapshot1 = tracemalloc.take_snapshot()
    top_stats = snapshot1.compare_to(snapshot, 'lineno')
    logger.warning("[ Top 20 differences ]")
    for stat in top_stats[:20]:
        if stat.size_diff < 0:
            continue
        logger.warning(stat)
    snapshot = tracemalloc.take_snapshot()

References

testerhome.com/articles/19…

blog.51cto.com/u_3423936/…

segmentfault.com/a/119000003…

www.cnblogs.com/zzbj/p/1353…

drmingdrmer.github.io/tech/progra…

zhuanlan.zhihu.com/p/38600861
