
Author: Nick

Introduction:

Aligned with the project's development cycle, we built the scenario-based performance testing framework that had been on the drawing board for some time. It covers power consumption tests, memory leak tests, UI fluency tests, backend interface performance tests, APP startup speed tests, and more. The solution was applied to the project's testing and uncovered many problems in the product. The following pages walk through the design in detail, partly to share this wheel with others and partly to leave a record for myself.

Main text

Performance testing is a very mature field in communications-equipment testing, where the IETF has published a number of RFCs to standardize test behavior. But in my four years of mobile testing, performance testing has seemed like an afterthought: from project to project, performance issues always follow the pattern of "user report -> development concern -> test reproduction".

Clearly, if performance problems could instead follow the normal "found in testing -> problem located -> fixed in development" flow as much as possible, it would contribute greatly to product quality. That is the goal of what follows: during the testing process, test engineers identify more of the product's key scenarios and find more performance problems through scenario-based, engineered, and automated testing, so that performance bugs converge before the product is released.

Objectives and tactics

Performance testing, in brief: use automated testing tools to simulate normal, peak, and abnormal load conditions and measure the performance of the system under those conditions. A successful performance test has the following characteristics:

  1. The information provided to development is accurate (essential);

  2. The test method is efficient and the test data is stable and reliable (essential);

  3. The analysis methods used are highly reliable (essential);

  4. The tester is proficient with tooling and can help development locate performance issues (optional).

The information provided to development is accurate. If testers or users tell developers "Your version performs badly!" or "The phone gets hot when I use it, fix it!", the developers on the receiving end will be left baffled.

If the test instead reports: "Power consumption is high in the news-feed page and the video-playback flow; this version uses 30% more jiffies than the previous one", then development can assign an owner by module, knows the specific path, and knows the optimization target for power consumption (the extra 30% in this version), and the issue moves forward much more smoothly.
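For context, a jiffy figure like the one above can be sampled from `/proc/<pid>/stat`, where `utime` and `stime` hold the process's CPU time in clock ticks (jiffies). A minimal sketch, under the field layout documented in proc(5); the helper names are ours, not the framework's:

```python
def parse_jiffies(stat_line):
    """Return utime + stime (fields 14 and 15, 1-based) from a
    /proc/<pid>/stat line. The comm field is wrapped in parentheses
    and may contain spaces, so split after the closing paren first."""
    rest = stat_line.rsplit(")", 1)[1].split()
    utime, stime = int(rest[11]), int(rest[12])  # fields 14 and 15
    return utime + stime

def regression_percent(old, new):
    """Percent increase of the new version's jiffies over the old."""
    return (new - old) / old * 100
```

Sampling the same scenario on two builds and feeding the totals into `regression_percent` yields exactly the kind of "30% more than last version" statement a report needs.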

The test method is efficient and the test data is stable and reliable. Before this framework was designed, the team already ran performance tests, including long-duration tests (power and memory consumption with the screen on and the app in the background), manually driven scenario tests, and page-driven fluency tests. Each had problems:

1. Long-duration tests: the scenario is too narrow; they essentially only check the Manager's background process sitting idle with no operations;

2. Compared with UI-automation-driven tests, manual testing cannot guarantee a large sample of data (asking someone to repeat the same operation for 30 minutes is undoubtedly a dreadful task for an employee);

3. Page-driven fluency tests often produce two completely different results for the same version; the unstable data makes it hard to convince developers that the code has a problem. More on the pros and cons of fluency testing later in this article.

The analysis methods used are highly reliable. Traditional analysis schemes often evaluate performance items simply by their mean. I believe that a well-chosen evaluation algorithm makes a test report far more convincing. In a data series with a small number of spikes, as shown in the figure below, a few severe spike deviations drag the mean down significantly; one spike more or one spike fewer yields a very different mean. When the sample size is small, two test runs will therefore often produce very different performance numbers. (How to solve this is detailed below.)

Figure 1: Fluency samples

The tester is proficient with tooling and can help development locate performance issues. By shifting testing left and doing a bit more, development spends less effort narrowing down the problem. In functional testing, taking a bug from sporadically reproducible to a reliable reproduction path cuts the time development needs to locate it. Likewise, in performance testing, if the test can point out which thread is the power hog or which object is the memory-leak culprit, development can fix problems much faster. Along the way, the tester not only improves their own skills but also builds their technical reputation.

Performance test framework design

As shown in the figure below, the performance test framework designed here consists of four independently decoupled modules: data collection, data analysis, UI automation, and the driver framework. This design reduces the cost of adding new use cases and scales well.

Figure 2: Schematic of the framework design

Data collection scheme

We need one or more data points that directly reflect how well something performs. So how do we collect data samples, and which samples do we collect? This is an essential part of a performance testing framework.

UI-driven solution

Mobile client performance testing mainly simulates user operations to create a realistic usage scenario, then gathers CPU, memory, and fluency data to measure the performance indicators of the application under test in that scenario.

For its UI automation layer, this framework uses the Python version of UIAutomator (open source on GitHub). The main reasons:

  1. The data collection module needs to drive adb, process adb output, and parse text; Python has a great advantage here and keeps the code volume low;

  2. Xiaocong's open-source Python wrapper for UIAutomator is very lightweight yet comprehensive in functionality; using the open-source project directly saves a lot of framework development time.
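As a rough illustration of the adb-output parsing mentioned in point 1, a memory sample might be pulled from `dumpsys meminfo` along these lines. This is a sketch under assumptions: the regex and helper names are ours, and the exact dumpsys output format varies by Android version.

```python
import re
import subprocess

def parse_total_pss_kb(dumpsys_output):
    """Extract the total PSS (KB) from `dumpsys meminfo <pkg>` text.
    Assumes a line such as 'TOTAL   123456' carries the total."""
    match = re.search(r"TOTAL\s+(\d+)", dumpsys_output)
    return int(match.group(1)) if match else None

def sample_pss_kb(package):
    """Run `adb shell dumpsys meminfo <package>` and parse the result."""
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", package],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_total_pss_kb(out)
```

Keeping the parsing separate from the adb call makes the text-analysis part trivially unit-testable, which is exactly where Python's low code volume pays off.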

Introduction to the driver framework

In this framework, the tester can drive the execution of one or more use cases directly from the command line, so TestNG-like logic was designed:

  • python startTest.py -t 3 -c SwitchTabTest

  • python startTest.py -t 3 -m SwitchTabTest,swipeDownTest
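The command line above could be wired up with `argparse` along these lines. This is a sketch: the flag meanings follow the two examples (`-t` repeat count, `-c` a single case, `-m` a comma-separated list), but the real startTest.py may differ.

```python
import argparse

def parse_args(argv=None):
    """Parse the startTest.py-style command line (assumed flags)."""
    parser = argparse.ArgumentParser(prog="startTest.py")
    parser.add_argument("-t", "--times", type=int, default=1,
                        help="number of times to run each case")
    parser.add_argument("-c", "--case",
                        help="run a single test case by class name")
    parser.add_argument("-m", "--multi",
                        help="comma-separated list of test cases")
    return parser.parse_args(argv)

# Example: the second command line from above.
args = parse_args(["-t", "3", "-m", "SwitchTabTest,swipeDownTest"])
cases = args.multi.split(",") if args.multi else [args.case]
```

The driver would then look each case name up (e.g. via a registry or `importlib`) and hand it to the executor the stated number of times.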

As shown below, the CaseExecutor class drives and organizes each use case's suite_up(), set_up(), test(), tear_down(), and suite_down() methods.

Figure 3: The JUnit-style driver portion

The methods included in the use case are:

A) suite_up(): initializes the test environment;

B) set_up(): mainly pulls up the corresponding performance-data collection thread and uses UI automation to bring the app to the scenario under test, e.g. swiping past the splash screen and entering the home page;

C) test(): the key UI automation logic for the scenario. For example, to test memory leaks in a "continuously playing different videos" scenario, the use case implements in test(), using UIAutomator, the logic that repeatedly taps different videos to play them;

D) tear_down(): mainly notifies the data collection thread to stop collecting and archives the data;

E) suite_down(): cleans up the environment, summarizes all data into the report, and applies the data analysis algorithms to produce content that can go straight into the report.

Figure 4: Execution logic

As shown in Figure 4, the performance-data collection thread keeps collecting performance data while UI automation executes the scenario in test().

Note: not every use case needs to implement all five steps. For a given special test type, all methods except test() share the same logic and can be abstracted into a parent class; covering the same special test under a different scenario then only requires writing a new test() method.
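A minimal sketch of that parent-class abstraction, with a stand-in collection thread. The method names mirror the article's lifecycle; the sampling logic and class names are our illustration, not the framework's code.

```python
import threading
import time

class PerfCase:
    """Shared lifecycle: everything except test() lives here,
    so a new scenario only overrides test()."""

    def __init__(self):
        self._stop = threading.Event()
        self.samples = []

    def suite_up(self):
        pass  # initialize the test environment

    def set_up(self):
        # pull up the performance-data collection thread
        self._thread = threading.Thread(target=self._collect, daemon=True)
        self._thread.start()

    def _collect(self):
        # keep sampling while test() runs (Figure 4's concurrent thread)
        while not self._stop.is_set():
            self.samples.append(self.sample())
            self._stop.wait(0.01)

    def sample(self):
        return 0  # placeholder; real cases read CPU/MEM/fluency via adb

    def test(self):
        raise NotImplementedError  # scenario-specific UI automation

    def tear_down(self):
        self._stop.set()   # stop collection and archive the data
        self._thread.join()

    def suite_down(self):
        pass  # clean environment, summarize data into the report

class SwipeCase(PerfCase):
    def test(self):
        time.sleep(0.05)  # stand-in for UIAutomator swipe logic

case = SwipeCase()
case.suite_up(); case.set_up(); case.test(); case.tear_down(); case.suite_down()
```

With this split, adding a new scenario for the same special test really is just one test() method.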

Data analysis scheme

Once you have the data, you want to extract the maximum value from it, so a reasonable, well-matched data analysis scheme matters a great deal. When I started doing performance testing, all I could think of was taking a pile of samples, averaging them, and comparing the averages.

Beyond averages, this framework tries to provide richer statistics for evaluating the various performance indicators, including:

A) Median: the representative value determined by its position within the ordered series; it is not affected by the maximum or minimum of the series, which improves its representativeness to a degree. For evaluating network-latency samples the median is clearly better than the mean: if most delays are around 20 ms but a few abnormal samples exceed 2000 ms, those outliers inflate the mean badly, so the mean no longer represents the latency series.

B) Variance and standard deviation: combined with the mean, they evaluate how dispersed the data series is.

C) Distribution chart or table: these also evaluate a data series well, and suit fluency, network bandwidth, and network latency, giving a more intuitive and detailed comparison.

Figure 5: Schematic of the fluency optimization effect

D) Curve chart: for memory performance, the best presentation is the occupancy curve plus the average.

Figure 6: Memory footprint curve

E) Average: the traditional mean is still a powerful tool.

F) Maximum and minimum values.
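Python's `statistics` module covers most of the list above. A small demonstration on a fluency-like frame-time series (the numbers are made up) shows how the median resists the outlier spike that drags the mean:

```python
import statistics

# Frame-time-like series in ms, with one severe spike ("burr").
samples = [16, 17, 16, 18, 16, 17, 16, 200, 16, 17]

mean = statistics.mean(samples)      # pulled far from typical by the spike
median = statistics.median(samples)  # stays near the typical 16-17 ms
stdev = statistics.pstdev(samples)   # dispersion of the series
worst = max(samples)                 # the spike itself
```

In a report, the median describes typical behavior, the standard deviation and the max/min flag instability, and a histogram of the same series gives the distribution view from point C.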

Necessary instructions

The framework uses open source code:

  1. Github.com/xiaocong/ui…

  2. testerhome.com/topics/6938

This article only briefly touches on the specific code; follow-up articles will continue to explain how the concrete logic is implemented.


Further reading

Network latency and bandwidth performance special test | Test Like Google series 4: the technical part

Published by the Tencent Cloud community with the author's authorization. Please indicate the source when reproducing. Original link: https://cloud.tencent.com/community/article/128959