Brief introduction: functionality determines the present, and performance determines the future. Welcome to “Mobile Performance Test Platform on the Cloud”, an overview of the capabilities and roadmap of the EMAS performance test platform.

1. Functionality determines the present and performance determines the future

Performance testing has always been a hard problem in mobile testing. Its most intuitive manifestation is the user's subjective experience while using an app in the foreground, but many technical factors lie behind whether that experience turns out good or bad.

  • When we had grown used to Nokia, smartphones arrived; when we had learned native development, hybrid arrived; just as the huge applications built on hybrid frameworks were maturing, mini programs appeared. With live streaming, IoT, AR, VR, and artificial intelligence following one after another, new technologies and application scenarios are emerging at an unimaginable pace. Performance testing techniques face huge challenges from these rapidly changing scenarios and development technologies: while we are still struggling with how to test A, B has already arrived.
  • Performance testing itself has developed increasingly mature solutions, such as online performance monitoring (APM) and offline performance collection tools; there are test techniques derived from each application scenario, such as stress testing, stability testing, and power-consumption testing; and there are specific testing capabilities built around individual performance metrics (memory, CPU, power, network traffic).

We are committed to building an online and offline performance solution that helps developers identify, locate, and solve a wide range of mobile performance issues. This article focuses on the capabilities and roadmap of the EMAS performance test platform. Again: functionality determines the present, and performance determines the future.

2. Performance testing tools on the cloud

Usually, when we run specialized tests (memory, CPU, power, network traffic, etc.), we need to prepare device models, test packages, test environments, and test data. The following problems tend to arise:

  • There are not enough device models to cover.
  • A debug package does not necessarily reflect the performance of the release package, yet Android Studio requires a debug build for profiling.
  • Building Android and iOS test environments is difficult, especially across platforms.
  • Large volumes of test data have to be collated and analyzed.

These problems easily make the whole testing effort inefficient, or even impossible to put into practice.

Building on the debugging capabilities of EMAS cloud real devices, MQC provides a more complete and convenient performance testing tool on the cloud.

The cloud real-device service natively offers 600+ device models for testing, supports debugging and testing any installed application, requires no local environment configuration, and automatically uploads and aggregates test data.

At the same time, the EMAS performance testing tool has the following features:

  • Cross-platform performance collection on both Android and iOS, based on APP_Process and the Instruments protocol (a rough sketch of this kind of sampling follows this list);
  • Non-intrusive, with a stable 1 s sampling interval, low latency (performance data delayed by less than 100 ms), and low overhead (impact on device performance under 1%);
  • An application + process test scheme that meets the needs of hybrid and mini-program testing.
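
As an illustration of this kind of low-frequency sampling (a sketch only, not MQC's actual implementation), the Python snippet below polls an Android app's memory and CPU roughly once per second over adb. The package name `com.example.app` and the use of `dumpsys meminfo` and `/proc/<pid>/stat` are assumptions made for the example.

```python
import re
import subprocess
import time

PACKAGE = "com.example.app"  # hypothetical package name for illustration

def adb(*args):
    """Run an adb shell command and return its stdout as text."""
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True).stdout

def sample(pid):
    """Collect one PSS (memory) and CPU-time sample for the target process."""
    meminfo = adb("dumpsys", "meminfo", PACKAGE)
    match = re.search(r"TOTAL\s+(\d+)", meminfo)      # total PSS in kB
    pss_kb = int(match.group(1)) if match else None
    stat = adb("cat", f"/proc/{pid}/stat").split()
    cpu_ticks = int(stat[13]) + int(stat[14])          # utime + stime
    return pss_kb, cpu_ticks

if __name__ == "__main__":
    pid = adb("pidof", PACKAGE).strip()
    while True:
        pss, ticks = sample(pid)
        print(f"pss={pss} kB cpu_ticks={ticks}")
        time.sleep(1)  # roughly the 1 s interval the platform describes
```

A real collector would time-align samples, compute CPU percentages from tick deltas, and stream results to a backend; the point here is only the sampling loop itself.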

3. Data dashboard on the cloud

The value of performance data is that it measures and quantifies common problems through technical means, helping us surface potential performance problems and risks as early as possible before a feature launches. The MQC performance test platform stores test data in the cloud and visualizes it for users across as many dimensions as possible, helping teams gate the quality of a version before release.

3.1 Tasks

Every test run executed on cloud devices, together with its performance data, is saved directly as a test task, so that historical data can be reviewed and confirmed at any time.

3.2 Use cases

In real testing, it quickly becomes clear that performance data from different application scenarios are simply not comparable. Statistically, looking only at the average of a performance metric makes it hard to reach a qualitative or quantitative judgment, and such a number cannot drive development or product decisions.
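
To make the point about averages concrete, here is a small illustrative computation with invented numbers (not platform data): two test runs can have similar or even inverted means while one of them hides a dangerous tail.

```python
import math
import statistics

# Invented memory samples (MB) from two hypothetical test runs (not real MQC data).
run_a = [180, 185, 190, 182, 188, 184, 186, 181, 187, 183]   # steady usage
run_b = [150, 152, 151, 149, 153, 150, 151, 148, 152, 400]   # one large spike

def p95(samples):
    """Nearest-rank 95th percentile."""
    ordered = sorted(samples)
    return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

for name, samples in (("run_a", run_a), ("run_b", run_b)):
    print(f"{name}: mean={statistics.mean(samples):.1f} MB  "
          f"p95={p95(samples)} MB  max={max(samples)} MB")

# run_b actually has the lower mean (175.6 vs 184.6 MB), yet its 400 MB spike
# is exactly the kind of OOM risk that an average alone would hide.
```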

Even when scenarios look the same, different product decisions can produce large gaps in performance data. For example, most cloud-drive photo albums display compressed images for traffic and performance reasons, while native photo-album apps mostly display the original images; that single product choice leads to a huge difference in memory overhead.

The initial design of the data dashboard drew on our experience building a functional automation use-case platform: every one-off test task is stored under a use case, and performance data is aggregated along the different dimensions of those use cases. In the EMAS mobile test console, different sub-accounts can view and manage the same app and its use cases, supporting multi-user collaboration on the cloud.

3.3 Multidimensional aggregation

On top of the use-case dimension, the MQC performance test platform provides multi-dimensional data statistics, aggregation, and analysis.

• Device classification

Devices are classified into high, medium, and low tiers based on hardware performance. Because the device model strongly affects an app's actual performance metrics, this classification greatly reduces the impact of hardware differences on the confidence of those metrics.
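
As a rough sketch of how such a tiering might be expressed (the hardware score and thresholds below are invented for illustration and are not MQC's actual classification rules):

```python
from dataclasses import dataclass

@dataclass
class Device:
    model: str
    ram_gb: int
    cpu_benchmark: int   # e.g. a synthetic benchmark score (illustrative)

def tier(device: Device) -> str:
    """Classify a device as high / medium / low end by a simple hardware score."""
    score = device.cpu_benchmark + device.ram_gb * 100
    if score >= 1500:
        return "high"
    if score >= 800:
        return "medium"
    return "low"

print(tier(Device("Pixel 8", ram_gb=8, cpu_benchmark=1200)))   # high
print(tier(Device("Budget A1", ram_gb=3, cpu_benchmark=350)))  # low
```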

• Application version

For performance metrics, there are usually three criteria for judging problems:

  • Baseline metrics defined from industry and technical experience, usually the baseline standards that technology decision makers set for development teams based on user experience, performance requirements, and big-data analysis (a minimal baseline-check sketch follows this list);
  • Horizontal comparison with the performance metrics of peer apps in the industry; learning from excellent technical implementations in the industry has always been one of the key drivers of the Internet's rapid development;
  • Vertical comparison across different versions of the same app, to quickly see the effect of optimizations in a new version, the impact of new features on the app, and so on.
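
The baseline idea from the first item can be expressed as a simple release gate; the metric names and thresholds below are assumptions for illustration, not baselines shipped with MQC:

```python
# Illustrative baseline check: metric names and thresholds are assumptions.
BASELINES = {
    "startup_ms": 800,      # cold start should finish within 800 ms
    "memory_peak_mb": 300,  # peak memory should stay below 300 MB
    "cpu_avg_pct": 25,      # average CPU usage should stay below 25%
}

def check_against_baseline(measured: dict, baselines: dict = BASELINES) -> list:
    """Return human-readable violations for any metric that exceeds its baseline."""
    return [
        f"{metric}: {value} exceeds baseline {baselines[metric]}"
        for metric, value in measured.items()
        if metric in baselines and value > baselines[metric]
    ]

# Checking the current build against the baseline; a vertical comparison against
# a previous version works the same way, using that version's numbers as the bar.
current_build = {"startup_ms": 950, "memory_peak_mb": 280, "cpu_avg_pct": 31}
for violation in check_against_baseline(current_build):
    print(violation)
```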

• Metric distribution

Metric distribution helps developers quickly judge the range of a metric, locate possibly abnormal tasks and abnormal metric ranges, and consult task reports in a more targeted way.
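
For instance, bucketing a metric's per-task values into a coarse histogram makes abnormal ranges stand out; the bucket boundaries and task data below are invented for illustration:

```python
from collections import Counter

# Hypothetical per-task peak-memory values (MB); boundaries are arbitrary.
task_peaks = {"task-101": 210, "task-102": 195, "task-103": 520,
              "task-104": 205, "task-105": 480, "task-106": 190}

def bucket(value_mb: int) -> str:
    """Place a value into a coarse distribution bucket."""
    if value_mb < 256:
        return "<256 MB"
    if value_mb < 512:
        return "256-512 MB"
    return ">=512 MB"

histogram = Counter(bucket(v) for v in task_peaks.values())
print(histogram)  # the small tail above 256 MB points at where to look

outliers = [task for task, peak in task_peaks.items() if peak >= 256]
print("inspect reports for:", outliers)   # task-103, task-105
```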

4. Plans for the future

  • Richer metrics: we will continue to improve collection schemes for more performance metrics, such as battery, GPU, and temperature.
  • Industry metrics: MQC will aggregate and publish performance-metric statistics for different industries, based on cloud developer data and expert test data, as shared references.
  • Performance baselines: as mentioned above, performance metrics are usually evaluated against three references: the performance of peer apps in the same industry, the performance of different versions of the same app, and baselines derived from technical solutions and industry data. Well-defined performance baselines help guide developers toward an excellent performance experience and minimize the probability of performance problems such as OOM and ANR.


This article is original content from Alibaba Cloud and may not be reproduced without permission.