Ma Tao joined Qunar's technical team in 2013. He is currently in charge of H5 and mini-program application development in the Destination Business Department. He is deeply interested in mobile technology and front-end engineering, and is always ready to explore, practice, and pursue perfection.

Overview

Typically, we write unit tests and measure code coverage during their execution. Coverage results can be analyzed in terms of lines of code, logical branches, and functions. The resulting figures measure how completely the system's functionality is exercised, and also give feedback on the soundness of the program's design.

However, for an older system with no maintained unit tests, it is not easy to collect coverage in order to verify system functionality and become familiar with the system architecture. We put a great deal of thought and experimentation into this before finally reaching our stage goal. Below, I would like to share our implementation plan.

Implementation Principle

Coverage collection differs from language to language in its implementation mechanism and even in the syntax rules involved. In general, specific markers are inserted into lines of code according to certain rules, a step called "code instrumentation"; the execution of those markers is then recorded while the cases run; finally, coverage is calculated and the output is formatted. The general process is shown in the figure:

Source compilation is optional and depends on the characteristics of the source language. In the JavaScript ecosystem, there are mature third-party libraries for the basic operations of code instrumentation and coverage statistics; we chose IstanbulJS. Rather than using its command-line tool nyc, we designed and developed our own tooling on top of the IstanbulJS interface API. We used both the 1.0 and 2.0 versions of the API during development; although their usage differs in places, the functionality is broadly the same. For details, please refer to the official documentation; the API differences are not repeated here.
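As a minimal sketch of what driving the API directly looks like (based on the 2.0-era istanbul-lib-instrument package; the file paths are illustrative, not from our project):

```js
// Minimal sketch of the core API call (istanbul-lib-instrument, 2.0-era package).
const fs = require('fs');
const { createInstrumenter } = require('istanbul-lib-instrument');

const instrumenter = createInstrumenter();
const source = fs.readFileSync('src/app.js', 'utf8');

// Returns the same logic with coverage counters woven in.
const instrumented = instrumenter.instrumentSync(source, 'src/app.js');
fs.writeFileSync('dist/app.js', instrumented);
```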

With the tooling in place, the next question is how to supply the cases. For a brand-new project with few features, it is manageable to write a fairly complete set of cases by hand. But what about a system whose functions are unfamiliar, or an old system with complex logic? Since the NodeJS projects we target run on the server side, we decided, following the case-collection practices of other server-side projects in the company, to assemble cases through log replay, scheduled tasks, and similar means. Although this brings some redundancy in volume, the cost is far more manageable than backfilling unit tests.

Implementation Details

Having covered the general principle, let's go through the details of our practice.

Code Instrumentation

Instrumentation is the prerequisite for coverage collection: the existing code is syntactically analyzed, and preset markers are inserted at specified positions. Let's compare a file before and after instrumentation.

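Because the exact output is long, the comparison below is a simplified, hand-written illustration of what Istanbul-style instrumentation does to a trivial file; real output is considerably more verbose:

```js
// --- Original file (add.js) ---
function add(a, b) {
  return a + b;
}

// --- After instrumentation (simplified) ---
function cov_abc123() {
  // Lazily register this file's counters on the shared global object.
  var coverage = global.__coverage__ || (global.__coverage__ = {});
  return coverage['add.js'] || (coverage['add.js'] = {
    path: 'add.js',
    s: { 0: 0 },  // statement counters
    f: { 0: 0 },  // function counters
    b: {},        // branch counters
  });
}
function add(a, b) {
  cov_abc123().f[0]++; // function 0 entered
  cov_abc123().s[0]++; // statement 0 executed
  return a + b;
}
```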

You can see that some extra logic has appeared in the code; it counts execution along the different dimensions. There are a few things to note about this process:

  • The scope of instrumented files: we obtain it by traversing the project's physical directory tree, without analyzing the dependencies between files (see the sketch after this list);

  • Whether to keep the source directory: this is an engineering decision that ultimately depends on whether the subsequent steps run on the deployment machine. A centralized platform for those steps is preferable, since it makes deployment more efficient and shrinks the package by removing the source code;

  • The path recorded when a source file is instrumented, which is used to trace back to the source when generating the final report. For portability, relative paths are best: they allow reports to be generated without pinning the source to an absolute path. This is easy to specify in version 2.0 of the IstanbulJS API;

  • The performance of the instrumentation process, which mainly involves the choice between synchronous and asynchronous I/O. For projects with many files, multi-threading is worth trying, but judge by the actual situation: some projects have fewer than 10 files, others have thousands.
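As referenced in the list above, here is a sketch of the directory traversal with relative-path registration (synchronous I/O for brevity; a simplified outline, not our production tool):

```js
// Sketch: instrument every .js file under a source tree, registering each file
// by a path relative to the project root so that reports remain portable.
const fs = require('fs');
const path = require('path');
const { createInstrumenter } = require('istanbul-lib-instrument');

const instrumenter = createInstrumenter();

function instrumentDir(srcDir, outDir, root) {
  fs.mkdirSync(outDir, { recursive: true });
  for (const name of fs.readdirSync(srcDir)) {
    const srcPath = path.join(srcDir, name);
    const outPath = path.join(outDir, name);
    if (fs.statSync(srcPath).isDirectory()) {
      instrumentDir(srcPath, outPath, root); // recurse; no dependency analysis needed
    } else if (name.endsWith('.js')) {
      const code = fs.readFileSync(srcPath, 'utf8');
      const relPath = path.relative(root, srcPath); // relative path for portability
      fs.writeFileSync(outPath, instrumenter.instrumentSync(code, relPath));
    }
  }
}

instrumentDir('./src', './src-instrumented', './src');
```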

Collecting Data

Our collection of NodeJS coverage data is dynamic: once the service has started, the coverage data is updated in real time by the different external requests. Continuing with the demo above, let's look more closely at the counting logic it inserts.

Combined with the instrumented code shown earlier, the coverage-collection logic of this file is easy to follow: as the program runs, different request cases execute different code paths, the counting logic alongside them runs again and again, and the coverage statistics accumulate.
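As a concrete illustration, all per-file counters hang off one global object (Istanbul's default is `global.__coverage__`), so a running service can expose them live; the debug route below is purely illustrative:

```js
// Minimal sketch: expose the live counters through a debug endpoint.
// Assumes Istanbul's default coverage variable, global.__coverage__.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/__coverage__') {
    // Each key is a file path; each value holds its s/f/b counters.
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(global.__coverage__ || {}));
    return;
  }
  res.end('hello'); // normal business logic would live here
}).listen(8080);
```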

Incidentally, the positions at which coverage is counted actually correspond to abstract-syntax-tree nodes, one set per counting dimension.

Readers interested in JavaScript syntax analysis can dig further into the relevant background.

As we saw earlier, each module keeps its own counter data and hangs it in the global namespace so that all files share it. So how do we get this data out while the program is running? We explored two directions:

The first is memory sharing, which was our first thought since our services generally run as daemon processes managed by PM2. PM2's message-bus mechanism passes the coverage data of the different processes around as messages. The data interaction is shown below:
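To make the message flow concrete, here is a rough sketch assuming a PM2-managed cluster. The two halves belong in two separate files; the interval and payload shape are illustrative, while `process.send` with `type: 'process:msg'` and `pm2.launchBus` are PM2's documented messaging primitives:

```js
// --- File 1: inside the PM2-managed service; push counters periodically ---
setInterval(() => {
  if (process.send) {
    process.send({
      type: 'process:msg', // PM2 forwards packets of this shape onto its bus
      data: { coverage: global.__coverage__ || {} },
    });
  }
}, 30000);

// --- File 2: a standalone collector script; receive counters from every process ---
const pm2 = require('pm2');

pm2.launchBus((err, bus) => {
  if (err) throw err;
  bus.on('process:msg', (packet) => {
    // packet.process.pm_id identifies the sending process
    console.log('coverage from pm_id', packet.process.pm_id,
      Object.keys(packet.data.coverage).length, 'files');
  });
});
```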

Reading and processing the data directly from memory gives excellent real-time behavior, but it also brings problems:

  • Reliability is low: once the in-memory data is lost, it is hard to recover;

  • Stability needs attention: a multi-process service transmits large data sets (coverage data measured in megabytes is common), message deserialization in PM2 is expensive, and poorly controlled message frequency easily puts heavy pressure on the hardware;

  • Coupling is high: the implementation depends strongly on PM2 and cannot be ported to other application scenarios.

The second is file storage: each process serializes its in-memory data and writes it to a file named after its process ID to avoid conflicts. The data interaction then changes as follows:
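A minimal sketch of the per-process dump (the output directory and interval are illustrative; as discussed below, the write frequency must be kept under control in real code):

```js
// Minimal sketch: each process periodically dumps its counters to its own file.
const fs = require('fs');
const path = require('path');

const OUT_DIR = '/tmp/coverage';
fs.mkdirSync(OUT_DIR, { recursive: true });

function dumpCoverage() {
  const file = path.join(OUT_DIR, `coverage-${process.pid}.json`); // PID avoids conflicts
  fs.writeFileSync(file, JSON.stringify(global.__coverage__ || {}));
}

setInterval(dumpCoverage, 60000).unref(); // periodic snapshot, throttled
process.on('exit', dumpCoverage);         // final flush on shutdown
```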

File storage clearly improves on several of the earlier problems:

  • Reliability rises: even if the service fails, we can still recover from the data files, much like resuming from a checkpoint, and the efficiency gain is obvious;

  • Stability still matters: I/O operations are involved, so reads and writes need careful design, especially the write frequency, the timing of reads, and the choice between synchronous and asynchronous calls. One of the most common problems is that operating on a data file too frequently can lock up system I/O and instantly consume a large amount of resources;

  • Coupling drops sharply: file storage is free of any dependency on the process daemon and can, in theory, be ported to any service.

After a period of practice on real projects, we settled on the second option.

In fact, both solutions share one prerequisite for completing data collection: a designated module must be preloaded when the service starts. To give any project essentially zero-cost access, we use the NODE_OPTIONS environment variable to require the preload module (because this setting affects everything launched in that environment, it is advisable to remove it once the service has started).
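A sketch of such a preload module, wiring the dump from the previous section in before any application code runs (the file name and output directory are illustrative):

```js
// collector.js -- illustrative preload module; runs before any application code.
// Launch: NODE_OPTIONS="--require /abs/path/collector.js" node server.js
const fs = require('fs');
const path = require('path');

const OUT_DIR = process.env.COVERAGE_DIR || '/tmp/coverage';
fs.mkdirSync(OUT_DIR, { recursive: true });

process.on('exit', () => {
  // global.__coverage__ exists only if the loaded code was instrumented.
  if (global.__coverage__) {
    fs.writeFileSync(
      path.join(OUT_DIR, `coverage-${process.pid}.json`),
      JSON.stringify(global.__coverage__)
    );
  }
});
```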

Outputting the Report

This step outputs the previously collected data as a summary or as a formatted document such as HTML. Here is one format:

Reports come in a variety of output formats and are easy to move and store once generated. Scenarios that require changing the report itself are rare; when they do arise, secondary development can build on the file- and line-level data in the coverage data set. One caveat about report content: files that are never referenced by the service's startup script will not be indexed! Unlike instrumentation, the report is generated from the files actually executed while the program runs.
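A sketch of report generation with Istanbul's reporting packages (the input directory matches the dump sketch above; the package names are the current istanbul-lib-* ones, and the 1.0-era API differs slightly):

```js
// Minimal sketch: merge per-process dumps and render HTML plus a console summary.
const fs = require('fs');
const path = require('path');
const libCoverage = require('istanbul-lib-coverage');
const libReport = require('istanbul-lib-report');
const reports = require('istanbul-reports');

const DUMP_DIR = '/tmp/coverage';
const coverageMap = libCoverage.createCoverageMap({});

// Merge every per-process dump into one coverage map.
for (const file of fs.readdirSync(DUMP_DIR)) {
  if (file.endsWith('.json')) {
    coverageMap.merge(JSON.parse(fs.readFileSync(path.join(DUMP_DIR, file), 'utf8')));
  }
}

const context = libReport.createContext({ dir: './coverage-report', coverageMap });
reports.create('html').execute(context);         // browsable HTML report
reports.create('text-summary').execute(context); // quick console summary
```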

Conclusion

In my opinion, coverage is an important indicator of project quality, one that deserves attention in both development and testing, especially when a project is facing large changes. In a sense, whether the collected coverage data can also serve performance monitoring, code optimization, and the like is worth digging into further.