Preface

The first generation of the automated coverage platform was built by test teams that wanted to capture code coverage while running automated or black-box tests. As the business iterated, the amount of code under management kept growing, and several new problems surfaced in day-to-day use, such as:

  1. Incremental code coverage could not be measured, so test completeness could not be quantified
  2. Merged coverage reports were not supported, so complete statistics were unavailable for multi-person, multi-environment collaborative testing
  3. Reports were generated manually, and the information needed for them also had to be collected by hand, resulting in poor automation between systems and low efficiency

To address these problems, the test R&D team developed version 2.0 of the coverage platform, which implements incremental code coverage together with timed sampling, automatic report merging, and other functions that enable the team to test accurately.

Project Design

Incremental code coverage is implemented on top of Jacoco, the most widely used open-source code coverage tool for the JVM. Our design mainly extends the Jacoco Core and Analyzer modules, adding incremental-line counting logic to the data analysis stage in order to produce incremental coverage statistics. To fit the overall system, the incremental coverage function is integrated into the DevOps release process to strengthen quality quantification and risk control. The design is shown below.

(Figure: incremental coverage implementation scheme)

The Rubik automation platform automatically triggers timed sampling and data merging based on release events pushed by the release system. In addition, the project management system reads the statistics according to configured rules and blocks any release whose coverage does not meet the standard. A minimal sketch of such a gate check follows.
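The article does not show how the release gate reads the statistics, so the following is only a minimal sketch: CoverageClient, CoverageStat, and the 80% threshold are hypothetical names and values used purely for illustration.

import java.util.Objects;

/** Minimal release-gate sketch; CoverageClient, CoverageStat and the threshold are illustrative. */
public class CoverageGate {

    /** Hypothetical snapshot of incremental-line statistics for one site. */
    public record CoverageStat(int coveredDiffLines, int totalDiffLines) { }

    /** Hypothetical lookup of the latest merged sample, e.g. backed by the Rubik platform. */
    public interface CoverageClient {
        CoverageStat latestIncrementalStat(String site, String env, String branch);
    }

    private static final double INCREMENTAL_LINE_THRESHOLD = 0.80; // assumed value, not from the article

    private final CoverageClient client;

    public CoverageGate(CoverageClient client) {
        this.client = Objects.requireNonNull(client);
    }

    /** Returns true if the release may proceed, false if it should be blocked. */
    public boolean allowRelease(String site, String env, String branch) {
        CoverageStat stat = client.latestIncrementalStat(site, env, branch);
        if (stat.totalDiffLines() == 0) {
            return true; // no incremental lines, nothing to gate on
        }
        double ratio = (double) stat.coveredDiffLines() / stat.totalDiffLines();
        return ratio >= INCREMENTAL_LINE_THRESHOLD;
    }
}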

Main Functions

CodeDiff data parsing

Incremental-line data is the prerequisite for calculating incremental coverage. The Rubik platform obtains the package version under test and the production package version of the site from the release system, then calls the GitLab API to fetch the diff data, which arrives as a plain string and is parsed into per-line diff data. The conversion logic is as follows:

public static int[] parseIncrLines(String diff) {
    GitDiffHelper helper = new GitDiffHelper(diff);
    helper.parse();
    return helper.newLines;
}

private void parse() {
    if (diff == null || diff.length() == 0) {
        return;
    }
    // skip the file header information
    nextLineIfMinusFile();
    nextLineIfPlusFile();
    while (!eof()) {
        parseBlock();
    }
}
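The internals of GitDiffHelper are not shown in the article. As a rough illustration of what parseBlock() has to do, the sketch below extracts new-side line numbers from a unified diff by tracking the +c,d part of each @@ -a,b +c,d @@ hunk header; the class and its behaviour are our own illustration, not the platform's parser.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative unified-diff parser; not the platform's GitDiffHelper. */
public class UnifiedDiffLines {

    private static final Pattern HUNK_HEADER =
            Pattern.compile("^@@ -\\d+(?:,\\d+)? \\+(\\d+)(?:,\\d+)? @@");

    /** Returns the new-file line numbers that were added or modified. */
    public static int[] parseIncrLines(String diff) {
        List<Integer> newLines = new ArrayList<>();
        int newLineNo = 0;
        for (String line : diff.split("\n", -1)) {
            Matcher m = HUNK_HEADER.matcher(line);
            if (m.find()) {
                newLineNo = Integer.parseInt(m.group(1)); // start line on the new side
            } else if (line.startsWith("+") && !line.startsWith("+++")) {
                newLines.add(newLineNo++);                // added line: record it and advance
            } else if (!line.startsWith("-")) {
                newLineNo++;                              // context line: advance only
            }
            // removed lines ("-") do not advance the new-side counter
        }
        return newLines.stream().mapToInt(Integer::intValue).toArray();
    }
}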

Sampling data analysis

Jacoco computes coverage by accumulating counters at each dimension, layer by layer (a sketch of this layer-by-layer accumulation follows the list):

  • Instruction counter (CounterImpl)
  • Line counter (LineImpl)
  • Method compute node (MethodCoverageImpl)
  • Class compute node (ClassCoverageImpl)
  • Package compute node (PackageCoverageImpl)
  • Module compute node (BundleCoverageImpl)
  • Site compute node (not provided by Jacoco; implemented by us)
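The site node in the last item is not part of Jacoco, and the article does not show its code. The sketch below is our illustration of how such a node could reuse Jacoco's stock CoverageNodeImpl.increment(...) to roll module (bundle) counters up one more level; the class name SiteCoverageImpl is ours.

import java.util.Collection;

import org.jacoco.core.analysis.CoverageNodeImpl;
import org.jacoco.core.analysis.IBundleCoverage;
import org.jacoco.core.analysis.ICoverageNode;

/** Illustrative site-level compute node built on Jacoco's CoverageNodeImpl. */
public class SiteCoverageImpl extends CoverageNodeImpl {

    public SiteCoverageImpl(String siteName, Collection<IBundleCoverage> bundles) {
        super(ICoverageNode.ElementType.GROUP, siteName);
        // increment(...) adds every child counter (instruction, branch, line,
        // method, class) of each bundle into this node.
        increment(bundles);
    }

    /** Site-level line coverage ratio, accumulated from all modules. */
    public double lineCoverage() {
        return getLineCounter().getCoveredRatio();
    }
}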

Site-level statistics are finally obtained by accumulating these counters layer by layer, starting from the instruction counters at the bottom. To implement incremental line statistics, we separate incremental lines from full lines and add an incremental line counter to the compute node parent class (CoverageNodeImpl):

public class CoverageNodeImpl implements ICoverageNode {
    ...
    // full line counter
    protected CounterImpl lineCounter;
    // incremental (diff) line counter
    protected CounterImpl diffLineCounter;
    ...
}
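The article does not show how diffLineCounter is consumed when a report is rendered. As an assumption, a reporting layer could expose the incremental line ratio roughly like this (the helper and its name are ours):

import org.jacoco.core.analysis.ICounter;

/** Illustrative helper for turning the diff-line counter into a ratio; not from the article. */
public final class DiffCoverage {

    private DiffCoverage() { }

    /** Incremental line coverage = covered diff lines / total diff lines. */
    public static double incrementalLineRatio(ICounter diffLineCounter) {
        int total = diffLineCounter.getTotalCount();
        return total == 0 ? 1.0 : (double) diffLineCounter.getCoveredCount() / total;
    }
}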

Incremental line counting is then added to the original counting logic as follows:

public class SourceNodeImpl extends CoverageNodeImpl implements ISourceNode {

    private LineImpl[] lines;
    private int[] diffLines;
    ...

    private void incrementLine(final ICounter instructions, final ICounter branches, final int line) {
        ensureCapacity(line, line);
        final LineImpl l = getLine(line);
        final int oldTotal = l.getInstructionCounter().getTotalCount();
        final int oldCovered = l.getInstructionCounter().getCoveredCount();
        boolean isDiffLine;
        if (l == LineImpl.EMPTY) {
            // first time this line is seen: check whether it belongs to the diff
            isDiffLine = diffLines != null && Arrays.binarySearch(diffLines, line) >= 0;
        } else {
            isDiffLine = l.isDiffLine();
        }
        lines[line - offset] = l.increment(instructions, branches, isDiffLine);

        // increment line counter:
        if (instructions.getTotalCount() > 0) {
            if (instructions.getCoveredCount() == 0) {
                if (oldTotal == 0) {
                    lineCounter = lineCounter.increment(CounterImpl.COUNTER_1_0);
                    // incremental line logic: count a missed diff line
                    if (isDiffLine) {
                        diffLineCounter = diffLineCounter.increment(CounterImpl.COUNTER_1_0);
                    }
                }
            } else {
                if (oldTotal == 0) {
                    lineCounter = lineCounter.increment(CounterImpl.COUNTER_0_1);
                    // incremental line logic: count a covered diff line
                    if (isDiffLine) {
                        diffLineCounter = diffLineCounter.increment(CounterImpl.COUNTER_0_1);
                    }
                } else {
                    if (oldCovered == 0) {
                        lineCounter = lineCounter.increment(-1, +1);
                        // incremental line logic: a previously missed diff line is now covered
                        if (isDiffLine) {
                            diffLineCounter = diffLineCounter.increment(-1, +1);
                        }
                    }
                }
            }
        }
    }
}
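One detail implied by the code above: Arrays.binarySearch only works correctly on a sorted array, so the parsed diff line numbers must be sorted before they are handed to the source node. Hunks normally arrive in ascending order, but sorting defensively is cheap; a tiny sketch, with a helper name of our own:

import java.util.Arrays;

/** Illustrative preparation step; the helper name is ours, not the platform's. */
public final class DiffLineWiring {

    private DiffLineWiring() { }

    /** Returns a sorted copy of the parsed incremental line numbers, as binarySearch requires. */
    public static int[] prepareDiffLines(int[] parsedIncrLines) {
        int[] sorted = Arrays.copyOf(parsedIncrLines, parsedIncrLines.length);
        Arrays.sort(sorted);
        return sorted;
    }
}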

In addition, Jacoco caches a fixed set of count instances, 8^4 (4096) objects held in a four-dimensional singleton array, to reduce memory usage. An incremental-line flag is added to distinguish incremental lines from full lines, but Jacoco's own cached counters (the Fix class) cannot carry incremental counts. We therefore add an incremental-line cache counter, the DiffFix class. This costs an additional fixed set of 4096 DiffFix objects, but the overall performance impact is negligible.

public abstract class LineImpl implements ILine {
    ...
    private final boolean isDiffLine;

    private static final LineImpl[][][][] SINGLETONS = new LineImpl[SINGLETON_INS_LIMIT + 1][][][];
    private static final LineImpl[][][][] DIFF_SINGLETONS = new LineImpl[SINGLETON_INS_LIMIT + 1][][][];

    static {
        // full line count cache
        for (int i = 0; i <= SINGLETON_INS_LIMIT; i++) {
            SINGLETONS[i] = new LineImpl[SINGLETON_INS_LIMIT + 1][][];
            for (int j = 0; j <= SINGLETON_INS_LIMIT; j++) {
                SINGLETONS[i][j] = new LineImpl[SINGLETON_BRA_LIMIT + 1][];
                for (int k = 0; k <= SINGLETON_BRA_LIMIT; k++) {
                    SINGLETONS[i][j][k] = new LineImpl[SINGLETON_BRA_LIMIT + 1];
                    for (int l = 0; l <= SINGLETON_BRA_LIMIT; l++) {
                        SINGLETONS[i][j][k][l] = new Fix(i, j, k, l);
                    }
                }
            }
        }
        // incremental line count cache
        for (int i = 0; i <= SINGLETON_INS_LIMIT; i++) {
            DIFF_SINGLETONS[i] = new LineImpl[SINGLETON_INS_LIMIT + 1][][];
            for (int j = 0; j <= SINGLETON_INS_LIMIT; j++) {
                DIFF_SINGLETONS[i][j] = new LineImpl[SINGLETON_BRA_LIMIT + 1][];
                for (int k = 0; k <= SINGLETON_BRA_LIMIT; k++) {
                    DIFF_SINGLETONS[i][j][k] = new LineImpl[SINGLETON_BRA_LIMIT + 1];
                    for (int l = 0; l <= SINGLETON_BRA_LIMIT; l++) {
                        DIFF_SINGLETONS[i][j][k][l] = new DiffFix(i, j, k, l);
                    }
                }
            }
        }
    }
}
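The article does not show how a cached instance is picked at runtime. The following is our reconstruction, modelled on the pattern of Jacoco's existing LineImpl.getInstance lookup, with the isDiffLine flag routing the lookup to the matching cache; it would live inside LineImpl, and the Var fallback is assumed to have been extended to take the extra flag.

// Reconstruction only; follows the pattern of Jacoco's LineImpl.getInstance.
private static LineImpl getInstance(final CounterImpl instructions,
        final CounterImpl branches, final boolean isDiffLine) {
    final int im = instructions.getMissedCount();
    final int ic = instructions.getCoveredCount();
    final int bm = branches.getMissedCount();
    final int bc = branches.getCoveredCount();
    if (im <= SINGLETON_INS_LIMIT && ic <= SINGLETON_INS_LIMIT
            && bm <= SINGLETON_BRA_LIMIT && bc <= SINGLETON_BRA_LIMIT) {
        // pick the cache that matches the incremental flag
        return isDiffLine ? DIFF_SINGLETONS[im][ic][bm][bc]
                : SINGLETONS[im][ic][bm][bc];
    }
    // outside the cached range: fall back to a variable instance
    return new Var(instructions, branches, isDiffLine);
}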

Coverage report

For readability, instead of using Jacoco's native HTML report, we developed a relatively concise incremental/full report of our own, shown below:

The overall environment coverage report looks like this:

Data consolidation

The testing process often spans multiple releases, whether because of batched test content or bug fixes. After each JVM restart, the previous sampling data needs to be merged into the next sample so that coverage keeps accumulating. The Rubik platform receives the release events and merges automatically according to the rules below (a minimal data-level merge sketch follows the list):

  • The site is sampled one last time before a new version is released
  • When the site releases a new version, sampling is performed immediately after the health check passes, and periodic sampling is enabled
  • Every sample is automatically merged forward; the forward lookup rule is: the most recent sample of the same site, the same environment, and the same code branch
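At the data level, merging two samples amounts to loading both .exec files into one store, where the probe arrays of the same class are OR-ed together. A minimal sketch using Jacoco's stock ExecFileLoader; the file paths are illustrative only.

import java.io.File;
import java.io.IOException;

import org.jacoco.core.tools.ExecFileLoader;

/** Minimal merge sketch using Jacoco's stock tooling; paths are illustrative. */
public class ExecMergeSketch {

    public static void merge(File previousSample, File currentSample, File mergedOut)
            throws IOException {
        ExecFileLoader loader = new ExecFileLoader();
        // Loading several files into one loader merges them: probe arrays of the
        // same class id are OR-ed together in the underlying ExecutionDataStore.
        loader.load(previousSample);
        loader.load(currentSample);
        loader.save(mergedOut, false); // false = overwrite rather than append
    }

    public static void main(String[] args) throws IOException {
        merge(new File("prev.exec"), new File("curr.exec"), new File("merged.exec"));
    }
}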

Although the site coverage report can be viewed from the release work order in the PAones project management platform, a user who wants to see a site's incremental coverage in real time can log in to the Rubik platform, specify the test environment, and easily see the incremental coverage of every site in that environment (sampled hourly, with real-time sampling available on demand). This gives relatively accurate control over testing progress and reduces missed testing. The effect is shown below.

Problems encountered in project implementation

Data merge problem

Phenomenon

The coverage platform provides analysis services for more than 400 sites on average every day and, including hourly periodic sampling, completes more than 8,000 sampling analyses and report generations per day. After a period of online operation, we found that service responses were occasionally slow or even unavailable.

Analysis

Memory analysis of the failing instances showed that the heap was dominated by SessionInfoStore objects.

SessionInfoStore is an underlying class that Jacoco uses when presenting analysis results; it records the session information of the execution data. During automatic merging, Jacoco accumulates SessionInfo by default instead of merging it, so the sample file grows by 30% to 50% with each merge. As sampling data keeps being merged, the memory consumed by loading these files rises sharply, until loading several files concurrently exhausts the heap and triggers frequent GC. A review of the files showed that a sample file could grow from an initial 10 KB to between 800 MB and 1.5 GB by the time the problem occurred.

Optimization scheme

After investigation, we found that SessionInfo is only needed by the native HTML report; removing it neither affects the presentation of our self-developed report nor breaks Jacoco's data analysis process. So we simply and bluntly removed the corresponding logic from the merge path, and the problem was solved. The modified code is as follows:

/**
 * Deserialization of execution data from binary streams.
 */
public class ExecutionDataReader {
    ...
    // Rubik reports do not need SessionInfo, so it is no longer merged
    private void readSessionInfo() throws IOException {
        // if (sessionInfoVisitor == null) {
        //     throw new IOException("No session info visitor.");
        // }
        // final String id = in.readUTF();
        // final long start = in.readLong();
        // final long dump = in.readLong();
        // sessionInfoVisitor.visitSessionInfo(new SessionInfo(id, start, dump));
    }

    private void readExecutionData() throws IOException {
        if (executionDataVisitor == null) {
            throw new IOException("No execution data visitor.");
        }
        final long id = in.readLong();
        final String name = in.readUTF();
        final boolean[] probes = in.readBooleanArray();
        executionDataVisitor.visitClassExecution(new ExecutionData(id, name, probes));
    }
}

Subsequent planning

Incremental coverage provides the capability to quantify test results, goes some way toward solving the problem of trusting those results, and gives the test team a basic quality capability, helping Xinye R&D Center further advance its systematic DevOps construction. Next, the efficiency R&D team will explore precision testing further, including automatic regression-scope analysis and code call-chain analysis. Stay tuned.