In day-to-day performance testing, traffic replay is an excellent approach: it lets you build test cases quickly and get very close to the target scenario, and with the help of mature replay frameworks such as GoReplay it dramatically lowers the barrier to testing and saves time. This method is well suited to quick performance checks, where the goal is to find and fix problems with limited resources.

The other approach is to construct the requests for the test scenario yourself. Common patterns such as single-interface tests, mixed multi-interface tests, link tests, and full-link tests are all built on this method. It is more flexible than traffic replay, gives finer control over the test process, and is easier to integrate into CI/CD for routine performance checks. The biggest problem with such test cases, however, is making them resemble real traffic. At the level of a single interface, that means reproducing the parameter distribution of user requests; at the level of a service chain, it means reproducing how user journeys branch. If a use case targets only a few specific scenarios these factors matter less, but for outward-facing full-load testing and inward-facing fine-grained performance testing, the complexity can soar.

In an earlier article on unifying functional, automation, and performance test cases for interface testing, I talked about encapsulating every interface as a method and turning both functional and performance testing into tests of that method. Constructing the parameters, sending the request, and processing the return value: these three steps put the entire test-case execution into Java or Groovy scripts, which reduces use-case complexity, improves readability, and greatly increases the extensibility of the test framework.
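As a refresher, that encapsulation looks roughly like this. This is a sketch only: the class, method, and URL are illustrative names I made up, and the HTTP call is stubbed out rather than using the project's real client.

```java
/**
 * Hypothetical sketch of the "interface as a method" idea:
 * construct parameters, send the request, process the return value.
 * All names are illustrative; the HTTP call is a stub.
 */
public class BalanceApi {

    public static int queryBalance(long userId) {
        String url = "http://example.com/balance?uid=" + userId; // 1. construct parameters
        String response = fakeHttpGet(url);                      // 2. send the request
        return Integer.parseInt(response);                       // 3. process the return value
    }

    // Stub standing in for a real HTTP client.
    private static String fakeHttpGet(String url) {
        return "100";
    }

    public static void main(String[] args) {
        System.out.println(queryBalance(42L)); // prints 100
    }
}
```

Functional cases assert on the parsed return value; performance cases simply call the same method under load.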

In practice there is one small stepping stone: segmented randomness. The idea is to derive a traffic model for the preset scenario from online traffic; the key information is the ratio between interface requests and the distribution of interface parameters. Requests are then sent in those proportions, carrying parameters drawn from those distributions, so that the load-test traffic is much closer to real traffic.
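Such a traffic model boils down to a weight table. A minimal sketch, with hypothetical interface names and weights:

```java
import java.util.Map;

/**
 * Hypothetical traffic model distilled from online access logs:
 * key = interface name, value = relative request weight.
 */
public class TrafficModel {

    public static final Map<String, Integer> INTERFACE_WEIGHTS = Map.of(
            "login", 10,
            "search", 50,
            "order", 5
    );

    public static void main(String[] args) {
        int total = INTERFACE_WEIGHTS.values().stream()
                .mapToInt(Integer::intValue).sum();
        System.out.println("total weight = " + total); // total weight = 65
    }
}
```

The weighted-random methods below consume exactly this kind of map.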

Function implementation

Here I wrote two methods. The first half of each is the same:

  • First, split the map into two corresponding lists: one holding the keys, the other holding the values.

The second halves differ:

The first:

  1. Replace the i-th element of values with the sum of the first i terms of the original list (a cumulative sum)
  2. Draw a random integer between 1 and the last element of values (i.e. the sum of all terms)
  3. Loop to find which interval between two cumulative values the number falls into, and return the corresponding key from keys
    /**
     * Randomly selects an object according to weighted probabilities.
     * Costs more CPU per call.
     *
     * @param count map from candidate object to its weight
     * @param <F>   candidate type
     * @return the selected object
     */
    public static <F> F randomCpu(Map<F, Integer> count) {
        List<F> keys = new ArrayList<>();
        List<Integer> values = new ArrayList<>();
        count.forEach((key, value) -> {
            keys.add(key);
            values.add(value);
        });
        // Turn the weights into a running (cumulative) sum in place.
        int t = 0;
        for (int i = 0; i < values.size(); i++) {
            t = t + values.get(i);
            values.set(i, t);
        }
        // getRandomInt(n) is a framework helper returning an int in [1, n].
        int r = getRandomInt(values.get(values.size() - 1));
        // The loop must start at index 0, otherwise the first key can never be returned.
        for (int i = 0; i < values.size(); i++) {
            if (r <= values.get(i)) return keys.get(i);
        }
        return null;
    }

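To see the cumulative-sum approach in isolation, here is a self-contained sketch of the same idea. Note the framework helper `getRandomInt` is not shown in this article, so `ThreadLocalRandom` stands in for it here (an assumption on my part):

```java
import java.util.*;
import java.util.concurrent.ThreadLocalRandom;

public class WeightedPick {

    // Cumulative-sum weighted pick, same idea as randomCpu above.
    public static <F> F pick(Map<F, Integer> weights) {
        List<F> keys = new ArrayList<>(weights.keySet());
        int[] cumulative = new int[keys.size()];
        int total = 0;
        for (int i = 0; i < keys.size(); i++) {
            total += weights.get(keys.get(i));
            cumulative[i] = total;
        }
        // ThreadLocalRandom stands in for the framework's getRandomInt helper.
        int r = ThreadLocalRandom.current().nextInt(total) + 1; // 1..total
        for (int i = 0; i < cumulative.length; i++) {
            if (r <= cumulative[i]) return keys.get(i);
        }
        return null; // unreachable for positive weights
    }

    public static void main(String[] args) {
        Map<String, Integer> model = new LinkedHashMap<>();
        model.put("a", 1);
        model.put("b", 3);
        Map<String, Integer> counts = new HashMap<>();
        for (int n = 0; n < 100000; n++) {
            counts.merge(pick(model), 1, Integer::sum);
        }
        // "b" should appear roughly three times as often as "a".
        System.out.println(counts);
    }
}
```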

The second:

  1. Traverse values and, for each index, append value - 1 extra copies of the corresponding key to keys
  2. Return a random element from the expanded keys
    /**
     * Randomly selects an object according to weighted probabilities.
     * Costs more memory.
     *
     * @param count map from candidate object to its weight
     * @param <F>   candidate type
     * @return the selected object
     */
    public static <F> F randomMem(Map<F, Integer> count) {
        List<F> keys = new ArrayList<>();
        List<Integer> values = new ArrayList<>();
        count.forEach((key, value) -> {
            keys.add(key);
            values.add(value);
        });
        // Append value - 1 extra copies of each key, so every key appears
        // exactly as many times as its weight.
        for (int i = 0; i < values.size(); i++) {
            for (int j = 0; j < values.get(i) - 1; j++) {
                keys.add(keys.get(i));
            }
        }
        // random(list) is a framework helper returning a uniformly random element.
        return random(keys);
    }
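Neither variant is free: randomCpu rebuilds the cumulative sums on every call, and randomMem materializes one list element per unit of weight, which gets expensive when weights are large. A third option, which is my own sketch and not part of the FunTester framework, precomputes the cumulative sums once into a TreeMap and answers each pick with an O(log n) ceilingEntry lookup:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ThreadLocalRandom;

public class TreeMapPicker<F> {

    // Maps each running cumulative sum to its key, e.g. weights {a:1, b:3}
    // become {1 -> a, 4 -> b}.
    private final TreeMap<Integer, F> cumulative = new TreeMap<>();
    private int total = 0;

    public TreeMapPicker(Map<F, Integer> weights) {
        for (Map.Entry<F, Integer> e : weights.entrySet()) {
            total += e.getValue();
            cumulative.put(total, e.getKey());
        }
    }

    public F pick() {
        int r = ThreadLocalRandom.current().nextInt(total) + 1; // 1..total
        // Smallest cumulative sum >= r identifies the chosen key.
        return cumulative.ceilingEntry(r).getValue();
    }

    public static void main(String[] args) {
        TreeMapPicker<String> p = new TreeMapPicker<>(Map.of("only", 7));
        System.out.println(p.pick()); // always "only"
    }
}
```

The one-time construction cost is paid per traffic model rather than per request, which suits a long-running load test.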

Test

The test script

Here I set up a few strings with their weights, then run the selection N times and count the results.

    public static void test0() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 10);
        map.put("b", 20);
        map.put("c", 30);
        map.put("d", 1);
        map.put("e", 2);
        List<String> aa = new ArrayList<>();
        for (int i = 0; i < 1000000; i++) {
            String random = randomMem(map);
            aa.add(random);
        }
        CountUtil.count(aa);
    }

Console output

INFO->Current user: oker, working directory: /Users/oker/IdeaProjects/funtester, system encoding: utf-8, OS: Mac OS X 10.16
INFO->Element: A, number: 159066
INFO->Element: B, number: 318615
INFO->Element: C, number: 474458
INFO->Element: D, number: 16196
INFO->Element: E, number: 31665

Process finished with exit code 0


The counts basically match the configured ratios (10 : 20 : 30 : 1 : 2).
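The eyeball check above can also be automated by comparing each observed count against its expected share of the 1,000,000 draws (the weights sum to 63). A small sketch, with a hypothetical 5% tolerance:

```java
public class RatioCheck {

    // Relative error between an observed count and the expected count
    // for a given weight must stay within the tolerance.
    public static boolean withinTolerance(long observed, int weight, int totalWeight,
                                          long draws, double tol) {
        double expected = (double) draws * weight / totalWeight;
        return Math.abs(observed - expected) / expected <= tol;
    }

    public static void main(String[] args) {
        // Weights from test0() and the counts from the console output above.
        int[] weights = {10, 20, 30, 1, 2};
        long[] observed = {159066, 318615, 474458, 16196, 31665};
        for (int i = 0; i < weights.length; i++) {
            System.out.println(withinTolerance(observed[i], weights[i], 63,
                    1_000_000, 0.05)); // prints true for every element
        }
    }
}
```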

The project practice

The project class under test

Here I mock up a class for a hypothetical project with three interfaces, each taking one int parameter; the request ratio across the three interfaces and the distribution of parameter values are hard-coded. The three extra list objects exist only to make verifying the results easy and would not appear in a real project.

package com.funtest.javatest;

import com.funtester.frame.SourceCode;
import com.funtester.utils.CountUtil;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.util.*;

/**
 * FunTester demo class for segmented randomness.
 */
public class R_FunTester extends SourceCode {

    private static final Logger logger = LogManager.getLogger(R_FunTester.class);

    public static List<Integer> ts1 = new ArrayList<>();

    public static List<Integer> ts2 = new ArrayList<>();

    public static List<Integer> ts3 = new ArrayList<>();

    /**
     * Weights for choosing which method (interface) to call.
     */
    public static Map<Integer, Integer> ms = new HashMap<Integer, Integer>() {{
        put(1, 10);
        put(2, 20);
        put(3, 40);
    }};

    /**
     * Weights for the parameter values.
     */
    public static Map<Integer, Integer> ps = new HashMap<Integer, Integer>() {{
        put(10, 10);
        put(20, 20);
        put(30, 40);
    }};

    public void test1(int a) {
        ts1.add(a);
    }

    public void test2(int a) {
        ts2.add(a);
    }

    public void test3(int a) {
        ts3.add(a);
    }

    public void online() {
        Integer m = randomMem(ms);
        switch (m) {
            case 1:
                test1(randomMem(ps));
                break;
            case 2:
                test2(randomMem(ps));
                break;
            case 3:
                test3(randomMem(ps));
                break;
            default:
                break;
        }
    }
}

The test script

Here I verify the randomness of the method calls and their parameters through the sizes of the three lists and the counts of the elements in each list.

    public static void main(String[] args) {
        R_FunTester driver = new R_FunTester();
        range(1000000).forEach(f -> driver.online());
        output(ts1.size() + TAB + ts2.size() + TAB + ts3.size());
        CountUtil.count(ts1);
        CountUtil.count(ts2);
        CountUtil.count(ts3);
        test0();
    }

Console output

INFO->Current user: oker, working directory: /Users/oker/IdeaProjects/funtester, system encoding: utf-8, OS: Mac OS X 10.16
INFO-> 142168	286236	571596
INFO->Element: 20, number: 40563
INFO->Element: 10, number: 20468
INFO->Element: 30, number: 81137
INFO->Element: 20, number: 81508
INFO->Element: 10, number: 40873
INFO->Element: 30, number: 163855
INFO->Element: 20, number: 163643
INFO->Element: 10, number: 81117
INFO->Element: 30, number: 326836

Process finished with exit code 0


Everything is as expected. Job done! Back to moving bricks!


FunTester: Tencent Cloud Author of the Year, Boss Zhipin contract author, official GDevOps media partner, non-famous test developer.

  • FunTester test framework architecture diagram
  • FunTester test project architecture diagram
  • Soft start of performance test
  • Manual testing or automated testing?
  • Performance test error statistical practice
  • Use Case Scheme of Distributed Performance Testing Framework (PART 1)
  • Automated testing for Agile teams
  • A complete guide to automated Testing Frameworks
  • Be the one who leaves the office most actively
  • May Day study Experience