Following the previous preparation article comparing performance testing frameworks, the remaining candidates are JMeter, K6, Locust, and FunTester. The purpose of this test is to compare the load-generation capacity and resource consumption of these frameworks at different concurrency levels. The test targets the simplest possible GET endpoint and does not involve request parameters or POST endpoints.

Conclusions first:

  • At low concurrency (100 threads), FunTester's resource consumption is slightly better, but at high concurrency (200 threads) K6 has a big advantage; Golang's goroutines really are ridiculously good.
  • Before the service under test reached its performance inflection point, FunTester had some advantage in resource consumption; once the inflection point was passed, frequent thread context switching gave K6 a significantly larger advantage, roughly a two-fold difference overall.
  • Testing against a locally hosted service confirmed both points above, except that the local service reached 60,000+ QPS while the LAN service hovered around 15,000.

Preparatory work

The test machine has a 2.6 GHz six-core Intel Core i7. CPU statistics are taken from Activity Monitor, where 100% represents one fully consumed CPU thread, so the theoretical maximum is 1200%. Memory figures are also taken from Activity Monitor.

First of all, I used the FunTester Moco Server framework to set up a test service in the LAN environment, exposing just a single fallback endpoint. The Groovy script looks like this:

import com.mocofun.moco.MocoServer

class TestDemo extends MocoServer{

    static void main(String[] args) {
        def log = getServerNoLog(12345)     // start a Moco server on port 12345 with request logging disabled
        log.response("hello funtester!!!")  // fallback response returned for any request
        def run = run(log)                  // launch the server
        waitForKey("fan")                   // block until the exit key is entered on the console
        run.stop()
    }
}

The LAN service itself has no performance problems. The difference from starting the service locally is that local requests return too quickly, so the load-test differences between the frameworks are not obvious enough; in addition, when testing a local service the QPS is too high and the average response time too low, which leads to large errors.
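
As a quick sanity check before involving any of the load tools, a minimal Groovy snippet (assuming the LAN address that appears in the test scripts below) can confirm that the fallback endpoint responds:

// One-off GET against the Moco fallback endpoint; should print "hello funtester!!!".
def reply = new URL("http://192.168.80.169:12345/m").text
println(reply)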

Script preparation

Locust

The local Python version is 3.8, and the Locust version pulled down by default is 1.5.3.

Locust only needs a Python script; the test script is shared below.

The first version was very basic, but its results were underwhelming. It looked like this:

from locust import HttpUser, TaskSet, task
class UserBehavior(TaskSet):
    @task(1)
    def profile(self):
        self.client.get("/m")
class WebsiteUser(HttpUser):
    tasks = [UserBehavior]
    min_wait = 5000
    max_wait = 9000

The FunTester Earth branch has already verified using FastHttpUser in place of HttpUser; much like the similarly named FastAPI, it has been measured to roughly double performance. The final version is as follows:

from locust.contrib.fasthttp import FastHttpUser
from locust import TaskSet, task

class UserBehavior(TaskSet):
    @task(1)
    def profile(self):
        self.client.get("/m")

class WebsiteUser(FastHttpUser):
    tasks = [UserBehavior]
    min_wait = 5000
    max_wait = 9000


JMeter

The local Java SDK version is 1.8.0_281, and JMeter is run in command-line mode; the GUI is far too troublesome for load generation.

Since JMeter does not use scripts, there is no script to share. Everything is left at the defaults apart from the protocol, address, port, and request path.

Configuration file contents:

 <stringProp name="HTTPSampler.domain">192.16880.169.</stringProp>
          <stringProp name="HTTPSampler.port">12345</stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/m</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>

K6

Although K6 is written in Golang, its test scripts are written in JavaScript, as follows:

import http from 'k6/http';

export default function () {
  http.get('http://192.168.80.169:12345/m');
}

FunTester

Groovy SDK version: 3.0.8. The JVM heap is set to 1 GB; other parameters are left at their defaults.

This time the test script is written in Groovy and is also run as a Groovy script. Yes, it could be run through Java instead, which has the advantage of letting you control the JVM parameters, but in my measurements the difference is small. I will compare that approach later, together with Alibaba's open-source Dragonwell.

Here is the content of the script:


import com.funtester.config.Constant
import com.funtester.frame.execute.Concurrent
import com.funtester.frame.thread.RequestThreadTimes
import com.funtester.httpclient.ClientManage
import com.funtester.httpclient.FunLibrary
import com.funtester.utils.ArgsUtil
import org.apache.http.client.methods.HttpGet

class Share extends FunLibrary{

    public static void main(String[] args) {
        ClientManage.init(10, 5, 0, EMPTY, 0);
        def util = new ArgsUtil(args)
        int thread = util.getIntOrdefault(0, 200);   // concurrency: arg 0, default 200 threads
        int times = util.getIntOrdefault(1, 10000);  // requests per thread: arg 1, default 10000
        String url = "http://192.168.80.169:12345/m";
        HttpGet get = getHttpGet(url);
        Constant.RUNUP_TIME = 0;
        RequestThreadTimes task = new RequestThreadTimes(get, times);
        new Concurrent(task, thread, "Local Fixed QPS Test").start();
        testOver();
    }
}

Everything is ready; time to test!

The actual tests begin

While researching, I noticed that many comparisons start straight from 100 concurrent threads and then multiply (or grow in big jumps) up to tens of thousands, but in practice a single machine never gets used that way. On my machine the performance inflection point is around 150 threads and the final bottleneck sits within 200, so I set four levels: 10, 50, 100, and 200 threads.

10 threads should be a breeze, with plenty of hardware headroom, and serves as the baseline. 50 threads is medium pressure, mainly for comparison against 10 threads. 100 threads is relatively heavy but should stay below the inflection point, while 200 threads should be past the inflection point and close to the bottleneck.
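
As a convenience, the four levels could also be driven in a single run by reusing the FunTester classes from the script shown earlier. The following is only a rough sketch, assuming that Concurrent.start() blocks until a round finishes and that a fresh task per round is safe:

import com.funtester.config.Constant
import com.funtester.frame.execute.Concurrent
import com.funtester.frame.thread.RequestThreadTimes
import com.funtester.httpclient.ClientManage
import com.funtester.httpclient.FunLibrary
import org.apache.http.client.methods.HttpGet

class LevelsDemo extends FunLibrary {

    static void main(String[] args) {
        ClientManage.init(10, 5, 0, EMPTY, 0)
        Constant.RUNUP_TIME = 0
        String url = "http://192.168.80.169:12345/m"
        [10, 50, 100, 200].each { thread ->          // the four concurrency levels used below
            HttpGet get = getHttpGet(url)
            def task = new RequestThreadTimes(get, 10000)
            new Concurrent(task, thread, "LAN GET ${thread} threads").start()
        }
        testOver()
    }
}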

Of course, some of this is to accommodate Locust; after my preliminary tests there was no need to run multiple worker nodes.

10 threads

Test results:

Framework    CPU (%)    Memory (MB)    QPS     Avg RT (ms)
JMeter       37.58      472.7          1040    9
K6           53.54      78.2           2302    4.26
Locust       83.65      45.9           1049    8
FunTester    28.82      385.3          2282    4

Both JMeter and FunTester use a lot of memory, which has always been the case. In the results, the QPS measured with K6 and FunTester is relatively high and very close, while JMeter and Locust reach barely half of that, and JMeter in GUI mode is even worse. Except for Locust, the CPU figures are roughly the same. Note that these numbers are averages I read off the monitor by eye; they also fluctuate quite a bit during the test, so some error is unavoidable.

My blind guess is that Locust spends part of its time synchronizing test results. For JMeter, however, I ran the test first and only checked the results afterwards, so that factor should be ruled out.

At this point Locust is nearly out of the running: its throughput is a bit low and it consumes a lot of CPU. Still, it gets one more round in the next test.

50 threads

Test results:

Framework    CPU (%)    Memory (MB)    QPS        Avg RT (ms)
JMeter       120.71     776.7          3594       13
K6           161.02     107.9          9805       5.02
Locust       99.45      55.8           1424.52    27
FunTester    88.64      392.6          9773       5

The conclusions are similar to the previous round; see the data for the details.

JMeter's QPS fluctuated far too much throughout the test; the lowest dipped below 2,000, and the figure above is the highest it reached. I also found that JMeter does not use ports and connections very efficiently: once QPS gets high, connection exceptions start appearing quickly. Searching online points to running out of local ports, but after adjusting that and adding more threads it still kept falling over. Maybe my setup wasn't right, but FunTester handled the same load well enough.

On inspection, JMeter used roughly three times as many ports as it had threads, FunTester used a little more than twice as many, and K6 stayed consistently low, within 50. That is something I still need to study and improve.

For the next round I dropped Locust and JMeter; their error rates were too high. The test also made the poor readability of JMeter test cases painfully clear.

100 threads

Only the two strong contenders remain. Test results:

Framework    CPU (%)    Memory (MB)    QPS      Avg RT (ms)
K6           199.69     168.4          12631    7.84
FunTester    225.74     424.7          13604    7

The numbers are not that different, and K6's CPU consumption has gradually come down to about the same level as FunTester's, which suggests we are near the performance inflection point.

200 threads

Test results:

Framework    CPU (%)    Memory (MB)    QPS      Avg RT (ms)
K6           239.97     240.4          15354    12.94
FunTester    431.52     427.9          14940    13

Here K6's advantage is very clear. A rough estimate says this is the bottleneck point: the thread count doubled, yet QPS rose by only about 10% while the response time increased significantly.
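
A quick back-of-the-envelope Groovy calculation over the numbers in the table above (just a sketch using the recorded values) makes the roughly two-fold efficiency gap from the conclusions concrete:

// QPS delivered per percentage point of CPU at 200 threads, from the table above.
def k6  = [cpu: 239.97, qps: 15354]
def fun = [cpu: 431.52, qps: 14940]
printf("K6:        %.1f QPS per CPU%%%n", k6.qps / k6.cpu)   // ≈ 64.0
printf("FunTester: %.1f QPS per CPU%%%n", fun.qps / fun.cpu) // ≈ 34.6, roughly half of K6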

Later I changed the JVM startup parameters to increase the heap, but the actual effect was not a significant improvement. I have to say Golang's goroutines are very powerful. For Java coroutines there is Alibaba's Dragonwell, which is open source and free; I have not tested it yet.

That wraps up the comparison test. For more, visit the FunTester Earth branch.

Good Luck! FunTester!

