This article continues the discussion of error sources from the performance test error analysis series.

Locking resources

This source is relatively hidden, because test scenarios that require locked resources are generally complex, and there are easy ways to mitigate the risk during the pre-run data preparation phase.

In the article on how to perform performance tests on message queues, I used a LinkedBlockingQueue collection object to prepare all the test data, so that during the performance test each thread fetched a correct parameter value every time. In most cases, however, it is hard to guarantee that this mechanism works well. Even when we test with a fixed-request-count model and can predict the amount of data in advance, there is no way to prevent the test data from causing a response failure or assertion failure for a particular request.
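As a rough illustration of the approach described above (a minimal sketch, not the original FunTester code; the class and parameter names here are my own), all parameter values are loaded into a thread-safe queue before the run, and each worker thread takes a unique value without any hand-rolled locking:

```java
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: pre-load test parameters into a thread-safe queue so each
// pressure-test thread can fetch a unique value safely.
public class TestDataPool {

    static final LinkedBlockingQueue<String> DATA = new LinkedBlockingQueue<>();

    // Prepare all test data before the pressure test starts.
    static void prepare(int total) {
        for (int i = 0; i < total; i++) {
            DATA.add("user_" + i);
        }
    }

    // Each thread calls next() to fetch its parameter value;
    // returns null when the prepared data is exhausted.
    static String next() {
        return DATA.poll();
    }

    public static void main(String[] args) {
        prepare(3);
        System.out.println(next()); // user_0
        System.out.println(next()); // user_1
    }
}
```

The exhaustion case (null) is exactly the weak point mentioned above: the queue guarantees thread safety, not that every value it hands out will produce a successful response.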

Nevertheless, this scenario comes up often in daily work, especially in full-link pressure testing, which requires one or more thread-safe objects to mark the execution status of the link and the state of branch switches. Examples include the branch problems and solutions discussed in the article on ThreadLocal in link performance testing, and the thread-safe objects used in the fixed-QPS pressure test mode exploration article to record the total number of requests and to compensate asynchronous requests. Avoid the synchronized keyword and use the JDK's thread-safe classes instead.
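A minimal sketch of that last point, using JDK atomic classes rather than synchronized blocks (the class and field names are illustrative, not from the original framework): one counter records the total number of requests and one flag marks a branch switch.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: JDK thread-safe classes instead of `synchronized` for
// recording request totals and branch-switch status in a pressure test.
public class PressCounter {

    static final AtomicLong TOTAL_REQUESTS = new AtomicLong();
    static final AtomicBoolean BRANCH_ON = new AtomicBoolean(false);

    // Lock-free increment; safe to call from many test threads at once.
    static long record() {
        return TOTAL_REQUESTS.incrementAndGet();
    }

    public static void main(String[] args) {
        record();
        record();
        BRANCH_ON.set(true); // flip a branch switch for the link
        System.out.println(TOTAL_REQUESTS.get()); // 2
    }
}
```

Compared with synchronized, the atomic classes avoid contended monitor locks, so the bookkeeping itself adds less noise to the measurement.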

Reference article:

  • The CountDownLatch class is used in performance tests
  • The CyclicBarrier class is applied in performance tests
  • Phaser classes are applied in performance tests
  • Thread-safe classes are used in performance testing
  • The thread synchronization CyclicBarrier class is applied at the performance test collection point

Machine performance

These are basic lessons, so I will not go into detail; here is a short list of cases I have encountered.

Setting the memory too low forces the system to burn CPU on GC, which slows object creation; setting the thread count too high for the actual hardware resources means threads may fail to be created or may consume too much CPU in context switching.

The number of threads may also mismatch other resource settings, such as the number of available ports and the port reclamation mechanism, a system-wide maximum connection limit that is too low, or connection pool settings that are too small. All of these cause unnecessary waiting during testing, which inflates the measured time.

Regular expression processing

I have come across two examples where the impact was fairly large.

Verifying numeric data. Take mobile phone numbers, which are usually validated with a regular expression such as ^[1](([3][0-9])|([4][5-9])|([5][0-3,5-9])|([6][5,6])|([7][0-8])|([8][0-9])|([9][1,8,9]))[0-9]{8}$, or the simpler ^[1][3-9][0-9]{9}$. In testing there is usually no need for such detailed validation. For a simple check, we can convert the string into a long and verify that it falls within a certain range; in my own tests this scheme improves validation performance by a factor of 5-6. For complex and strict scenarios, the verification can be moved into a post-processing step to reduce test error.
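The two approaches can be sketched side by side (a hedged illustration, assuming a Chinese 11-digit mobile number; the 5-6x figure is the author's own measurement, which this code does not reproduce):

```java
import java.util.regex.Pattern;

// Sketch: strict regex validation of a mobile number versus the
// cheaper long-parse-plus-range check described in the text.
public class PhoneCheck {

    static final Pattern STRICT = Pattern.compile("^[1][3-9][0-9]{9}$");

    static boolean byRegex(String phone) {
        return STRICT.matcher(phone).matches();
    }

    // Cheaper check: parse to long and verify the 11-digit range
    // 13000000000 .. 19999999999 (looser than the full regex, but
    // usually enough for test-time verification).
    static boolean byRange(String phone) {
        try {
            long n = Long.parseLong(phone);
            return n >= 13_000_000_000L && n <= 19_999_999_999L;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(byRegex("13800138000")); // true
        System.out.println(byRange("13800138000")); // true
    }
}
```

The range check trades a little strictness for speed, which matches the article's advice: keep strict validation for post-processing, not for the hot path of the test.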

Data extraction: most interface responses are in JSON format (or objects), but most tools and frameworks convert the response into a String and extract data from it with regular expressions. The error mentioned in the JMeter throughput error analysis article came from exactly this kind of regex extraction. Besides the regex extraction functions that tools and frameworks provide, we can also extract data with a scripting language's own regex features, which perform somewhat worse than the tool's built-in implementation. For example, the Groovy regex usage mentioned in the Java and Groovy regular expression article is simple to use; although it brings no fundamental improvement, it is still a workable alternative. Because the tool's native solution must stay broadly compatible, and both expressions and response results are unpredictable, it has to handle many situations, which leads to some performance-sacrificing design choices.
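The pattern described above looks roughly like this (a minimal sketch; the "token" field name and JSON shape are my own illustration, not from any specific tool):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: the response is handled as a plain String and a field is
// pulled out with a regex, the way many tools' regex extractors work.
public class RegexExtract {

    static final Pattern TOKEN = Pattern.compile("\"token\":\"(.*?)\"");

    static String extractToken(String response) {
        Matcher m = TOKEN.matcher(response);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String resp = "{\"code\":0,\"token\":\"abc123\"}";
        System.out.println(extractToken(resp)); // abc123
    }
}
```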

Regex should generally be avoided in data processing, whether in tools or in scripts. For example, to extract content at a fixed location, simply slice the string by index. If logical judgments are required, multiple conditional checks in a scripting language will do. Do not resort to regular expressions until there is no other option.

PS: If the regular expression itself contains a bug, performance will suffer even more.
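The index-slicing alternative can be sketched like this (same hypothetical "token" field as above; indexOf plus substring replaces the regex entirely):

```java
// Sketch: when the target field sits at a predictable position,
// indexOf/substring is far cheaper than a regular expression.
public class IndexExtract {

    // Extract the value following "token":" up to the closing quote.
    static String extractToken(String response) {
        String key = "\"token\":\"";
        int start = response.indexOf(key);
        if (start < 0) {
            return null; // field not present
        }
        start += key.length();
        int end = response.indexOf('"', start);
        return end < 0 ? null : response.substring(start, end);
    }

    public static void main(String[] args) {
        System.out.println(extractToken("{\"code\":0,\"token\":\"abc123\"}")); // abc123
    }
}
```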

Exception handling

In performance testing, besides the failures handled by tools and frameworks, we also need to do our own assertion processing on the response results, including the regex extraction mentioned above followed by comparison against expected values. During a performance test it is inevitable that some requests fail or that responses do not meet expectations; the program then throws an exception, which is usually caught and handled by the tool or framework.

Exception handling consumes extra time as well. In a test I ran earlier, a Java try-catch that caught an exception and did nothing with it consumed 300 ms in single-threaded mode; in the multithreaded mode of a performance test the consumption is likely to be even higher. Combined with the number of errors that can occur, this makes exception handling another important source of test error.
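A rough way to observe this yourself (a single-threaded sketch only, not a rigorous benchmark: no JMH, no warmup control; the loop counts and timings are illustrative and will not reproduce the 300 ms figure above exactly):

```java
// Sketch: time a loop that throws and catches an exception with empty
// handling versus a plain loop, to illustrate the extra cost.
public class ExceptionCost {

    static long plainCost(int loops) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < loops; i++) {
            sum += i; // trivial work on the normal path
        }
        if (sum < 0) System.out.println(sum); // keep the loop alive
        return System.nanoTime() - start;
    }

    static long throwCost(int loops) {
        long start = System.nanoTime();
        for (int i = 0; i < loops; i++) {
            try {
                throw new RuntimeException("fail");
            } catch (RuntimeException e) {
                // empty handling, as in the test described above
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int loops = 100_000;
        System.out.println("plain=" + plainCost(loops) / 1_000_000 + "ms");
        System.out.println("caught=" + throwCost(loops) / 1_000_000 + "ms");
    }
}
```

Most of the cost comes from constructing the exception (filling in the stack trace), which is why even "empty handling" is not free.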

Asynchronous completion

Due to personal preference, when I use the fixed-thread request model, I usually write test scripts in fixed-request-count mode: each thread executes a fixed number of requests. In theory, since all threads are equal, they should reach the end condition at the same time after executing the same number of request tasks, ending the entire test.

But in reality, most of the time the threads do not all finish together. In my previous test scheme, each thread corresponds to a different test user, and often to different test parameters, which exacerbates the differences in request response time between threads. For example, in a query-list interface, user A has 100 records in total while user B has 1000; their query efficiency naturally differs, and the gap widens over N requests.
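The fixed-request-count model with uneven thread completion can be sketched as follows (a minimal illustration, assuming simulated per-thread delays in place of real requests; a CountDownLatch makes the main thread wait for the slowest worker, as in the CountDownLatch article listed below):

```java
import java.util.concurrent.CountDownLatch;

// Sketch: every thread runs the same number of "requests", but each
// simulated user is slower than the last, so threads finish at
// different times; the latch ends the test only when all are done.
public class FixedCountTest {

    static long run(int threads, int requestsPerThread) {
        CountDownLatch done = new CountDownLatch(threads);
        long start = System.currentTimeMillis();
        for (int t = 0; t < threads; t++) {
            final int delayMs = t; // each "user" responds a bit slower
            new Thread(() -> {
                try {
                    for (int i = 0; i < requestsPerThread; i++) {
                        Thread.sleep(delayMs); // simulated request
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown(); // this thread reached its end condition
                }
            }).start();
        }
        try {
            done.await(); // the test ends with the slowest thread
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        System.out.println("total ms: " + run(3, 10));
    }
}
```

The total duration is governed by the slowest thread, which is why per-user differences in response time stretch out the tail of the test.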


FunTester, Tencent Cloud Author of the Year, BOSS Zhipin contract author, official GDevOps media partner, non-famous test developer.

  • Automated Testing trends in 2021
  • Appium 2.0 quick reference
  • FunTester test framework architecture diagram
  • FunTester test project architecture diagram
  • Single link performance test practice
  • There are six steps to an automation strategy
  • Performance test framework QPS sampler implementation
  • Probe into branch problems in link pressure measurement
  • Don’t waste your thirst for knowledge
  • Pressure test results were corrected using microreference tests