JMH stands for Java Microbenchmark Harness.

JMH is a harness for building, running, and analyzing benchmarks written in Java and other JVM languages. It is part of the OpenJDK project.

To run JMH benchmarks, the recommended approach is to generate a dedicated test project with Maven. The archetype produces the required dependency information and a simple benchmark skeleton. Because the project is freshly generated and isolated from any existing build environment, the results are more reliable.

Create a JMH benchmark project using Maven

Create a JMH test project using the following Maven command.

$ mvn archetype:generate \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.openjdk.jmh \
    -DarchetypeArtifactId=jmh-java-benchmark-archetype \
    -DarchetypeVersion=1.21 \
    -DgroupId=org.sample \
    -DartifactId=test \
    -Dversion=1.0

The three parameters starting with archetype specify the JMH archetype dependency information; the JMH version used here is 1.21.

-DarchetypeGroupId=org.openjdk.jmh -DarchetypeArtifactId=jmh-java-benchmark-archetype -DarchetypeVersion=1.21

The last three parameters describe the test project itself: the default package name, the project name, and the version number.

-DgroupId=org.sample -DartifactId=test -Dversion=1.0

1) The main content of the generated pom.xml is as follows

<groupId>org.sample</groupId>
<artifactId>test</artifactId>
<version>1.0</version>
<packaging>jar</packaging>

<name>JMH benchmark sample: Java</name>

<!-- This is the demo/sample template build script for building Java benchmarks with JMH. Edit as needed. -->

<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>${jmh.version}</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>${jmh.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- JMH version to use with this project. -->
    <jmh.version>1.21</jmh.version>
    <!-- Java source/target to use for compilation. -->
    <javac.target>1.8</javac.target>
    <!-- Name of the benchmark Uber-JAR to generate. -->
    <uberjar.name>benchmarks</uberjar.name>
</properties>

2) A MyBenchmark.java class is generated

By default, the generated test class contains a single method, testMethod(), annotated with @Benchmark.

You can put the code you want to measure inside testMethod(); it will be executed repeatedly when the benchmark runs. A filled-in example is sketched after the generated skeleton below.

package org.sample;

import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public void testMethod() {
        // This is a demo/sample template for building your JMH benchmarks. Edit as needed.
        // Put your benchmark code here.
    }
}
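
As an illustration only (not part of the generated skeleton), the sketch below shows what a filled-in benchmark might look like. The state field and the work being measured are invented for the example; the Blackhole parameter is a standard JMH mechanism for consuming results so the JIT cannot eliminate the measured code as dead code.

package org.sample;

import java.util.concurrent.ThreadLocalRandom;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread) // per-thread state, so the field below is safe to mutate
public class MyBenchmark {

    // Hypothetical input data, initialized once per benchmark thread.
    private long seed = ThreadLocalRandom.current().nextLong();

    @Benchmark
    public void testMethod(Blackhole bh) {
        // The work being measured; consuming the result prevents dead-code elimination.
        bh.consume(Long.toHexString(seed++));
    }
}

Returning the computed value from the benchmark method is an equivalent way to keep the result alive.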

Compile and build the project

Compile the project.

$ cd test/
$ mvn clean verify

The contents of the target directory are as follows.

$ ls target/
benchmarks.jar  generated-sources/  maven-status/
classes/        maven-archiver/     test-1.0.jar

benchmarks.jar contains the class files of the code to be tested, as well as the JMH classes needed to run the benchmarks.

Perform benchmark tests

After the build finishes, you can run the benchmarks by executing java -jar target/benchmarks.jar, as shown at the top of the output below.

When the command runs, JMH scans the jar, finds all the benchmark methods, and executes them.
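
The uber jar is the usual entry point, but the same run can also be launched programmatically through JMH's Runner API. The sketch below is a minimal example; the BenchmarkRunner class name is my own, and the option values simply mirror the defaults described in the next section.

package org.sample;

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(MyBenchmark.class.getSimpleName()) // regex selecting which benchmarks to run
                .forks(5)                                   // 5 forked JVM runs (rounds)
                .warmupIterations(5)                        // 5 warm-up iterations per fork
                .measurementIterations(5)                   // 5 measurement iterations per fork
                .build();

        new Runner(opt).run();
    }
}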

During execution, JMH prints test data that can roughly be divided into three parts:

  1. Test environment configuration information;
  2. Details of each round of testing;
  3. Overall test results.

Test environment configuration information

Based on the output, you can learn about the test environment configuration as follows:

  • JMH version: 1.21;
  • JDK version: 1.8.0_121;
  • Warm-up: 5 warm-up iterations per round (fork), each iteration lasting 10 s;
  • Measurement: 5 measurement iterations per round, each iteration lasting 10 s;
  • Timeout: each iteration times out after 10 minutes;
  • Threads: 1 thread, with iterations executed sequentially;
  • Benchmark mode: Throughput (the default);
  • Benchmark method: org.sample.MyBenchmark.testMethod.

$ java -jar target/benchmarks.jar
# JMH version: 1.21
# VM version: JDK 1.8.0_121, Java HotSpot(TM) 64-Bit Server VM, 25.121-b13
# VM invoker: C:\SDK\jdk1.8.0_121\jre\bin\java.exe
# VM options: <none>
# Warmup: 5 iterations, 10 s each
# Measurement: 5 iterations, 10 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.MyBenchmark.testMethod

# Run progress: 0.00% complete, ETA 00:08:20
# Fork: 1 of 5
# Warmup Iteration   1: 1576551518.400 ops/s
# Warmup Iteration   2: 1529719054.021 ops/s
# Warmup Iteration   3: 1666073877.889 ops/s
# Warmup Iteration   4: 1640435331.734 ops/s
# Warmup Iteration   5: 1638832864.366 ops/s
Iteration   1: 1625641896.790 ops/s
Iteration   2: 1533553941.136 ops/s
Iteration   3: 1592753193.369 ops/s
Iteration   4: 1632034409.677 ops/s
Iteration   5: 1595397793.688 ops/s

# Run progress: 20.00% complete, ETA 00:06:44
# Fork: 2 of 5
# Warmup Iteration   1: 1464189837.888 ops/s
# Warmup Iteration   2: 1568131253.159 ops/s
# Warmup Iteration   3: 1512431773.674 ops/s
# Warmup Iteration   4: 1624047095.614 ops/s
# Warmup Iteration   5: 1599319656.890 ops/s
Iteration   1: 1549565370.435 ops/s
Iteration   2: 1479685624.920 ops/s
Iteration   3: 1546268750.693 ops/s
Iteration   4: 1624076911.097 ops/s
Iteration   5: 1547120121.585 ops/s

# Run progress: 40.00% complete, ETA 00:05:03
# Fork: 3 of 5
# Warmup Iteration   1: 1635927468.533 ops/s
# Warmup Iteration   2: 2117152863.952 ops/s
# Warmup Iteration   3: 2191950165.947 ops/s
# Warmup Iteration   4: 2117139604.170 ops/s
# Warmup Iteration   5: 1966743425.584 ops/s
Iteration   1: 2094680040.904 ops/s
Iteration   2: 2192052945.492 ops/s
Iteration   3: 2215953631.021 ops/s
Iteration   4: 2187306852.799 ops/s
Iteration   5: 2225823062.910 ops/s

# Run progress: 60.00% complete, ETA 00:03:22
# Fork: 4 of 5
# Warmup Iteration   1: 2215218138.467 ops/s
# Warmup Iteration   2: 2060564779.974 ops/s
# Warmup Iteration   3: 2128581454.514 ops/s
# Warmup Iteration   4: 2136226391.233 ops/s
# Warmup Iteration   5: 2190998438.402 ops/s
Iteration   1: 2149230777.286 ops/s
Iteration   2: 1962048343.572 ops/s
Iteration   3: 1632748373.818 ops/s
Iteration   4: ops/s
Iteration   5: 2137771926.471 ops/s

# Run progress: 80.00% complete, ETA 00:01:40
# Fork: 5 of 5
# Warmup Iteration   1: 2175401658.601 ops/s
# Warmup Iteration   2: 1998795501.979 ops/s
# Warmup Iteration   3: 2207762443.100 ops/s
# Warmup Iteration   4: 2158909861.991 ops/s
# Warmup Iteration   5: 2172243775.496 ops/s
Iteration   1: 2088490735.383 ops/s
Iteration   2: 2055344061.187 ops/s
Iteration   3: 2143537771.341 ops/s
Iteration   4: ops/s
Iteration   5: 2204700995.400 ops/s

Result "org.sample.MyBenchmark.testMethod":
  1893378276.674 ±(99.9%) 219511037.182 ops/s [Average]
  (min, avg, max) = (1479685624.920, 1893378276.674, ), stdev = 293040954.374
  CI (99.9%): [1673867239.493, 2112889313.856] (assumes normal distribution)

# Run complete. Total time: 00:08:24

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                Mode  Cnt           Score           Error  Units
MyBenchmark.testMethod  thrpt   25  1893378276.674 ± 219511037.182  ops/s

Details of each round of testing

A total of five rounds (forks) of tests were run. The output for each round includes:

  • The current run progress and the estimated remaining time;
  • The current round (fork) number;
  • The warm-up iteration label and number, with the measured throughput (ops/s);
  • The measurement iteration label and number, with the measured throughput (ops/s).

Multiple rounds are run so that the results are not skewed by run-to-run randomness.

Each round begins with warm-up iterations to smooth out JVM start-up and runtime fluctuations.

Only the measurement iterations that follow the warm-up are reflected in the test results.

Each iteration takes 10 s; with 5 warm-up and 5 measurement iterations per round, a round takes about 100 s.
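
These defaults (5 rounds, 5 warm-up and 5 measurement iterations of 10 s each) can be overridden with JMH annotations on the benchmark class or method. The values below are only an example of a shorter run, not a recommendation:

package org.sample;

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

@Fork(2)                                                            // 2 rounds (forked JVMs) instead of 5
@Warmup(iterations = 3, time = 5, timeUnit = TimeUnit.SECONDS)      // 3 warm-up iterations of 5 s each
@Measurement(iterations = 3, time = 5, timeUnit = TimeUnit.SECONDS) // 3 measurement iterations of 5 s each
public class MyBenchmark {

    @Benchmark
    public void testMethod() {
        // Put your benchmark code here.
    }
}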

Overall test results

The final block summarizes the whole benchmark run, including the total run time and the throughput statistics for each test method.

The Result block shows the result for the benchmarked method, where (min, avg, max) are the minimum, average, and maximum throughput in ops/s.

The benchmark ran for a total of 8 minutes and 24 seconds.

MyBenchmark.testMethod was measured in throughput (thrpt) mode, with 25 measurement iterations in total (5 rounds × 5 iterations).
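
Throughput is only the default reporting mode; the mode and the output time unit can be changed with annotations. The following sketch, in average-time mode reported in nanoseconds per operation, is illustrative and not part of the generated project:

package org.sample;

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class MyBenchmark {

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)      // report average time per operation instead of throughput
    @OutputTimeUnit(TimeUnit.NANOSECONDS) // report the score in ns/op
    public void testMethod() {
        // Put your benchmark code here.
    }
}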

Reference

github.com/openjdk/jmh

javadevcentral.com/jmh-benchma…

tutorials.jenkov.com/java-perfor…
