Tomcat performance tuning is only one part of the picture; overall server performance also depends on the server itself, the application code, and many other factors.

Measures of system performance, mainly response time and throughput:

  1. Response time: Time taken to perform an operation
  2. Throughput: the number of transactions the system can handle in a given period of time, expressed in TPS (Transactions Per Second), that is, transactions per second. A transaction is the process in which a client sends a request to the server and the server responds.

Tomcat optimization can start from two aspects:

  1. JVM virtual machine tuning (optimizing the memory model)

  2. Optimization of Tomcat’s own configuration (for example, using a shared thread pool and choosing the I/O model: BIO or NIO)

Every production environment is different, so generic optimizations cannot simply be copied; they must be tailored to your own production environment.

Tuning is a process of adjusting what works best for you based on your server.

What follows is only a general approach.

JVM optimization

Optimizing the Java virtual machine mainly means optimizing memory allocation and the garbage collection mechanism.

  • Memory directly affects the efficiency and throughput of the server

  • The JVM garbage collection mechanism causes program execution interruption to varying degrees.

    You can choose among different garbage collection policies and tune the JVM's garbage collection strategy to greatly reduce the number of garbage collections, improve collection efficiency, and improve program performance.

1. Memory optimization

JVM memory optimizations are primarily optimizations for “heap memory.”

The following parameters can be adjusted for the Java VM memory:

| Parameter | Role | Optimization suggestion |
| --- | --- | --- |
| -server | Run the JVM in server mode | Enabling server mode is recommended |
| -Xms | Minimum heap size | Set it to the same value as -Xmx (dynamically resizing the heap also consumes resources, so keep the size constant from the start) |
| -Xmx | Maximum heap size | Set it to about 80% of available memory |
| -XX:MetaspaceSize | Initial size of the metaspace | |
| -XX:MaxMetaspaceSize | Maximum size of the metaspace | Unlimited by default |
| -XX:NewRatio | Ratio of the old generation size to the young generation size (old:young), an integer; the default is 2 | Usually does not need to be modified |
| -XX:SurvivorRatio | Ratio of the Eden space to a single Survivor space, an integer; the default is 8 (8:1) | Usually does not need to be modified |

JVM startup parameters:

Add the following configuration to catalina.sh, below the comment block at the top of the file:

JAVA_OPTS="-server -Xms2048m -Xmx2048m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m"

How do I check whether the configuration takes effect?

Use jhsdb, a tool shipped with the JDK, to view the heap configuration:

jhsdb jmap --heap --pid {process pid}
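A quick usage sketch, assuming jps is used to find the Tomcat process ID (the PID 12345 below is only an example):

jps -l                         # list running Java processes and note the Tomcat PID
jhsdb jmap --heap --pid 12345  # replace 12345 with the actual PID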

2. Garbage Collection Strategy (GC)

Garbage collection performance indicators:

  • Throughput: the percentage of total time spent doing application work rather than garbage collection; total time includes not only program execution time but also memory allocation and collection time.
  • Pause time: the length of time the application stops responding while garbage collection runs.

The JVM has the following garbage collectors:

  • Serial Collector

    A single thread performs all garbage collection work; suitable for single-CPU servers.

  • Parallel Collector

    Also known as the throughput collector (because throughput is its main concern), it performs young-generation garbage collection in parallel. This can significantly reduce garbage collection overhead (multiple garbage collection threads work in parallel while the user threads wait). Suitable for applications that run on multi-processor or multi-threaded hardware and process large amounts of data.

  • Concurrent Collector

    Performs most of the garbage collection work concurrently with the application to reduce garbage collection pause times.

    Here, concurrent means that user threads and garbage collection threads run at the same time (not necessarily in parallel; they may alternate). Suitable for applications where response time takes precedence over throughput.

    Because this collector shares processor resources with the application in order to minimize pause times, it can somewhat reduce application throughput.

  • CMS Collector (Concurrent Mark Sweep)

    The Concurrent Mark Sweep collector is suitable for applications that want shorter garbage collection pauses and can afford to share processor resources with the garbage collector while the application runs.

  • G1 Collector (Garbage-First)

    Available since JDK 1.7.

    Designed for multi-core servers with large amounts of memory; it can deliver high throughput while still meeting garbage collection pause-time targets.

You can monitor garbage collection with the JConsole tool shipped with the JDK.

Garbage collector parameters:

| Parameter | Description |
| --- | --- |
| -XX:+UseSerialGC | Enable the serial collector |
| -XX:+UseParallelGC | Enable the parallel garbage collector; when set, -XX:+UseParallelOldGC is enabled by default |
| -XX:+UseParNewGC | Use the parallel collector for the young generation; enabled automatically when -XX:+UseConcMarkSweepGC is set |
| -XX:ParallelGCThreads | Number of threads used for parallel collection in the young and old generations; the default depends on the number of CPUs available to the JVM |
| -XX:+UseConcMarkSweepGC | Enable the CMS collector for the old generation. CMS or G1 is recommended when the parallel collector cannot meet the application's latency requirements. Enabling it automatically enables -XX:+UseParNewGC |
| -XX:+UseG1GC | Enable the G1 collector, a server-style collector for multi-core machines with large amounts of memory. It meets GC pause-time targets with high probability while maintaining high throughput |

In catalina.sh, add the following configuration:

JAVA_OPTS="-XX:+UseG1GC"
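As a combined illustration of the memory and GC settings above (the -XX:MaxGCPauseMillis pause-time target of 200 ms is an assumed example value, not a recommendation from the text):

JAVA_OPTS="-server -Xms2048m -Xmx2048m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=200"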

Tomcat optimization

Adjust the Tomcat connector (thread pool)

Adjust the connector configuration in server.xml to improve the performance of the application server:

| Parameter | Description |
| --- | --- |
| maxConnections | Maximum number of connections. When this limit is reached, the server still accepts incoming requests but does not process them; further requests block until the connection count falls below maxConnections. You can check operating-system limits with ulimit -a. For CPU-intensive (computation-heavy) applications, do not set this value too high: a (CPU used per connection) × b (number of connections) = c (total CPU consumption). For applications that are not CPU-intensive, a value of around 2000 is recommended (depending on the server). |
| maxThreads | Maximum number of worker threads; set it according to the server hardware |
| acceptCount | Maximum number of queued requests. Once maxConnections is reached, Tomcat places subsequent requests in a queue; acceptCount is the maximum length of that queue |

The maximum number of requests Tomcat can accept at one time is therefore maxConnections + acceptCount.
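A minimal server.xml sketch that combines a shared thread pool with these connector attributes; the executor name and the values used (500 threads, 2000 connections, a queue length of 200) are illustrative assumptions, not recommendations from the text:

<!-- Shared thread pool, referenced by the connector below -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="500" minSpareThreads="20"/>

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           executor="tomcatThreadPool"
           maxConnections="2000"
           acceptCount="200"
           connectionTimeout="20000"
           redirectPort="8443"/>

When a connector references an executor, the pool can be shared by several connectors and the executor's maxThreads replaces the connector's own maxThreads setting.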

Disable AJP connectors

The AJP connector exists for integration with the Apache HTTP Server. If you do not need it, you can comment out the AJP connector configuration in server.xml.
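In the default server.xml the AJP connector entry looks roughly like this (the port and attributes may differ between Tomcat versions); wrapping it in an XML comment disables it:

<!--
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/>
-->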

Adjusting I/O Mode

Before Tomcat 8, the default I/O model was BIO (blocking I/O), which creates a thread for each request and is not suitable for high concurrency.

From Tomcat 8 onward, the default I/O model is NIO (non-blocking I/O).

You can configure NIO explicitly by setting the connector's protocol attribute:

<Connetcor prot="8080" 
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443"/>

When Tomcat faces very high concurrency or hits a performance bottleneck, you can try APR (Apache Portable Runtime) mode, which addresses asynchronous I/O at the operating-system level. The APR and Tomcat Native libraries must be installed on the operating system first (APR uses JNI to call the operating system's native I/O interfaces).
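A sketch of a connector switched to APR mode, assuming the APR and Tomcat Native libraries are already installed; the port and timeout values simply mirror the NIO example above:

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           connectionTimeout="20000"
           redirectPort="8443"/>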

Dynamic and static separation

Tomcat is not efficient at serving static resources.

You can deploy Nginx together with Tomcat, letting Nginx serve static resources while Tomcat handles dynamic requests.