PS: When server hardware resources are limited, maximizing server performance and improving concurrent processing capability are issues that many O&M technicians must consider. To improve the load capacity of a Linux system, you can use Nginx or another web server with strong native concurrency; if you use Apache, you can enable its worker MPM to improve its concurrent processing capacity. In addition, with cost savings in mind, the Linux kernel's TCP parameters can be tuned to get the most out of the server. Of course, the most fundamental way to improve load capacity is still to upgrade the server hardware.

In Linux, after a TCP connection is closed, the port is held in the TIME_WAIT state for a period of time before it is released. Under heavy concurrency, a large number of TIME_WAIT connections are generated; if they cannot be cleared in time, they consume a large number of ports and server resources. At that point, we can tune TCP kernel parameters to clean up TIME_WAIT ports more quickly.

The approach described in this article helps only when a large number of connections in the TIME_WAIT state are consuming system resources; otherwise the effect may not be significant. You can run the netstat command to check how many connections are in each state. The following combined command shows the current TCP connection states and their counts:

# netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'

This command outputs results similar to the following:

LAST_ACK 16
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18098

As you can see, there are more than 18,000 TIME_WAIT connections, occupying more than 18,000 ports. Note that there are only 65535 ports in total, so exhausting them will seriously affect subsequent new connections. In this case, we need to adjust the Linux TCP kernel parameters so that the system releases TIME_WAIT connections faster. Open the configuration file with vim:

# vim /etc/sysctl.conf
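The state-counting logic of the netstat/awk command can be verified offline by piping sample netstat-style lines through the same awk program (the addresses below are fabricated for illustration):

```shell
# Three made-up lines mimicking `netstat -n` output. The awk program
# counts the last field ($NF, the TCP state) of each line that starts
# with "tcp", then prints one "STATE count" pair per state.
sample='tcp        0      0 10.0.0.1:80  10.0.0.2:51000  TIME_WAIT
tcp        0      0 10.0.0.1:80  10.0.0.3:51001  TIME_WAIT
tcp        0      0 10.0.0.1:80  10.0.0.4:51002  ESTABLISHED'

printf '%s\n' "$sample" \
  | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}' \
  | sort
# Prints:
# ESTABLISHED 1
# TIME_WAIT 2
```

On modern distributions, `ss -tan` can be substituted for `netstat -n` with the same awk pipeline.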

In this file, add the following lines:

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30

(Note: net.ipv4.tcp_tw_recycle is known to break connections from clients behind NAT and was removed in Linux kernel 4.12; skip it on modern kernels.)

Run the following command for the kernel parameters to take effect:

# sysctl -p
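As a sketch of keeping these settings maintainable, they can be written to a standalone drop-in file instead of editing /etc/sysctl.conf directly (the file name below is an assumption; on distributions with systemd, files under /etc/sysctl.d/ are loaded at boot):

```shell
# Write the four settings to a temporary file standing in for a
# hypothetical /etc/sysctl.d/90-timewait.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
EOF

# Count the parameter lines written; applying them for real would be
# `sysctl -p "$conf"`, which requires root.
grep -c '^net\.' "$conf"   # prints 4
rm -f "$conf"
```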

Briefly explain the meanings of the above parameters:

net.ipv4.tcp_syncookies = 1  # Enables SYN cookies. When the SYN queue overflows, cookies are used to protect against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1  # Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1  # Enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled).
net.ipv4.tcp_fin_timeout = 30  # Changes the system's default FIN-WAIT-2 timeout.

After this adjustment, besides further improving the server's load capacity, these settings also help defend against small-scale DoS, CC, and SYN attacks.

In addition, if the number of connections is large, you can optimize the TCP port range to further improve the server's concurrency. In the same parameter file, add the following configuration:

net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000

These parameters are recommended only for servers with heavy traffic; it is not necessary to set them on servers with low traffic.

net.ipv4.tcp_keepalive_time = 1200  # The interval at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is reduced to 20 minutes.
net.ipv4.ip_local_port_range = 10000 65000  # The range of ports used for outbound connections. The default range is small (32768 to 61000); change it to 10000 to 65000. (Note: do not set the minimum too low, or you may occupy well-known ports!)
net.ipv4.tcp_max_syn_backlog = 8192  # The length of the SYN queue. The default is 1024; increasing it to 8192 accommodates more half-open connections waiting to be established.
net.ipv4.tcp_max_tw_buckets = 5000  # The maximum number of TIME_WAIT sockets the system keeps at the same time. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; reduce it to 5000. For Apache, Nginx, and so on, the parameters above already do a good job of reducing the number of TIME_WAIT sockets, but for Squid the effect is limited; this parameter caps the number of TIME_WAIT sockets and prevents a Squid server from being dragged down by them.
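To see what widening ip_local_port_range buys, the number of usable ephemeral ports (a rough upper bound on simultaneous outbound connections to one remote address and port) can be computed directly:

```shell
# Default range 32768-61000 versus the tuned range 10000-65000;
# the count is simply (high - low + 1).
default_ports=$((61000 - 32768 + 1))
tuned_ports=$((65000 - 10000 + 1))
echo "default: $default_ports"   # prints 28233
echo "tuned:   $tuned_ports"     # prints 55001
```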

net.ipv4.tcp_max_syn_backlog = 65536  # The maximum number of recorded connection requests that have not yet been acknowledged by the client. The default is 1024 for systems with 128 MB of memory and 128 for systems with less.
net.core.netdev_max_backlog = 32768  # The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.core.somaxconn = 32768  # The listen backlog for web applications. The kernel limits it to 128 by default, while Nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
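The reason raising somaxconn matters: the effective accept-queue length of a listener is the smaller of the application's listen() backlog and net.core.somaxconn, so raising only one of the two is not enough. A minimal sketch of that calculation, using the defaults mentioned above:

```shell
somaxconn=128        # kernel default for net.core.somaxconn
nginx_backlog=511    # Nginx's NGX_LISTEN_BACKLOG default

# The kernel silently caps the application's requested backlog
# at somaxconn, so the effective value is the minimum of the two.
if [ "$somaxconn" -lt "$nginx_backlog" ]; then
  effective=$somaxconn
else
  effective=$nginx_backlog
fi
echo "$effective"    # prints 128: the kernel cap wins unless somaxconn is raised
```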

net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216  # Maximum socket buffer size.
net.ipv4.tcp_timestamps = 0  # Timestamps guard against sequence-number wraparound: a 1 Gbps link is bound to reuse a previously used sequence number, and timestamps let the kernel accept such "abnormal" packets. Here we turn them off.
net.ipv4.tcp_synack_retries = 2  # To accept a peer's connection, the kernel sends a SYN with an ACK acknowledging the earlier SYN (the second packet of the three-way handshake). This setting determines how many SYN+ACK packets the kernel sends before abandoning the connection.
net.ipv4.tcp_syn_retries = 2  # The number of SYN packets to send before the kernel gives up on establishing the connection.
net.ipv4.tcp_tw_reuse = 1  # Allows TIME_WAIT sockets to be reused for new TCP connections.
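For a rough sense of what tcp_syn_retries = 2 means in practice, the total wait before the kernel gives up on an unanswered SYN can be estimated, assuming the conventional initial retransmission timeout of 1 second that doubles on each retry (a simplification; actual kernel timer behavior varies):

```shell
retries=2   # tcp_syn_retries
rto=1       # assumed initial retransmission timeout, in seconds
total=0
sent=0
# The initial SYN plus $retries retransmissions, each waiting one
# RTO that doubles every time: 1 + 2 + 4 seconds.
while [ "$sent" -le "$retries" ]; do
  total=$((total + rto))
  rto=$((rto * 2))
  sent=$((sent + 1))
done
echo "$total"   # prints 7 (roughly 7 seconds before giving up)
```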

net.ipv4.tcp_wmem = 8192 436600 873200
net.ipv4.tcp_rmem = 32768 436600 873200
net.ipv4.tcp_mem = 94500000 91500000 92700000
# net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
# net.ipv4.tcp_mem[1]: at this value, TCP enters the memory-pressure stage.
# net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate new sockets.
The units above are pages, not bytes. The recommended values are 786432 1048576 1572864.
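Since tcp_mem is counted in pages, the recommended values translate into bytes by multiplying by the page size (4096 bytes is assumed here; check yours with `getconf PAGESIZE`):

```shell
page=4096                 # assumed page size in bytes
low=786432; pressure=1048576; high=1572864

echo "low:      $((low * page)) bytes"        # prints 3221225472 (3 GiB)
echo "pressure: $((pressure * page)) bytes"   # prints 4294967296 (4 GiB)
echo "high:     $((high * page)) bytes"       # prints 6442450944 (6 GiB)
```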

net.ipv4.tcp_max_orphans = 3276800  # The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, the orphaned connection is reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it or artificially reduce it. If anything, increase it (when you add memory).
net.ipv4.tcp_fin_timeout = 30  # Determines how long a socket remains in the FIN-WAIT-2 state after being closed by the local end. The peer may misbehave and never close its side of the connection, or even crash unexpectedly. The default is 60 seconds; 2.2-series kernels usually used 180 seconds. You can shorten this setting, but keep in mind that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most 1.5 KB of memory, but these sockets live longer.

With this optimized configuration, your server's TCP concurrency capability will increase significantly. The preceding configuration is for reference only; adapt it to your production environment based on the actual situation.