
Problem symptoms

This is a Java-based web application. When data is added through the back-end management interface, an error message indicates that the data cannot be added, so we log in to the server and check the Tomcat log:

java.io.IOException: Too many open files

Based on this error message, the basic judgment is that the system does not have enough available file descriptors. Since the Tomcat service on this system is started by the www user, log in to the system as the www user and run the ulimit -n command to check the maximum number of file descriptors that can be opened:

$ ulimit -n
65535

You can see that the maximum number of open file descriptors on this server is set to 65535. That value should be more than enough, so why does this error still occur?
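To confirm that the limit is really being hit, one quick check (a sketch, assuming the Tomcat process ID can be found with pgrep, as done later in this article) is to count the file descriptors the process currently holds:

$ PID=$(pgrep -f tomcat)
$ ls /proc/$PID/fd | wc -l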

 

Analysis

This case involves the use of the ulimit command.

There are several ways to apply ulimit limits (a combined sketch follows this list):

1 Add it to the user's environment file

If the user uses bash, add ulimit -u 128 to the environment file .bashrc or .bash_profile in the user's home directory to limit that user to a maximum of 128 processes.

2 Add it to the startup script of the application program

If the application is Tomcat, add ulimit -n 65535 to the Tomcat startup script startup.sh so that the Tomcat process can open up to 65535 file descriptors.

3 Run the ulimit command in a shell terminal

With this method the resource limit applies only to the terminal in which the command is run; the setting is lost once the terminal is exited or closed, and it does not affect other shell terminals.
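A minimal sketch of the three approaches, assuming a bash user named www and a Tomcat installation whose startup script is bin/startup.sh (paths and values are illustrative):

# 1) Per-user environment file: append to the www user's ~/.bash_profile
echo "ulimit -u 128" >> /home/www/.bash_profile

# 2) Application startup script: add a line near the top of Tomcat's bin/startup.sh
#    ulimit -n 65535

# 3) Current shell only: effective immediately, lost when the terminal is closed
ulimit -n 65535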

 

The solution

The ulimit value itself is not the problem, so the setting must not be taking effect for the Tomcat process. First, check whether the www user that starts Tomcat has a ulimit setting in its environment files; there is no ulimit setting for the www user.
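A quick way to see the limit a user actually receives at login is to run ulimit in a fresh login shell for that user (a sketch, run as root; the www user name comes from this case):

# su - www -c "ulimit -n"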

Next, check whether a ulimit setting was added to the Tomcat startup script startup.sh; nothing is set there either. That leaves the system-wide /etc/security/limits.conf file, so check it:

# cat /etc/security/limits.conf | grep www
www soft nofile 65535
www hard nofile 65535

So the ulimit resource limit was indeed added through the limits.conf file. Next, check when Tomcat was started:

# uptime
Up 283 days
# pgrep -f tomcat
4667
# ps -eo pid,lstart,etime | grep 4667
4667 Sat Jul  6 09:33:39 2013     77-05:26:02

As the output shows, the server has been up for 283 days without a reboot, and Tomcat was started at about 9 a.m. on July 6, 2013, roughly 77 days ago. Next, look at the modification time of /etc/security/limits.conf:

[root@localhost]# stat /etc/security/limits.conf
  File: '/etc/security/limits.conf'
  Size: 2508        Blocks: 8          IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 1179853     Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-07-12 12:20:01.565327377 +0800
Modify: 2013-07-12 12:00:08.964956856 +0800
Change: 2013-07-12 12:00:08.964956856 +0800
 Birth: -

The limits.conf file was last modified on July 12, 2013, which is later than the Tomcat start time. Since limits.conf takes effect only for sessions started after the change, the running Tomcat process is still using the old, smaller file descriptor limit, while the 65535 value seen with ulimit -n applies only to newly started sessions. The fix is therefore simple: restart Tomcat so the process picks up the new limit.
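After the restart, one way to confirm that the new limit has taken effect for the running process (a sketch, using the same pgrep approach as above) is to read its limits directly from /proc:

# PID=$(pgrep -f tomcat)
# cat /proc/$PID/limits | grep "Max open files"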