Terminal commands:

    netstat -nalp | grep 8011 | wc -l    # count client connections on port 8011

    ulimit -n 102400    # raise the maximum number of open files for the current shell

Experience shared by netizens:

Append the following text to /etc/sysctl.conf:

    fs.file-max = 2097152
    fs.nr_open = 2097152
    net.core.somaxconn = 65535
    net.core.rmem_default = 65535
    net.core.wmem_default = 65535
    net.core.rmem_max = 8388608
    net.core.wmem_max = 83886080
    net.core.optmem_max = 40960
    net.ipv4.tcp_rmem = 4096 87380 83886080
    net.ipv4.tcp_wmem = 4096 65535 83886080
    net.ipv4.tcp_mem = 8388608 8388608 83886080

After the modification is complete, run sysctl -p in the terminal to apply the new settings.

Source: Unknown

When building network services, writing a concurrent TCP server is essential. TCP concurrency generally follows a few well-established design patterns, each with its own advantages and typical applications. The following is a brief discussion of the differences between these patterns:

Single process, single thread

After accept() returns, the server receives, processes, and sends data on that connection; no new connections are accepted until handling of the current connection is complete.

Pros: Simple.

Disadvantages: Only one client is served at a time, so there is no concurrency.

Application: Used when serving only one client.

Multiple processes

When accept() returns successfully, a child process is forked to handle all data sent and received on that connection; the process exits when the connection is finished.

Advantages: Relatively simple programming, no need to consider data synchronization between threads.

Disadvantages: High resource consumption. The cost of starting a process is much higher than that of starting a thread. At the same time, many processes need to be started to handle a lot of connections, which puts a lot of pressure on the system. The system’s process limit also needs to be considered.

Application: Convenient when there are only a few clients, for example fewer than 10.

Multithreading

Similar to multi-process, but starts one thread for each connection.

Advantages: Compared with multi-process mode, it can save some resources and be more efficient.

Disadvantages: Compared to the multi-process approach, programming complexity increases because data shared between threads must be synchronized and protected by locks (see the mutex sketch below). Also, avoid starting too many threads in one process: in Linux, threads are actually implemented as processes inside the kernel, and thread scheduling follows process scheduling.

Application: Similar to multi-process mode, suitable for a small number of clients.
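
To illustrate the synchronization point above: if the worker threads shared, say, a counter of active connections (the variable active_clients below is hypothetical and not part of the later examples), every update would need a lock. A minimal sketch:

    #include <pthread.h>

    static int active_clients = 0;                      /* shared by all worker threads */
    static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

    /* called by a worker thread when a client connects or disconnects */
    void client_count_add(int delta)
    {
        pthread_mutex_lock(&count_lock);                /* serialize access to the shared counter */
        active_clients += delta;
        pthread_mutex_unlock(&count_lock);
    }

Without the lock, two threads updating the counter at the same time could lose an update.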

Select + multithreading

One thread is dedicated to listening on the port; when accept() returns, it adds the new descriptor to a descriptor set (fd_set). Another thread uses select() to poll the descriptor set and receives data on whichever connections are readable, while a third thread is dedicated to sending data. Of course, receiving and sending can also be done in a single thread. Descriptors can be set to blocking or non-blocking mode; usually the connections are set to non-blocking mode and the sending thread is kept separate.

Advantages: This mode greatly increases the concurrency compared to the previous modes.

Disadvantages: select() is typically implemented over a fixed-size array of descriptors, and every call scans the whole set, which becomes inefficient as the number of connections grows. Above roughly 1000 connections the drop in efficiency is usually unacceptable.

Application: TCP concurrency on Windows and most Unix systems commonly relies on select, so this pattern is very widely used.

Epoll way

Epoll was added in Linux 2.6. When a thread accepts a connection, it sets the connection to non-blocking mode, sets the epoll events to edge-triggered mode, and adds the descriptor to the epoll instance. The receiving thread blocks in epoll's wait function (epoll_wait), while another thread is dedicated to sending data.

Advantages: Because of epoll's efficient implementation, this method scales to very large numbers of connections. Our current application, running on a three-year-old Dell PC server, handles 20,000 concurrent connections with very good performance.

Disadvantages: Coding complexity increases because of the threading and non-blocking I/O. This method only works on Linux kernels 2.6 and later.

Note:

1) If the epoll events are set to level-triggered mode, efficiency drops to roughly the same level as select (see the edge-triggered read-loop sketch below).

2) Unix systems limit both the number of descriptors a single process may open and the number open system-wide. The per-process limit has a hard part and a soft part: the hard limit varies with the machine's configuration, while the soft limit can be changed but must stay below the hard limit. On SUSE Linux, the root user can run the ulimit -n command to change the limit.

Application: Large-scale TCP concurrency in Linux.
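
A minimal sketch of the edge-triggered read loop mentioned in note 1): in edge-triggered mode an event is reported only once per state change, so the handler must keep reading until recv() fails with EAGAIN, otherwise data left in the kernel buffer will never trigger another event. The helper name handle_et is illustrative and not part of the full epoll example at the end of this article.

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Drain an edge-triggered, non-blocking socket. Returns -1 when the peer closed or on error. */
    int handle_et(int connfd)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = recv(connfd, buf, sizeof(buf), 0);
            if (n > 0) {
                /* process n bytes of data in buf here */
                continue;
            }
            if (n == 0)
                return -1;                              /* peer closed the connection */
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return 0;                               /* buffer drained; wait for the next event */
            return -1;                                  /* real error */
        }
    }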

 

Configure and develop Linux applications that support high concurrency TCP connections

Modify the maximum number of files that a user process can open

On Linux, the maximum number of concurrent TCP connections is limited by the number of files that can be opened by a single process. (This is because the system creates a socket handle for each TCP connection, and each socket handle is also a file handle.) You can run the ulimit command to view the maximum number of files that the current user's processes may open:

    [speng@as4 ~]$ ulimit -n

    1024

This means that each process of the current user may open at most 1024 files at the same time. From these 1024 descriptors we must subtract standard input, standard output, standard error, the server's listening socket, any Unix domain sockets used for interprocess communication, and so on, leaving roughly 1024 - 10 = 1014 descriptors available for client sockets. So by default, a Linux-based communication program allows at most about 1014 concurrent TCP connections.

 

For communication programs that need to support more concurrent TCP connections, Linux's soft limit and hard limit on the number of files a user process may open simultaneously must be modified. The soft limit is a further restriction, within what the current system can bear, on how many files a user may open at once. The hard limit is the maximum number of files that can be opened simultaneously, calculated from the state of the system's hardware resources (mainly memory). The soft limit is always less than or equal to the hard limit.
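
For reference, a process can check its current soft and hard limits programmatically with getrlimit(); a minimal sketch (an illustration, not part of the original configuration steps):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {      /* per-process open-file limit */
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }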

 

The simplest way to modify the above restrictions is to use the ulimit command:

    [speng@as4 ~]$ ulimit -n <file_num>

In the command above, <file_num> specifies the maximum number of files a single process may open. If output similar to "Operation not permitted" is displayed, the change failed because the value specified in <file_num> exceeds the soft or hard limit on the number of files the user may open. Therefore, you need to change the soft and hard limits that Linux imposes on the user for the number of open files.

 

The first step is to modify the /etc/security/limits.conf file and add the following lines to it:

    speng soft nofile 10240

    speng hard nofile 10240

Here, speng specifies which user's open-file limit should be changed; the character '*' can be used instead to change the limit for all users. soft or hard specifies whether to modify the soft or the hard limit, and 10240 is the new limit, i.e. the maximum number of open files (note that the soft limit value must be less than or equal to the hard limit). Save the file after modification.

 

Step 2 modify the /etc/pam.d/login file and add the following line to the file:

    session required /lib/security/pam_limits.so

This tells Linux to call the pam_limits.so module after the user logs in to the system, to apply the maximum limits on the resources the user may use (including the maximum number of files the user may open). The pam_limits.so module reads its configuration from the /etc/security/limits.conf file to set these limits. Save the file after modification.

 

Step 3 to check the maximum number of open files at the Linux system level, run the following command:

    [speng@as4 ~]$ cat /proc/sys/fs/file-max

    12158

This indicates that the Linux system can open a maximum of 12158 files at the same time (that is, the total number of files opened by all users), which is the Linux system-level hard limit; the total number of files opened by all users should not exceed this value. In general, this system-level hard limit is the optimal maximum number of open files that Linux calculates at startup based on the state of the system's hardware resources, and it should not be modified without special need, unless you want to set the user-level limit beyond this value. To raise this hard limit, add the following line to the /etc/rc.local script:

    echo 22158 > /proc/sys/fs/file-max

This is to force Linux to set the hard limit on the number of open files at the system level to 22158 after the startup is complete. Save the file after modification.

 

After the above steps are complete, restart the system; in general, the maximum number of files a single process of the specified user may open simultaneously will then be set to the specified value. If, after the restart, ulimit -n still reports a value lower than the one set above, it may be because a ulimit -n command in the /etc/profile login script limits the number of files the user may open. Because ulimit -n can only lower, never raise, the value set by a previous ulimit -n command in the same session, it cannot be used there to increase the limit. If this is the case, open the /etc/profile script, look for a ulimit -n command that limits the user's maximum number of open files, delete it or change its value to something suitable, save the file, and have the user log out and log in again.

By doing so, you remove system restrictions on the number of open files for communication handlers that support high-concurrency TCP connection processing.

 

Modify the TCP connection restriction of the network kernel

When writing client communication programs that support highly concurrent TCP connections on Linux, you may find that even though the limit on the number of files a user may open simultaneously has been lifted, new TCP connections still cannot be established once the number of concurrent connections reaches a certain point. There are several possible reasons for this.

 

The first reason may be that the Linux network kernel limits the range of local port numbers. In this case, further analysis of why the TCP connection cannot be established shows that connect() fails with the error message "Can't assign requested address". In addition, if you monitor the network with tcpdump, you will find that no SYN packets are sent by the client when it attempts to connect. These symptoms indicate that the limitation lies in the local Linux kernel. The root cause is that the kernel's TCP/IP implementation limits the range of local port numbers available to client TCP connections in the system (for example, the kernel may restrict local ports to the range 1024 to 32768). Since each client TCP connection occupies a unique local port number taken from this range, once all ports in the range are occupied by existing connections, no local port can be assigned to a new client connection, so connect() returns failure with "Can't assign requested address". You can check the control logic in the Linux kernel source code; for example, the tcp_ipv4.c file contains the following function:

    static int tcp_v4_hash_connect(struct sock *sk)

Note the access to the sysctl_local_port_range variable in the above function. The sysctl_local_port_range variable is initialized in the tcp.c file in the following function:

    void __init tcp_init(void)

The local port range set by default at kernel compilation time may be too small, so you need to modify this local port range limit.

First, modify the /etc/sysctl.conf file and add the following lines:

    net.ipv4.ip_local_port_range = 1024 65000

This sets the local port range to 1024 through 65000. Note that the lower bound must be greater than or equal to 1024, and the upper bound must be less than or equal to 65535. Save the file after modification.

Step 2 run the sysctl command:

    [speng@as4 ~]$ sysctl -p

If no error message is displayed, the new local port range is set successfully. If the port range is set according to the above, a single process can theoretically establish up to 60,000 TCP client connections at the same time.

 

The second reason TCP connections cannot be established may be that the iptables firewall in the Linux network kernel limits the maximum number of tracked TCP connections. Again, monitoring the network with tcpdump shows no SYN packets being sent by the client when it attempts to connect. Because the kernel's iptables firewall tracks the state of every TCP connection, the tracking information is kept in a conntrack database in kernel memory, and the size of this database is limited. When there are too many TCP connections in the system, the database fills up, iptables cannot create tracking entries for new connections, and connect() appears to block. In this case the kernel's limit on the maximum number of tracked TCP connections must be changed, in the same way as the local port range limit:

First, modify the /etc/sysctl.conf file and add the following lines:

    net.ipv4.ip_conntrack_max = 10240

This indicates that the system has set the limit on the maximum number of TCP connections traced to 10240. Note that this limit value should be kept as small as possible to save on kernel memory.

Step 2 run the sysctl command:

    [speng@as4 ~]$ sysctl -p

If no error message is displayed, the new maximum number of tracked TCP connections has been set successfully. With the above setting, a single process can establish up to about 10,000 TCP client connections at the same time.

 

Use programming techniques that support highly concurrent network I/O

When writing high-concurrency TCP connection applications on Linux, you must use appropriate network I/O technologies and I/O event dispatch mechanisms.

Available I/O technologies include blocking synchronous I/O, non-blocking synchronous I/O (also known as reactive I/O), and asynchronous I/O. Under high TCP concurrency, blocking synchronous I/O will seriously stall the program unless a thread is created for each connection's I/O, but too many threads cause huge scheduling overhead. Blocking synchronous I/O is therefore undesirable in high-concurrency situations, so consider non-blocking synchronous I/O or asynchronous I/O. Techniques for non-blocking synchronous I/O include select(), poll(), epoll, and similar mechanisms; the technique for asynchronous I/O is AIO.
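
As a small illustration of non-blocking mode, a descriptor is switched to non-blocking with fcntl(); the helper below is a sketch (the name set_nonblocking is illustrative, not part of the later listings):

    #include <fcntl.h>

    /* Returns 0 on success, -1 on failure. */
    int set_nonblocking(int sockfd)
    {
        int flags = fcntl(sockfd, F_GETFL, 0);          /* read the current file status flags */
        if (flags == -1)
            return -1;
        return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK) == -1 ? -1 : 0;
    }

After this, recv() and send() return immediately with errno set to EAGAIN when no data can be transferred, instead of blocking the calling thread.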

 

From the perspective of the I/O event dispatch mechanism, select() is not appropriate because it supports only a limited number of concurrent connections (typically fewer than 1024). Performance-wise, poll() is also inappropriate: although it can handle a large number of connections, its "polling" mechanism means that at high concurrency its efficiency is quite low, and I/O events may be distributed unevenly, starving I/O on some TCP connections. Epoll and AIO do not have these problems. (Earlier Linux kernels implemented AIO by creating a kernel thread for each I/O request, which also had serious performance problems with highly concurrent TCP connections, but AIO has been improved in more recent kernels.)

 

To sum up, when developing Linux applications that must support highly concurrent TCP connections, epoll or AIO should be used wherever possible to perform I/O on the concurrent connections; this provides an effective I/O foundation for high TCP concurrency.

 

Design pattern of concurrent server in network programming

There are three design patterns for concurrent servers

1) Multiple processes. Each process serves one client. However, the process scheduling cost is high, resources cannot be shared, and inter-process communication mechanism is complex.

2) Multithreading. Each thread serves one client. The advantages are low overhead, a simple communication mechanism, and shared memory. However, the shared address space lowers reliability: if one service thread is faulty, the whole process may crash. In addition, globally shared data can introduce races, so shared resources must be protected with mutual exclusion, which raises the programming requirements.

3) Single process: occupies fewer process and thread resources, and the communication mechanism is simple. But the listening logic and the per-client service logic are mixed together in one process, so the program structure is complex and less clear, and programming is more troublesome.

 

Example: multi-process concurrent server

/* Multi-process concurrent server. The program waits for a client to connect,
   displays the client's address once connected, then receives and displays the
   client's name. It then repeatedly receives strings from that client; each
   received string is displayed, reversed, and sent back to the client. This
   continues until the client closes the connection. The server can handle
   multiple clients simultaneously. */
#include <stdio.h>          /* These are the usual header files */
#include <string.h>         /* for strlen() */
#include <stdlib.h>         /* for exit() */
#include <strings.h>          /* for bzero() */
#include <unistd.h>         /* for close() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
  
#define PORT 1234   /* Port that will be opened */
#define BACKLOG 2   /* Number of allowed connections */
#define MAXDATASIZE 1000 
void process_cli(int connectfd, struct sockaddr_in client);
 
main()
{
        int listenfd, connectfd; /* socket descriptors */
        pid_t pid;
        struct sockaddr_in server; /* server's address information */
        struct sockaddr_in client; /* client's address information */
        socklen_t sin_size;
 
        /* Create TCP socket  */
        if ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
           /* handle exception */
           perror("Creating socket failed.");
           exit(1);
        }
 
        int opt = SO_REUSEADDR;
        setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
        bzero(&server,sizeof(server));
        server.sin_family=AF_INET;
        server.sin_port=htons(PORT);
        server.sin_addr.s_addr = htonl (INADDR_ANY);
        if (bind(listenfd, (struct sockaddr *)&server, sizeof(struct sockaddr)) == -1) {
           /* handle exception */
           perror("Bind error.");
           exit(1);
           }
 
        if(listen(listenfd,BACKLOG) == -1){  /* calls listen() */
           perror("listen() error\n");
           exit(1);
           }
        sin_size=sizeof(struct sockaddr_in);
        while(1)
        {
          /* Accept a connection from a client */
         if ((connectfd = accept(listenfd,(struct sockaddr *)&client,&sin_size))==-1) {
           perror("accept() error\n");
           exit(1);
           }
        /*  Create child process to service client */
        if ((pid=fork())>0) {
           /* parent process */
           close(connectfd);
           continue;
           }
        else if (pid==0) {
           /*child process*/
           close(listenfd);
           process_cli(connectfd, client);
           exit(0);    
           }
        else {
           printf("fork error\n");
 
           exit(0);
           }
        }
        close(listenfd);   /* close listenfd */        
}
 
 
void process_cli(int connectfd, struct sockaddr_in client)
{
        int num;
        char recvbuf[MAXDATASIZE], sendbuf[MAXDATASIZE], cli_name[MAXDATASIZE];
        printf("You got a connection from %s.  ",inet_ntoa(client.sin_addr) ); /* prints client's IP */
        /* Get client's name from client */
        num = recv(connectfd, cli_name, MAXDATASIZE,0);
        if (num == 0) {
           close(connectfd);
           printf("Client disconnected.\n");
           return;
           }
 
        cli_name[num - 1] = '\0';
        printf("Client's name is %s.\n",cli_name);
 
        while ((num = recv(connectfd, recvbuf, MAXDATASIZE,0)) > 0)
        {
                int i = 0;
                recvbuf[num] = '\0';
                printf("Received client( %s ) message: %s",cli_name, recvbuf);
                for (i = 0; i < num - 1; i++) {
                sendbuf[i] = recvbuf[num - i -2];
                }
 
                sendbuf[num - 1] = '\0';
                send(connectfd,sendbuf,strlen(sendbuf),0); /* send the reversed string back to the client */
        }
        close(connectfd); /*  close connectfd */
}
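One caveat about the listing above: the parent process never waits for its children, so finished children linger as zombie processes. A minimal way to avoid this (an addition, not part of the original listing) is to ignore SIGCHLD before entering the accept loop:

    #include <signal.h>

    /* Place near the top of main(), before the accept loop: finished children
       are then reaped automatically instead of remaining as zombies. */
    signal(SIGCHLD, SIG_IGN);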


 

Example: multi-threaded concurrent server

#include <stdio.h>          /* These are the usual header files */
#include <string.h>          /* for strlen() and memcpy() */
#include <strings.h>         /* for bzero() */
#include <unistd.h>         /* for close() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <pthread.h>
#include <stdlib.h>          /* for exit(); needed when compiled as C++ (.C/.cc) */

#define PORT 1234            /* Port that will be opened */
#define BACKLOG 5            /* Number of allowed connections */
#define MAXDATASIZE 1000

void process_cli(int connectfd, struct sockaddr_in client);
void* start_routine(void* arg);   /* function to be executed by each new thread */

typedef struct _ARG {
        int connfd;
        struct sockaddr_in client;
} ARG;                            /* argument passed to each thread */

main()
{
        int listenfd, connectfd;   /* socket descriptors */
        pthread_t thread;
        ARG *arg;
        struct sockaddr_in server; /* server's address information */
        struct sockaddr_in client; /* client's address information */
        int sin_size;

        /* Create TCP socket */
        if ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
           perror("Creating socket failed.");
           exit(1);
        }

        int opt = SO_REUSEADDR;
        setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
        bzero(&server, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(PORT);
        server.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(listenfd, (struct sockaddr *)&server, sizeof(struct sockaddr)) == -1) {
           perror("Bind error.");
           exit(1);
        }

        if (listen(listenfd, BACKLOG) == -1) {     /* calls listen() */
           perror("listen() error\n");
           exit(1);
        }
        sin_size = sizeof(struct sockaddr_in);
        while (1)
        {
          /* Accept a connection */
          if ((connectfd = accept(listenfd, (struct sockaddr *)&client, (socklen_t *)&sin_size)) == -1) {
            perror("accept() error\n");
            exit(1);
          }
          /* Create a thread to serve this client */
          arg = new ARG;                           /* compiled as C++; use malloc() in plain C */
          arg->connfd = connectfd;
          memcpy(&arg->client, &client, sizeof(client));
          if (pthread_create(&thread, NULL, start_routine, (void*)arg)) {
            perror("pthread_create() error");
            exit(1);
          }
        }
        close(listenfd);   /* close listenfd */
}

void process_cli(int connectfd, struct sockaddr_in client)
{
        int num;
        char recvbuf[MAXDATASIZE], sendbuf[MAXDATASIZE], cli_name[MAXDATASIZE];
        printf("You got a connection from %s.  ", inet_ntoa(client.sin_addr));
        /* Get client's name from client */
        num = recv(connectfd, cli_name, MAXDATASIZE, 0);
        if (num == 0) {
           close(connectfd);
           printf("Client disconnected.\n");
           return;
        }
        cli_name[num - 1] = '\0';
        printf("Client's name is %s.\n", cli_name);

        while ((num = recv(connectfd, recvbuf, MAXDATASIZE, 0)) > 0)
        {
                recvbuf[num] = '\0';
                printf("Received client( %s ) message: %s", cli_name, recvbuf);
                for (int i = 0; i < num - 1; i++) {
                        sendbuf[i] = recvbuf[num - i - 2];   /* reverse the string */
                }
                sendbuf[num - 1] = '\0';
                send(connectfd, sendbuf, strlen(sendbuf), 0);
        }
        close(connectfd);   /* close connectfd */
}

void* start_routine(void* arg)
{
        ARG *info = (ARG *)arg;
        /* handle the client's requests */
        process_cli(info->connfd, info->client);
        delete info;        /* delete the typed pointer, not the void* (deleting 'arg' would warn) */
        pthread_exit(NULL);
}
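
A note on the listing above: the created threads are never joined, so their bookkeeping resources are only released when the process exits. One small, optional addition is to detach each thread at the start of start_routine():

    /* First statement inside start_routine(): the thread's resources are then
       reclaimed automatically when it terminates, since nobody will join it. */
    pthread_detach(pthread_self());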


 

 

Example: single-threaded concurrent server

#include <stdio.h>          /* These are the usual header files */
#include <string.h>         /* for bzero() */
#include <unistd.h>         /* for close() */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/time.h>
#include <stdlib.h>

#define PORT 6888           /* Port that will be opened */
#define BACKLOG 5           /* Number of allowed connections simultaneously */
#define MAXDATASIZE 1000

typedef struct _CLIENT {
        int fd;
        struct sockaddr_in addr;   /* client's address information */
        char data[1024];
} CLIENT;

void process_cli(CLIENT *client, char* recvbuf, int len);
void savedata(char* recvbuf, int len, char* data);   /* not used in this example */

main()
{
        int i, maxi, maxfd, sockfd;
        int nready;
        int n;
        fd_set rset, allset;
        int listenfd, connectfd;       /* socket descriptors */
        struct sockaddr_in server;     /* server's address information */
        CLIENT client[FD_SETSIZE];     /* clients' information */
        char recvbuf[MAXDATASIZE];
        int sin_size;

        /* Create TCP socket */
        if ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
                perror("Creating socket failed.");
                exit(1);
        }

        int opt = SO_REUSEADDR;
        setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
        bzero(&server, sizeof(server));
        server.sin_family = AF_INET;
        server.sin_port = htons(PORT);
        server.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(listenfd, (struct sockaddr *)&server, sizeof(struct sockaddr)) == -1) {
                perror("Bind error.");
                exit(1);
        }

        if (listen(listenfd, BACKLOG) == -1) {    /* calls listen() */
                perror("listen() error\n");
                exit(1);
        }
        sin_size = sizeof(struct sockaddr_in);

        /* initialize for select */
        maxfd = listenfd;
        maxi = -1;
        for (i = 0; i < FD_SETSIZE; i++) {
                client[i].fd = -1;
        }
        FD_ZERO(&allset);
        FD_SET(listenfd, &allset);

        while (1) {
                struct sockaddr_in addr;
                rset = allset;
                nready = select(maxfd + 1, &rset, NULL, NULL, NULL);
                printf("select saw rset actions and the readfset num is %d. \n", nready);

                if (FD_ISSET(listenfd, &rset)) {
                        /* new client connection */
                        printf("accept a connection.\n");
                        if ((connectfd = accept(listenfd, (struct sockaddr *)&addr, (socklen_t *)&sin_size)) == -1) {
                                perror("accept() error\n");
                                continue;
                        }
                        /* Put the new fd into the client array */
                        for (i = 0; i < FD_SETSIZE; i++) {
                                if (client[i].fd < 0) {
                                        client[i].fd = connectfd;   /* save descriptor */
                                        client[i].addr = addr;
                                        client[i].data[0] = '\0';
                                        printf("You got a connection from %s. ", inet_ntoa(client[i].addr.sin_addr));
                                        break;
                                }
                        }
                        printf("add new connect fd.\n");
                        if (i == FD_SETSIZE)
                                printf("too many clients\n");
                        FD_SET(connectfd, &allset);   /* add new descriptor to set */
                        if (connectfd > maxfd)
                                maxfd = connectfd;
                        if (i > maxi)
                                maxi = i;
                        if (--nready <= 0)
                                continue;             /* no more readable descriptors */
                }

                for (i = 0; i <= maxi; i++) {
                        /* check all clients for data */
                        if ((sockfd = client[i].fd) < 0)
                                continue;             /* this slot is unused */
                        if (FD_ISSET(sockfd, &rset)) {
                                printf("recv occured for connect fd[%d].\n", i);
                                if ((n = recv(sockfd, recvbuf, MAXDATASIZE, 0)) == 0) {
                                        /* connection closed by client */
                                        close(sockfd);
                                        printf("Client( %d ) closed connection.\n", client[i].fd);
                                        FD_CLR(sockfd, &allset);
                                        client[i].fd = -1;
                                } else {
                                        process_cli(&client[i], recvbuf, n);
                                }
                                if (--nready <= 0)
                                        break;        /* no more readable descriptors */
                        }
                }
        }
        close(listenfd);   /* close listenfd */
}

void process_cli(CLIENT *client, char* recvbuf, int len)
{
        /* echo the received data back to the client */
        send(client->fd, recvbuf, len, 0);
}


 

 

Example: epoll use

#include <unistd.h>
#include <sys/types.h>       /* basic system data types */
#include <sys/socket.h>      /* basic socket definitions */
#include <netinet/in.h>      /* sockaddr_in{} and other Internet defns */
#include <arpa/inet.h>       /* inet(3) functions */
#include <sys/epoll.h>  /* epoll function */
#include <fcntl.h>           /* nonblocking */
#include <sys/resource.h>    /*setrlimit */
 
#include <stdlib.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

#define MAXEPOLLSIZE 10000
#define MAXLINE 10240

int handle(int connfd);

/* set a descriptor to non-blocking mode */
int setnonblocking(int sockfd)
{
    if (fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL, 0) | O_NONBLOCK) == -1) {
        return -1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    int servPort = 6888;
    int listenq = 1024;
    int listenfd, connfd, kdpfd, nfds, n, nread, curfds, acceptCount = 0;
    struct sockaddr_in servaddr, cliaddr;
    socklen_t socklen = sizeof(struct sockaddr_in);
    struct epoll_event ev;
    struct epoll_event events[MAXEPOLLSIZE];
    struct rlimit rt;
    char buf[MAXLINE];

    /* raise the per-process open-file limit */
    rt.rlim_max = rt.rlim_cur = MAXEPOLLSIZE;
    if (setrlimit(RLIMIT_NOFILE, &rt) == -1) {
        perror("setrlimit error");
        return -1;
    }

    /* fill in the server's address */
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(servPort);

    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    if (listenfd == -1) {
        perror("can't create socket file");
        return -1;
    }

    int opt = 1;
    setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    if (setnonblocking(listenfd) < 0) {
        perror("setnonblock error");
    }
    if (bind(listenfd, (struct sockaddr *)&servaddr, sizeof(struct sockaddr)) == -1) {
        perror("bind error");
        return -1;
    }
    if (listen(listenfd, listenq) == -1) {
        perror("listen error");
        return -1;
    }

    /* create the epoll descriptor and register the listening socket (edge-triggered) */
    kdpfd = epoll_create(MAXEPOLLSIZE);
    ev.events = EPOLLIN | EPOLLET;
    ev.data.fd = listenfd;
    if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, listenfd, &ev) < 0) {
        fprintf(stderr, "epoll set insertion error: fd=%d\n", listenfd);
        return -1;
    }
    curfds = 1;

    printf("epollserver startup, port %d, max connection is %d, backlog is %d\n",
           servPort, MAXEPOLLSIZE, listenq);

    for (;;) {
        /* Wait for events to occur */
        nfds = epoll_wait(kdpfd, events, curfds, -1);
        if (nfds == -1) {
            perror("epoll_wait");
            continue;
        }
        printf("events happen %d\n", nfds);

        /* Handle all ready events */
        for (n = 0; n < nfds; ++n) {
            if (events[n].data.fd == listenfd) {
                /* new connection on the listening socket */
                connfd = accept(listenfd, (struct sockaddr *)&cliaddr, &socklen);
                if (connfd < 0) {
                    perror("accept error");
                    continue;
                }
                sprintf(buf, "accept form %s:%d\n", inet_ntoa(cliaddr.sin_addr), ntohs(cliaddr.sin_port));
                printf("%d:%s", ++acceptCount, buf);

                if (curfds >= MAXEPOLLSIZE) {
                    fprintf(stderr, "too many connection, more than %d\n", MAXEPOLLSIZE);
                    close(connfd);
                    continue;
                }
                if (setnonblocking(connfd) < 0) {
                    perror("setnonblocking error");
                }
                ev.events = EPOLLIN | EPOLLET;
                ev.data.fd = connfd;
                if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, connfd, &ev) < 0) {
                    fprintf(stderr, "add socket '%d' to epoll failed: %s\n", connfd, strerror(errno));
                    return -1;
                }
                curfds++;
                continue;
            }
            /* data on an existing connection */
            if (handle(events[n].data.fd) < 0) {
                epoll_ctl(kdpfd, EPOLL_CTL_DEL, events[n].data.fd, &ev);
                curfds--;
            }
        }
    }
    close(listenfd);
    return 0;
}

int handle(int connfd)
{
    int nread;
    char buf[MAXLINE];

    nread = read(connfd, buf, MAXLINE);
    if (nread == 0) {
        printf("client close the connection\n");
        close(connfd);
        return -1;
    }
    if (nread < 0) {
        perror("read error");
        close(connfd);
        return -1;
    }
    write(connfd, buf, nread);   /* echo the data back to the client */
    return 0;
}

