In the previous section we covered client-side programming (that section is still to be written and will be added later). Client-side code is simpler because there is only one fd, while a server must serve many clients at once. This section summarizes the five IO models a server can use to handle client requests, and analyzes each in turn.

2.1 Blocking IO model

The most common IO model is the blocking IO model: virtually all functions block by default, and blocking code is the simplest to write, so it is the method most people use when learning network programming. When a user process calls recvfrom, the kernel starts preparing the data, which is usually not immediately available. While the kernel waits for the data to arrive, the calling thread blocks. Once the data is ready, the kernel copies it from kernel space to user space, the call returns the result, and the thread unblocks and resumes.

2.1.1 Server model

With the blocking IO server model, most interactions are one request, one response. While one client is being served, requests from other clients cannot be received, so this model is rarely used on its own. A common improvement is to use multithreading.

The purpose of multithreading is to create a thread for each client connection fd to handle. Blocking also blocks the newly created thread without affecting the main thread, which continues to receive messages from clients.

Disadvantages: if hundreds or thousands of connection requests arrive at the same time, both multithreading and multiprocessing consume serious system resources, reducing the system's responsiveness to the outside world, and the threads or processes themselves can easily hang.

You can consider using a thread pool, but a pool has its limits: when requests go well past its capacity, the system responds little better than it would without one. Using a pool therefore requires estimating the load it will face and sizing the pool accordingly.

A thread pool relieves some of the pressure but does not solve every problem. In short, the multithreaded model handles small-scale service loads conveniently and efficiently, but it hits a bottleneck under large-scale loads.

Application:

    //fcntl(listenfd, F_SETFL, O_NONBLOCK);
    threadPool_t *pool = threadPool_creat(8, 200, threadpool_eventfd_type);
    if(pool == NULL) {
        printf("pool is null\n");
        return -1;
    }

    while(1) {
        struct sockaddr_in client_addr;
        memset(&client_addr, 0, sizeof(struct sockaddr_in));
        socklen_t client_len = sizeof(client_addr);

        int clientfd = accept(listenfd, (struct sockaddr *)&client_addr, &client_len);
        if(clientfd <= 0) continue;

        // fcntl(clientfd, F_SETFL, O_NONBLOCK);
        // note: in real code pass a copy of clientfd (e.g. heap-allocated),
        // since this stack variable is overwritten on the next iteration
        threadPool_add(pool, callback, (void *)&clientfd);

        usleep(100);
    }

This is pseudocode; thread pools connect to a broader topic of their own: processes, threads, and coroutines.

2.2 Non-blocking IO model

Where there is a blocking IO model, there is a non-blocking IO model, so let's look at that next.

We can make a socket non-blocking with a function call. In non-blocking mode, when a thread requests IO the kernel does not put it to sleep; if no data is ready, the call immediately returns an error. Only when the kernel has prepared the data and copied it from kernel space to user space does the call return the data. In the classic diagram, the first three calls to recvfrom find no data, so the kernel immediately returns an EWOULDBLOCK error; on the fourth call a datagram is ready, it is copied into the application process buffer, and recvfrom returns successfully.

Set non-blocking functions:

fcntl( fd, F_SETFL, O_NONBLOCK );

2.2.1 Server model

Most uses of this model have a thread repeatedly calling recvfrom to check whether data is ready. This polling can consume a lot of CPU time, so the model is only occasionally used, usually on systems dedicated to a single function.

2.3 IO multiplexing model

This approach is also called event-driven IO: a single thread can handle the IO of multiple network connections at the same time. The select/poll/epoll functions monitor all the sockets and notify the user process when data arrives on any of them.

When we use select, the entire thread still blocks, but this time it blocks in select rather than in recvfrom. While select blocks, the kernel monitors all the sockets select is responsible for; as soon as data is ready on any of them, select returns. The advantage of this model is that a single thread can monitor multiple fds.

2.3.1 Server model

This IO model is used in many places; one thread is responsible for monitoring many fds, so it is worth mastering.

Application:

 // select
    fd_set rfds, rset;
    FD_ZERO(&rfds);
    FD_SET(listenfd, &rfds);

    int i = 0;
    int max_fd = listenfd;

    while(1) {
        rset = rfds;   // select modifies the set, so copy the master set each loop
        int nready = select(max_fd + 1, &rset, NULL, NULL, NULL);
        if(nready <= 0) continue;

        if(FD_ISSET(listenfd, &rset)) {
            struct sockaddr_in client_addr;
            memset(&client_addr, 0, sizeof(struct sockaddr_in));
            socklen_t client_len = sizeof(client_addr);

            int clientfd = accept(listenfd, (struct sockaddr *)&client_addr, &client_len);
            if(clientfd <= 0) continue;

            FD_SET(clientfd, &rfds);   // add to the master set, not the result set
            printf("clientfd %d\n", clientfd);
            if(clientfd > max_fd) max_fd = clientfd;

            if(--nready == 0) continue;
        }

        for(i = listenfd + 1; i <= max_fd; i++) {
            if(FD_ISSET(i, &rset)) {
                char buffer[1024] = {0};
                if(recv(i, buffer, 1024, 0) > 0) {
                    printf("client fd %d %s\n", i, buffer);
                } else {   // the connection is broken
                    printf("disconnect %d\n", i);
                    FD_CLR(i, &rfds);  // remove from the master set
                    close(i);
                }

                if(--nready == 0) break;
            }
        }
    }

This program uses the select model.

 int efd = epoll_create(1);
    if(efd <= 0) return -1;

    struct epoll_event ev, events[1024];

    ev.events = EPOLLIN;
    ev.data.fd = listenfd;

    epoll_ctl(efd, EPOLL_CTL_ADD, listenfd, &ev);

    while(1) {
        int nready = epoll_wait(efd, events, 1024, -1);
        if(nready <= 0) continue;

        for(int i = 0; i < nready; i++) {
            if(events[i].data.fd == listenfd) {
                struct sockaddr_in client_addr;
                memset(&client_addr, 0, sizeof(struct sockaddr_in));
                socklen_t client_len = sizeof(client_addr);

                int clientfd = accept(listenfd, (struct sockaddr *)&client_addr, &client_len);
                if(clientfd <= 0) continue;

                // edge-triggered: the client fd should normally also be non-blocking
                ev.events = EPOLLIN | EPOLLET;
                ev.data.fd = clientfd;
                epoll_ctl(efd, EPOLL_CTL_ADD, clientfd, &ev);
                printf("clientfd %d\n", clientfd);
            } else {
                int clientfd = events[i].data.fd;
                char buffer[1024] = {0};
                if(recv(clientfd, buffer, 1024, 0) > 0) {
                    printf("client fd %d %s\n", clientfd, buffer);
                } else {   // the connection is broken
                    printf("disconnect %d\n", clientfd);
                    epoll_ctl(efd, EPOLL_CTL_DEL, clientfd, NULL);
                    close(clientfd);
                }
            }
        }
    }

This program uses epoll.

2.3.2 Comparison between SELECT and epoll

Advantages: the select-based event-driven model runs in a single thread, consumes few resources, does not burn much CPU, and can still serve multiple clients. Disadvantages: when the number of handles to probe is large, select itself spends a great deal of time polling every handle. The model also mixes event detection and event response together: if handling one event takes too long, it is disastrous for the whole loop and greatly reduces the timeliness of event detection.

Improvements: many efficient event-driven libraries work around this difficulty, including libevent and libev.

The detailed differences among select, poll, and epoll are a topic for a separate discussion.

Here is a mind map summarizing the above:

2.4 Signal-driven IO model

We can also use signals directly: let the kernel do the waiting and notify us via the SIGIO signal when data is ready. This is the signal-driven IO model.

First enable signal-driven IO on the socket and install a signal handler with the sigaction system call. The call returns immediately and our process keeps working; it is neither blocked nor forced to poll. When a datagram is ready to be read, the kernel generates a SIGIO signal for the process; the signal handler can then receive the data, or tell the main thread to read it.

The advantage of this model is that instead of waiting for data or polling for it, you simply wait for a notification from the signal handler and then read the data. (A server model will be added later.)

2.5 Asynchronous I/O model

Asynchronous IO on Linux is used for disk read/write operations rather than network IO, and was introduced in kernel 2.6. Once the user process initiates a read, it can immediately go on doing other things. From the kernel's perspective, when it receives an asynchronous read it returns at once, so the user process never blocks. The kernel then waits for the data to be ready and copies it into the user's memory; when that is done, it sends the process a signal telling it the read has completed.

The difference between signal-driven IO and asynchronous IO is as follows. Signal-driven IO: the kernel tells us that an IO operation can now be performed, and the process must still read the data itself. Asynchronous IO: the kernel notifies us only when the IO operation has completed, with the data already copied into user memory.

2.6 Conclusion

A synchronous IO operation blocks the requesting process until the IO operation completes. An asynchronous IO operation does not block the requesting process.

Synchronous IO: blocking IO model, non-blocking IO model, IO multiplexing model and signal-driven IO model.

Asynchronous I/O: Asynchronous I/O model.